Lessons Learned from Building Baseline-Driven Detections in Splunk
Reflections on what I learned while building baseline-driven detections in Splunk, including handling false positives, separating real risk from noise, and designing dashboards that actually help analysts.
Why I Started Thinking About Baselines Differently
When I first started building detections in Splunk, my instinct was to look for obvious bad behavior. Failed logins, privilege escalation, spikes in traffic. The problem was that everything looked suspicious when I did not fully understand what normal actually looked like.
It became clear pretty quickly that without a baseline, detections were either too sensitive or completely unhelpful. Alerts fired constantly, dashboards felt noisy, and it was hard to trust anything I was seeing.
That was the moment I realized that baselines are not a nice-to-have. They are the foundation.
Why Baselines Matter More Than Signatures
Signatures are useful, but they assume the attacker behaves in a known way. Baselines flip that idea around. Instead of asking what an attack looks like, you ask what normal looks like and then pay attention when something drifts.
In Splunk, this meant spending real time understanding daily patterns. Login frequency, service behavior, HTTP request volume, and even which users normally generate noise. Once those patterns were clear, unusual activity stood out naturally; a sketch of that kind of search follows the list below.
- Normal behavior is contextual and environment-specific
- Static thresholds fail quickly without historical context
- Baselines help reduce alert fatigue before alerts even exist
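To make that concrete, here is a minimal sketch of the pattern. The index name (`auth`), the `action=failure` filter, and the `user` field are assumptions about the environment; the shape is what matters: build each user's normal hourly failure count from their own history, then flag hours that drift well past it.

````spl
index=auth action=failure earliest=-30d@h latest=@h
``` count failed logins per user, per hour ```
| bin _time span=1h
| stats count AS failures BY user _time
``` build each user's baseline from their own 30-day history ```
| eventstats avg(failures) AS avg_failures stdev(failures) AS stdev_failures BY user
``` keep only hours that drift well past that user's normal range ```
| where failures > avg_failures + (3 * stdev_failures)
````

In a real deployment you would persist the baseline with a summary index or `outputlookup` rather than recomputing 30 days of raw events on every run, but the logic stays the same.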
False Positives Are a Design Problem
One of the biggest lessons I learned is that false positives are usually not a data problem. They are a design problem.
Early on, I wrote detections that technically worked but produced alerts that nobody would want to triage. Failed logins triggered alerts even during known maintenance windows. HTTP spikes fired every time a service restarted.
By introducing baselines, time-based comparisons, and contextual filters, alerts became quieter and more meaningful. When something fired, it actually deserved attention.
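One way to encode a contextual filter is a lookup of scheduled maintenance windows. The lookup name and fields below (`maintenance_windows`, `window_start`, `window_end`) are hypothetical stand-ins for whatever your change calendar exports:

````spl
index=auth action=failure
``` tag each event with its host's scheduled maintenance window, if any ```
| lookup maintenance_windows host OUTPUT window_start window_end
| eval in_window=if(_time>=strptime(window_start, "%Y-%m-%d %H:%M") AND _time<=strptime(window_end, "%Y-%m-%d %H:%M"), 1, 0)
``` suppress the maintenance noise before counting anything ```
| where in_window=0
| stats count AS failures BY host
| where failures > 20
````

Splunk's time-based lookups can do the window matching natively; the explicit `eval` just makes the logic easy to read. The static threshold at the end is a placeholder where a baseline comparison like the one above would go.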
Privilege Escalation Versus Everyday Noise
Privilege escalation was one of the hardest areas to tune. Legitimate admin activity exists in most environments, and treating every elevated action as suspicious quickly breaks trust in detections.
Instead of alerting on the action alone, I started focusing on deviation. Who normally escalates privileges? How often? At what times? When those patterns changed, the signal became much clearer.
This approach made alerts feel less like guesses and more like informed questions.
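A simple version of that deviation check is a new-value detection: flag accounts that have never been seen escalating before. The index and sourcetype here are assumptions for a Linux sudo log; adjust to your own data.

````spl
index=linux sourcetype=linux_secure "sudo" earliest=-30d
``` record when each user was first and last seen escalating ```
| stats earliest(_time) AS first_seen latest(_time) AS last_seen count BY user
``` flag users whose first escalation appeared within the last 24 hours ```
| where first_seen >= relative_time(now(), "-24h")
````

The `last_seen` and `count` columns ride along as triage context, and the same stats-by-user shape extends to "how often" and "at what times" by bucketing on the built-in `date_hour` field.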
Designing Dashboards for Humans
Dashboards were another unexpected lesson. It is easy to build dashboards that look impressive and still fail their purpose.
What worked best was simplicity. Clear trends over time. Minimal color usage. Panels that answer one question instead of ten. A dashboard should help someone understand the environment in seconds, not minutes. The search behind one such panel is sketched after the list below.
- Trends matter more than raw counts
- Too many panels hide the important ones
- Dashboards should guide investigation, not replace it
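As one example of a single-question panel, again assuming the hypothetical `auth` index from earlier: plot failure volume over a week with a trailing average overlaid, so drift is visible at a glance.

````spl
index=auth action=failure earliest=-7d@d
``` one panel, one question: is failure volume drifting above its norm? ```
| timechart span=1h count AS failures
``` overlay a 24-hour simple moving average so the trend reads instantly ```
| trendline sma24(failures) AS daily_trend
````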
If everything is an alert, nothing is an alert.
What This Changed for Me
Building baseline-driven detections changed how I think about defensive security. I now approach detections as living systems that evolve with the environment rather than static rules.
It also reinforced that good security tooling respects the analyst. If alerts are trustworthy and dashboards are readable, defenders can spend more time thinking and less time reacting.
That mindset has carried into every project I have worked on since.
Posted by Davis Burrill • January 4, 2026