
**Every detection you've ever written was a message to a human.**

The title, severity, description, and runbook were all designed to transfer context to an analyst as quickly as possible. But when an AI agent is the first responder, what makes a detection "good" fundamentally changes.

Traditional detections guide human judgment through step-by-step procedures. Agents don't need procedures. They need context: what adversary behavior this rule surfaces, why it matters, what legitimate activity looks like, and what criteria distinguish risky from routine activity.

The SOC is shifting from human-led to human-guided. Analysts aren't disappearing; they're moving from confirming "is this alert legitimate?" to "how should an agent judge this type of alert?" Threat modeling, coverage decisions, and compliance stay deeply human. Execution increasingly belongs to agents.

This also reshapes agent reasoning: the prompt attached to a detection changes what the agent recognizes as meaningful, not just which tools it calls. Runbooks written as decision frameworks produce fundamentally better analysis than runbooks written as procedures. Give agents goals, not scripts.
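As a minimal sketch of the contrast, here is the same detection expressed both ways. The field names, the example rule, and the prompt builder are all hypothetical illustrations, not any particular platform's schema:

```python
# A procedural runbook: step-by-step instructions aimed at a human analyst.
procedural_runbook = [
    "1. Open the alert in the SIEM console",
    "2. Check the source IP against the allowlist",
    "3. Escalate to Tier 2 if the host is a server",
]

# A decision framework: context an agent can reason over.
# Field names here are illustrative, not a standard schema.
decision_framework = {
    "behavior": "Scheduled task created by a non-admin account",
    "why_it_matters": "Common persistence technique (MITRE ATT&CK T1053.005)",
    "benign_examples": "Software updaters and backup agents register tasks routinely",
    "judgment_criteria": (
        "Risky if the task runs from a user-writable path or launches a "
        "script interpreter; routine otherwise"
    ),
}

def build_agent_prompt(detection: dict) -> str:
    """Render a detection's context fields as a triage prompt for an agent."""
    return "\n".join(
        f"{key.replace('_', ' ').title()}: {value}"
        for key, value in detection.items()
    )

print(build_agent_prompt(decision_framework))
```

The procedural list only tells an agent what buttons a human would press; the framework gives it the criteria to judge the alert on its own.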

How are you adapting your detections for AI agent-led triage?

What Happens to Detections When Agents Do the Work
Mar 9 at 4:59 PM