Ninety-nine percent is not a statistic you expect to see in a security report. But that’s the finding from a new survey of 500 U.S. CISOs: 99.4% of organizations experienced at least one security incident tied to their SaaS or AI ecosystem in 2025. Only three respondents reported zero incidents. Three.
The survey, conducted by Consensuswide, covered companies ranging from 500 to 10,000 employees across all major industry verticals. It asked 17 questions about security posture, tooling, incidents, and preparedness. These organizations were running an average of 13 dedicated security tools each when those incidents happened. Financial services firms, the most security-invested sector in the survey, averaged 15.6 tools—and still experienced SaaS supply chain attacks at 26% above the cross-industry rate.
The Threat Has Moved
I had an opportunity to chat with Amir Khayat, co-founder and CEO of Vorlon, about what the data reveals. His explanation starts with how enterprise workflows have fundamentally changed—and why security monitoring hasn’t kept up.
Traditional SaaS automation is deterministic—if this, then that. It breaks the moment a variable changes. AI agents work differently. They use large language models to interpret intent, handle edge cases on the fly, and select tools and APIs based on real-time goals rather than hard-coded paths. That creates a monitoring problem that security tools weren’t designed for.
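The contrast can be sketched in a few lines of code. This is a toy illustration, not anyone's actual product: the playbook table, tool names, and the stubbed-out "LLM" are all hypothetical.

```python
# Deterministic automation: a hard-coded path. If the ticket type
# isn't in the table, the workflow simply fails.
PLAYBOOK = {
    "password_reset": ["verify_identity", "reset_password", "notify_user"],
    "access_request": ["check_policy", "grant_access", "log_grant"],
}

def run_deterministic(ticket_type: str) -> list[str]:
    steps = PLAYBOOK.get(ticket_type)
    if steps is None:
        raise ValueError(f"no playbook for {ticket_type!r}")
    return steps

# Agentic automation: the sequence of actions is decided per request,
# not hard-coded, so there is no fixed path to baseline against.
AVAILABLE_TOOLS = ["okta_api", "slack_api", "github_api", "payroll_api"]

def fake_llm_plan(goal: str) -> list[str]:
    # Stand-in for model reasoning: pick any tool whose name looks
    # relevant to the free-text goal. A real agent's choices would be
    # far less predictable than this keyword match.
    return [t for t in AVAILABLE_TOOLS if t.split("_")[0] in goal.lower()]

def run_agentic(goal: str) -> list[str]:
    return fake_llm_plan(goal)

print(run_deterministic("password_reset"))
print(run_agentic("Offboard contractor: disable okta, archive slack"))
```

A monitor can enumerate every path through the first function before it ever runs. For the second, the path exists only after the request arrives.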
“When behavior is deterministic, you can define normal and alert on deviation,” Khayat said. “When an agent is reasoning its way through a workflow, establishing a behavioral baseline becomes a fundamentally different problem.”
Most enterprise security architecture was built around what Khayat calls the front door: user logins, credential validation, permission audits, and network perimeter controls. That covered two distinct entrances—human users coming through browsers, and service-to-service APIs at the infrastructure level. Tools like CASBs, WAFs, and cloud security posture management were built for those patterns. The behavior was predictable enough to define normal and detect deviation.
The engine room is a different situation entirely. An AI agent resolving a routine IT ticket might autonomously touch identity systems, permissions, and configurations across Okta, Slack, GitHub, DocuSign, and payroll platforms—all in minutes, with no human involved. Each system logs its own slice. Nobody sees the full picture. The agent isn’t following a known pattern because it’s deciding the pattern as it goes. That doesn’t look like a suspicious login. It doesn’t trigger a configuration alert.
Asking the Wrong Questions
The tools most enterprises are running were built to answer specific questions: what are the configurations, who has what permissions, is anything misconfigured? Those are valuable questions. They’re just not the right questions when an AI agent is moving data through a legitimate OAuth-authorized integration.
The questions that matter in that scenario are different: what is this agent actually doing, what data is it touching, and is that behavior consistent with what it was authorized to do? As Khayat put it: “You can have 15 of them running and still be blind to that activity.”
When CISOs were asked to rate their tools across 11 specific capability limitations, between 83% and 87% of organizations reported some level of limitation on every single one. The range spans only four percentage points across all 11. That’s not evidence that some vendors are outperforming others—it’s evidence that the entire category was built around the same assumptions, and those assumptions don’t hold for the agentic layer.
Confidence Versus What Actually Happened
Nearly 90% of CISOs surveyed claimed strong or comprehensive OAuth token governance. But 27.4% were breached through compromised OAuth tokens or API keys that same year. About 79% claimed comprehensive, real-time data flow mapping across SaaS and AI. But 86.8% said they can’t actually see what data AI tools are exchanging with SaaS applications. Those numbers can’t simultaneously be true.
Khayat traces that back to the difference between configuration-layer governance and runtime governance. Most organizations know which tokens exist, can audit permissions, and can revoke tokens manually. What they don’t have is visibility into whether active tokens are being used consistently with their intended scope, or whether a token’s behavior has drifted. Knowing a token exists isn’t the same as knowing what it’s doing right now.
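The configuration/runtime gap Khayat describes can be made concrete with a small sketch. The token names, scope strings, and log shapes below are hypothetical; the point is that a permissions audit only ever sees the left-hand inventory, while detecting drift requires the right-hand runtime record too.

```python
# What a configuration-layer audit sees: a token inventory with
# granted scopes. This can look perfectly clean.
GRANTED = {
    "token-crm-sync": {"contacts.read"},
    "token-ai-agent": {"tickets.read", "tickets.write"},
}

# What only runtime monitoring sees: the scopes each token actually
# exercised, derived from observed API traffic.
OBSERVED = {
    "token-crm-sync": {"contacts.read"},
    "token-ai-agent": {"tickets.read", "users.export"},  # drift
}

def scope_drift(granted: dict, observed: dict) -> dict:
    """Return scopes exercised at runtime that were never granted."""
    return {
        token: used - granted.get(token, set())
        for token, used in observed.items()
        if used - granted.get(token, set())
    }

print(scope_drift(GRANTED, OBSERVED))
# The agent token exercised users.export, which an inventory audit
# alone would never flag: the token list itself looks fine.
```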
ITDR platforms that track non-human identity activity run into the same wall—they typically stop at the authentication layer. They can tell you an agent is logged in. What they can’t tell you is what that agent did with data once it was inside: what it queried, what it moved, where it sent it, and whether any of that was within scope. 83.4% of CISOs said distinguishing between human and non-human behavior is a current limitation of their tools. That number should be part of every conversation about enterprise AI security right now.
More Budget, Same Architecture
More than 86% of organizations plan to increase SaaS security spending in 2026, and 84% plan to increase AI security spending. But budget directed at the same tool categories will produce the same results. The 99.4% breach rate happened at an average of 13 tools. Adding a 14th tool that monitors the front door won’t change anything in the engine room.
Khayat’s argument is that the layer itself needs to change—from configuration auditing to runtime monitoring. Behavioral baselines built around data interaction rather than login patterns. Real-time token governance tied to actual usage, not just inventory. And the ability to reconstruct a forensic timeline of agent activity across every connected system after something goes wrong. When a supply chain attack executes through a SaaS integration, the blast radius extends to every system the token was authorized to access. Without that reconstruction capability, scoping remediation and meeting regulatory disclosure timelines become harder than they need to be.
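The reconstruction step amounts to a cross-system log merge. The sketch below uses invented log records and field names; real SaaS audit logs differ per vendor, and correlating a shared actor identity across them is the hard part in practice. Once that correlation exists, the timeline itself is a filter, a merge, and a sort.

```python
from datetime import datetime

# Hypothetical per-system log slices. Each app sees only its own
# fragment of the agent's activity.
OKTA_LOG = [
    {"ts": "2025-06-01T10:02:11", "actor": "agent-7", "action": "grant_role:admin"},
]
GITHUB_LOG = [
    {"ts": "2025-06-01T10:03:40", "actor": "agent-7", "action": "clone_repo:payroll-svc"},
]
SLACK_LOG = [
    {"ts": "2025-06-01T10:01:05", "actor": "agent-7", "action": "read_channel:#finance"},
]

def reconstruct_timeline(actor: str, *logs: list) -> list:
    """Merge per-system events for one actor into chronological order."""
    events = [e for log in logs for e in log if e["actor"] == actor]
    return sorted(events, key=lambda e: datetime.fromisoformat(e["ts"]))

for event in reconstruct_timeline("agent-7", OKTA_LOG, GITHUB_LOG, SLACK_LOG):
    print(event["ts"], event["action"])
```

No single system's log shows that the Slack read preceded the Okta role grant and the GitHub clone; only the merged view does, which is what makes scoping a blast radius possible.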