Most security teams have more data than they know what to do with. Alerts, dashboards, telemetry feeds—all of it pointing at things that need attention. The problem isn’t that they can’t see the risks. It’s that seeing them and actually fixing them are two completely different things.
Known vulnerabilities sit unresolved for months. Orphaned accounts linger in identity systems. Cloud resources get spun up and forgotten. Certificates expire on assets nobody remembers owning. Security teams largely know about all of it. They just can’t move fast enough to do much about it.
I had a chance to talk with Yair Grindlinger, co-founder and CEO of Surf AI, about why that gap exists and what it takes to close it. He made a point that stuck with me: “20 years ago, you had to deal with a narrow set of assets. Today, you have multiple clouds and folders and buckets and 1,000 different SaaS applications. It’s like the universe is expanding. What we used to do 20 years ago doesn’t work at all now.”
And yet a lot of enterprise security programs are still built like it’s 20 years ago—or at least, built around tools that treat fixing problems as a side effect of finding them.
The Operational Problem Nobody Talks About
When you look at where security programs actually get stuck, it’s usually not detection. It’s everything that happens after detection. Who owns this asset? What breaks if I change it? Who has to approve this? Which team does this ticket go to?
Those questions sound simple. In a large enterprise, they’re anything but. Unclear ownership, cross-system dependencies, legacy infrastructure that nobody fully understands anymore—all of that creates friction that slows remediation to a crawl. Known problems pile up because resolving them requires coordination that organizations just aren’t set up to do at scale.
AI is making the underlying exposure worse. More identities, more permissions, more non-human accounts running automated processes—and more ways for attackers to find the gaps that haven’t been cleaned up. The riskiest exposures are often the quiet ones: dormant accounts, over-privileged service credentials, misconfigured cloud settings. They rarely trigger a high-priority alert. They just sit there.
Large enterprises can have tens of thousands of tokens and service identities spread across systems. Managing that manually—tracking down ownership, validating whether accounts are still active, coordinating remediation across teams—isn’t realistic. The exposure exists not because anyone is negligent, but because the scale of the problem outpaced what human processes can handle.
What Actually Has to Change
The piece that’s missing in most environments is context—not more data about what’s wrong, but the connective tissue that tells you who’s responsible, what depends on what, and what happens if you touch something.
Right now, a security tool will tell you an asset has a problem. It won’t tell you who actually owns that asset, whether it’s still in use, what the downstream impact of changing it might be, or who needs to sign off before anything happens. You have to go figure all of that out manually. By the time you do, you’ve already burned time that most teams don’t have.
Building that context layer requires pulling from a lot of sources at once—identity systems, cloud environments, HR data, ticketing systems, and communication channels. And it has to stay current, because ownership changes, people leave, and resources move around. A snapshot of an environment at a single point in time isn’t enough. You need a continuous, evolving picture.
Account ownership is a good example of how hard this gets. The last person who touched an asset isn’t necessarily the owner. The most frequent person isn’t necessarily the owner, either. You have to cross-reference HR records, look at ticket history, and factor in whether someone is on leave or has changed roles. It’s a lot of signal to synthesize—and it’s exactly the kind of work that doesn’t scale with human analysts alone.
AI Agents for Execution, Not Just Detection
There’s been a lot of focus on using AI for threat detection. Less attention has gone to the remediation side—the actual work of closing vulnerabilities, disabling accounts, enforcing policies, and keeping the environment clean on an ongoing basis.
The model that makes sense here is specialized agents, each with a narrow job. One collects information about an asset. Another updates the CMDB. Another contacts the account owner to confirm whether something should be removed. Another escalates to a manager if needed. Each one has a defined set of actions it can take and no more. Consistency comes from keeping each agent’s scope small and well-defined rather than building one agent that tries to do everything.
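The "defined set of actions and no more" constraint can be enforced mechanically. Here is a hedged sketch of that pattern, with agent names and action strings invented for illustration: each agent declares its allowed actions up front, and anything outside that set fails loudly rather than silently expanding scope.

```python
# Illustrative sketch of narrowly scoped agents. Names and actions are
# hypothetical, not any real platform's API.

class ScopedAgent:
    def __init__(self, name: str, allowed_actions: set[str]):
        self.name = name
        self.allowed = frozenset(allowed_actions)  # immutable scope, fixed at creation

    def execute(self, action: str, target: str) -> str:
        if action not in self.allowed:
            raise PermissionError(f"{self.name} may not perform '{action}'")
        return f"{self.name}: {action} on {target}"

cmdb_agent = ScopedAgent("cmdb-updater", {"update_record"})
notify_agent = ScopedAgent("owner-notifier", {"send_confirmation"})

print(cmdb_agent.execute("update_record", "asset-42"))

# An out-of-scope request is rejected, not improvised around:
try:
    cmdb_agent.execute("disable_account", "svc-user-7")
except PermissionError as e:
    print(e)
```

The design choice is that consistency comes from the guardrail, not from trusting the agent's judgment: a narrow, frozen action set is auditable in a way that one do-everything agent is not.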
The audit question comes up immediately with any kind of automated remediation. If you’re running thousands of actions, who’s checking them? The practical answer is: you don’t review everything, but you audit everything. The full log is there. You can sample, spot-check, and intervene when something looks off. But requiring a human to review every automated action defeats the purpose of automation in the first place.
That’s a mindset shift as much as a technical one. Grindlinger put it plainly: “You want to audit everything, and you want to sample and get involved if necessary, but you can’t follow every action. So how do you maintain consistency?” The answer is tight guardrails on what each agent can do, combined with full transparency into what it did.
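The "audit everything, review a sample" posture can also be sketched concretely. In this illustrative example, every automated action lands in a full audit log, and a small fraction is flagged for human spot-checking; the 5% rate and the log fields are assumptions, not a real system's defaults.

```python
import random

SAMPLE_RATE = 0.05  # illustrative: flag ~5% of actions for human review

def log_action(audit_log: list[dict], agent: str, action: str, target: str,
               rng: random.Random) -> dict:
    entry = {
        "agent": agent,
        "action": action,
        "target": target,
        "needs_review": rng.random() < SAMPLE_RATE,  # sampled for a human
    }
    audit_log.append(entry)  # the full log is always retained
    return entry

rng = random.Random(0)       # seeded so the run is reproducible
log: list[dict] = []
for i in range(1000):
    log_action(log, "account-cleaner", "disable_dormant", f"acct-{i}", rng)

print(len(log))                             # every action is logged
print(sum(e["needs_review"] for e in log))  # only a small sample needs a human
```

The point of the sketch is the asymmetry: the log is complete and queryable after the fact, while the human workload scales with the sample rate rather than with the action volume.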
Vendors Are Starting to Address This Differently
A handful of vendors are beginning to build for this gap directly. Surf AI, for example, is designed specifically around the distance between understanding risk and acting on it. Rather than surfacing problems and generating tickets, the platform focuses on closing the loop: it builds a context graph that links assets, identities, ownership, and dependencies across identity, cloud, security, and business systems, then uses specialized AI agents to coordinate and execute remediation workflows, with human approvals and full audit logging built in by default.
Early deployments have focused on identity hygiene: disabling dormant accounts, resolving duplicate identities, and enforcing access policies at enterprise scale. The company, which just emerged from stealth with a $57 million funding round led by Accel with participation from existing investors Cyberstarts and Boldstart Ventures, says clients have recovered excess SaaS license spend, cleared thousands of orphaned accounts, and automated identity enforcement workflows that previously required manual coordination across multiple teams. Customers Cushman & Wakefield and VetCor are among the early adopters already running the platform in production.
Surf AI is not alone in recognizing this gap. The broader shift happening across the security industry is away from tools that help analysts manage work and toward platforms that do the work—with humans setting policy, reviewing exceptions, and handling escalations rather than processing every remediation step manually.
The Question Worth Asking
Organizations have lived with months-long remediation cycles on known exposures because it was simply too expensive to do it differently. AI changes that cost equation. What wasn’t practical to automate a couple of years ago is practical now.
The security programs that figure out how to close the loop between finding problems and fixing them—continuously, at scale—are going to look very different from the ones still relying on analysts to manually chase down tickets. The direction is clear. The question is how long it takes to get there.