AI coding agents are writing and shipping code in enterprise environments right now—often without anyone on the security team knowing exactly what those agents have access to, what tools they’re invoking, or what they’ve already pushed to production. It’s not a fringe problem. Snyk’s 2026 State of Agentic AI Adoption Report found that for every AI model enterprises deploy, they introduce nearly three times as many untracked software components. Organizations that thought they had a handle on their AI footprint found out they didn’t.
At RSAC 2026 this week, Snyk announced the general availability of Evo AI-SPM and a new Agent Security solution built on top of it. I had an opportunity to chat with Manoj Nair, chief innovation officer at Snyk, ahead of the show to get a better picture of what the company is seeing from customers and what they’re trying to solve.
Governance Policies That Nobody Enforces
Most organizations have some kind of AI governance board or center of excellence. They’ve put together a list of approved models. The problem, according to Nair, is that those policies tend to live in a Confluence page or a PDF doc, and there’s no real mechanism to verify they’re being followed. A new model version ships, a developer upgrades, and whatever guardrails existed on paper no longer reflect what’s actually running in the codebase. When an auditor asks what AI tools the organization is using, many companies can’t answer that question at a moment’s notice—and that’s a governance problem.
The code quality issue compounds things. Nair said back-end data from Snyk shows AI-generated code is producing somewhere between two and ten times more security issues than human-written code. And agents tend to produce more business logic and authorization vulnerabilities specifically—the kind that are harder to catch with static analysis and tend to be more dangerous when they’re exploited. There’s also the matter of what models are actually being used. Nair pointed out that there are more than two million models available to download. Developers upgrade automatically when new versions drop, and in some cases, organizations have ended up running models from countries that their own governance policies explicitly prohibit.
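To make the authorization-vulnerability claim concrete, here is a minimal sketch of the kind of flaw described above, an insecure direct object reference. The data and function names are illustrative, not taken from Snyk's research; the point is that the vulnerable version is syntactically clean, so a static analyzer pattern-matching for injection or unsafe APIs has nothing to flag. The missing piece is business logic: who owns the record.

```python
# Hypothetical example of an IDOR-style authorization flaw.
# All names and data are illustrative, not from Snyk's report.

INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob", "amount": 900},
}

def get_invoice_vulnerable(user: str, invoice_id: int) -> dict:
    # Looks clean to a linter or most SAST rules, but any authenticated
    # user can read any invoice just by guessing its ID.
    return INVOICES[invoice_id]

def get_invoice_fixed(user: str, invoice_id: int) -> dict:
    # The missing check is pure business logic: invoice ownership.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != user:
        raise PermissionError("not your invoice")
    return invoice
```

Calling `get_invoice_vulnerable("alice", 102)` happily returns Bob's invoice; the fixed version raises. Nothing about the vulnerable function's syntax is wrong, which is why this class of bug tends to slip past static analysis.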
The MCP and Skills Problem
Agent skills and MCP servers add another layer to this. Skills are what allow agents to actually do things—move beyond generating text and take action in real systems. Snyk did research across public skill registries and found that roughly a third of what’s out there had security issues. Seven percent was actual malware. Developers are pulling in agent skills the same way they’ve always pulled in open source packages, without necessarily knowing what’s inside them.
Traditional security tools mostly miss this. Cloud and runtime security platforms see AI after it’s deployed—they can flag misbehavior in production, but they don’t catch what’s introduced earlier in development, in the code, in the CI/CD pipeline, in the third-party components agents pull in. As Nair put it: “Agentic architectures turn governance into a software supply chain problem.” That framing positions this as an extension of something the security industry already understands—knowing what’s in your software and whether it can be trusted.
What Evo AI-SPM Does
Evo AI-SPM is built around three automated agents. A Discovery Agent scans code repositories to generate a live AI Bill of Materials—an inventory of models, datasets, agent frameworks, MCP servers, and plugins. A Risk Intelligence Agent enriches that inventory with security context, including hallucination and bias metrics and vulnerability signals. A Policy Agent takes governance rules written in plain language and converts them into machine-enforceable guardrails that run natively in CI pipelines. The goal is to give security teams a real-time picture of what AI components exist in their environment and whether those components are actually complying with policy.
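The shape of that pipeline, inventory in, policy verdict out, can be sketched as a simple CI gate. To be clear, the data structure and policy fields below are assumptions for illustration, not Snyk's actual AI-BOM format or the Policy Agent's implementation:

```python
# Illustrative CI-stage policy gate over an AI bill of materials.
# The AIBOM schema, allow-list, and origin codes are assumptions,
# not Snyk's actual format.

ALLOWED_MODEL_PREFIXES = ("approved-vendor-a/", "approved-vendor-b/")
BLOCKED_ORIGINS = {"XX"}  # placeholder for origins a policy prohibits

def check_aibom(aibom: list[dict]) -> list[str]:
    """Return a list of policy violations; an empty list means the build passes."""
    violations = []
    for component in aibom:
        if component["type"] != "model":
            continue
        name = component["name"]
        if not name.startswith(ALLOWED_MODEL_PREFIXES):
            violations.append(f"model not on approved list: {name}")
        if component.get("origin") in BLOCKED_ORIGINS:
            violations.append(f"model from prohibited origin: {name}")
    return violations
```

In a CI job, a nonempty result would fail the build, which is the practical difference between a policy in a Confluence page and a machine-enforceable guardrail: the upgrade to an unapproved model version breaks the pipeline instead of silently shipping.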
One thing Nair and I got into was the verification problem. When agents produce code—or make architectural decisions nobody explicitly specified—you can end up with outputs that look fine but are difficult to audit. Static checking alone isn’t enough. You also have to understand what environment the agent is running in, what skills and MCP servers it has access to, and then dynamically test the result. Snyk’s API and Web testing capability, which also hit GA this week, handles that piece—probing deployed applications for authorization flaws like BOLA (broken object-level authorization) and IDOR (insecure direct object reference) that turn up often in AI-generated code and become more dangerous in agentic contexts.
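The core of a dynamic BOLA probe is simple to describe: fetch a resource as its owner to confirm it exists, then fetch it again with an unrelated user's credentials and see whether the server hands it over anyway. Here is a minimal sketch of that idea; the `fetch` callable and status-code convention are illustrative assumptions, not how Snyk's test engine actually works:

```python
# Minimal sketch of a dynamic BOLA/IDOR probe. The fetch callable
# stands in for an HTTP request returning a status code; it is an
# illustrative assumption, not Snyk's test engine.

from typing import Callable

def probe_bola(fetch: Callable[[str, str], int], resource_path: str,
               owner_token: str, other_token: str) -> bool:
    """Return True when the endpoint looks vulnerable: the owner can read
    the resource (sanity check) and so can an unrelated user."""
    owner_status = fetch(resource_path, owner_token)
    other_status = fetch(resource_path, other_token)
    return owner_status == 200 and other_status == 200
```

This is the kind of check that only works at runtime, against a deployed application: the missing ownership check is invisible in the code's syntax but obvious in the server's behavior.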
Early Access Results
WEX, a global payments and workflow company, was among the early access participants. In Snyk’s announcement, Jason Langston, director of product security at WEX, said: “It only took an afternoon to set it up and less time to pull a report and have full visibility. Being able to put our arms around the full breadth of what was actually in place was a super helpful foundation to start from.” Basic visibility into what AI components are actually running in your environment sounds like a modest goal, but based on what Snyk is seeing across customers, a lot of organizations are starting from scratch on that question.
What Is Available Now
Evo AI-SPM and API and Web testing are generally available. Agent Scan and Agent Red Teaming—which runs autonomous agents against AI applications to probe for prompt injection vulnerabilities, data exfiltration paths, and multi-step attack vectors—are in open preview. Agent Guard, which monitors live agent behavior and blocks risky tool calls at runtime, is still in private preview. A fair portion of the full platform is still being built out, which is worth knowing if you’re trying to put a comprehensive governance architecture in place today.
Planting a Flag in San Francisco
Snyk also opened a San Francisco innovation hub this week—positioned in the same part of downtown as Anthropic, Cursor, Cognition, and other companies building the AI development stack. Nair made the point that when Jensen Huang laid out his vision for the five layers of AI at Davos, security wasn’t on the list. Being physically embedded in the ecosystem where the AI stack gets built is part of how Snyk wants to change that. The space is intended to be open to AI engineers generally, with regular technical sessions and hackathons—not just a corporate outpost for Snyk employees.
The AI-SPM category is crowded and getting more so. But the problem Snyk is targeting is real. Autonomous agents that write, modify, and deploy code at machine speed have outpaced the governance models most organizations have in place. I think getting visibility into what your agents are actually doing—and enforcing policies where the code is written rather than after it ships—is the right approach. How well Snyk and the rest of the market execute on it remains to be seen.