Security operations teams are working harder than ever, and their tools have never been better. And yet most SOCs still miss the majority of their alerts every single day.
Ponemon Institute just published the 2026 State of SecOps Report—commissioned by Crogl—that looks at the real state of security operations in 2026: how effective SOC teams actually are, where AI is helping, where it isn’t, and what separates the organizations doing well from everyone else. Crogl sells an AI-powered platform for security operations, so that sponsorship is worth keeping in mind. But the data in this report lines up with what I’ve heard from security practitioners for years, and it’s worth looking at closely.
The Alert Volume Problem Is Worse Than People Admit
The average organization in the study generates 4,330 security alerts per day. Seven IT staff members are responsible for managing them. Those analysts end up investigating just 37% of what comes in.
So on an average day, roughly two out of every three security alerts go completely uninvestigated. Not because the team doesn’t care. Not because the tools aren’t running. The volume is just unworkable at human speed.
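To make the scale concrete, here is a quick back-of-the-envelope calculation using the report's averages (the numbers come from the study; the breakdown per analyst is my own arithmetic, not a figure from the report):

```python
# Illustrative math using the report's averages.
alerts_per_day = 4330      # average alerts generated per day
investigated_rate = 0.37   # share of alerts analysts actually investigate
analysts = 7               # IT staff responsible for triage

investigated = alerts_per_day * investigated_rate   # ~1,602 alerts
uninvestigated = alerts_per_day - investigated      # ~2,728 alerts
per_analyst = investigated / analysts               # ~229 per analyst

print(f"Investigated:   {investigated:,.0f} alerts/day")
print(f"Uninvestigated: {uninvestigated:,.0f} alerts/day")
print(f"Per analyst:    {per_analyst:,.0f} investigations/day")
```

Even the alerts that do get looked at work out to well over two hundred investigations per analyst per day, which is its own kind of unworkable.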
Only 43% of organizations rate themselves as effective at identifying and responding to alerts.
AI Is in the SOC—But the Integration Is Broken
62% of organizations in the study have adopted AI in some form. Of those with a SOC, 57% have already deployed AI within it. This isn’t a future conversation anymore.
But there’s a real gap between deploying AI and deploying AI that works at scale. When respondents were asked about the biggest barriers to AI in the SOC, 50% pointed to difficulty integrating AI into existing workflows. Another 49% said their data was too dispersed and hard to normalize.
Those two answers are really the same problem. AI struggles to integrate into SOC workflows because the underlying data is fragmented—scattered across systems that were never designed to share information. You can’t build a functional AI pipeline on top of that without doing months of data normalization work first. And that normalization project starts breaking the moment new data sources come online.
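The normalization problem the survey points to can be illustrated with a small sketch. Every field name, vendor shape, and the common schema below are hypothetical, but the pattern is typical: each tool emits alerts in its own format, and the pipeline needs a hand-written mapping per source before any AI triage can run on top.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical common schema a SOC pipeline might normalize into.
@dataclass
class NormalizedAlert:
    timestamp: datetime
    source: str
    severity: int        # 1 (low) .. 4 (critical)
    host: str
    description: str

# Each source needs its own mapping, and the mapping breaks
# whenever the vendor renames or restructures a field.
def from_edr(raw: dict) -> NormalizedAlert:
    return NormalizedAlert(
        timestamp=datetime.fromtimestamp(raw["epoch_ms"] / 1000, tz=timezone.utc),
        source="edr",
        severity={"low": 1, "medium": 2, "high": 3, "critical": 4}[raw["sev"]],
        host=raw["device_name"],
        description=raw["detection"],
    )

def from_siem(raw: dict) -> NormalizedAlert:
    return NormalizedAlert(
        timestamp=datetime.fromisoformat(raw["@timestamp"]),
        source="siem",
        severity=int(raw["priority"]),
        host=raw["hostname"],
        description=raw["rule_name"],
    )

# Only once everything is in one shape can automated triage reason
# across sources at all.
alerts = [
    from_edr({"epoch_ms": 1764633600000, "sev": "high",
              "device_name": "ws-042", "detection": "credential dumping"}),
    from_siem({"@timestamp": "2026-01-15T09:30:00+00:00", "priority": "2",
               "hostname": "db-prod-1", "rule_name": "impossible travel"}),
]
for a in alerts:
    print(a.source, a.severity, a.host, a.description)
```

Multiply those mapping functions by dozens of sources, each drifting independently, and the "months of normalization work" the respondents describe starts to look conservative.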
Only 44% of respondents said AI alone is highly effective at reducing threats. For technology that’s been positioned as the answer to alert overload, that’s a pretty underwhelming number.
Analysts Are Still the Most Important Variable
52% of respondents rated human analysts as highly effective as the final line of defense in an AI-powered SOC. Only 44% said the same about AI alone. AI is faster and better at processing volume, but it isn’t built to exercise judgment on ambiguous signals or make high-stakes decisions with incomplete information. Those are still human capabilities, and this survey reflects that.
The most cited benefit of AI in the SOC was speed—67% said it helps resolve alerts faster, and 57% said it frees up analyst bandwidth for higher-priority work. That’s where the value is.
Monzy Merza, CEO of Crogl, noted the tension directly. “Everyone’s asking whether AI will replace the SOC analyst,” he said. “The data shows human analysts are still the most effective element in an AI-powered SOC. Attackers are using agentic systems. SOCs need agentic systems and new processes. The real question is whether your AI is good enough to make your analysts better.”
Consistency and Visibility Are Bigger Issues Than Most Deployments Acknowledge
63% of respondents said consistency in how AI behaves is very or highly important. That’s understandable. When an AI system marks a case as resolved and the analyst can’t see why, trust breaks down—and when trust breaks down, analysts work around the AI instead of with it. Consistency is what determines whether the tool actually gets used.
The governance issue runs deeper than the SOC floor. A separate report from IANS, Artico Search and The CAP Group found that 95% of CISOs regularly update their boards on cybersecurity, but only 30% of boards describe that relationship as strong and collaborative. Nearly half of board respondents—47%—said reporting on AI-driven risk specifically needs improvement.
That has practical consequences. If boards aren’t getting a clear picture of AI risk, CISOs are less likely to get the budget and mandate to govern it well. And if AI governance in the SOC is weak, the tools meant to help can create new exposure.
On that point—61% of Ponemon respondents said they’re highly concerned about third-party AI vendors using their security data to train or enrich AI services. Security data is sensitive in a specific way. It exposes infrastructure topology, detection logic, response playbooks—things that tell an attacker where the gaps are. Once that data enters a third-party AI pipeline, the organization doesn’t control it anymore.
45% of SOC environments in the study operate on air-gapped networks. For those teams, cloud-hosted AI tools aren’t a tradeoff decision. They’re simply not an option.
What the Better-Performing Organizations Actually Do
34% of respondents report consistently strong security postures. A few things stand out about that group. They’re more likely to keep SecOps in-house—47% vs. a 34% average. They’ve deployed AI in the SOC at a higher rate (68% vs. 46%). And 72% said consistency in AI use is very or highly important, compared to 54% of everyone else.
They haven’t replaced analysts with AI. They’ve used AI to make their analysts more effective.
The threat environment context makes all of this more urgent. BlackFog’s 2025 State of Ransomware Report found that publicly disclosed ransomware attacks jumped 49% year over year, hitting a record 1,174 incidents. Even that figure understates it—BlackFog estimates 86% of ransomware attacks in 2025 went undisclosed, with organizations quietly negotiating rather than surfacing incidents publicly. And 2025 saw the first documented large-scale AI-enabled ransomware campaign, where attackers used an AI model to autonomously handle reconnaissance, exploitation and data theft.
SOC teams are already investigating a fraction of the alerts they receive. Attackers are running AI-enabled campaigns at machine speed. The organizations that figure out how to close that gap—by giving analysts better tools and better data, not by trying to remove them from the equation—are the ones that will be in a defensible position when it counts.