The Gap Between Buying Security and Actually Having It

There’s a version of this conversation that happens inside almost every organization, every year. The security team presents to leadership. There’s a slide about the threat landscape. Another slide about what’s been deployed — endpoint protection, identity management, SIEM, maybe something AI-assisted. The budget number is up from last year.

Everyone nods. Cyber is serious. The company takes it seriously. The meeting ends.

What doesn’t come up is whether any of those tools are configured to actually do what the organization believes they’re doing. Whether the detection rules someone wrote 18 months ago still make sense in the current environment. Whether the security architecture has been tested against how a real attacker would actually move through it.

That’s the gap Kroll set out to measure with its new cyber resilience research, and the data makes for uncomfortable reading.

The numbers don’t add up

The survey covers 1,000 cybersecurity decision-makers across 10 countries, conducted by Sapio Research in late 2025. Ninety-four percent view cybersecurity as a core or top organizational risk. Eighty percent increased their security budgets for 2026. Almost all of them have an incident response plan on paper.

And 72% still frequently experience misalignment between cybersecurity priorities and actual business decisions. Only 10% reach what the research classifies as very high cyber maturity. The average financial impact of a cyber incident is more than $20.9 million.

Adam Malone runs Kroll’s incident response, intelligence, and managed services portfolio. Before that, he spent years as an FBI special agent working cyber — the Middle East desk, large enterprise takedowns, economic espionage cases. When I asked him what the data looks like against actual investigations, he didn’t sugarcoat it.

“I can’t tell you how many incidents I’ve worked where, when we figure out how an attacker got as widespread as they did, the first answer was, ‘I didn’t know that account existed. I didn’t know that account had that many permissions. Oh, that’s for an old development project from two years ago, and it’s still there.’”

That’s not a spending problem. That’s a know-what-you-have problem.

You bought the potential, not the capability

Malone referenced data from Kroll’s work with CrowdStrike — specifically, that somewhere in the range of 30 to 40 percent of CrowdStrike’s own customers haven’t configured the platform to the vendor’s recommended best practices. These organizations purchased a best-in-class tool, deployed it, and are running a materially weaker version of it than they think. Their executives almost certainly have no idea.

This is a common pattern. And the survey data reflects it at scale.

Organizations rank newer and more advanced technologies as the top thing they want to add to their security programs. Red and purple teaming — the activities that would actually tell you whether the tools you already own are doing their job — sits at the bottom of investment growth priorities. Twenty-four percent of organizations cut red and purple team spending last year.

There’s a reason for this, even if it’s not a good one. Red teaming finds problems. New technology investments can be demonstrated, announced, and reported as progress. One looks good in a board presentation. The other produces a list of things that aren’t working, which is a harder conversation to have when you’ve spent the last 12 months explaining how much you’ve invested.

The confidence gap

Ninety-six percent of surveyed organizations say they’ve quantified the financial impact of cyber risk. Sixty percent of those say the number is a rough estimate. That distinction does a lot of work that tends to get glossed over.

A rough estimate built on qualitative self-assessment feels real enough to anchor decisions without actually connecting to how the environment is configured or how fast it could be compromised.

At the executive and board level, most cyber risk assessment is still qualitative — interview a few dozen people, collect some documents, arrive at a position. “Think about the difference between a red team and a penetration test,” Malone said. “I’m making investments in this application ecosystem. I hire a team to come in and test it quantitatively. But when we talk about the strategic enterprise, a lot of times it’s done in an entirely qualitative fashion.”

That gap is exactly where the overconfidence lives. The CrowdStrike 2026 Global Threat Report puts the average e-crime breakout time in 2025 at 29 minutes, 65% faster than the year before. The fastest on record was 27 seconds. The Kroll survey finds 72% of organizations think they can respond to a serious attack within 24 hours. That’s not a minor discrepancy.

Security drift

Security controls degrade over time, even when nothing obviously goes wrong. Detection rules written for a previous environment go stale when the environment changes. Permissions granted for a project stick around long after the project ends. Configurations that reflected vendor best practices 18 months ago no longer do. Nobody made a bad decision. The environment moved, and the controls didn’t move with it.

“Security drift is real,” Malone said. “You have to make sure controls work and continue to work if the environment changes.”

Continuous red and purple teaming catches this before an attacker does. The fact that only 3% of organizations update their incident response plan after an actual incident fits the same pattern — most are running on calendar schedules and internal assumptions rather than anything learned from watching their environment get tested under real conditions.

What the maturity data actually shows

Ten percent of surveyed organizations hit the highest maturity tier. The outcome differences aren’t marginal. Among organizations at the lowest maturity levels, 89% experienced a security incident involving AI applications or models in the past two years. At the highest tier, that drops to 54%, and 46% reported no AI-related incidents at all. Very-high-maturity organizations are six times more likely to spend more than 20% of their AI budget on testing security controls.

The difference isn’t how much money high-maturity organizations spend overall. It’s that they allocate a meaningful portion of their AI budget to confirming that what they’ve deployed actually works.

AI is the same problem at higher speed

Seventy-six percent of respondents experienced a security incident involving AI applications or models in the past 24 months. Forty-eight percent have little to no governance over how employees adopt AI tools. Organizations spend an average of 13% of their AI initiative budget on security testing.

If the deployment-to-validation ratio was already broken before AI entered the picture, adding AI without fixing it doesn’t create new risks so much as it accelerates the existing ones. The same stale-controls problem applies, just faster.

I’ve covered enough of these surveys to recognize the pattern. Awareness is high, preparedness is overstated, spending points in directions that don’t map to actual attack vectors. What the Kroll research adds is a specific accounting of the mechanism — the gap between purchase and configuration, between deployment and validation, between the number in the board presentation and what would actually happen if someone tested it. The problem isn’t new. The specificity is useful.

Whether organizations do anything about it is, as always, a different question.

Tony Bradley: I have a passion for technology and gadgets and a desire to help others understand how technology can affect or improve their lives. I also love spending time with my wife, 7 kids, 4 dogs, 5 cats, a pot-bellied pig, and a sulcata tortoise, and I like to think I enjoy reading and golf even though I never find time for either. You can contact me directly at tony@xpective.net. For more from me, you can follow me on Threads, Facebook, Instagram and LinkedIn.