
What the Breach Reveals That the Budget Never Did

There’s a pattern that shows up in incident response work that nobody talks about in the vendor briefings. You bring in forensics after something goes wrong, and somewhere in that process, you find a tool — already deployed, already licensed, sometimes running for years — that had the data to catch what happened. Nobody was looking at it. In some cases, it wasn’t even configured correctly.

Max Henderson runs global digital forensics and incident response at Kroll. He’s seen this enough that it’s not a surprise anymore. That’s part of what makes him a useful person to talk to about Kroll’s new cyber resilience research — he’s not reading a survey and drawing conclusions. He’s comparing it against what he actually finds on cases.

I had him on the TechSpective Podcast, and we started where I always start with someone who’s close to research like this: not the findings, but what surprised him. His answer goes somewhere I didn’t expect, and it reframes a lot of what follows. It’s not about a specific attack type or a new threat category. It’s about a structural problem in how organizations think about security investment — one that keeps showing up regardless of how much they’ve spent.

The report itself covers 1,000 decision-makers across 10 countries. The headline numbers are familiar in their frustration — 94% treat cybersecurity as a top risk, budgets are up, nearly everyone has an incident response plan. And yet 72% still report misalignment between security priorities and business decisions. That gap has a real explanation, and Max offers one that makes more sense than the usual “leadership doesn’t get it” framing.

We spent some time on the confidence problem. Organizations consistently overestimate their readiness — not because they’re being dishonest, but because of how the question gets asked internally and who’s answering it. The gap between saying you can quantify cyber risk and actually being able to do it when something happens is significant. Max has watched that gap reveal itself in real time during incidents, in rooms with executives who are hearing for the first time how long they might be down.

The speed problem isn’t getting better. Kroll’s data on outbreak times is uncomfortable, and the percentage of organizations that feel equipped to respond within that window is even more uncomfortable. AI is part of why timelines are compressing — but not in the way most people fixate on. The most effective attacks Max is seeing right now don’t involve sophisticated AI-enhanced exploits. They involve someone picking up the phone. The gap between where organizations focus their security investment and where they’re actually getting hit is one of the more consistent findings across Kroll’s casework.

The AI discussion goes a few directions. There’s the attacker side, which is getting more attention. But there’s also what happens when organizations build out powerful AI infrastructure internally and what that looks like as a target. Max made a point about MCP servers specifically that I hadn’t heard framed that way before — the security risk isn’t necessarily about abusing the AI itself, it’s about what you’ve handed to whoever can get onto that system. There’s also a thread on agentic AI and the forensic problems it creates that I think is going to become a much bigger conversation.

I asked him at the end where he’d tell an organization to start — if they could only focus on one priority, what would get them 80% of the way there? The answer connects back to where we opened.

Full episode on YouTube and wherever you get podcasts.
