
Is it Possible to Fight AI and Win?

Attackers are using the latest in AI automation, while defenders remain stuck patching the past and just trying to keep up.

At this point, most security teams can detect malicious activity on their networks. The problem is that only a select few can connect the dots between isolated alerts, behavioral anomalies, and threat intelligence. Without the ability to correlate with real-time context, organizations end up with a pile of alerts and no story. That costs precious time, and in some cases it means teams cannot act when they absolutely must.
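To make "connecting the dots" concrete, here is a minimal sketch of alert correlation: grouping alerts that share an entity (here, a host) inside a time window and enriching the resulting chain with threat intelligence. All field names, alert types, and intel entries are invented for illustration; real correlation engines weigh many more signals.

```python
from collections import defaultdict

# Hypothetical alert records; every field name and value is illustrative.
alerts = [
    {"ts": 100, "host": "web-01", "type": "suspicious_login"},
    {"ts": 160, "host": "web-01", "type": "privilege_escalation"},
    {"ts": 170, "host": "db-02",  "type": "port_scan"},
    {"ts": 220, "host": "web-01", "type": "data_exfil_attempt"},
]

# Invented threat-intel lookup keyed by alert type.
threat_intel = {"data_exfil_attempt": "matches known C2 pattern"}

WINDOW = 300  # seconds; alerts on the same host within this gap correlate


def correlate(alerts, window=WINDOW):
    """Group alerts by host, then chain those whose gaps fit the window."""
    by_host = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        by_host[a["host"]].append(a)
    stories = []
    for host, items in by_host.items():
        chain = [items[0]]
        for a in items[1:]:
            if a["ts"] - chain[-1]["ts"] <= window:
                chain.append(a)
            else:
                stories.append((host, chain))
                chain = [a]
        stories.append((host, chain))
    return stories


for host, chain in correlate(alerts):
    types = [a["type"] for a in chain]
    intel = [threat_intel[t] for t in types if t in threat_intel]
    print(host, "->", " -> ".join(types), "| intel:", intel or "none")
```

The point of the sketch: three alerts that look routine in isolation become one story (login, escalation, exfiltration attempt on `web-01`) once correlated by entity and time.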

What worries me is the constancy of the attacks. Security teams are in a battle every single day, and most of the fight is subtle: low-and-slow attacks, behavioral drift, and signals that don’t trigger alarms but still mean something is not normal. Companies need to invest heavily in understanding normal behavior, surfacing the outliers, and building a real-time picture that makes sense before real damage is done.
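"Understanding normal behavior and surfacing the outliers" can be sketched in its simplest form as a baseline plus a deviation threshold. The numbers below are made up, and a real anomaly detector would model seasonality and many dimensions at once, but the shape of the idea is this:

```python
import statistics

# Illustrative daily event counts for one account; the numbers are made up.
baseline = [42, 39, 44, 41, 40, 43, 38, 45, 41, 40]


def is_outlier(history, value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)  # population stdev of the baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold


print(is_outlier(baseline, 97))  # True: a spike far outside normal range
print(is_outlier(baseline, 42))  # False: within everyday variation
```

The behavioral-drift problem the article describes is exactly the gap this toy model misses: an attacker who stays just under the threshold, which is why baselines themselves must be continuously re-learned and cross-checked.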

What’s the most important thing security teams need to figure out? Organizations must stop talking about AI as if it were a Death Star. AI is not a single, all-powerful, monolithic entity. It’s a stack of threats, behaviors, and operational surfaces, and each one has its own kill chain, controls, and business consequences. We need to break AI down into its parts and run a real campaign to defend ourselves. Right now, most companies are not running a campaign. They are running pilots and proofs of concept, adding an administrative policy, calling it governed, and hoping nothing bad happens.

Trustworthy AI

It’s true that AI will continue to take the oxygen out of the room for years to come. There will always be some creative “next-gen” threat or “next-gen” defensive capability setting the next trend. But as humans move further outside the loop, we’ll be asking harder questions: How do we verify what the model did? Who’s accountable if it’s wrong? How do we audit that?

Our next conversation should be, and will be, focused on building trustworthy AI. And that will start to reshape everything from procurement to policy.

Security professionals, IT professionals, regular employees, and an organization’s board all have to understand that AI is not going to solve all of our problems, nor are we close to a world where AI is THE answer. The most important job will be an organization’s ability to respond to AI responsibly and quickly to safeguard the crown jewels, e.g., PII, the company’s IP, or any asset that defines the company’s success. It’s ironic, but we now have a potentially excellent weapon system (AI) to defend ourselves against the very thing being used against us (AI).

What we should be doing

The next evolution of cybersecurity demands adversarial training not just for people, but for our models. AI must learn to fight AI. Not in a Terminator fantasy of clashing robots, but in real-time detection and response systems intelligent enough to anticipate how both malicious and accidental misuse will unfold.

The security question has to be “How do we operationalize AI safely?”

It starts with the data: ensuring that the data we allow in, and the systems we connect to, have that data layer protected in the right way. Security vendors and their customers have to build in safety so that inevitable human errors can be mitigated or eliminated to an acceptable degree. If your controls only work when people do the right thing, they are not controls.
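One example of a control that does not depend on people doing the right thing is automatic redaction of sensitive data before anything reaches an external model. This is a deliberately minimal sketch with two invented patterns; a production deployment would need a far richer detection layer, but the principle is the same: the guardrail runs whether or not the user remembered it.

```python
import re

# Minimal, illustrative redaction patterns; a real system needs many more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text):
    """Strip recognizable PII before text is sent to any external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text


print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
```

Because redaction sits in the data path rather than in a policy document, a distracted employee pasting a customer record into a prompt is mitigated by default, which is the difference between a control and a request.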

If AI is going to be operationalized inside your business, it should be treated like a business function. Not a feature or experiment, but a real operating capability. When you look at it that way, the approach becomes clearer because businesses already know how to do this. There is always an equivalent of HR, finance, engineering, marketing, and operations. AI has the same needs.

There has to be a concept of onboarding and offboarding, access and policy, even if those policies are technical in nature. There has to be a financial and risk lens that asks what exposure exists and whether the organization is willing to pay not just for the technology, but for the controls required to use it responsibly. There has to be clarity around adoption so that people understand what the capability is for, how it should be used, and how it should not be used. Engineering has to be intentional about integration and use case development so data flows and system connections are understood before something goes wrong. And operations have to enable, support, monitor, and own it day to day.
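The onboarding/offboarding and access-and-policy pieces above can be sketched as a deny-by-default check: every tool name, role, user, and data class below is invented for illustration, but the structure (explicit allow-lists, and offboarding that revokes access automatically) is the point.

```python
# Hypothetical access policy for an internal AI capability; names are invented.
policy = {
    "ai-summarizer": {
        "allowed_roles": {"analyst", "engineer"},
        "allowed_data": {"public", "internal"},
    },
}

users = {
    "alice": {"role": "analyst", "active": True},
    "bob":   {"role": "contractor", "active": False},  # offboarded
}


def may_use(user, tool, data_class):
    """Deny by default: only active users in an allowed role, on allowed data."""
    u = users.get(user)
    p = policy.get(tool)
    if not u or not p or not u["active"]:
        return False
    return u["role"] in p["allowed_roles"] and data_class in p["allowed_data"]


print(may_use("alice", "ai-summarizer", "internal"))  # True
print(may_use("bob", "ai-summarizer", "public"))      # False: offboarded
```

Treating the policy as data rather than prose means offboarding is one flag flip, and an unlisted tool or data class fails closed instead of open.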

When AI is structured this way, it stops being something people play with and starts becoming something the business can trust.

The future belongs to faster learners

If organizations don’t learn how to defend against AI or deploy a solid defensive strategy, negative consequences will follow: rising insurance premiums, stricter compliance audits, privacy fines that multiply, and evaporating trust in the company’s brand.

Quick fixes aren’t enough in the AI era. The bad actors are innovating at machine speed, so humans must respond at machine speed with appropriate human direction and ethical clarity. AI is a tool. And the side that uses it better will win.

If that isn’t enough, AI will force another reality that organizations need to prepare for. Security and compliance will become an on-demand model. Customers will not wait for annual reports or scheduled reviews. They will click into a dashboard and see your posture in real time. Your controls, your gaps, and your response discipline will be visible when it matters, not when it is convenient.

In that world, there is no time to prepare for the audit, the questionnaire, or the customer conversation. You are either operating safely every day, or you are not. AI will not replace security teams. But poorly governed AI, exposed in real time, will absolutely replace companies…fast.

Roland Palmer