Automated traffic on the internet grew nearly eight times faster than human traffic in 2025. The more important shift isn’t the volume—it’s what that automation is actually doing now.
For years, the bot problem was mostly a nuisance. Scrapers grabbed pricing data. Crawlers hoovered up content. Credential stuffers hammered login pages. Those are still real problems. But the nature of automated traffic has changed, and most organizations’ security thinking hasn’t caught up.
AI agents aren’t just reading the web anymore. They’re transacting on it.
A Different Kind of Automation
A new benchmark report from HUMAN Security, which analyzed more than one quadrillion interactions across its customer base in 2025, puts numbers to the shift. Monthly AI-driven traffic volumes grew 187% from January to December. Agentic AI traffic—systems that browse, fill forms, manage accounts, and complete purchases on behalf of users—grew 7,851% year over year.
An AI agent completing a checkout isn’t just browsing. It’s making a financial decision on behalf of a human user, interacting with payment systems and account infrastructure. The security implications are fundamentally different from a scraper reading your product pages.
I had an opportunity to chat with Todd Thiemann, cybersecurity industry analyst with Omdia, about what that shift means for security teams. His framing was direct: “AI agents hold the promise of improving efficiency and productivity, but those new identities need to be managed and secured for compliance reasons, for cybersecurity reasons, and to facilitate growth of the business.”
AI agents aren’t just another traffic type to classify. They’re a new category of entity that can act, decide, and commit—and most enterprise identity frameworks weren’t built with them in mind.
The Wrong Question
Security teams have spent years asking one question: Is this traffic from a bot or a human? That framing made sense when bots were mostly adversarial and humans were mostly legitimate. It doesn't hold anymore.
An AI agent browsing product pages, logging into an account, and completing a purchase is doing exactly what a sophisticated bot attack looks like. The behavior is functionally identical. The difference is intent—and intent doesn’t show up in a user-agent string.
Across all the interactions analyzed, only half of one percent separates benign automation from malicious automation: a margin far too thin for blanket policies. Organizations that block all automation will turn away legitimate agentic commerce. Those that allow it unchecked absorb fraud. The real question isn't whether traffic is automated; it's whether a given interaction is trustworthy.
Threat Actors Are Following the Same Playbook
Threat actors are targeting the same surfaces where agentic AI operates: product pages, account management flows, and checkout. That overlap isn’t coincidental.
Post-login account compromise attempts more than quadrupled in 2025, averaging 402,000 per organization. Login-point defenses have improved enough that attackers now wait until after authentication, abusing session tokens and exploiting weak step-up controls rather than forcing their way through the front door.
At the median organization, scraping attacks now account for nearly 20% of web traffic, almost double the 2022 rate; for heavily targeted organizations, the share exceeds 60%. Carding volume is up 250% over the same period.
Researchers have already documented AI agents executing carding attacks—cycling through card additions and payment attempts via agentic browsers, mirroring established fraud workflows without manual effort. The same tools built to help consumers shop are proving equally useful for fraud.
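As a rough illustration of the kind of control that card-cycling pattern runs into, here is a minimal velocity-check sketch in Python. The thresholds, function names, and in-memory store are illustrative assumptions, not anything from the report; a production system would tune the limits and persist state in shared storage.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds -- real systems tune these to their own risk appetite.
MAX_CARD_ADDS = 3        # distinct payment-instrument additions allowed...
WINDOW_SECONDS = 3600    # ...per account, per hour

_card_add_times = defaultdict(deque)   # account_id -> timestamps of recent additions

def allow_card_addition(account_id: str, now: float | None = None) -> bool:
    """Velocity check aimed at the rapid card-cycling behavior carding relies on."""
    now = time.time() if now is None else now
    window = _card_add_times[account_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                 # drop additions that fell out of the window
    if len(window) >= MAX_CARD_ADDS:
        return False                     # too many additions: route to review or step-up
    window.append(now)
    return True
```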
Declared Identity Is Already Unreliable
The spoofing problem compounds this. Attackers masquerade as recognized AI crawlers—claiming to be ChatGPT, Mistral, or Perplexity bots—to exploit the trust organizations extend to those names. Whitelisting based solely on user-agent strings grants access to actors who aren’t who they claim to be. And the same company can operate crawlers, scrapers, and agentic systems simultaneously, so operator-level access decisions don’t map cleanly to behavior. Declared identity is the starting point, not the answer.
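One common mitigation is forward-confirmed reverse DNS: resolve the connecting IP to a hostname, check that it belongs to the operator the user-agent claims, then resolve that hostname back and confirm it returns the same IP. A minimal Python sketch follows; the domain-suffix table is illustrative (Google documents this technique for Googlebot), and entries for other operators would need to follow each operator's own published verification guidance.

```python
import socket

# Illustrative mapping of declared bot names to accepted reverse-DNS suffixes.
# Only the Googlebot entry reflects documented guidance; anything else added
# here is an assumption to be checked against the operator's own docs.
DECLARED_BOT_DOMAINS = {
    "Googlebot": (".googlebot.com", ".google.com"),
}

def verify_declared_bot(client_ip: str, declared_name: str) -> bool:
    """Forward-confirmed reverse DNS: does the IP resolve to a hostname owned by
    the claimed operator, and does that hostname resolve back to the same IP?"""
    suffixes = DECLARED_BOT_DOMAINS.get(declared_name)
    if not suffixes:
        return False                                        # unknown claim: unverified
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)    # reverse lookup
        if not hostname.endswith(suffixes):
            return False                                     # wrong domain
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward lookup
        return client_ip in forward_ips                      # must round-trip
    except OSError:
        return False                                          # DNS failure: fail closed
```

Verification like this doesn't answer the trust question on its own, but it does stop a spoofed user-agent string from inheriting an established operator's reputation.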
The Architecture Gap
The tools built for a human-centric internet weren’t designed for this. Bot detection assumes most legitimate traffic is human. CAPTCHAs and rate limits assume humans have a natural ceiling on request volume. None of those assumptions holds when a legitimate shopping agent might browse 200 product pages in a minute before completing a purchase.
What’s needed is the ability to understand the intent behind every interaction and apply trust dynamically across the full session lifecycle—not just at the point of login. That means knowing which agents are operating, what they’re authorized to do, and whether downstream actions carry appropriate permissions.
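A minimal sketch of what that could look like in practice, with every name and signal an illustrative assumption rather than any vendor's actual API: each in-session action is checked against the agent's verified identity, its delegated scopes, and whether a recent step-up confirmation exists.

```python
from dataclasses import dataclass

# Hypothetical action list for illustration; real catalogs are application-specific.
SENSITIVE_ACTIONS = {"add_payment_method", "change_email", "checkout"}

@dataclass
class SessionContext:
    is_automated: bool        # classified as agent/bot traffic
    operator_verified: bool   # declared identity confirmed (e.g., verified DNS or signed headers)
    scopes: frozenset         # what this agent is authorized to do on the user's behalf
    recent_step_up: bool      # fresh re-authentication or user confirmation in this session

def authorize_action(session: SessionContext, action: str) -> str:
    """Return 'allow', 'step_up', or 'deny' for a single in-session action."""
    if action not in SENSITIVE_ACTIONS:
        return "allow"                                   # low-risk reads pass through
    if session.is_automated and not session.operator_verified:
        return "deny"                                    # unverified automation never commits funds
    if action not in session.scopes:
        return "deny"                                    # outside delegated authority
    if not session.recent_step_up:
        return "step_up"                                 # require fresh user confirmation
    return "allow"
```

The point of the sketch is the shape, not the specifics: trust is evaluated per action across the session, not granted once at login.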
Thiemann put the defender’s challenge plainly: “From a defender perspective, you need to consider human identities, non-human identities, and now AI agents that can make decisions and take action. Teams need to manage and secure AI agents to avoid data breaches, fraud, and other mischief, and need to do it efficiently to accelerate their businesses.”
Most organizations don’t have that architecture yet. They’re running 2018 defenses against 2026 traffic.
Where This Goes Next
More than 95% of AI-driven traffic flows through retail and e-commerce, streaming and media, and travel and hospitality. OpenAI alone accounts for roughly 69% of all observed AI bot traffic. Organizations in those sectors are already living in this environment. Early 2026 data suggests the momentum hasn’t slowed, and the policy decisions they make now—who gets access, under what conditions, with what verification—will shape risk exposure and revenue for years.
The internet has already crossed the threshold. The majority of traffic is automated. AI agents are buying things. Fraud follows the same surfaces that legitimate automation does. The security question has changed.
The tools most organizations are running haven’t.