Most organizations have gotten reasonably good at managing digital trust for people. A user logs in, credentials get verified, access gets granted or denied. That model has worked well enough in enterprise IT for a long time.
It was not built for the billions of embedded devices now running inside cars, aircraft, industrial control systems and critical infrastructure.
Those devices have to authenticate software, validate updates and protect communications—and there is no human at a keyboard to help. The identity frameworks built for people do not translate well to a connected sensor on a factory floor or an onboard computer in a commercial aircraft. That gap has existed for years. It is harder to ignore now.
I had an opportunity to chat with David Sequino, CEO of OmniTrust—formerly known as Integrity Security Services—about what this problem looks like in practice. OmniTrust has spent roughly 15 years doing embedded systems security work that most of the industry never saw: provisioning device identities into silicon, signing software images before they ship, managing cryptographic keys across automotive, aerospace, defense and industrial environments. The company says it has provisioned more than two and a half billion devices and signed over three billion software images. That is a lot of infrastructure most people have never thought much about.
Trust Has to Start in the Hardware
“We secure every embedded device where it matters,” Sequino told me.
That framing is worth sitting with. Most security conversations in the enterprise world start at the application layer—what software is running, what the network traffic looks like, whether credentials check out. Embedded security has to start earlier. Trust gets established before the OS boots, before any application runs, before any data moves. If the silicon is already compromised, everything built on top of it is suspect.
That is a fundamentally different problem than patching enterprise software. You can push an update to a fleet of laptops and mostly trust it will take. You cannot do the same with an embedded controller on an assembly line, a flight management system, or a defense system certified to DO-178 that has to stay that way. The update itself has to be authenticated. The device receiving it has to be verified. The software image has to be checked before it runs. None of that works without a hardware-anchored trust root.
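The verify-before-run flow described above can be sketched in a few lines. This is a simplified illustration, not OmniTrust's implementation: it uses a symmetric HMAC from the Python standard library for brevity, where a real device would check an asymmetric signature (e.g. ECDSA) against a public key burned into read-only hardware. All names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical key provisioned at manufacture. Real systems anchor a
# public key in silicon rather than storing a shared secret on-device.
DEVICE_KEY = b"provisioned-at-manufacture"

def sign_image(image: bytes, key: bytes) -> bytes:
    """Vendor side: produce an authentication tag over the software image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_apply(image: bytes, signature: bytes, key: bytes) -> bool:
    """Device side: refuse to flash or boot any image whose tag fails."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject the update entirely
    # ... only here would the device apply the update or boot the image ...
    return True

firmware = b"ecu-firmware-v2.4"
sig = sign_image(firmware, DEVICE_KEY)
print(verify_and_apply(firmware, sig, DEVICE_KEY))      # genuine image passes
print(verify_and_apply(b"tampered-image", sig, DEVICE_KEY))  # tampered image fails
```

The point of the sketch is the ordering: verification happens before anything executes, and a failed check is a hard stop, not a warning.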
For most of its history, this kind of work happened out of sight. The companies doing it sold into automotive OEMs and aerospace primes and defense contractors and quietly kept things running. That is starting to change.
AI Is Pushing the Problem Into the Physical World
One thing that keeps coming up when I write about AI is how fast deployment is running ahead of security thinking. Organizations are adopting AI tools without fully understanding what data those tools are consuming or where it goes.
Sequino put it plainly: “The world has changed forever… but we have to put some guardrails on it.”
The bigger issue he raised is not just data volume—it is data authenticity. When an autonomous system is making decisions, the data it relies on has to be verifiably untampered. Timestamped. Traceable back to a source that has actually been verified. That is harder to guarantee than most people realize.
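What "verifiably untampered, timestamped, traceable to a verified source" means mechanically can be shown with a small sketch. This is an illustrative toy, not a description of any vendor's protocol: each record binds a reading to a device identity and a timestamp, and an authentication tag covers all three, so altering any field is detectable. The key and field names are assumptions for the example.

```python
import hashlib
import hmac
import json
import time

KEY = b"device-identity-key"  # hypothetical; real devices hold keys in hardware

def attest(device_id, reading, key, ts=None):
    """Producer side: bind a reading to its source identity and a timestamp."""
    record = {"device": device_id, "value": reading,
              "ts": ts if ts is not None else time.time()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify(record, key):
    """Consumer side: check provenance before the data feeds a decision."""
    body = {k: v for k, v in record.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("mac", ""))

r = attest("sensor-7", 21.5, KEY)
print(verify(r, KEY))   # intact record verifies
r["value"] = 99.9       # tamper with the reading
print(verify(r, KEY))   # verification now fails
```

Changing the value, the timestamp, or the claimed device identity all invalidate the tag, which is exactly the property an autonomous system needs before acting on the data.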
Think about what it means to run an AI agent on a physical device—a robot, a vehicle, a drone. That agent makes decisions based on what it ingests. If someone can manipulate that data, inject a compromised software update, or spoof the device identity, the consequences are not a data breach in the usual sense. They are physical. Sequino pointed to Russian interference with Ukrainian drone operations as a real-world example of what happens when device-level trust breaks down. Not hypothetical. Happening now.
He framed the AI agent problem simply: “An agent running on a device is nothing more than another application… it’s just smarter.” That is actually a useful way to think about it. The security principles that matter for embedded systems—verify before you trust, anchor trust to hardware, maintain chain of custody from silicon to execution—apply just as directly to an AI agent running on a drone as they do to any other piece of code on a device.
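The "chain of custody from silicon to execution" idea can be sketched as a measured-boot accumulator, loosely in the style of a TPM platform configuration register: each stage's hash is folded into a running measurement, so a change anywhere in the chain, including in an AI agent loaded at the end, produces a different final value than the one recorded at provisioning. The stage names and reference value here are hypothetical.

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """Fold the hash of the next boot component into the running measurement,
    similar in spirit to a TPM PCR extend operation."""
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure_chain(stages):
    m = b"\x00" * 32  # initial value, as in a reset register
    for stage in stages:
        m = extend(m, stage)
    return m

# Reference measurement recorded at provisioning time.
golden = measure_chain([b"bootloader-v1", b"os-kernel-v5", b"agent-runtime-v3"])

# At boot, the same chain reproduces the reference...
print(measure_chain([b"bootloader-v1", b"os-kernel-v5", b"agent-runtime-v3"]) == golden)
# ...but a swapped agent runtime does not.
print(measure_chain([b"bootloader-v1", b"os-kernel-v5", b"tampered-agent"]) == golden)
```

This is why treating the agent as "just another application" works: it slots into the same chain and gets measured like everything else that runs on the device.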
Regulators Are Catching Up
For a long time, embedded security was largely voluntary outside of specific sectors like defense. That is changing. The FDA now has cybersecurity requirements under Section 524B of the Federal Food, Drug and Cosmetic Act that give it authority to block medical device shipments if manufacturers do not meet baseline standards. In automotive, ISO/SAE 21434 and UN Regulation 155 are reshaping how carmakers handle software security across the vehicle lifecycle. These are not draft proposals; they are binding requirements with enforcement behind them.
The challenge with any compliance framework is that it describes the threat landscape as it was understood when the standard was written. The attack surface moves faster than the regulatory process. That is especially true when the threat involves AI systems making decisions at machine speed on physical infrastructure.
A Pattern Worth Paying Attention To
I have watched this play out with enough technologies that it starts to feel familiar. A new capability shows up. Everyone moves fast to adopt it. Security gets treated as something to sort out later—something to bolt on once the thing is already running. That approach has caused a lot of problems with enterprise software. With physical systems running AI, getting it wrong is more consequential.
You cannot patch a drone mid-flight. You cannot undo a decision a robot already made. As Sequino noted, once data is out, it is out. The standard playbook of deploy now and secure later does not hold when the thing you deployed is making physical decisions in the real world.
The vendors who have spent the past decade or two quietly solving the embedded trust problem—device identity, code signing, secure boot, cryptographic key management at scale—are not names most CISOs drop in conversation. But as AI pushes intelligence further into vehicles, infrastructure and industrial systems, that work becomes foundational to what digital trust actually means. The question is whether the broader security industry starts paying attention before something goes badly wrong.