
When Billions of Devices Need to Trust Each Other

There’s a number that keeps coming up in conversations about connected devices: 15 billion. That’s roughly how many IoT endpoints are active today, and by some estimates, it could triple within a decade. Every one of those devices needs to boot securely, accept legitimate software updates, and reject everything else.

Most organizations haven’t really figured out how to do that at scale. It’s easy to manage device trust when you’re talking about a few thousand endpoints. It gets harder fast when you’re talking about hundreds of millions of embedded systems spread across cars, aircraft, industrial controllers, and medical devices—systems that run for years or decades and can’t always be easily patched or replaced.

David Sequino, co-founder and CEO of OmniTrust (formerly INTEGRITY Security Services), has been working on this for about 15 years. The company has provisioned more than 2.5 billion devices and signed over 3 billion software images. Roughly 60% of that work is in automotive, with the rest spread across aerospace, defense, industrial controls, and medical. These aren’t laptops or phones. They’re systems where a compromised identity or a bad software update can have real physical consequences.

“The business was born in decades of securing safety-critical infrastructure,” Sequino told me. “We focused on hard-to-solve problems.”

Embedded Security Is a Different Problem

Ask most enterprise security teams about device identity, and they’ll talk about certificates and PKI. That’s a reasonable answer for managing endpoints on a corporate network. It’s not a complete answer for embedded environments.

In those environments, the standard enterprise approach doesn’t map cleanly. Devices may have no persistent internet connection. The software running on them has to be trusted from the moment the device powers on, not authenticated after the fact. And unlike a laptop you can wipe and reimage, an embedded controller in a vehicle or on an aircraft may be in service for a very long time.

OmniTrust’s core platform—what they call DLM, for Device Lifecycle Management—handles PKI, device identity, code signing, and over-the-air updates across the major embedded CPU architectures. The goal is a continuous chain of trust that starts at the hardware and runs through every layer: secure boot, OS, and application. Sequino describes it as going from “silicon to the cloud.”
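To make the "continuous chain of trust" idea concrete, here is a minimal, illustrative sketch (not OmniTrust's implementation) of hash-based boot verification: each layer is only executed if its measurement matches a value anchored in the layer below, starting from an immutable root in hardware. All names and image contents here are hypothetical.

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical firmware images for each layer of the stack.
bootloader = b"bootloader image v1"
os_image   = b"os image v1"
app_image  = b"application image v1"

# At provisioning time, each stage records the expected digest of the
# stage above it; the bootloader digest is anchored in hardware.
manifest = {
    "bootloader": digest(bootloader),  # root of trust, burned into silicon
    "os":         digest(os_image),    # carried in verified bootloader data
    "app":        digest(app_image),   # carried in verified OS data
}

def secure_boot(stages: list[tuple[str, bytes]]) -> bool:
    """Hand off to each stage only if its measurement matches; else halt."""
    for name, blob in stages:
        if digest(blob) != manifest[name]:
            print(f"halt: {name} failed verification")
            return False
        print(f"{name}: verified, handing off")
    return True

secure_boot([("bootloader", bootloader), ("os", os_image), ("app", app_image)])
```

A tampered OS image would fail at the second check, and nothing above it would ever run; that refusal to hand off is the whole point of anchoring trust at the hardware layer. Production systems use asymmetric signatures rather than bare hashes, but the chaining structure is the same.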

It’s a narrowly focused problem that much of the security industry has overlooked, a gap that has helped the company triple in size over the past two years as executive awareness grows.

Why the Rebrand

The company recently moved away from the ISS name—a name that, as Sequino freely admits, didn’t fully reflect its broader mission—in favor of OmniTrust. The change reflects a real expansion in what the company does.

Beyond embedded devices, OmniTrust is now moving into enterprise identity and lifecycle management. Their ILM product line covers certificate lifecycle management, but goes further—managing keys, digital signatures, and secrets across an organization’s network. The argument Sequino makes is that most CLM tools only cover part of the problem. There are far more passwords and cryptographic credentials on a typical enterprise network than certificates, and most organizations have limited visibility into all of it.

“CLM isn’t enough,” he said. “There are 100 times more passwords than certificates on your corporate network.”

At RSA, the company is also formally announcing Trust AI—a set of products aimed at the problem of AI agents operating across enterprise environments and on physical devices. This includes a browser extension for monitoring data exposure, a gateway that sits between enterprise applications and frontier AI models, and runtime enforcement tied to hardware provisioning. The through-line across all of it is the same principle that’s driven the company from the beginning: trust has to be verified at the hardware layer, and it has to extend through every piece of software that runs on top of it.

The Physical AI Problem

One thing Sequino raised that I think deserves more attention is what happens when AI is embedded in physical systems, making real-time decisions.

We’ve gotten somewhat comfortable with the idea that software gets compromised and patched. Data gets leaked, someone gets notified, and the vulnerability gets addressed. It’s not great, but there’s at least a response cycle. When AI is making decisions on a drone, an autonomous vehicle, or a robot on a factory floor, that cycle doesn’t exist in the same way.

“If you apply AI to the physical layer—a robot, a car, a plane, a drone—and that is going to make decisions on data it has,” Sequino said, “there must be data provenance, and there must be authenticity.”

He pointed to something concrete: Russian forces have been compromising Ukrainian drones by targeting the platform they run on. The drone executes instructions—just not from the right source. It’s not a hypothetical. It’s a live demonstration of why software authenticity and device identity matter in ways that go beyond compliance.

“We must at least timestamp it and authenticate it before it runs on that drone,” he said.
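That "timestamp it and authenticate it" gate can be sketched in a few lines. This is a hypothetical illustration, not OmniTrust's protocol: a command runs only if its authentication tag verifies and its timestamp falls inside a freshness window, which blocks both forged and replayed instructions. The key, window, and payload are all assumptions; a real system would use asymmetric signatures and a hardware-backed clock.

```python
import hashlib
import hmac
import time

KEY = b"device-provisioned-secret"  # assumed: key installed at manufacture
MAX_AGE_S = 300                     # assumed: acceptable command age

def sign_command(payload: bytes, ts: float) -> bytes:
    """Ground station side: MAC over the payload plus its timestamp."""
    return hmac.new(KEY, payload + str(ts).encode(), hashlib.sha256).digest()

def accept_command(payload: bytes, ts: float, tag: bytes, now: float) -> bool:
    """Device side: run the command only if it is both authentic and fresh."""
    fresh = 0 <= now - ts <= MAX_AGE_S  # rejects replayed (stale) commands
    valid = hmac.compare_digest(tag, sign_command(payload, ts))
    return fresh and valid

now = time.time()
cmd = b"waypoint update #42"
tag = sign_command(cmd, now)

print(accept_command(cmd, now, tag, now))                    # authentic, fresh
print(accept_command(cmd, now, b"\x00" * 32, now))           # forged tag
print(accept_command(cmd, now - 900, sign_command(cmd, now - 900), now))  # replay
```

The timestamp check matters as much as the signature: in the drone scenario above, an adversary who captures a validly signed command and re-sends it later is stopped by freshness, not by cryptography alone.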

Getting the Boardroom to Understand It

One challenge Sequino is candid about is the communication gap. The technical case for what OmniTrust does is clear to hardware engineers and embedded developers. Getting that message through to CISOs and board members is a different conversation.

He drew a comparison to 1999, when companies scrambled to get a web presence without a clear sense of why or what they’d do with it. “The boardroom needs to know that letting users run around with any agent they want, or putting agents on any device they want, will put them out of business.”

It’s not a subtle argument, but the underlying point is fair. The pattern in security has always been that new technology gets adopted first, and the security questions get asked later. That worked reasonably well when the worst-case outcome was a data breach you could eventually contain. It works less well when the technology is making autonomous decisions on physical systems, or when the data it was trained on can’t be recalled once it’s out.

As Sequino put it, once the data is gone, it’s gone. There’s no getting it back. And when the device acting on that data is a vehicle or a weapon system, that’s not a problem you can patch your way out of.

OmniTrust exhibited at RSAC 2026 this week, where the company formally announced its rebrand and introduced its Trust AI product line.
