Artificial intelligence may be the headline, but data is the story.
In this episode of the TechSpective Podcast, I sat down with Todd Moore, VP of Data Security at Thales, to unpack the newly released 2025 Thales Data Threat Report. Our conversation explored the increasingly complicated intersection of data, AI, and cybersecurity—and why enterprises may be sprinting into transformation before securing their foundation.
Spoiler: It’s all about the data.
GenAI Is Booming—And So Are the Risks
According to the report, one-third of organizations are already in the integration or transformation phase of GenAI adoption. And while that sounds like progress, Todd and I both agreed it mirrors past tech hype cycles—cloud, Wi-Fi, mobile—where enthusiasm far outpaced security planning.
“The horse has left the barn,” Todd said. And that urgency to keep up with AI adoption is creating a familiar blind spot: data security.
In fact, the fast-evolving GenAI ecosystem ranked as the top concern among respondents (69%), followed closely by risks to data integrity (64%) and trustworthiness (57%). Enterprises are waking up to the reality that AI isn’t just a new technology—it’s a new attack surface.
Shadow AI, Prompt Injection, and Data Leakage
One recurring theme from our conversation was the rise of “shadow AI”—where employees use public tools like ChatGPT without guardrails. While it might boost productivity, it also introduces serious risk if sensitive internal data gets fed into public models.
We talked about how many organizations are adopting internal LLMs to mitigate this, but we acknowledged that enforcement is tough. The reality is that, just as with shadow IT, if you don't give people an approved tool that meets their needs, they'll find workarounds.
That’s where security posture management becomes crucial. Visibility into who’s using what data—and where it’s going—is no longer optional.
Data Classification: Still a Work in Progress
You can’t protect what you don’t know you have. Yet the report found that only one-third of organizations can fully classify their data, while 61% are juggling five or more data discovery tools.
That inconsistency leads to fragmented policies, conflicting controls, and, ultimately, more exposure. Todd and I agreed: classification has to be automated and context-aware. Ironically, AI can help here, by understanding not just what a file says but what it means based on the surrounding data.
Still, as Todd pointed out, AI is also the biggest creator of new data. “It’s a feedback loop,” he said. “AI is creating more unstructured data than ever before, which just makes the classification challenge even bigger.”
Quantum Computing Is Closer Than You Think
Another headline from the report—and our conversation—was the growing urgency around post-quantum cryptography (PQC). The threat of “harvest now, decrypt later” is very real, especially for regulated industries that store data long-term.
Thales found that 63% of organizations are already concerned about future decryption of today’s data, and many are beginning to prototype PQC solutions. Todd emphasized that we now have a deadline: NIST and other global bodies are calling for a deprecation of classical algorithms by 2030.
“This isn’t Y2K,” Todd warned. “We don’t know when Q-day will arrive. But when it does, if you haven’t prepared, it’s already too late.”
Check It Out
This episode dives deep into AI, PQC, classification, and the cultural challenges of balancing innovation with risk. If you’re a CISO, security leader, or just trying to make sense of the data security landscape in 2025, you won’t want to miss it.
- Why Data Security Is the Real AI Risk - June 30, 2025