When it comes to cybersecurity, we need to consider the good, the bad, and the ugly of artificial intelligence. While there are benefits of how AI can strengthen defenses, cybercriminals are also using the technology to enhance their attacks, creating emerging risks and consequences for organizations.
The Good: AI’s Role in Enhanced Security
AI represents a powerful opportunity for organizations to enhance threat detection. One emerging opportunity involves training machine learning algorithms to identify and flag threats or suspicious anomalies. Pairing AI security tools with cybersecurity professionals reduces response time and limits the fallout from cyberattacks.
A prime example is automated red teaming, a form of ethical hacking that simulates real-world attacks at scale, so brands can identify vulnerabilities. Alongside red teaming, there’s blue teaming, which simulates defense against attacks, and purple teaming, which validates security from both vantage points. These AI-powered approaches are critical given the vulnerability of enterprise large language models to security breaches.
Previously, cybersecurity teams were limited to available datasets for training their predictive algorithms. But with GenAI, organizations can create high-quality synthetic datasets to train their system and bolster vulnerability forecasting, streamlining security management and system hardening.
AI tools can be used to mitigate the increased threat from AI-powered social engineering attacks. For example, AI tools can be used in real-time to monitor incoming communications from external parties and identify instances of social engineering. Once detected, an alert can be sent to both the employee and their supervisor to help ensure this threat is stopped prior to any system compromise or sensitive information leak.
However, defending against AI-powered threats is only part of it. Machine learning is a vital tool for detecting insider threats and compromised accounts. According to IBM’s Cost of a Data Breach 2024 report, IT failure and human error made up 45% of data breaches. AI can be used to learn what your organization’s “normal” state of operation is by assessing your system logs, email activity, data transfers, and physical access logs. AI tools can then detect events that are abnormal compared to this baseline to help identify the presence of a threat. Examples of this include: detecting suspicious log-ins, flagging unusual document access requests, and keying into physical spaces not typically accessed.
The Bad: AI-Driven Security Threats Evolution
Simultaneously, as organizations are reaping the benefits of AI proficiency, cybercriminals are leveraging AI to launch sophisticated attacks. These attacks are broad in scope, adept at evading detection, and capable of maximizing damage with unprecedented speed and precision.
The World Economic Forum’s 2025 Global Cybersecurity Outlook report found that 66% of organizations across 57 countries expect AI to significantly impact cybersecurity this year, while nearly half (47%) of respondents identified Gen AI-powered attacks as their primary concern.
They have reason to be worried. Globally, $12.5 billion was lost to cybercrime in 2023— a 22% increase in losses over the previous year, which is anticipated to continue rising in the coming years.
While it’s impossible to predict every threat, proactively learning to recognize and prepare for AI attacks is critical to putting up a formidable fight.
Deepfake Phishing
Deepfakes are becoming a bigger threat as GenAI tools become more commonplace. According to a 2024 survey by Deloitte, about a quarter of businesses experienced a deepfake incident targeting financial and accounting data in 2024, and 50% expect the risk to increase in 2025.
This rise in deepfake phishing highlights the need to transition from implicit trust to continuous validation and verification. It’s as much about implementing a more robust cybersecurity system as it is about developing a corporate culture of threat awareness and risk assessment.
Automated Cyber Attacks
Automation and AI are also proving to be a powerful combination for cybercriminals. They can use AI to create self-learning malware that continually adapts its tactics in real-time to better evade an organization’s defenses. According to cybersecurity firm SonicWall’s 2025 Cyber Threat Report, AI automation tools are making it easier for rookie cybercriminals to execute complex attacks.
The Ugly: High Cost of AI-Powered Cyber Attacks and Crime
In a high-profile incident last year, an employee at multinational engineering firm, Arup, transferred $25 million after being instructed during a video call with AI-generated deepfakes impersonating his colleagues and CTO.
But the losses aren’t just financial. According to the Deloitte report, around 25% of business leaders consider a loss of trust among stakeholders (including employees, investors, and vendors) as the biggest organizational risk stemming from AI-based technologies. And 22% worry about compromised proprietary data, including the infiltration of trade secrets.
Another concern is the potential of AI disrupting critical infrastructure, posing severe risks to public safety and national security. Cybercriminals are increasingly targeting power grids, healthcare systems, and emergency response networks, leveraging AI to enhance the scale and sophistication of their attacks. These threats could lead to widespread blackouts, compromised patient care, or paralyzed emergency services, with potentially life-threatening consequences.
While organizations are committing to AI ethics like data responsibility and privacy, fairness, robustness, and transparency, cybercriminals aren’t bound by the same rules. This ethical divide amplifies the challenge of defending against AI-powered threats, as malicious actors exploit AI’s capabilities without regard for the societal implications or long-term consequences.
Building Cyber Resilience: Combining Human Expertise with AI Innovation
As cybercriminals become more sophisticated, organizations need expert support to close the gap between the defenses they have in place and the rapidly emerging and evolving threats. One way to accomplish that is working with a trusted, experienced partner that has the ability to fuse human intervention with powerful technologies for the most comprehensive security measures.
Between AI-enhanced tactics and advanced social engineering, like deepfakes and automated malware, companies and their cybersecurity teams entrusted to protect them face a persistent and increasingly sophisticated challenge. But by better understanding the threats, embracing AI and human expertise to detect, mitigate, and address cyberattacks, and finding trusted partners to work alongside, organizations can help tip the scales in their favor.