Deepfakes Are Having a Deeper Impact on These Three Industries


Artificial intelligence (AI) is having a profound impact on business, and it hasn’t all been positive. Aside from automating business processes and providing better business intelligence, AI has also given criminals a new arsenal for cyberattacks. The same generative AI technology that makes business more efficient is being used to create deepfakes to perpetrate fraud, identity theft, and other crimes. Three industries that have become primary targets for deepfake attacks are financial services, insurance, and media.

Generative AI makes deepfakes easier to create, more realistic, and harder to identify. Deepfakes can take many forms, such as false IDs, forged documents, and executive impersonations. They are being used to infiltrate organizations with sophisticated phishing messages and to impersonate celebrities promoting investments on the web. The FBI’s Internet Crime Complaint Center received almost 900,000 complaints in 2023, with potential losses of $12.5 billion, a 22% increase over 2022. For security experts to combat deepfakes, they need to understand how cybercriminals use AI, what to look for, and what tools are available to fight back.

To elaborate on how deepfakes impact business, let’s take a closer look at how deepfakes are being used to attack financial institutions, insurance companies, and the media and what AI tools are available to combat them.

Deepfakes Scam Financial Services

The financial sector is a prime target for deepfake cyberattacks. According to a report from Sumsub, fintech saw a 700% increase in deepfake attacks in 2023. These attacks are becoming increasingly sophisticated and are taking multiple forms.

Perhaps the most extreme and complex example to date involves a finance worker at Arup in Hong Kong who was duped into transferring $25.6 million to cybercriminals. The scam began with a phony email instructing the worker to make a clandestine funds transfer. Although he suspected the email, purportedly from the firm’s UK-based CFO, was fake, he was convinced after attending a video conference call with his colleagues. Everyone else on the call turned out to be a deepfake impersonation, and before the fraud was uncovered, he had transferred 200 million Hong Kong dollars (about $25.6 million U.S.).

This example shows the power of deepfakes to gain anyone’s trust. Deepfakes are increasingly used for phishing attacks. More than half of the security professionals polled by Abnormal Security reported AI-generated phishing emails in their systems.

Other forms of deepfake fraud are also on the rise. A survey by Regula reported that 37% of global businesses experienced deepfake voice fraud, and 29% were victims of deepfake videos. Forty-six percent of the companies surveyed said they had encountered some form of “Frankenstein” identity fraud, which combines real and fake information to create a new identity.

To address the problem, financial services firms are investing in employee and customer education, showing them what to look for and when to be suspicious to prevent deepfake fraud. More financial institutions are also adopting AI-powered technology that uses self-learning algorithms to detect anomalies that indicate fraud.
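The anomaly detection these systems perform can be sketched in miniature. The example below is purely illustrative (production fraud models are self-learning and far more sophisticated): it flags transaction amounts whose robust z-score, based on the median absolute deviation (MAD), exceeds a threshold. MAD is used instead of the standard deviation because a single enormous fraudulent transfer would otherwise inflate the deviation enough to mask itself.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts far from the median using the robust MAD score.

    A toy stand-in for the self-learning models banks deploy: the
    median absolute deviation resists being skewed by the very
    outliers we want to catch.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values identical; nothing to flag
        return []
    # 0.6745 rescales MAD so the score is comparable to a z-score
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# Ordinary transfers plus one Arup-scale outlier (illustrative data)
history = [120.0, 80.5, 95.0, 110.0, 102.3, 99.9, 87.6, 25_600_000.0]
print(flag_anomalies(history))  # only the $25.6M transfer is flagged
```

A mean-and-standard-deviation version of the same check would actually miss this outlier, which is why robust statistics (or learned models) are the norm in fraud detection.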

Falsifying Insurance Claims

Deepfakes are having a huge impact on the insurance industry. The digitization of insurance processes has made it easier to commit fraud. The Coalition Against Insurance Fraud reports that fraudulent insurance claims cost more than $308.6 billion annually. Many of those fraudulent claims use falsified images and documents. According to Allianz, incidents of image, video, and document fraud increased by 300% between 2021-22 and 2022-23.

Fraudsters can use AI to fabricate accidents outright, complete with doctored photographs, plausible metadata, and supporting eyewitness statements. Deepfakes can forge medical records, test results, medical bills, and doctor’s notes. AI is even being used to generate phony death certificates.

Insurance companies are increasingly using digital claims processes, making it easier to use falsified paperwork. To combat the problem, more underwriters are using digital fingerprinting and document authentication as part of automated claims processing.

Fooling the Media

Deepfakes are ideal for media misinformation and propaganda. A deepfake video of President Volodymyr Zelensky telling Ukrainian soldiers to lay down their arms is just one example. Such deepfake images are especially prevalent on social media, where they are used to spread misinformation.

The biggest worry is deepfake audio or video going viral to influence an election or sabotage a candidate. In last year’s elections in Slovakia, for example, a faked audio recording was “leaked” mimicking Michal Šimečka of the Progressive Slovakia party and Monika Tódová of the Denník N daily newspaper discussing how to rig the election. The deepfake audio was released just before the election and could have had a profound impact.

In today’s internet age, when viral images reach millions in seconds, the media struggles to balance the need to be first with the need to be right. More diligent fact-checking and verification are required to prevent deepfake misinformation. If the media is to maintain its credibility, it must adopt better analysis tools that scrutinize images, videos, and audio for anomalies.

Fighting Deepfakes with Better AI

As cybercriminals harness generative AI for more sophisticated deepfake attacks, businesses must apply the same technology to fight those attacks.

New AI technology can analyze and authenticate documents, images, voice, and video media. Forensic scanning looks for anomalies in the source data to determine whether a file has been altered. For example, a falsified image may show blurred edges or inconsistent shadows, or its metadata may indicate it has been modified. Documents can be analyzed for textual changes or alterations to embedded images.
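The metadata checks described above can be illustrated with a few simple rules. This is a hedged sketch, not a real forensic tool: it assumes the metadata fields have already been extracted into a dictionary (real systems parse EXIF directly and apply far more checks), and all field names and rules here are illustrative.

```python
def metadata_red_flags(meta):
    """Return a list of red flags found in an image's metadata.

    `meta` is a dict of already-extracted fields (illustrative names);
    real forensic scanners parse EXIF directly and check much more.
    """
    flags = []
    if not meta.get("camera_model"):
        flags.append("missing camera model")
    created, modified = meta.get("created"), meta.get("modified")
    # ISO-8601 date strings compare correctly as plain strings
    if created and modified and modified < created:
        flags.append("modified before created")
    if meta.get("software", "").lower() in {"photoshop", "gimp"}:
        flags.append("edited with image-manipulation software")
    return flags

suspicious = {
    "created": "2024-05-02",
    "modified": "2024-05-01",  # impossible: edited before it was taken
    "software": "Photoshop",
}
print(metadata_red_flags(suspicious))  # flags all three issues
```

Each rule is cheap on its own; production systems combine dozens of such signals with learned models rather than relying on any single check.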

Validated documents can be fingerprinted so their authenticity can be verified later. The fingerprints can be stored on a blockchain to ensure they can’t be changed. Many authentication systems also use tamper scoring to build risk profiles and gauge the trustworthiness of digital materials.
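The fingerprinting step itself is straightforward: a cryptographic hash of the document's bytes serves as the fingerprint, and any later alteration, however small, produces a different hash. A minimal sketch (the claim text and storage step are illustrative; the article's blockchain storage is represented here only by a comment):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 fingerprint of a document's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# At validation time, record the fingerprint (e.g., on a blockchain
# or other append-only ledger, as the article describes).
original = b"Claim #1042: water damage, assessed at $8,400"
recorded = fingerprint(original)

# Later, re-hash the presented document and compare to the record.
presented = b"Claim #1042: water damage, assessed at $18,400"
print(recorded == fingerprint(presented))  # False: the document changed
```

Because SHA-256 is collision-resistant, a fraudster cannot craft an altered document that matches the recorded fingerprint, which is why storing only the hash on an immutable ledger is sufficient.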

As AI technology advances, deepfakes will become harder to identify, and financial service companies, insurance companies, and media outlets will need more sophisticated AI tools to detect them. With deepfakes becoming so prevalent, building authentication protocols for every financial transaction, insurance claim, and media post makes sense. The fraudsters aren’t going to go away, but we can all be more diligent in verifying digital content and materials rather than accepting them at face value.

Nicos Vekiarides