
Social Engineers Are No Match for Artificial Intelligence

We are still in the early days of artificial intelligence, but it is quickly becoming an essential part of how organizations defend themselves. Using advanced algorithms, enterprises are improving incident response, monitoring for potential threats and spotting red flags before an attack takes hold. AI can also help identify vulnerabilities that a human may have overlooked.

These are all essential functions that can elevate cyber defense systems above the reactive, time-consuming strategies of the past. However, many organizations have yet to take advantage of AI's most important application in cyber defense: its lack of sympathy.

Preying on your trust

Initiated and executed by a human with excellent phone or email skills, social engineering does not involve the high-tech hacks that dominate the headlines. Instead, the attacker games a customer service agent, or anyone else who holds the information he is after, such as the cell phone number used for two-factor authentication or the password to a lucrative gaming account. According to the FBI's Internet Crime Complaint Center, business email compromise (BEC), a popular form of social engineering, caused more than $1.2 billion in losses in 2018 alone.

While banks and other high-value entities may be the primary targets, no one is immune. One woman lost $30,000 in cryptocurrency after hackers persuaded her wireless carrier to activate a new SIM card, which was then used to bypass the two-factor authentication on her financial accounts. In another incident, Ubiquiti Networks lost $39.1 million after hackers impersonated a senior staffer and convinced the finance department to complete a massive transfer of funds.

Even something as innocuous as a rewards program can draw the interest of a malicious threat actor. Businesses currently deal with this problem by training employees on a set of protocols they are expected to follow at all times.

Unfortunately, the weakest link in the cybersecurity chain isn't the firewall; it's often the human. Customer service agents are in the business of pleasing people and are eager to help. They are also terrible at following a script, and they are not impervious to a clever scammer who knows all the right things to say. If the hacker claims he lost his hotel rewards card, asks for a replacement to be sent to his home, and then asks which address is on file, the call center agent might oblige. Now a stranger has obtained a customer's private information.

Stopped in their tracks

The outcome would have been very different if the customer service agent had been preceded by artificial intelligence, specifically conversational AI. One of the largest gaming companies in the world experienced this firsthand when it deployed conversational AI as a front-end chat agent. Known for many successful video game franchises, the company implemented Amelia to field straightforward gaming issues for customers. The company expected Amelia to cut down on the time customers spent waiting for a resolution, and she did. After a while on the job, however, Amelia also began detecting phishers by noticing that callers would sometimes ask for access to accounts without the correct identifying information. In the world of gaming, where real financial instruments are baked into gameplay and payment details are attached to accounts, the stakes are high. Human customer service agents were being gamed into handing over account credentials. Amelia, no sap for a sob story, was not.

Programmed to make decisions with company policy in mind, Amelia doesn't fall for social engineering techniques. She simply wants you to prove that you are who you say you are, and she does that by sticking to the script and by introducing new forms of authentication when behavior appears risky.
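To make that policy-first approach concrete, here is a minimal sketch of an identity gate for a chat agent. The vendor's actual implementation is not public, so every name, field and rule below is hypothetical; the point is simply that the agent refuses account actions until every check in the policy passes, with no partial credit for a good story:

```python
# Hypothetical sketch of a policy-driven identity gate for a chat agent.
# Nothing here reflects Amelia's actual implementation; the proofs and
# rules are invented for illustration.

REQUIRED_PROOFS = {"account_email", "security_answer", "last_payment_digits"}

def identity_verified(provided_proofs: dict[str, bool]) -> bool:
    """The caller must pass every required check; a sob story passes none."""
    return all(provided_proofs.get(proof, False) for proof in REQUIRED_PROOFS)

def handle_account_request(action: str, provided_proofs: dict[str, bool]) -> str:
    if not identity_verified(provided_proofs):
        # Stick to the script: no exceptions for urgency or charm.
        return "I can't help with that until your identity is verified."
    return f"Identity confirmed. Proceeding with: {action}"

# A scammer who only knows the victim's email address gets nowhere:
print(handle_account_request(
    "password reset",
    {"account_email": True, "security_answer": False},
))  # -> "I can't help with that until your identity is verified."
```

A human agent might waive one of those checks for a caller in apparent distress; the gate above structurally cannot.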

The initial rollout was at least a partial success. Amelia resolved customer complaints in less time than a human agent, and she reduced fraudulent account use on top of that, yet customer satisfaction declined. Why would consumers be upset by an AI that could prevent their account information from being stolen?

When we reviewed transcripts of Amelia's normal and escalated chats with customers, the reason became clear: she stuck to the process and would not allow herself to be fooled by the hackers, who in turn gave her low customer satisfaction ratings even as the gaming company's call center gave her rave reviews.

There’s no time like real-time

Even outside the customer service setting, conversational AI gives threat assessment professionals a competitive advantage. Instead of scanning data or monitoring threats periodically and offline, an advanced conversational AI can analyze data and actions in real time while grasping the context of each interaction. If a user asks for his balance, it will grasp the right meaning and answer only after authenticating his identity based on the information he provides. This allows the technology to hold a natural, lifelike one-on-one conversation on the front end while data is processed live on the back end.

AI has made it possible to evaluate a conversation as it is happening. Because it can process every utterance for risk assessment, it is a very powerful tool: if the alleged customer is behaving so far outside the norm that it constitutes risk, conversational AI will recognize the anomaly and take appropriate action. This could be as simple as adding additional steps for identity validation.
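One way to picture that per-utterance evaluation is a running risk score that accumulates as the conversation unfolds and triggers step-up authentication the moment it crosses a threshold. This is a minimal sketch under assumed signals and weights, not a description of any particular product:

```python
# Hypothetical per-utterance risk scoring for a live chat session.
# The signals, weights and threshold are invented for illustration.

RISK_SIGNALS = {
    "asked_to_skip_verification": 0.5,
    "claimed_urgency_or_hardship": 0.2,
    "failed_identity_check": 0.4,
    "requested_account_change": 0.2,
}
STEP_UP_THRESHOLD = 0.6  # above this, demand extra identity validation

def assess_conversation(utterance_signals: list[set[str]]) -> str:
    """Score risk cumulatively as each utterance arrives, not after the fact."""
    risk = 0.0
    for signals in utterance_signals:
        risk += sum(RISK_SIGNALS.get(s, 0.0) for s in signals)
        if risk >= STEP_UP_THRESHOLD:
            # Real-time decision: escalate authentication mid-conversation.
            return "step_up_authentication"
    return "continue_normally"

# A caller who pleads urgency and then asks to skip verification is
# flagged before any account data changes hands:
print(assess_conversation([
    {"claimed_urgency_or_hardship"},
    {"asked_to_skip_verification", "requested_account_change"},
]))  # -> "step_up_authentication"
```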

Without AI, a conversation can only be examined for signs of fraud after it has concluded, and by then a malicious actor may have taken the money and run. But if an AI is having that conversation instead of a person, it sees everything as it occurs, analyzes every word and makes real-time decisions around threats, risks and security.

Taking the lead

Social engineering is as much a source of breaches as brute-force, algorithm-based software attacks. Whether targeting hotels, game publishers or any other institution, hackers are searching for the information that's going to get them inside. But they don't need to execute a highly sophisticated attack to get what they want; all they need is a good story, a bit of persistence and a call center agent who's willing to break protocol.

By allowing conversational AI to take the lead as the front-end chat agent, enterprises can mitigate the potential damage caused by social engineering. This can and should be implemented with as little customer disruption as possible. In fact, your customers shouldn't even think about who the person is, or isn't, on the other end of the chat. They should simply be able to enter their query, get what they want and move on. That convenience, after all, is how we define the future.
