
Generative AI Enables Threat Actors to Create More (and More Sophisticated) Email Attacks

June 14, 2023

Anyone who has spent time online in 2023 has likely heard about ChatGPT and Google Bard, two of the more popular platforms that harness generative artificial intelligence (AI) to respond to written prompts. In the few months since their release, they’ve already had a profound impact on various aspects of our digital world.

By leveraging advanced machine learning techniques, generative AI enables computers to generate original content including text, images, music, and code that closely resembles what a human could create. The technology itself has far-reaching implications, many of which can be used for both personal and professional good. Artists and authors can use it to explore new creative directions, pilots and doctors can use it for training and real-world simulation, and travel agents can ask it to create trip itineraries—among thousands of other applications.

But like anything else, cybercriminals can take advantage of this technology as well. And unfortunately, they already have. Platforms including ChatGPT can be used to generate realistic and convincing phishing emails and dangerous malware, while tools like DeepFaceLab can create sophisticated deepfake content including manipulated video and audio recordings. And this is likely only the beginning.

New Email Attacks Generated by Artificial Intelligence

Security leaders have worried about the possibilities of AI-generated email attacks since ChatGPT was released, and we’re starting to see those fears validated. Abnormal has recently stopped a number of attacks that contain language strongly suspected to be written by AI.

Here are three real-life examples stopped for our customers, along with our analyses of how these attacks were deemed likely to be AI-generated. They showcase a variety of attack types, including credential phishing, an evolution of the traditional business email compromise (BEC) scheme, and vendor fraud.

AI-Generated Phishing Attack Impersonates Facebook

Users have long been taught to look for typos and grammatical errors to determine whether an email is an attack, but generative AI can create perfectly crafted phishing emails that look completely legitimate—making it nearly impossible for employees to distinguish an attack from a real email.

This email, sent by “Meta for Business,” states that the recipient’s Facebook Page has been found in violation of community standards and has been unpublished. To fix the issue, the recipient should click the included link and file an appeal. Of course, that link actually leads to a phishing page, and if the user were to enter their credentials there, attackers would immediately have access to their Facebook profile and the associated Facebook Page.

[Image: Facebook account phishing attack email]

The email contains no grammatical errors, and the text sounds nearly identical to the language expected from Meta for Business. Because it is so well crafted, the email is harder for humans to detect, and recipients are more likely to click the link than if the message had been riddled with grammatical errors or typos.

How to Determine AI-Generated Text

The simplest way to detect whether an email was written by AI is to use AI. We run these email texts through several open-source large language models in the Abnormal platform to analyze how likely each word is given the context that precedes it. If the words in the email are consistently high-likelihood (meaning each word aligns more closely with what an AI model would predict than typical human-written text does), we classify the email as possibly written by AI.

Here is the output of that analysis for the email example above. Green words are judged as highly aligned with the AI—in the top 10 predicted words—while yellow words are in the top 100 predicted words.

[Image: GLTR analysis of the Facebook phishing email]

[Source: http://gltr.io/ ]
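As an illustration of this rank-based approach (a simplified sketch, not Abnormal's production pipeline), suppose a language model has already assigned each token in an email a rank: that token's position in the model's predicted-next-word list. The GLTR-style bucketing and a naive flagging heuristic might look like this; the 0.8 threshold is an invented example value.

```python
def bucket_token_ranks(ranks, green_k=10, yellow_k=100):
    """Assign each token a GLTR-style bucket based on its rank in the
    language model's next-word predictions: 'green' for top-10,
    'yellow' for top-100, 'other' for everything else."""
    buckets = []
    for rank in ranks:
        if rank <= green_k:
            buckets.append("green")
        elif rank <= yellow_k:
            buckets.append("yellow")
        else:
            buckets.append("other")
    return buckets


def likely_ai_generated(ranks, green_threshold=0.8):
    """Flag text when an unusually large share of tokens sit in the
    model's top-10 predictions, as AI-written text tends to."""
    buckets = bucket_token_ranks(ranks)
    green_share = buckets.count("green") / max(len(buckets), 1)
    return green_share >= green_threshold
```

In practice the ranks would come from running the email through an actual language model, and, as noted below, templated human text can also score high, so a signal like this can only ever be one input among many.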

After detecting emails that show indicators of generative AI, we validate these assumptions with two external AI detection tools: OpenAI Detector and GPTZero.

It is important to note that this method is not perfect and these tools may flag some non-AI generated emails. For example, emails created based on a template—such as marketing or sales outreach emails—may contain some sequences of words that are nearly identical to those that an AI would generate. Or, emails containing common phrases—such as copy-pasted passages from the Bible or the Constitution—may result in a false AI classification.

However, these analyses do give us some indication that an email may have been created by AI and we use that signal (among thousands of others) to determine malicious intent.
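To illustrate how one weak indicator can feed a broader verdict (a hypothetical sketch; the signal names and weights here are invented, not Abnormal's actual model), many signals could be combined into a single probability-like score with a weighted logistic combination:

```python
import math


def malice_score(signals):
    """Combine weak indicators into one probability-like score.

    `signals` maps a signal name to a (score, weight) pair, where
    score is in [0, 1] (0.5 = neutral) and weight reflects how much
    that signal is trusted. A single indicator such as AI-likeness
    nudges the verdict but cannot decide it alone."""
    z = sum(weight * (score - 0.5) for score, weight in signals.values())
    return 1.0 / (1.0 + math.exp(-z))
```

For example, `malice_score({"ai_generated_text": (0.9, 1.0), "unknown_sender": (0.8, 2.0), "urgent_payment_language": (0.95, 3.0)})` pushes well above the neutral 0.5, while a lone benign signal pulls it below.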

Employee Impersonated in AI-Created Payroll Diversion Scam

Unfortunately, the use of generative AI to create malicious emails has moved beyond phishing and into business email compromise. And while phishing emails contain a malicious link that can help detection tools catch them, these text-based emails lack those traditional indicators of compromise.

In this real-world attack, the attacker impersonates an employee and emails the payroll department to update the direct deposit information on file. Again, the email is free of grammatical errors and typos and is written very professionally—just as the payroll specialist would likely expect.

[Image: direct deposit payroll diversion email]

Other than the impersonated sender name, there is nothing here to indicate an attack, showcasing just how dangerous generative AI can be in the wrong hands.

Here is the result of our generative AI-likelihood analysis:

[Image: GLTR analysis of the direct deposit email]

[Source: http://gltr.io/ ]

AI-Generated Vendor Compromise and Invoice Fraud

And it’s not just traditional BEC either, as attackers are also using ChatGPT-like tools to impersonate vendors. Vendor email compromise (VEC) attacks are among the most successful social engineering attacks because they exploit the trust that already exists in relationships between vendors and customers. And because discussions with vendors often involve issues around invoices and payments, it becomes harder to catch attacks that mimic these conversations—especially when there are no suspicious indicators of attack like typos.

This vendor fraud email involves an impersonation of an attorney, requesting payment for an outstanding invoice.

[Image: vendor fraud invoice email]

Similar to the two examples above, this email also shows no grammatical errors and is written in a tone that’s expected from an attorney. The impersonated attorney is also from a real-life law firm—a detail that gives the email an even greater sense of legitimacy and makes it more likely to deceive its victim.

Again, our analysis determined a high likelihood that this email was generated by AI.

[Image: GLTR analysis of the vendor fraud email]

[Source: http://gltr.io/ ]

Fighting Bad AI with Good AI

A decade ago, cybercriminals created new domains to run their attacks, which were quickly identified by security tools as malicious and subsequently blocked. In response, threat actors changed their tactics and began using free webmail accounts like Gmail and Outlook, knowing that security tools could not block these domains since they are often used to conduct legitimate business.

Generative AI is much the same. Employees can use ChatGPT and Google Bard to create legitimate communications for normal, everyday business, which means that security tools cannot simply block every email that appears to be generated by AI. Instead, they must use this as one indicator of potential attack, alongside thousands of other signals.

As these examples show, generative AI will make it nearly impossible for the average employee to tell the difference between a legitimate email and a malicious one, which makes it more vital than ever to stop attacks before they reach the inbox. Modern solutions like Abnormal use AI to understand the signals of known good behavior, creating a baseline for each user and each organization and then blocking the emails that deviate from that—whether they are written by AI or by humans.
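The behavioral-baseline idea can be illustrated with a toy example (the feature and thresholds are invented for illustration; real systems model far richer behavior): track a numeric feature of a user's normal activity, such as the hour at which they typically send email, and score how sharply a new message deviates from that history.

```python
from statistics import mean, stdev


def deviation_score(history, observed):
    """How many standard deviations `observed` sits from the user's
    historical baseline for one behavioral feature (e.g. send hour,
    message length). Higher means more anomalous."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma
```

A user who always sends around 9–11 a.m. would score low for a 10 a.m. message but very high for one sent at 3 a.m., and that anomaly holds whether the email body was written by a human or by AI.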

These three examples are only a small sample of the AI-generated email attacks that Abnormal now sees on a near-daily basis. Unfortunately, as the technology continues to evolve, cybercrime will evolve with it, and both the volume and the sophistication of these attacks will continue to increase. Now more than ever, it’s time to take a look at AI, both good and bad, and understand how good AI can stop the bad—before it’s too late.

Discover more about the rise of AI-generated email attacks in our new CISO Guide to Generative AI Attacks.
