AI Now Constitutes the Predominant Source of Spam and Malicious Emails

Over half (51%) of malicious and spam emails are now generated using AI tools, according to a study conducted by Barracuda in collaboration with researchers from Columbia University and the University of Chicago.

The research team analyzed a dataset of spam emails detected by Barracuda from February 2022 to April 2025. They employed trained detectors to automatically identify whether a malicious or unsolicited email was generated using AI.

The study identified a steady rise in the proportion of spam emails generated by AI from November 2022, the month ChatGPT launched and brought large language models (LLMs) to a mass audience, until early 2024. In March 2024, the rate of AI-generated scam emails spiked sharply; it then fluctuated before peaking at 51% in April 2025.

Asaf Cidon, Associate Professor of Electrical Engineering and Computer Science at Columbia University, noted that no clear factor has been identified for the sudden spike. “It’s hard to know for sure but this could be due to several factors: for example, the launch of new AI models that are then used by attackers or changes in the types of spam emails that are sent by attackers, increasing the proportion of AI-generated ones,” he explained.

Additionally, the researchers observed a more gradual increase in the use of AI-generated content in business email compromise (BEC) attempts, which reached 14% of all such attempts in April 2025. This may be due to the highly targeted nature of these attacks, which typically impersonate a specific senior individual within an organization to make fraudulent financial requests. Consequently, AI may currently be less effective in these scenarios.

However, Cidon expects that AI will be increasingly utilized in BEC attempts as technology continues to advance. In particular, with the rise of effective and affordable voice cloning models, attackers may incorporate voice deepfakes into BEC attacks, allowing them to better impersonate specific individuals, such as CEOs.

Attackers Primarily Using AI to Bypass Protections

The researchers identified two main factors driving the use of AI in email attacks: bypassing email detection systems and enhancing the credibility of malicious messages to recipients. AI-generated emails typically exhibited higher levels of formality, fewer grammatical errors, and greater linguistic sophistication compared to human-written emails, enabling them to bypass detection and present a more professional appearance.

“This is beneficial when the attackers’ native language differs from that of their targets. In the dataset, most recipients were from countries where English is widely spoken,” the researchers noted.
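To make the distinction concrete, the style signals described above (formality, grammatical cleanliness, linguistic sophistication) can be approximated with simple stylometric features. The sketch below is purely illustrative and is not the trained detector the researchers used; the feature names and thresholds are hypothetical assumptions.

```python
import re


def stylometric_features(text: str) -> dict:
    """Extract crude style signals of the kind a detector might weigh.

    These heuristics are illustrative only; a production detector would
    be a trained classifier over far richer features.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    return {
        # longer average words loosely track linguistic sophistication
        "avg_word_len": sum(len(w) for w in words) / n_words,
        # longer sentences loosely track formality
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # share of "long" words (>= 7 letters) as a formality proxy
        "long_word_ratio": sum(len(w) >= 7 for w in words) / n_words,
    }


def looks_ai_generated(text: str) -> bool:
    """Hypothetical thresholds; real systems learn these from labeled data."""
    f = stylometric_features(text)
    return f["avg_sentence_len"] > 15 and f["long_word_ratio"] > 0.2
```

In practice, such hand-tuned thresholds would be replaced by a model trained on labeled human- and AI-written emails, which is what allows detection at the scale the study describes.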

Furthermore, attackers were observed using AI to test different wording variants and determine which were most effective at evading defenses, akin to the A/B testing techniques used in traditional marketing.

The study found that LLM-generated emails did not significantly differ from human-generated ones in terms of the urgency conveyed, a common tactic in phishing attacks designed to provoke rapid emotional responses. This indicates that while AI is being leveraged primarily to improve penetration rates and plausibility, it does not signify a fundamental shift in tactics.