FBI: U.S. Officials Compromised by Voice Deepfake Attacks Since April
The FBI has warned that threat actors have been using AI-generated audio deepfakes in voice phishing attacks against U.S. officials since April 2025. The alert is intended to raise awareness of the campaign and of the evolving tactics cybercriminals are employing.
Malicious actors have been impersonating senior U.S. officials in an effort to deceive their targets, many of whom are current or former government officials and their contacts. The FBI advises treating any message that claims to come from a senior official with caution, since its authenticity cannot be assumed.
The attackers use both text messages (smishing) and AI-generated voice messages (vishing) to build rapport with their victims and create a false sense of legitimacy. Once trust is established, they send a malicious link under the pretext of moving the conversation to a separate messaging platform, a step that ultimately allows them to compromise the accounts of U.S. officials.
Once an account is breached, the attackers can harvest contact information for other government officials and then impersonate the compromised official in further social engineering attacks, enabling the theft of sensitive information and financial fraud.
The warning follows earlier FBI alerts about the growing use of deepfakes in cyber and foreign influence operations. A 2022 Europol report raised similar concerns, predicting that deepfake technology could become a common tool for cybercriminals, particularly in impersonation schemes.
Recently, the U.S. Department of Health and Human Services also cautioned about cybercriminals targeting IT help desks through social engineering attacks, employing AI voice cloning to mislead their targets. In a notable incident, LastPass reported that attackers had used deepfake audio to impersonate its CEO in an attempted voice phishing attack on an employee.
As these threats evolve, organizations and individuals should remain vigilant and adopt concrete safeguards, such as verifying a caller's or sender's identity through an independently confirmed channel before acting on a request, scrutinizing links and contact details for subtle discrepancies, and never sharing sensitive information or transferring funds on the basis of an unverified message.