U.S. officials warn of a growing impersonation threat via text messages and AI-generated voice cloning
In a concerning development, malicious use of AI-based voice cloning surged by 442% between the first and second halves of 2024, according to recent reports. The attacks form part of a sophisticated social engineering campaign, akin to spear-phishing, that combines smishing (malicious SMS messages) and vishing (malicious voice calls or messages).
The FBI has highlighted the activities of the Scattered Spider group, which uses advanced social engineering techniques, including impersonating employees or contractors to deceive IT help desks and bypass multi-factor authentication (MFA). While the group has focused on the airline and insurance sectors, it is also known to impersonate top executives or high-ranking employees, tactics that align with vishing, or voice-based social engineering.
Cybercriminals are leveraging AI, including generative models similar to GPT-4, to orchestrate highly targeted and personalized phishing campaigns. These AI-generated messages are written with flawless grammar and style, making them extremely convincing. The rise in AI-driven phishing extends beyond email scams and increasingly involves voice phishing, or vishing, in which synthetic or AI-generated voice messages impersonate senior officials to trick victims into divulging sensitive information or authorizing fraudulent transactions.
Campaigns now combine multiple vectors, including phishing emails, social media lures, fake websites, smishing, and vishing, to maximize impact. Cybercriminals impersonate high-level U.S. officials or executives to build trust quickly, exploiting human factors and organizational vulnerabilities. AI allows these campaigns to scale while achieving higher success rates.
Although few publicly documented incidents focus solely on smishing or AI-generated voice messages impersonating senior U.S. officials, the convergence of Scattered Spider's social engineering with AI-enhanced phishing clearly defines the current threat landscape. These campaigns are highly targeted, often aiming at help desks, third-party vendors, and executives to gain unauthorized access and conduct fraud or ransomware attacks.
Organisations must remain alert to multifaceted social engineering tactics combining smishing, vishing, and AI-powered impersonation to protect sensitive data and infrastructure. Microsoft warned in 2024 of a threat actor, Storm-1811, using Microsoft Teams to impersonate IT help desk workers. In the campaign described above, the messages impersonate senior U.S. officials, and the goal is to establish rapport and gain access to victims' personal accounts.
Aaron Rose, a security architect at Check Point Software Technologies, has noted that publicly available audio can be used to create legitimate-sounding voice clones. Threat actors are now using AI voice cloning tools to create realistic impersonations of public figures, according to Rose. Leah Siskind, director of impact and AI research fellow at the Foundation for Defense of Democracies, says the weaponization of voice cloning has been under way for at least five years.
In response to these threats, cybersecurity firms are conducting red-team exercises to test their clients' defences. Mandiant, for instance, has run exercises in which its red team used AI-based voice spoofing to gain access to a client's internal network. The red team trained an AI model on a natural voice sample and was able to bypass Microsoft Edge and Windows Defender SmartScreen protections.
As the use of AI in social engineering campaigns continues to grow, it is crucial for organisations to stay vigilant and invest in robust cybersecurity measures to protect against these increasingly sophisticated threats.
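To make the defensive guidance concrete, the sketch below shows one way an organisation might triage inbound SMS for common smishing indicators, such as urgency, requests to switch to a personal number or another messaging app, and requests for codes or credentials. It is a minimal illustration only: the keyword lists, weights, and sample message are assumptions made for demonstration, not a vetted detection model or any vendor's product logic.

```python
import re

# Illustrative only: indicator lists and weights below are assumptions chosen
# for demonstration, not a production detection model.
URGENCY_TERMS = ["urgent", "immediately", "right away", "asap", "confidential"]
PLATFORM_SWITCH_TERMS = ["signal", "whatsapp", "telegram", "personal number", "text me at"]
CREDENTIAL_TERMS = ["verification code", "one-time code", "mfa", "password", "reset"]
URL_PATTERN = re.compile(r"https?://\S+|\b\w+\.(?:ly|io|info|top)/\S*", re.IGNORECASE)


def smishing_risk_score(message: str) -> int:
    """Return a rough risk score for an inbound SMS based on simple heuristics."""
    text = message.lower()
    score = 0
    score += 2 * sum(term in text for term in URGENCY_TERMS)
    score += 3 * sum(term in text for term in PLATFORM_SWITCH_TERMS)
    score += 3 * sum(term in text for term in CREDENTIAL_TERMS)
    if URL_PATTERN.search(message):
        score += 2  # unsolicited links are a common smishing lure
    return score


if __name__ == "__main__":
    sample = ("This is the Deputy Secretary. Urgent matter, text me at my personal "
              "number on Signal and send the verification code you receive.")
    # A high score suggests the message warrants manual scrutiny before any reply.
    print(smishing_risk_score(sample))
```

Heuristics like these would complement, not replace, out-of-band verification of unexpected requests, particularly for voice messages, which cannot be filtered this way.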
- The rise in AI-driven phishing attacks includes not only email scams but also voice phishing or vishing, where synthetic or AI-generated voice messages impersonate senior officials to deceive victims.
- Cybercriminals are using AI voice cloning tools to create realistic impersonations of public figures, underscoring the need for organizations to remain alert to such threats.
- To counter these advanced threats, cybersecurity firms are conducting red-team exercises that use AI-based voice spoofing to test defenses, and organizations are urged to invest in robust cybersecurity measures.