State-sponsored threat groups observed using AI technology
Concerns about the abuse of generative AI by malicious threat groups are no longer hypothetical. Groups linked to Russia, Iran, North Korea, and the People's Republic of China are reportedly using OpenAI's large language models for precursor tasks such as open-source research queries, translation, searching for errors in code, and running basic coding tasks.
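For context, these precursor tasks amount to ordinary LLM usage rather than novel attack tooling. A minimal sketch of one such task, asking a model to find errors in code through the OpenAI API, might look like the following; the model name, prompt, and code snippet are illustrative assumptions, not details drawn from the OpenAI or Microsoft reports.

```python
# Illustrative sketch of a benign "precursor task": asking an LLM to find
# errors in code. The model name and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = '''
def average(values):
    return sum(values) / len(values)  # crashes on an empty list
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find the errors in this code:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```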
The threat activity uncovered by OpenAI and Microsoft suggests that state-linked and criminal groups are moving to adopt generative AI rapidly to scale their attack capabilities. That adoption could let these groups execute attacks faster than network defenders can respond.
One of the key ways these groups leverage generative AI is by crafting personalized social engineering and phishing lures. AI helps produce highly convincing, tailored messages that trick users into circumventing multi-factor authentication and that exploit IT support channels to escalate privileges quickly, sometimes using only built-in tools and no malware at all.
Another concern is the development of deepfakes and AI-created media. Nation-state actors use AI-generated images, videos, and voice-changing technologies to impersonate legitimate individuals, enhancing phishing, influence operations, and disinformation campaigns.
Furthermore, large-scale AI-driven efforts are automating influence and disinformation campaigns. These campaigns create fake social media profiles, generate false content, and even manipulate AI language models themselves to amplify state narratives and sow discord among targeted populations.
The widespread, often unauthorized use of generative AI tools by employees also raises data exposure risks, widening the attack surface that threat actors can exploit with AI-assisted zero-hour phishing attacks and fake GenAI websites.
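As one concrete example of what defenders can do about fake GenAI websites, a simple heuristic is to flag domains that closely resemble, but do not match, well-known GenAI services. The sketch below is a hypothetical illustration; the domain list and similarity threshold are assumptions, not part of any vendor's tooling.

```python
# Hypothetical defensive sketch: flag lookalike domains that imitate popular
# GenAI services (one vector behind "fake GenAI websites"). The domain list
# and similarity threshold are assumptions for illustration only.
from difflib import SequenceMatcher

KNOWN_GENAI_DOMAINS = ["openai.com", "chat.openai.com", "gemini.google.com"]

def looks_like_genai_spoof(domain: str, threshold: float = 0.8) -> bool:
    """Return True if `domain` closely resembles, but is not, a known GenAI domain."""
    for legit in KNOWN_GENAI_DOMAINS:
        ratio = SequenceMatcher(None, domain.lower(), legit).ratio()
        if domain.lower() != legit and ratio >= threshold:
            return True
    return False

print(looks_like_genai_spoof("chat-openai.com"))  # True: near-match spoof
print(looks_like_genai_spoof("example.com"))      # False: unrelated domain
```

In practice, a string-similarity check like this would be one signal among several, alongside factors such as domain registration age and hosting reputation, rather than a standalone verdict.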
To counter these threats, the U.S. Department of Defense and intelligence communities are spearheading a push for highly secure AI facilities, improved regulatory frameworks, and advanced AI-specific defense measures.
Working with Microsoft threat researchers, OpenAI terminated accounts belonging to five state-affiliated threat groups: Russia-linked Forest Blizzard, North Korea-linked Emerald Sleet, Iran-linked Crimson Sandstorm, and the China-linked Charcoal Typhoon and Salmon Typhoon.
Brandon Pugh, director of cybersecurity and emerging threats at R Street Institute, emphasized that cyber defenders must harness AI's benefits in cybersecurity and keep innovating to stay ahead of adversaries. Avivah Litan, VP distinguished analyst at Gartner, countered that GenAI gives attackers a significant advantage, allowing them to scale and spread attacks more quickly and to achieve deeper infiltration of targeted systems.
Microsoft has not yet observed any especially novel techniques or significant attacks involving large language models, but it is tracking the activity and will issue alerts for any misuse of the technology. The threat activity disclosed by OpenAI and Microsoft appears to confirm widespread concerns about the potential abuse of generative AI.