Nation-state threat actors are making use of generative AI tools, including large language models (LLMs) like ChatGPT, in their cyber operations, new research by Microsoft and OpenAI has confirmed.
Threat groups from Russia, China, North Korea and Iran are leveraging generative AI to support campaigns rather than using these tools to develop novel attack or abuse techniques.
Attackers are “probing” AI’s current capabilities and security controls, with Microsoft and OpenAI stating they will continue to monitor how threat actors’ use of these tools evolves.
The primary uses of OpenAI services by threat actors include:
- Querying open-source information
- Translation
- Finding coding errors
- Running basic coding tasks
LLMs also offer assistance for common tasks performed during cyber campaigns. These include reconnaissance, such as learning about potential victims’ industries, locations, and relationships.
The language support provided by LLMs “is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships,” Microsoft noted.
Five Nation-State Threat Actors Weaponizing AI
The Microsoft study highlighted the use of generative AI by five nation-state threat actors, whose activity represents the tactics, techniques and procedures (TTPs) the cybersecurity industry needs to track more closely:
Forest Blizzard – Russia
The Russian military intelligence-linked actor targets government and critical infrastructure organizations in the US, Europe and the Middle East. It has been “extremely active” in conducting operations supporting Russia’s invasion of Ukraine.
The group, also tracked as APT28, has used LLMs for reconnaissance purposes to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.
In addition, Forest Blizzard has engaged LLMs to assist in basic scripting tasks including file manipulation, data selection, regular expressions, and multiprocessing.
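For a concrete sense of what such "basic scripting tasks" look like, below is a minimal, hypothetical Python sketch combining the elements Microsoft lists: file manipulation, data selection with a regular expression, and multiprocessing. The file glob and the pattern are illustrative assumptions, not artifacts recovered from Forest Blizzard activity.

```python
# Hypothetical illustration only: the filenames and regular expression are
# assumptions, not indicators from the Microsoft/OpenAI report.
import re
from multiprocessing import Pool
from pathlib import Path

# Data selection: pull IPv4-like strings out of free text.
IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_ips(path: Path) -> list[str]:
    # File manipulation: read one log file, tolerating odd encodings.
    text = path.read_text(encoding="utf-8", errors="ignore")
    return IP_PATTERN.findall(text)

if __name__ == "__main__":
    files = sorted(Path(".").glob("*.log"))
    # Multiprocessing: fan the per-file work out across CPU cores.
    with Pool() as pool:
        for path, hits in zip(files, pool.map(extract_ips, files)):
            print(path, sorted(set(hits)))
```

The point of the example is how routine this work is: nothing here is novel tradecraft, which is consistent with the report's finding that AI is being used for productivity rather than new attack techniques.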
Emerald Sleet – North Korea
This group is known to conduct social engineering campaigns to gather intelligence from prominent individuals with expertise on North Korea.
Emerald Sleet’s primary use of LLMs has been to support these campaigns and to research the think tanks and experts on North Korea it may look to impersonate.
The threat actor has used LLMs for basic scripting tasks and has also engaged these tools to better understand publicly reported vulnerabilities.
Crimson Sandstorm – Iran
This threat actor, which is linked to the Islamic Revolutionary Guard Corps (IRGC), has been observed targeting critical sectors to deliver malware.
Its main uses of LLMs have been to improve the quality of phishing emails, to learn how to develop code that evades detection, and to enhance scripting techniques, such as generating code snippets that appear intended to support app and web development.
Charcoal Typhoon – China
This threat actor predominantly focuses on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, particularly those institutions and individuals who oppose Chinese government policies.
Charcoal Typhoon’s interactions with LLMs have revolved around how these tools can augment its technical operations. This includes supporting tooling development, scripting and generating content to socially engineer targets.
The group has also used LLMs for reconnaissance, such as researching specific technologies, platforms and vulnerabilities.
Salmon Typhoon – China
This Chinese-affiliated threat actor has previously been observed targeting US defense contractors, government agencies, and entities within the cryptographic technology sector, often deploying malware to maintain remote access to compromised systems.
Its exploratory use of LLMs has focused on sourcing information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs.
The group has also used these tools to assist in identifying and resolving coding errors, and to translate computing terms and technical papers.
Microsoft and OpenAI said that all accounts and assets associated with these groups have been disabled.
How to Combat AI-Powered Cyber Operations
The researchers noted that the current use of AI aims to enhance existing attacks that rely on social engineering and finding unsecured devices and accounts.
Therefore, cyber hygiene best practices such as multifactor authentication (MFA) and zero trust architecture remain key to combatting nation-state actors.
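MFA in production is normally enforced by an identity provider rather than written by hand, but for a concrete sense of the mechanism behind most authenticator apps, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238. The secret shown is a well-known placeholder test value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 6238: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept codes for the current 30s step, plus/minus `window` steps of clock drift."""
    step = int(time.time()) // 30
    return any(
        hmac.compare_digest(totp(secret_b32, step + drift), submitted)
        for drift in range(-window, window + 1)
    )

if __name__ == "__main__":
    SECRET = "JBSWY3DPEHPK3PXP"  # placeholder test secret, not a real credential
    print(verify(SECRET, totp(SECRET, int(time.time()) // 30)))  # True
```

Even a second factor this simple blunts the credential-phishing and password-spraying activity the report describes, which is why the researchers point back to hygiene basics rather than AI-specific defenses.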
OpenAI added that it will take lessons from how these threat actors are abusing its services to improve its approach to safety.
“Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards,” the firm wrote.
Commenting on the research, Gerasim Hovhannisyan, CEO and co-founder of EasyDMARC, said attackers’ use of AI to scale attacks gives security teams an opportunity to push for investment in advanced security measures.
"By harnessing new technology in the same way hackers do, companies can develop advanced systems capable of combatting threats before they can cause damage.
“It's no longer sufficient to have static defense solutions – cybersecurity must be proactive and continuously adapt, complying with regulations and staying up-to-date with which evolving security solutions will prove most effective in combatting modern threats," stated Hovhannisyan.
Alastair Paterson, CEO of Harmonic Security, believes the findings should serve as a warning to organizations to prevent employees uploading sensitive data to LLMs like ChatGPT.
“It’s great that they were identified and disabled in this case, but it will be a case of whack-a-mole as it often is in security,” he commented.
“Of course, this issue extends beyond OpenAI and into all the other public GenAI models that train on the data put into them, accessible to employees and attackers alike. As ever, we need to be cautious about where our corporate data goes, and in this case, how it may be used against us,” added Paterson.