Malicious AI use will “almost certainly” drive an increase in the volume and impact of cyber-attacks over the next two years, especially ransomware, the UK’s National Cyber Security Centre (NCSC) has warned.
The agency, part of GCHQ, revealed the findings in a threat assessment published today: "The near-term impact of AI on the cyber threat."
The analysis was compiled from classified intelligence, industry knowledge, academic material and open source intelligence, and assigns probabilities to specific events or developments occurring.
AI Already Used by Cybercriminals
The NCSC claimed AI is already being used by threat actors, with generative AI (GenAI)-as-a-service offerings being developed on the cybercrime underground.
This is a concern because it could empower less well-resourced threat actors to launch effective attacks.
Currently, only state actors, commercial spyware companies and established criminal groups have enough exploit training data and resources to launch sophisticated AI-powered attacks, the NCSC claimed.
However, in the meantime, publicly available AI models are driving lower sophistication attacks such as spear-phishing, the report noted.
Over the next two years, such capabilities will continue to lower the barrier to entry for novice cybercriminals, hacktivists and hackers-for-hire – enabling initial access, reconnaissance and improved targeting of victims, it said.
This will increase cyber-risk around ransomware, which has in the past been described by the NCSC as the UK’s “most immediate threat.”
Training Data Is Key
In the near term, AI use for developing malware/exploits, researching vulnerabilities and achieving lateral movement will be restricted to more capable threat actors such as nation states, due to a lack of training data, the NCSC said.
However, this will not be the case forever.
“To 2025, training AI on quality data will remain crucial for its effective use in cyber operations. The scaling barriers for automated reconnaissance of targets, social engineering and malware are all primarily related to data,” the report explained.
“But to 2025 and beyond, as successful exfiltrations occur, the data feeding AI will almost certainly improve, enabling faster, more precise cyber operations.”
This in turn will have a major impact on network defenders’ cyber-resilience efforts, as bugs are exploited more rapidly after patches are released and it becomes harder to tell real from fraudulent emails, the NCSC warned.
The one bright spot is that AI will also help cyber-defense, the report claimed.
“The emergent use of AI in cyber-attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” argued NCSC CEO, Lindy Cameron.
“As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organizations and individuals to follow our ransomware and cybersecurity hygiene advice to strengthen their defences and boost their resilience to cyber-attacks.”