The development of artificial intelligence (AI) continues to dominate the conversation around technology.
While use cases are emerging where generative AI can deliver real value to businesses, the technology is a double-edged sword, with cybersecurity concerns rising alongside adoption.
Infosecurity has been covering the impact generative AI has had on cybersecurity, from how the models themselves can be compromised and how attackers are putting AI to use, to how businesses can implement AI into their workflows safely and securely.
Here are our top 10 AI cybersecurity news stories in 2024.
NSA Launches Guidance for Secure AI Deployment
The US National Security Agency, in collaboration with six government agencies from the US and other Five Eyes countries, published new guidance on how to deploy AI systems securely. The guidance sets out best practices for the three main stages of AI deployment.
White House Issues AI National Security Memo
In October, the White House issued a National Security Memorandum (NSM) on AI, setting out key actions for the federal government to advance the safe, secure and trustworthy development of the technology in the interest of US national security. The NSM included steps to track and counter adversary development of AI.
Understanding NullBulge, the New AI-Fighting 'Hacktivist' Group
A new threat actor named NullBulge emerged in spring 2024, claiming to target AI-centric games and applications. It was this little-known group that claimed to have stolen and leaked over a terabyte of data from Disney’s internal Slack channels in July. NullBulge said its motivation for the attack was to protect artists around the world against AI and claimed to be uninterested in profit, though some threat analysts said they had observed malicious activity suggesting otherwise.
UK Signs Council of Europe AI Convention
The UK signed the Council of Europe AI Convention, the first legally binding international agreement on AI, on September 5, 2024, after the convention was officially adopted by the 46 member states of the Council of Europe in May. The text outlines a joint effort to oversee AI development and safeguard the public from potential harm caused by the use and misuse of AI models and AI-powered tools.
Microsoft, OpenAI Confirm Nation-States are Weaponizing Generative AI in Cyber-Attacks
Research by Microsoft and OpenAI confirmed that large language models (LLMs) like ChatGPT are being used by nation-state threat actors. The research noted that threat groups from Russia, China, North Korea and Iran are leveraging generative AI to support social engineering campaigns and to find unsecured devices and accounts. However, they are not yet using these tools to develop novel attack or abuse techniques.
Man Charged in AI-Generated Music Fraud on Spotify and Apple Music
In September, a North Carolina man was charged with stealing royalties by using AI to generate fake songs and fake listeners on streaming platforms, in the first criminal case involving AI-generated music. The man was accused of producing hundreds of thousands of songs with AI, publishing them on several streaming platforms and fraudulently streaming them using automated accounts commonly known as bots.
AI Threat to Escalate in 2025, Google Cloud Warns
While AI-enabled threats may not have had a catastrophic impact in 2024, Google Cloud researchers believe they will worsen in 2025. Cybercriminals will continue to use AI and LLMs to develop and scale sophisticated social engineering schemes, including phishing campaigns. The researchers also noted that cyber espionage actors and cybercriminals will continue to leverage deepfakes for identity theft and fraud.
Cybersecurity Teams Largely Ignored in AI Policy Development
Just 35% of the 1,800 cybersecurity professionals surveyed by ISACA said they are involved in developing the policies that govern the use of AI in their enterprises. Governments across the globe worked towards AI governance and regulation in 2024; however, cybersecurity is not necessarily being embedded into those governance systems.
AI Chatbots Highly Vulnerable to Jailbreaks, UK Researchers Find
Researchers from the UK AI Safety Institute (AISI) found that four of the most widely used generative AI chatbots are highly vulnerable to basic jailbreak attempts. In this context, jailbreaking means crafting prompts that bypass a model's built-in safety guardrails, eliciting responses it is designed to refuse. The models tested returned harmful responses in between 90% and 100% of cases when the researchers repeated the same attack patterns five times in a row.
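As a rough illustration of how figures like these can be derived, the Python sketch below repeats each attack pattern five times and counts the share of harmful responses, mirroring the repeated-attempt methodology AISI describes. The `query_model` and `is_harmful` callables are hypothetical placeholders standing in for a real model API and a response grader; they are not AISI's actual tooling.

```python
from typing import Callable

def attack_success_rate(
    patterns: list[str],
    query_model: Callable[[str], str],   # hypothetical: sends a prompt, returns the model's reply
    is_harmful: Callable[[str], bool],   # hypothetical: grades a reply as harmful or not
    attempts: int = 5,                   # AISI repeated each attack pattern five times
) -> dict[str, float]:
    """Return the fraction of harmful responses observed per attack pattern."""
    results: dict[str, float] = {}
    for pattern in patterns:
        harmful = sum(is_harmful(query_model(pattern)) for _ in range(attempts))
        results[pattern] = harmful / attempts
    return results

# Illustrative stand-ins only: a real evaluation would call a model API
# and use a vetted grader rather than these trivial lambdas.
demo = attack_success_rate(
    patterns=["pattern-a", "pattern-b"],
    query_model=lambda prompt: "refused",
    is_harmful=lambda reply: reply != "refused",
)
print(demo)  # {'pattern-a': 0.0, 'pattern-b': 0.0}
```

A per-pattern rate of 0.9 to 1.0 under this kind of tally would correspond to the 90-100% figures reported above.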
AI Seoul Summit: 16 AI Companies Sign Frontier AI Safety Commitments
At the virtual AI Seoul Summit, the second event on AI safety, co-hosted by the UK and South Korea on May 21-22, 16 global AI companies signed new commitments to develop AI models safely. Signatories to the Frontier AI Safety Commitments included Amazon, Anthropic, Google, IBM, Microsoft and OpenAI.