AI has continued to be the talk of the town in the cybersecurity industry and community over the past year.
There wasn’t a single cybersecurity or tech event that didn’t have its AI-focused talks and panels. Of course, many of these debates centered on generative AI, the latest innovation, but the global adoption of large language models (LLMs) and other GenAI technologies has also shone a light on more established AI systems that have powered security products for years.
However, the rapid advancement and adoption of AI and GenAI technologies also brought significant drawbacks. These include the potential for misuse, such as generating deepfakes, automated cyber-attacks and the spread of misinformation.
Additionally, these technologies pose new security and privacy risks, as they can be exploited to bypass traditional security measures and access sensitive data. Consequently, 2024 saw a surge in AI regulation and governance initiatives worldwide, aiming to establish ethical guidelines, enhance transparency and ensure the responsible use of AI in cybersecurity and beyond.
Infosecurity has selected the top cybersecurity AI trends of 2024 that will inform the future of AI and security for 2025 and beyond.
Top Five Cyber AI Trends of 2024
Security Providers Launch AI-Powered Security and Security-Driven LLMs
In 2024, the cybersecurity landscape witnessed a significant shift as security providers increasingly integrated AI into their products and workflows. This trend is driven by the need to enhance threat detection, automate responses, and improve overall security posture.
Several companies have made headlines for their innovative use of AI in cybersecurity.
For example, CrowdStrike, which claims to have long utilized AI in its Falcon platform for endpoint protection, has recently introduced Charlotte AI, a generative AI tool designed to boost productivity and effectiveness for security analysts.
Another security vendor, Recorded Future, launched Recorded Future AI in October 2024. This platform includes a generative AI-based assistant that helps security teams access critical threat intelligence via a natural language interface, which the company says enhances their ability to respond to sophisticated threats.
In December, Trend Micro launched its AI Brain to automate threat defenses and predict attacks.
Furthermore, AI is not just enhancing products but also transforming workflows within security companies. Fortinet, Netskope and GitLab are among the firms that have publicly announced the integration of AI into their threat intelligence activities, security operations and DevSecOps processes.
According to Mike Woodard, VP of Product Management for Application Security at Digital.ai, threat monitoring is the security subdomain where generative AI will establish itself as a key tool.
“AI-aided threat monitoring will become the norm. Security operations center managers have the unenviable job of searching mountains of data for actionable information. AI-aided threat monitoring, such as pattern recognition, anomaly detection, and general classification of data, will become necessary for security teams to surface the most urgent threats so that proper mitigation steps can be taken in a timely manner,” he said.
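To make the idea concrete, below is a minimal, hypothetical sketch of the kind of anomaly detection Woodard describes: scoring security events so the most unusual ones are surfaced first. It uses scikit-learn's IsolationForest over made-up per-host features (requests per minute, data sent, failed logins); the features, thresholds and sample data are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative sketch of AI-aided threat monitoring via anomaly detection.
# Features, sample data and the contamination rate are hypothetical; real
# pipelines use far richer telemetry and tuned models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-host event features: [requests/min, MB sent out, failed logins]
normal = rng.normal(loc=[50, 5, 1], scale=[10, 2, 1], size=(500, 3))
suspicious = np.array([
    [400, 250, 30],   # possible exfiltration plus brute-force attempts
    [5, 300, 0],      # quiet host suddenly sending large volumes of data
])
events = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector; 'contamination' is a guess at the outlier rate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

scores = model.decision_function(events)   # lower score = more anomalous
flagged = np.argsort(scores)[:5]           # surface the most urgent events first

for idx in flagged:
    print(f"event {idx}: features={events[idx].round(1)}, anomaly_score={scores[idx]:.3f}")
```

In practice, scores like these would feed a SOC triage queue alongside context from other telemetry, rather than being acted on in isolation.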
The rise of LLMs has also impacted the cybersecurity market. Researchers from several teams, including Google Project Zero, Google DeepMind and Google's Open Source Security Team, found their first real-world vulnerabilities in 2024 using LLM-powered vulnerability research techniques.
Finally, in December, Hudson Rock was one of the many security vendors that launched a security-focused LLM tool. This AI chatbot, called CavalierGPT, is designed to provide comprehensive intelligence on infostealer malware.
Organizations and Security Teams Adopt AI, But Concerns Remain
Security vendors were not the only ones to have integrated AI and GenAI into their security workflows in 2024. Security teams across all industries have also started doing so.
According to a CrowdStrike December survey, 64% of cybersecurity and IT professionals are either researching GenAI tools or have already purchased one. Additionally, 70% of respondents said they intend to make a GenAI purchase within the next 12 months.
Cybersecurity specialists worldwide are enthusiastic about GenAI, with 46% of respondents to an Ivanti survey viewing the technology as a net positive for cybersecurity, while 44% were neutral. Just 6% said the technology was a net negative for cybersecurity.
However, behind this evident interest, the CrowdStrike survey also revealed that security leaders are still early in their GenAI integration journey, with only 6% having implemented such a tool and just 18% actively testing one.
When it comes to choosing the right GenAI tool, just over three-quarters of security leaders (76%) favored tools purpose-built for cybersecurity over domain-agnostic tools.
In 2024, security pros still expressed many concerns about using LLMs in the wider workplace, including potential sensitive data exposure and AI hallucinations.
Moreover, aside from the known unknowns, security and IT experts must also contend with the unknown unknowns: over one-third of companies admitted in a Strategy Insights survey that they are still grappling with shadow AI – unauthorized AI tools and applications used without the knowledge or approval of IT and security teams.
This phenomenon poses significant risks, including data breaches and compliance violations, as these shadow AI tools often lack proper security measures and oversight.
Looking ahead, 2025 will likely see many organizations introduce thorough AI governance guidelines. These guidelines will aim to ensure the responsible use of AI, mitigate risks associated with shadow AI, and address data privacy and security concerns.
Johan Oosthuizen, Digital Marketing Manager at Strategy Insights, said organizations must build an effective monitoring framework that tracks how, where and why employees use AI.
“A balanced approach, incorporating regular audits of employee devices and networks, can help organizations keep track of non-approved AI tools while respecting user privacy,” he added.
“Leaders at the roundtable recommended deploying network monitoring systems and establishing company-wide policies on acceptable AI tool usage to prevent unauthorized data sharing and potential security breaches.”
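As a rough illustration of the monitoring approach Oosthuizen describes, the sketch below scans proxy log entries for requests to well-known GenAI endpoints and reports who is using unapproved tools. The domain list, log format and approved-tool set are all assumptions for illustration; a real deployment would plug into the organization's actual proxy or DNS telemetry and its sanctioned-tool register.

```python
# Hypothetical sketch: flagging shadow AI usage from proxy logs.
# The domain list, log format and approved set are illustrative assumptions.
import csv
from collections import Counter
from io import StringIO

GENAI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}
APPROVED = {"OpenAI API"}  # tools sanctioned by IT/security in this example

# Stand-in for real proxy telemetry: user, destination host, bytes uploaded
SAMPLE_LOG = """user,host,bytes_out
alice,chat.openai.com,18234
bob,claude.ai,912
alice,intranet.example.com,120
carol,gemini.google.com,45021
"""

def find_shadow_ai(log_text: str):
    """Return (user, tool, bytes_out) for requests to unapproved GenAI services."""
    hits = []
    for row in csv.DictReader(StringIO(log_text)):
        tool = GENAI_DOMAINS.get(row["host"])
        if tool and tool not in APPROVED:
            hits.append((row["user"], tool, int(row["bytes_out"])))
    return hits

if __name__ == "__main__":
    findings = find_shadow_ai(SAMPLE_LOG)
    for user, tool, sent in findings:
        print(f"{user} used unapproved tool {tool} ({sent} bytes uploaded)")
    print("Unapproved tool usage counts:", dict(Counter(t for _, t, _ in findings)))
```

Output like this would typically feed a governance review rather than automatic blocking, in line with the balanced, privacy-respecting approach recommended above.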
AI Boosts Fraud, But Not Malware Development (Yet)
In 2024, AI’s impact on cyber threats has been notable, but not as significant as some had predicted. While AI has significantly boosted fraud activities, its influence on malware development remains limited, at least for now.
The FBI has reported a surge in AI-driven financial fraud, with cybercriminals leveraging generative AI to create convincing phishing emails and deepfake audio and video to deceive victims. These sophisticated techniques have made it easier for fraudsters to impersonate trusted individuals and organizations, leading to substantial financial losses.
Europol highlighted the growing threat of AI in cybercrime, describing it as a “treasure trove” for criminals. AI tools are being used to automate and enhance various aspects of cybercrime, from social engineering to the creation of malicious software. However, the focus has primarily been on fraud rather than the development of new malware strains.
AI has also been instrumental in facilitating cryptocurrency scams. An AI-powered library has been used to deliver fraudulent cryptocurrency schemes, tricking investors into parting with their digital assets. These scams often involve complex AI-generated content that mimics legitimate investment opportunities, making it difficult for victims to discern the fraud.
Looking ahead, experts, including those at Google Cloud, predict that AI’s role in cyber threats will escalate in 2025. While AI has not yet significantly advanced malware development, the potential for AI to be used in creating more sophisticated and adaptive malware is a looming concern.
Dan Lattimer, Area VP, Semperis, warned that in the future, overestimating the power and influence of AI might actually work in favor of threat actors.
“While we see cybercriminals increasingly trying to harness AI, many of those attacks will still be basic and clunky. And sadly, with everyone talking about AI, there is a risk that some of its really impactful applications will get lost in the general noise,” he said.
AI Regulation and Governance Efforts Intensify
In 2024, the global push for AI regulation and governance saw significant advancements. One of the most notable developments was the introduction of the AI Act in Europe, which came into force on August 1, 2024. This landmark legislation aims to foster responsible AI development and deployment across the European Union, addressing potential risks to citizens' health, safety, and fundamental rights.
The AI Act introduces a risk-based approach, categorizing AI systems into four levels – minimal risk, specific transparency risk, high risk and unacceptable risk – each with corresponding requirements and obligations.
In a collaborative effort to enhance AI safety, the US and UK established AI Safety Institutes. These institutes are designed to work seamlessly together, focusing on research, safety evaluations, and developing robust AI safety guidelines. This partnership underscores the commitment of both nations to align their scientific approaches and accelerate the development of comprehensive AI safety measures.
The year also saw a series of AI Safety Summits that aimed to foster international cooperation on AI governance. The inaugural AI Safety Summit, held at Bletchley Park in the UK in November 2023, set the stage, followed by the AI Seoul Summit in May 2024, co-hosted by South Korea and the UK. These summits brought together global leaders, industry experts and academics to discuss AI safety, innovation and inclusivity.
In response to the evolving threat landscape, OWASP updated its Top 10 list of LLM security risks in 2024. The update reflects the latest vulnerabilities and provides developers and security professionals with essential guidelines to mitigate these risks.
Additionally, the UK government announced an £8.5m ($10.6m) investment to tackle AI safety challenges, further demonstrating its commitment to ensuring the safe development and deployment of AI technologies. The UK also signed the Council of Europe's AI Treaty, reinforcing its dedication to upholding human rights and ethical standards in AI development.
These collective efforts highlight the global recognition of the need for robust AI governance and the proactive steps to address the associated risks. As AI continues to evolve, these regulatory and governance initiatives will play a crucial role in ensuring its safe and responsible use.
For Aleksandr Yampolskiy, Co-Founder and CEO of SecurityScorecard, US state-level AI legislation will ignite a new wave of AI regulation and test American AI leadership.
“California and Texas are poised to lead a transformative era of AI regulation, setting the pace for other states with legislation addressing urgent challenges like ransomware, LLM safety and oversight, and ethical AI use,” he said. “However, state-specific rules may create friction with federal policies and complicate compliance for businesses operating across state lines, increasing costs, added compliance, and operational hurdles to navigate a state network of patchwork legislation.”
He believes that as the patchwork of state laws grows, pressure on the federal government to act will intensify. “A unified approach will be critical to minimize economic impacts and ensure innovation is not stifled. An outstanding question is whether the new Republican-controlled Congress can prioritize with the Trump Administration on rules of the road in a manner that can keep the US ahead of its AI race with the Government of China.”
Deepfakes and AI-Fueled Misinformation Proliferate
In 2024, the proliferation of deepfakes and AI-fueled misinformation emerged as a significant concern. This trend has been particularly alarming given the potential for deepfakes to rapidly undermine trust in institutions and spread false information.
The Scottish Parliament was identified as a potential target for deepfake attacks, with researchers warning that live-streamed parliamentary proceedings could be manipulated to create misleading videos of Members of the Scottish Parliament (MSPs). Such attacks could erode public trust in democratic processes and institutions.
Russian state-sponsored media organization RT has been using AI-powered software to create authentic-looking social media personas en masse. This software, known as Meliorator, has been employed to disseminate disinformation on various topics, including the Russia-Ukraine conflict. The use of AI in these campaigns has made it easier to produce and spread false narratives, complicating efforts to combat misinformation.
In response to these threats, the UK government has launched several initiatives to enhance AI safety and combat misinformation. This includes the establishment of the AI Safety Institute and the overhaul of election laws to address the challenges posed by AI-generated content. These measures aim to ensure that AI technologies are used responsibly and that the integrity of democratic processes is protected.
To counter the spread of deepfakes, companies like Meta and OpenAI have introduced initiatives to watermark AI-generated content – primarily through the Coalition for Content Provenance and Authenticity (C2PA). These efforts are part of a broader industry push to increase transparency and prevent the misuse of AI technologies.
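For readers unfamiliar with how provenance labeling works in principle, here is a heavily simplified, hypothetical sketch – not the C2PA specification or any vendor's implementation: a generator attaches a signed manifest describing how the content was produced, and a verifier later checks that the manifest still matches the content and the signature. Real schemes use certificate-based asymmetric signatures and embed the manifest in the media file itself; the HMAC key and manifest fields below are stand-ins.

```python
# Conceptual sketch of content provenance labeling, loosely inspired by the C2PA idea.
# This is NOT the C2PA spec; manifest fields, signing scheme and key are illustrative.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-held-by-the-generator"  # stand-in for a real signing certificate

def attach_manifest(content: bytes, generator: str) -> dict:
    """Build a signed provenance manifest for a piece of AI-generated content."""
    manifest = {
        "generator": generator,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the content and was signed with the known key."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after it was labeled
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image_bytes = b"\x89PNG...synthetic image data..."
manifest = attach_manifest(image_bytes, generator="ExampleImageModel")
print("verifies:", verify_manifest(image_bytes, manifest))          # True
print("tampered:", verify_manifest(image_bytes + b"x", manifest))   # False
```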
Although deepfakes and AI-fueled misinformation were not deployed at broad scale in 2024, the year has likely served as a testbed for adversaries to experiment with new generative AI and deepfake technologies.
According to Ollie Neuberger, Accenture Managing Director, deepfakes should not be underestimated.
“Deepfakes, particularly those crafted by generative AI, are a very real threat and organizations need to be far more prepared for this new era of social engineering,” he said. “Cybercriminals can create sophisticated scams that can deceive even the most discerning of employees. The threat is rising – with Accenture’s Cyber Threat Intelligence team reporting a 223% surge in deepfake-related tool trading on dark web forums.”
Conclusion
While the advancements in AI and GenAI technologies have revolutionized the cybersecurity landscape, they have also introduced significant challenges that necessitate robust regulatory frameworks.
As we move forward, it is crucial to balance innovation with ethical considerations to ensure the responsible and secure deployment of AI in all sectors.