2025 to be a Year of Reckoning for AI in Cybersecurity

With artificial intelligence (AI) gaining prevalence across a variety of industries in recent years, the technology isn’t going anywhere anytime soon. One key area in which it is increasingly being leveraged, for better or for worse, is cybersecurity.

According to research by the Institute of Electrical and Electronics Engineers (IEEE), despite growing concerns about threat actors using AI to bypass conventional security protocols, 44% of UK businesses remain convinced that AI applications will be advantageous in the year ahead, particularly for real-time identification of cybersecurity vulnerabilities and attack prevention.

In fact, 41% of respondents expect their organizations to begin integrating AI-driven cybersecurity into their operations, using the technology to monitor, identify and flag security threats in real time and prevent data leaks or financial losses.

A strong majority of enterprise leaders (91%) agree that in 2025, there will be a generative AI ‘reckoning’ as people gain a greater understanding of, and expectations for, what the technology can and should do.

The Growing Impact of AI on Cybersecurity

Clearly, the dynamic between hackers and cybersecurity teams is shifting, with both sides employing AI tools to outmanoeuvre one another. The cybersecurity landscape has gradually evolved into a high-stakes arms race.

For defenders, AI has become a valuable asset. Enterprises have implemented GenAI and other automation tools to analyze vast amounts of data in real time and identify anomalies, enabling teams to better mitigate potential threats.

At the same time, for threat actors, AI can streamline phishing attacks, automate malware creation and even help scan networks for vulnerabilities. In fact, GenAI can now generate far more sophisticated content and capture a wider range of writing styles, meaning email scams are more plausible and harder to detect.

By employing GenAI in their phishing attacks, threat actors can target far more people in less time and achieve higher success rates.

Deepfakes, Malware and Other AI Threats

Given the pace at which AI has developed over the past year, attackers’ methods have also evolved, making it harder for traditional security measures to detect and mitigate threats. One example is polymorphic malware, which continually mutates its own code so that signature-based scanners no longer recognize it, allowing it to evade detection.

Adversarial AI is also an emerging attack vector, whereby threat actors aim to ‘trick’ AI systems into making incorrect decisions by introducing small, carefully crafted changes to the model itself or to the data it processes.
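
To make the idea concrete, below is a minimal, illustrative Python sketch of the fast gradient sign method (FGSM), a standard adversarial ML technique not named in this article. It shows how tiny, targeted input changes can push down a toy classifier’s ‘malicious’ score; the model, weights and perturbation budget are all hypothetical.

```python
import numpy as np

# Toy "AI system": a logistic-regression classifier with fixed weights.
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1           # hypothetical model parameters
x = rng.normal(size=8)                   # a sample input

def predict(sample):
    """Probability the sample is classified as 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(w @ sample + b)))

# FGSM-style evasion: nudge each feature a small step in the direction
# that lowers the 'malicious' score. For logistic regression, the
# gradient of the score with respect to the input is proportional to w.
epsilon = 0.25                           # perturbation budget
x_adv = x - epsilon * np.sign(w)         # small, targeted changes

print(f"original score:  {predict(x):.3f}")
print(f"perturbed score: {predict(x_adv):.3f}")  # typically much lower
```

The same principle scales to deep models, which is one reason defenders increasingly test their own detection models against such perturbations.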

Deepfake scams are also far more effective now. With the advent of AI, scammers can manipulate audio, video or image content and deploy it at scale, with the intent of extracting sensitive data, such as login credentials or financial information, or of defeating identity authentication processes.

Developing Personalized Security Measures

In-house teams are now using AI to personalize security measures to individual user behaviors within their organization, improving the accuracy of threat detection and reducing false positives.
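
As a rough illustration of per-user baselining, the sketch below flags logins whose hour deviates sharply from a user’s historical pattern. The users, history and z-score threshold are all hypothetical; real deployments would draw on far richer behavioral features than login time alone.

```python
import numpy as np

# Hypothetical login-hour history per user (24h clock); in practice this
# would come from authentication logs.
history = {
    "alice": [9, 9, 10, 8, 9, 10, 9],
    "bob":   [22, 23, 22, 21, 23, 22, 23],
}

def is_anomalous(user, login_hour, threshold=3.0):
    """Flag a login whose hour deviates strongly from the user's baseline."""
    hours = np.array(history[user], dtype=float)
    mu, sigma = hours.mean(), hours.std() + 1e-6   # avoid divide-by-zero
    z = abs(login_hour - mu) / sigma
    return z > threshold

print(is_anomalous("alice", 9))    # False: matches her usual pattern
print(is_anomalous("alice", 3))    # True: a 3am login is unusual for alice
print(is_anomalous("bob", 22))     # False: late logins are normal for bob
```

Because each user has their own baseline, bob’s 10pm login raises no alarm while the same event for alice would, which is exactly how per-user tailoring cuts false positives.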

Indeed, using AI’s predictive analytics, cybersecurity personnel can forecast emerging threats based on pattern recognition. Machine learning (ML) models can also be applied to identify unusual network activity or anomalies by analyzing previous attack methods and data. In doing so, these models help build more adaptive security mechanisms and countermeasures, enabling quicker response times.
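
For instance, an unsupervised model can be trained on normal traffic and then flag connections that deviate from it. The sketch below assumes scikit-learn’s IsolationForest; the connection features and values are illustrative only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Illustrative features per connection: [bytes_sent, duration_s, dest_port].
# Normal traffic here clusters around typical HTTPS browsing values.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # bytes sent
    rng.normal(2.0, 0.5, 500),       # duration in seconds
    np.full(500, 443),               # HTTPS port
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new connections: a huge transfer to an odd port stands out.
new_conns = np.array([
    [5_200, 2.1, 443],               # looks like normal traffic
    [900_000, 45.0, 4444],           # possible exfiltration attempt
])
print(model.predict(new_conns))      # [ 1 -1 ]: -1 marks an anomaly
```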

Over time, cybersecurity teams could apply natural language processing (NLP) to analyze threat intelligence feeds, security logs and behavioral data, helping them better understand emerging attack patterns and stay ahead of attackers, or even stop them in their tracks before a threat materializes.
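
A simple starting point, sketched below, is to treat feed entries and log lines as text and train a classifier to separate threat-related items from routine noise. The corpus and labels are invented for illustration; production systems would use far larger datasets and more capable NLP models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus of threat-intel snippets and benign log lines.
texts = [
    "new ransomware strain encrypts SMB shares via stolen credentials",
    "phishing campaign spoofs invoice emails targeting finance staff",
    "credential stuffing wave hits VPN gateways overnight",
    "scheduled backup completed successfully with no errors",
    "user password changed via self-service portal",
    "nightly log rotation finished in 12 seconds",
]
labels = [1, 1, 1, 0, 0, 0]          # 1 = threat-related, 0 = routine

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

sample = ["spear-phishing emails impersonate the CEO to request transfers"]
print(clf.predict(sample))           # likely [1]: flagged as threat-related
```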

In the future, teams could in theory employ ‘deception’ techniques, similar to their adversaries, by using AI technology to mislead or trap threat actors. This would involve creating decoys that continuously evolve and adapt depending on how attackers engage with them.

These AI models would generate convincing but fake data, systems or network assets to lure cybercriminals into engaging with the traps rather than the organization’s actual infrastructure. The approach is still in its infancy, but over time these systems could operate more autonomously.
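
A static, hand-built version of such a decoy can be sketched in a few lines: a listener on an unused port that treats any connection as a high-fidelity alert. The port and banner below are illustrative; an AI-driven deception platform would generate, vary and retire many such decoys automatically.

```python
import socket
import datetime

# A minimal low-interaction decoy: listen on a port no legitimate service
# uses, log whoever connects, and present a fake service banner.
DECOY_PORT = 2222                     # illustrative; mimics an SSH service

def run_decoy():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen()
        print(f"decoy listening on port {DECOY_PORT}")
        while True:
            conn, (ip, port) = srv.accept()
            with conn:
                # Any connection here is suspicious by definition:
                # no real user or system should ever touch this port.
                print(f"{datetime.datetime.now().isoformat()} "
                      f"decoy touched by {ip}:{port} -- raising alert")
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # fake banner

if __name__ == "__main__":
    run_decoy()
```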

AI and Cybersecurity in 2025

It is hard to predict what the cybersecurity landscape will look like in 2025. AI has proven to be a valuable tool for cybersecurity teams and threat actors alike. Given how quickly the technology has evolved, organizations will have to adopt a more comprehensive and proactive cybersecurity policy if they want to stay ahead of attackers.

After all, cyber-attacks are a matter of ‘when,’ not ‘if,’ and AI will only multiply the opportunities available to threat actors.

At the moment, there is a risk that security teams will become over-reliant on AI, sidelining human judgment and leaving systems vulnerable to attack. There is still a need for a human ‘copilot,’ and roles must be clearly defined: AI can handle the data-intensive tasks, while humans manage decisions requiring critical judgment.

Automation will become more prevalent in security operations, but given how quickly AI has been employed by adversaries, there will need to be more sophisticated AI-driven countermeasures to combat them. It will just be a matter of finding the right balance for security teams.

There may indeed be a future where one AI ‘jousts’ with another and the victor is decided by how quickly each model can respond to an event in real time. For now, however, organizations should focus on establishing more comprehensive security measures and access controls, and continue to invest in employee training.

Some organizations have started incorporating attack simulations into their training, a practice known as ‘red teaming’. By replicating real-life scenarios, the hope is that employees will gradually become more comfortable when faced with a genuine threat and respond far more effectively.

Needless to say, 2024 has been an eventful year, rife with unprecedented attacks that exposed a lack of readiness among organizations. Next year presents an opportunity for organizations to establish stronger countermeasures and employ AI within their own arsenal.
