There is no doubt that the rapid democratization and accessibility of artificial intelligence (AI) assets are a threat. Most infosec professionals do not have the time to keep up with the lightning-fast developments in AI, so in this article, I aim to provide a taste of where the field stands right now (January 2024) – and how AI tools might be leveraged for attack and defense.
It was nearly five years ago that I set out to learn and document the inner workings of AI. One of the most essential lessons from that body of work was that the AI era demands perpetual learning. We are living in a time when today’s information will almost certainly be out of date by next month. Here are just some of the AI possibilities that have changed over the past 12 months.
Voice Cloning
The first low-hanging fruit from a cybercriminal’s perspective is that many biometric authentication methods are becoming extremely low-cost to defeat. Take a three-second audio clip of someone – perhaps from a video, a phone call, or a podcast – push it through a free or low-cost AI voice-cloning service and voilà: you can now make that person appear to say anything, in real time, in any emotion, style, or language.
Consider what that means. Any entity still reliant on voice authentication is effectively at high risk. It should now be relatively easy to argue that money or other items extorted via voice authentication cannot be reliably and solidly attributed to the account holder. It also means that any phone conversation you think you are having with someone you know could actually be with a nefarious actor.
This voice-cloning capability has been around for over a year now, and there have been many instances of people being called by fake versions of close relatives urgently demanding money. This was highlighted in a US Federal Trade Commission (FTC) warning issued in March 2023.
A year ago, this form of voice cloning required a longer voice sample and a long processing time. Within the past few months, the effort required for a voice attack has fallen to roughly that of putting together a decent phishing email.
Image Generation
The situation for facial recognition is now similar. AI image generators have been shown to create not just faces but also images of identity documents such as driving licenses. Within the past few weeks, these generators have also proved capable of producing thoroughly convincing but completely fake images of real people holding up their driving license.
Fingerprints are harder to fake, primarily because they are less readily shared or available online (at present). However, even for fingerprints, if the print itself can be imaged, a fake can be fabricated that will pass most basic sensors.
All of this means that security teams will likely need to rapidly strengthen authentication with additional layers of information that can more deeply verify access: geographic location, IP address, time of day, patterns of behavior, installed security certificates – and even personalized challenge/response questions. The sketch that follows illustrates what such layered, risk-based checks might look like.
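As a purely illustrative example, the following Python sketch scores a login attempt against several of these extra signals and falls back to a personalized challenge/response question when the risk looks too high. All names, weights, and thresholds here are hypothetical assumptions rather than a reference to any specific product or standard.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    # Hypothetical signals gathered alongside the primary (e.g. voice/face) factor
    country: str             # geolocated country of the request
    ip_reputation: float     # 0.0 (known bad) .. 1.0 (clean), from a threat feed
    hour_of_day: int         # local hour, 0-23
    device_cert_valid: bool  # client certificate previously enrolled for this user
    behavior_score: float    # 0.0 .. 1.0 similarity to the user's usual patterns

def risk_score(attempt: LoginAttempt, home_country: str = "GB") -> float:
    """Combine secondary signals into a single risk value (higher = riskier)."""
    risk = 0.0
    if attempt.country != home_country:
        risk += 0.3
    risk += (1.0 - attempt.ip_reputation) * 0.3
    if attempt.hour_of_day < 6 or attempt.hour_of_day > 22:  # unusual hours
        risk += 0.1
    if not attempt.device_cert_valid:
        risk += 0.2
    risk += (1.0 - attempt.behavior_score) * 0.1
    return risk

def authenticate(attempt: LoginAttempt, answered_challenge: bool = False) -> bool:
    """Allow low-risk logins; demand a challenge/response answer otherwise."""
    if risk_score(attempt) < 0.4:
        return True
    return answered_challenge  # step-up authentication for risky attempts

# Example: a request from an unfamiliar country, at 3am, on an unenrolled device
attempt = LoginAttempt("US", ip_reputation=0.9, hour_of_day=3,
                       device_cert_valid=False, behavior_score=0.5)
print(authenticate(attempt))                           # False -> challenge required
print(authenticate(attempt, answered_challenge=True))  # True after a correct answer
```

In practice, the weights and thresholds would be tuned against an organization’s own telemetry; the point is simply that these extra signals are cheap to collect and very hard for a cloned voice or generated image to reproduce.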
Creating AI Entities
On the flip side, there is one emerging capability that can potentially be leveraged by both security teams and cybercriminals – the ability to create and train your own standalone AI entity. Only a year ago, running an AI with the skills of a large language model (LLM) would have required a reasonably sized data center; now, anyone with a decent home computer and a suitably large graphics processing unit (GPU) can run and train their own AI.
Where such an endeavor until recently required a dedicated amount of effort, the website jan.ai now offers the ability to install and run LLMs on a PC in your own home. From a purely technical perspective, I can set up my own LLM and train it to do anything I want, such as making it into a virtual CISO or training it as a nefarious cybercriminal (although I am sure certain uses may violate terms and conditions – not to mention the laws in many territories). The snippet that follows gives a flavor of how simply such a local model can be queried.
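As a minimal sketch, assuming a locally hosted model exposed through an OpenAI-compatible API (local runners such as Jan can offer one; the port, endpoint, and model name below are assumptions to adapt to your own setup), a “virtual CISO” can be prompted in a handful of lines of Python:

```python
# pip install openai -- the OpenAI-compatible client is pointed at a LOCAL server,
# so no data leaves your machine. The endpoint, port, and model name are
# assumptions; adjust them to whatever your local LLM runner actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",  # hypothetical local API endpoint
    api_key="not-needed-for-local-use",
)

response = client.chat.completions.create(
    model="local-7b-instruct",  # placeholder name for the locally installed model
    messages=[
        {"role": "system",
         "content": "You are a virtual CISO. Give concise, risk-ranked advice."},
        {"role": "user",
         "content": "Our helpdesk still resets passwords after a voice check. Risks?"},
    ],
)

print(response.choices[0].message.content)
```

The same pattern, with a different system prompt and different training data, is what makes the criminal use case so accessible – which is precisely why defenders need to understand it.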
Embracing AI Opportunities in Cybersecurity
What is the upside for security teams? Is there one? In short – yes.
In 2024, as we wander deeper into the skills and possibilities of AI, knowledge is no longer hard to come by. The right investments in freely available AI tools and information can equip security teams with what they need to stay ahead of any AI-wielding cybercriminals.
You do not need to be an existing AI expert to navigate the tools that are emerging – you simply need the time and inclination to learn about them and leverage them. Whilst cybercriminals may be training their own LLMs, there is nothing (except time and money) to prevent security-passionate enterprises from freeing up people and resources to assemble their own AI expertise and tools.
Yet a recent ISACA survey shows that 54% of organizations do not provide AI training, even to teams directly impacted by AI.
In the past, it was the organizations with the most security control gaps that could be taken down. What will rapidly happen now is that rogue AI use will enable all kinds of formerly secure organizations to be taken down through the smallest chains of gaps. The solution might be to give your security team and staff the investment in time and education needed to build your own AI defense force.
There has never been a more pressing need to invest in freeing up and training security resources to understand AI and use it for countermeasures. This effort is crucial both to grasp the rapidly emerging risks and to assemble the most robust AI-driven defenses.
At this moment in time, most of those defensive tools are easy to access – but in this fast-moving era of AI, that situation may be different within a few months. Keeping apprised of these changes is not just a matter of staying informed – it is a critical strategy for maintaining security in an increasingly AI-dominated landscape.