You might have read reports from some cybersecurity “experts” that AI-powered cyber-attacks already exist and that they’re a threat we should be worried about. They paint a picture of futuristic Skynet-esque programs with the ability to intuitively understand their environment and adapt behaviors to outwit both automated cyber-defense measures and humans alike. Luckily, such reports are simply an attempt to market cybersecurity products through fearmongering.
Misconceptions around AI-powered malware are primarily fuelled by ignorance of what AI – or, more accurately, machine learning – is capable of. Such misconceptions rest on the assumption that current machine learning techniques can recreate human-level creativity and decision logic. This is the stuff of science fiction.
Let us contemplate, for a moment, how AI-powered malware might be created using current machine learning techniques. Of those techniques, reinforcement learning seems the obvious choice for creating a program (an agent) that can automate steps in a cyber-attack. Reinforcement learning can readily be used to train agents to perform actions (moving or copying files, launching executables, altering registry values, and so on) based on observations (information about the file system, processes, registry entries, and so on) from a target system.
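To make that formulation concrete, here is a minimal, purely hypothetical sketch in Python. The "target machine" is simulated by a handful of booleans, and the fixed action list stands in for real shell commands; none of the names below come from an actual tool.

```python
# Hypothetical toy: a few booleans stand in for real system state, and the
# fixed action list stands in for real commands run on a target machine.
ACTIONS = ["list_files", "read_registry", "copy_file", "launch_payload"]

class ToyTargetEnv:
    """Gym-style episodic environment over a simulated target machine."""

    def reset(self):
        # A fresh "machine": the agent must locate a credential file
        # before launching its payload for the episode to succeed.
        self.state = {"saw_files": False, "has_creds": False, "done": False}
        return self._observe()

    def _observe(self):
        # Observation: a tiny feature vector derived from what the agent
        # can see (file system, processes, registry entries).
        return (int(self.state["saw_files"]), int(self.state["has_creds"]))

    def step(self, action):
        reward = -0.01  # a small per-step cost rewards short attack chains
        if action == "list_files":
            self.state["saw_files"] = True
        elif action == "copy_file" and self.state["saw_files"]:
            self.state["has_creds"] = True
        elif action == "launch_payload" and self.state["has_creds"]:
            reward = 1.0            # attack objective reached
            self.state["done"] = True
        # "read_registry" is a deliberate no-op distractor the agent
        # must learn to avoid.
        return self._observe(), reward, self.state["done"]
```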
The problem formulation is similar to that of creating an agent capable of playing an old-school text adventure. Agents in this scenario would be trained against pre-configured systems containing the vulnerabilities, security holes or misconfigurations typically encountered during red teaming or penetration testing operations. These agents would be designed to perform one or several steps of a typical cyber-attack chain, such as lateral movement, persistence, reconnaissance, privilege escalation or data exfiltration. The tools required to pull all of this off already exist.
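Continuing the hypothetical sketch, the training loop itself could be as plain as tabular Q-learning over the toy environment above. In a real setup, each reset() would revert a virtual machine to one of those pre-configured snapshots rather than reinitialize a dictionary.

```python
import random
from collections import defaultdict

def train(env, episodes=5_000, max_steps=100,
          alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning against a single pre-configured scenario."""
    q = defaultdict(float)  # maps (observation, action) -> estimated return
    for _ in range(episodes):
        obs = env.reset()   # in practice: revert the VM snapshot
        for _ in range(max_steps):
            # Epsilon-greedy: mostly run the best-known command,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(obs, a)])
            nxt, reward, done = env.step(action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(obs, action)] += alpha * (reward + gamma * best_next
                                         - q[(obs, action)])
            obs = nxt
            if done:
                break
    return q

policy = train(ToyTargetEnv())
```

The agent that emerges knows nothing beyond the scenario it was drilled on, which is precisely the limitation discussed below.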
However, for those planning on building their own AI-based attack tools in this manner, know that there are some caveats. Reinforcement learning models typically need to train for millions of steps before they converge on a good policy. In our described scenario, each step would involve running commands on an actual machine (or virtual machine) that would need to be spun up and configured for each episode. This means it would likely take weeks or even months and a lot of computing resources to train an agent, even if the process were parallelized.
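A back-of-envelope calculation illustrates the scale. The figures below are assumptions chosen for illustration, not measurements, but the conclusion holds for any plausible values:

```python
steps          = 10_000_000  # RL runs commonly need millions of steps
secs_per_step  = 5           # run one command on a VM, collect the observation
secs_per_reset = 300         # revert and boot a VM snapshot between episodes
steps_per_ep   = 50
episodes       = steps // steps_per_ep

total_secs = steps * secs_per_step + episodes * secs_per_reset
for workers in (1, 100):
    days = total_secs / workers / 86_400
    print(f"{workers:>3} parallel VMs: ~{days:,.0f} days")
# 1 VM: ~1,273 days (about 3.5 years); 100 VMs: still ~13 days --
# per training scenario, before any hyperparameter tuning or failed runs.
```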
"Misconceptions around AI-powered malware are primarily fuelled by ignorance of what AI, or more accurately machine learning, is capable of"
Also, an attack tool created in this manner would likely only be as good as the scenarios it was trained on. Given the current state of reinforcement learning, it may also need domain knowledge built in, and it would not be able to adapt to more challenging scenarios or discover novel attacks. To build a tool capable of generalizing and discovering novel attack scenarios, one would need to implement an agent capable of using byte-level observations and generating arbitrary commands with only characters as building blocks. This would require new innovations in the reinforcement learning space, or perhaps an entirely different approach.
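To see why, compare the size of a curated action menu with the space an agent would have to search if it assembled commands character by character (the numbers are purely illustrative):

```python
fixed_actions = 20          # a curated menu of command templates
charset, max_len = 96, 64   # printable ASCII, modest command length

# Number of distinct strings up to max_len characters long.
free_form = sum(charset ** n for n in range(1, max_len + 1))

print(f"menu of commands : {fixed_actions} choices per step")
print(f"free-form typing : ~{free_form:.1e} choices per step")
# On the order of 10**127 candidate strings, almost none of them valid
# commands, let alone useful ones -- a search space that no amount of
# episode parallelism will cover.
```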
A tool created in the manner outlined above would be able to execute sequences of actions very quickly. It would probably complete its attack goals faster than an organization could respond, manually or automatically, to alerts from an intrusion detection system. Such a tool wouldn't, however, be particularly stealthy. As such, it would be well-suited for use against weak targets: organizations that have not yet implemented commonly recommended cybersecurity measures and that may have left security holes or vulnerabilities in their infrastructure. Against such targets, attackers could let the tool automate simple actions while they focus manually on more difficult tasks. Automated attack tools of this kind would also be ideal for an adversary who wishes to compromise numerous weak targets in a very short amount of time.
Although the automation of attack steps would be useful for cyber-criminals (such as those responsible for corporate ransomware attacks), given the resources required, it is much more likely that a nation-state would be the first to develop such a tool.
Since the first AI-powered malware will probably only exploit common security flaws and misconfigurations, organizations should already be able to defend against this threat simply by plugging the holes in their infrastructure. In addition to basic cyber hygiene and timely patching of vulnerabilities, red teaming exercises and penetration tests can be used to identify additional problems to fix. There should be plenty of time to do this: continuous learning techniques (which would be needed to create agents that adapt and learn as they go) and one-shot learning techniques (which would remove the need for millions of training examples) are still in their infancy, especially in the field of reinforcement learning.
At the end of the day, developing an understanding of what machine learning means and how cyber-attacks are performed will allow you to shrug off any fictitious ramblings you might read about AI-powered cyber-attacks.