In response to cyber-defenders’ increasing use of AI technologies, malicious actors are discussing how the same technologies could be turned to criminal ends.
Research from Control Risks, the specialist global risk consultancy, shows that cyber-threat actors are actively exploring new techniques that use these technologies and tools to enhance their capabilities. For instance, in the post-infection phase, clusters of compromised devices, dubbed hivenets, could develop the ability to self-learn and could be used to automatically identify and target additional vulnerable systems.
“More and more organizations are beginning to employ machine learning and artificial intelligence as part of their defenses against cyber-threats,” said Nicolas Reys, associate director and head of the Control Risks cyber-threat intelligence team. “Cyber-threat actors are recognizing the need to advance their skills to keep up with this development. One application could be to use deep learning algorithms to improve the effectiveness of their attacks. This shows that AI and its subsets will play a larger role in facilitating cyber-attacks in the near future.”
AI could also assist threat actors in spearphishing campaigns. When targeting a criminal campaign, threat actors could use algorithms to generate spearphishing messages in victims’ native languages, expanding the reach of mass initiatives. Similarly, larger amounts of data could be automatically gathered and analyzed to improve social engineering techniques – and with them the effectiveness of spearphishing campaigns.
In another scenario, based on its assessment of the target environment, AI technology could tailor the malware or attack to be unique to each system it encounters along the way. This would enable threat actors to conduct vast numbers of attacks, each uniquely tailored to its victim. Only bespoke mitigation and response would be effective, rendering traditional signature- or behavior-based defense systems obsolete.
Threat actors also could evade detection by developing and implementing advanced obfuscation techniques, drawing on data from past campaigns and the analysis of security tools. Attackers may even be able to launch targeted misdirection or “noise generation” attacks to disrupt intelligence gathering and mitigation efforts by automated defense systems.
“The use of AI is not likely to become widespread soon, given the financial investment that is currently needed,” Reys continued. “However, as more research is produced and AI technologies become more mature and more accessible to threat actors, this threat will evolve. Organizations should be aware of the potential for these types of attacks to emerge in the course of 2018. Staying informed and being able to identify relevant emerging attacks, technologies and vulnerabilities is therefore just as important as being prepared in the event of an attack.”