As the world becomes increasingly digital, security becomes an ever more pressing demand. Threat actors are launching smarter and more automated attacks, and businesses are turning to advanced technology in response. This is where artificial intelligence (AI) enters the equation.
Consider a recent Synack report claiming that combining cybersecurity talent with AI-enabled technology results in 20x more effective attack surface coverage than traditional methods. It's common knowledge that AI can analyze colossal quantities of data at speed and augment under-resourced security operations. Yet AI also poses dangers of its own, which explains why 96% of C-level executives, directors, and managers have already started preparing for AI-powered cyber-attacks, according to an MIT Tech Review report. To explain AI's actual value to the cybersecurity industry, it's necessary first to understand the current cybersecurity challenges.
The Challenges
The grim reality for businesses and beyond is a burgeoning cybersecurity threat landscape. Cyber-attacks are increasing in volume and complexity, and cybersecurity personnel are struggling to keep up. Google had registered over 2.1 million phishing sites as of January 17, 2021, a 27% increase over the previous 12 months. Ransomware attacks have also surged, up 41% since the beginning of 2021 and 93% year on year.
Ironically, another sobering concern is AI itself. It is hard to forget that in 2019 cyber-attackers impersonated a CEO's voice using AI-based software and demanded a fraudulent transfer of $243k. This is merely one example in a throng of AI-related attacks. A recent report published by MIT Tech Review goes into the specifics of 'offensive AI' and expects it to grow rapidly and become more sophisticated. Worryingly, deep learning will drive personalized attacks, allowing AI to tailor them to individual targets with ever greater precision.
"Worryingly, personalized attacks will fall under the auspices of deep learning, allowing AI to increase personalization further"
With ransomware topical, a related worry is that AI enables cyber-criminals to create malware that seeks out system vulnerabilities on its own and autonomously determines which payloads are most likely to succeed, all without revealing itself through communication with its command-and-control (C2) server. Derek Manky raised this point in an article for Infosecurity, arguing that AI offers cyber-actors more bang for the buck. He adds that we have already seen multi-vector attacks paired with advanced persistent threats (APTs) or an array of payloads; AI accelerates such tools by autonomously learning about targeted systems, enabling attacks to be laser-focused.
Why, then, would AI be an answer to these challenges? There are three primary reasons.
The Benefits of Using Artificial Intelligence
1) AI can identify threats in a network or system early. It can analyze user behavior faster and at a far larger scale than humans can. A recent study by Capgemini reinforces this point, showing that 69% of organizations see AI as integral to speedy and timely responses to cyber-attacks. Here, AI can learn a pattern of normal behavior and spot when activity deviates from it. Self-organizing map (SOM) algorithms, for example, essentially 'model' normal data to discern whether a particular activity on a network or computer system is normal or abnormal. A SOM is an unsupervised machine-learning technique that maps relationships between input data onto a grid, in effect generating signatures that capture normal behavior, so activity that fits the learned map poorly stands out. This helps identify exposed areas in a system or network, stopping attackers dead in their tracks (a minimal sketch of this approach follows the list below).
2) AI can expedite the detection of online threats. For example, machine learning can aid fraud detection because its algorithms can learn from historical fraud patterns and recognize them in future transactions. This isn't just true of fraud. Take malware: AI trained on a wide variety of previously detected malware can predict which types of malware are likely to appear in the future. The system can cross-reference a new sample against the database it learned from, spot similarities in the patterns, and react based on previously successful block attempts.
3) AI is the best opponent of AI-led cyber-attacks. Unlike traditional cybersecurity approaches, which often remain static once implemented, AI (such as deep learning) is highly scalable. This matters when facing, for example, AI-based malware that propagates and mutates frequently. AI can scale to hundreds, thousands and even millions of training samples, meaning that as the training dataset grows, the solution continuously improves its ability to detect anomalies (the second sketch below illustrates this together with point 2).
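To make point 1 concrete, here is a minimal sketch of SOM-based anomaly detection in Python. It assumes the open-source MiniSom library and uses a hypothetical normal_traffic feature matrix standing in for network telemetry; the 99th-percentile threshold is an illustrative choice, not a recommendation.

```python
import numpy as np
from minisom import MiniSom  # assumes the open-source MiniSom package is installed

# Hypothetical feature matrix: one row per observation of "normal" network activity
# (e.g. packets per second, bytes transferred, distinct destination ports).
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 6))

# Train a small SOM so the grid "models" what normal behaviour looks like.
som = MiniSom(x=10, y=10, input_len=normal_traffic.shape[1],
              sigma=1.0, learning_rate=0.5, random_seed=42)
som.random_weights_init(normal_traffic)
som.train_random(normal_traffic, num_iteration=5000)

# Quantization error = distance between a sample and its best-matching unit on the grid.
# Samples that map poorly onto the learned grid are flagged as abnormal.
baseline_errors = np.linalg.norm(normal_traffic - som.quantization(normal_traffic), axis=1)
threshold = np.percentile(baseline_errors, 99)  # illustrative cut-off

def is_anomalous(sample: np.ndarray) -> bool:
    """Flag activity whose quantization error exceeds the baseline threshold."""
    error = np.linalg.norm(sample - som.quantization(sample.reshape(1, -1))[0])
    return error > threshold

suspicious = normal_traffic[0] + 8.0   # a deliberately deviant observation
print(is_anomalous(suspicious))        # likely True
```

In practice the features, grid size and threshold would be tuned to the environment being monitored; the point is simply that behavior which maps poorly onto the learned model is surfaced for investigation.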
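Points 2 and 3 can be illustrated together with a second sketch: a classifier is first trained on features of historically detected malware or fraud, then incrementally updated as newly labelled samples arrive, so detection improves as the dataset grows. The feature semantics and labelling rule here are hypothetical placeholders, and scikit-learn's SGDClassifier is just one of many models that support this kind of incremental (partial_fit) training.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical historical dataset: numeric features per file or transaction
# (e.g. byte entropy, imported-API counts, transaction amount), with labels
# 1 = previously detected malware/fraud, 0 = benign.
X_hist = rng.normal(size=(5000, 8))
y_hist = (X_hist[:, 0] + X_hist[:, 3] > 1.0).astype(int)  # stand-in labelling rule

scaler = StandardScaler().fit(X_hist)
clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(scaler.transform(X_hist), y_hist, classes=np.array([0, 1]))

def score_new_samples(X_new: np.ndarray) -> np.ndarray:
    """Predict whether unseen samples resemble previously detected threats."""
    return clf.predict(scaler.transform(X_new))

def incorporate_feedback(X_new: np.ndarray, y_new: np.ndarray) -> None:
    """As analysts confirm new detections, fold them back into the model."""
    clf.partial_fit(scaler.transform(X_new), y_new)

# New telemetry arrives: score it, then learn from the confirmed labels.
X_batch = rng.normal(size=(100, 8))
predictions = score_new_samples(X_batch)
incorporate_feedback(X_batch, (X_batch[:, 0] + X_batch[:, 3] > 1.0).astype(int))
```

The design choice worth noting is the feedback loop: every confirmed detection becomes new training data, which is what lets the model keep pace with malware that mutates frequently rather than remaining static after deployment.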
Unsurprisingly, many people see the future of cybersecurity as defensive AI versus offensive AI, a 'battle of the algorithms'. Attacks will only become more common and more sophisticated, and with AI behind them, stealthier, faster and more effective. Ultimately, enterprises that build AI, and machine learning in particular, into their cyber defenses will not only withstand today's high volume of disparate threats, but will also protect themselves against the far more sophisticated threats of tomorrow.