In recent years, artificial intelligence (AI) has become one of the most talked-about topics in technology. In fact, according to McKinsey, there were twice as many articles referencing AI in 2016 as in 2015, and you only have to scan the news each day to see how many companies are talking about how the technology will revolutionize business practices.
This is especially true in the cybersecurity industry. Conferences, exhibitions, the media, and the marketing materials of next-generation vendors are all awash with claims about how AI and machine learning (ML) can help organizations build a robust cybersecurity strategy and keep hackers at bay.
The hype seems to be working. Our latest research revealed that three quarters (75%) of IT decision makers across the world believe that AI could be the ‘silver bullet’ for solving their company’s cybersecurity challenges. The technology, they say, will help them detect and respond to threats faster (79%) and help solve the skills shortage they currently face (77%).
Of course, when you consider that the number of data breaches enterprises face each day is forever growing, and that the average cost of a data breach globally now stands at $3.86m (a 6.4% rise on last year), you can understand why businesses want to believe there is a ‘silver bullet’ solution in cybersecurity.
However, while it would be nice to believe this ‘silver bullet’ exists, it simply does not, and thinking otherwise could put businesses at greater risk.
Miscommunication leads to misunderstanding
Part of the problem is that, in some of the media and marketing materials from next-gen vendors, the terms ‘AI’ and ‘ML’ are often used interchangeably. This is making matters confusing for IT decision makers. In fact, we found that just 42% of UK IT decision makers believe their company fully understands the difference between the terms AI and ML.
Put simply, AI happens when machines conduct tasks without pre-programming or training. It sounds great but, in truth, it does not exist – yet!
By comparison, ML relies on training computers. It uses algorithms to interrogate vast catalogues of malicious programs, deduce common characteristics, and learn what to look out for. It’s an important tool in cybersecurity, but it is nothing new: it has been used in the industry since the 1990s. In fact, the majority of IT decision makers we surveyed already use it, with 78% of UK respondents saying their endpoint protection product already deploys ML to detect malicious attacks.
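To make that concrete, here is a minimal sketch of the kind of supervised training involved, assuming Python with scikit-learn. The feature values, sample counts, and labels are invented for illustration; a real endpoint product would extract far richer features from vastly larger catalogues of samples.

```python
# Minimal sketch of supervised ML for malware detection (illustrative only;
# the features and toy data are assumptions, not any vendor's real pipeline).
from sklearn.ensemble import RandomForestClassifier

# Each sample is a feature vector extracted from a file, e.g.
# [file size in KB, number of imported APIs, entropy of packed sections].
X_train = [
    [120,  35, 2.1],   # labelled "clean" by human analysts
    [450, 210, 7.8],   # labelled "malicious"
    [300,  90, 5.5],   # labelled "potentially unwanted" (pua)
    [110,  30, 2.3],
    [470, 190, 7.6],
]
y_train = ["clean", "malicious", "pua", "clean", "malicious"]

# The algorithm deduces common characteristics from the labelled catalogue...
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# ...and applies what it learned to a new, unseen sample.
print(clf.predict([[455, 205, 7.7]]))  # e.g. ['malicious']
```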
Understanding the limitations
It’s also important to note that ML, even when it’s done properly, comes with problems and limitations. For example:
- It requires some hand-holding - ML needs a lot of inputs, and each one must be correctly labelled. In a cybersecurity application, this translates into a huge number of samples, divided into three groups: malicious, clean, and potentially unwanted. This takes a considerable amount of time to get right. What’s more, even when an algorithm has been fed a large quantity of data, there is still no guarantee that it can correctly identify all the new samples it encounters. Human verification is still needed (see the sketch after this list). Without it, even one incorrect input can lead to a snowball effect and possibly undermine the solution to the point of complete failure.
- It can never be flawless - Even an otherwise flawless model will not always be able to decide whether a future, unknown input would lead to unwanted behavior. If a next-gen vendor claims its ML algorithm can label every sample prior to running it and decide whether it is clean or malicious, then it would have to preventively block a huge number of undecidable items, flooding company IT departments with false positives. These false positives can easily disrupt business continuity, causing even more headaches.
- It is restricted by rules - No matter how smart an ML algorithm might be, it is bound by rules. It learns from a specific data set and has a narrow focus. Cyber-criminals, in comparison, don’t play by the rules; in fact, they can change the game entirely. Black hat hackers draw on inspiration and think outside the box, targeting companies in ways an algorithm is completely unable to foresee.
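One common way to balance these trade-offs, at least in sketch form, is to act automatically only when the model is confident and to route undecidable items to human analysts rather than preventively blocking them. The triage function and 0.9 threshold below are illustrative assumptions, not a description of any particular vendor’s product.

```python
# Route low-confidence verdicts to human review (illustrative sketch;
# the 0.9 threshold and toy data are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy classifier, trained as in the earlier sketch.
X_train = [[120, 35, 2.1], [450, 210, 7.8], [300, 90, 5.5],
           [110, 30, 2.3], [470, 190, 7.6]]
y_train = ["clean", "malicious", "pua", "clean", "malicious"]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def triage(sample, threshold=0.9):
    """Act automatically only when the model is confident."""
    probs = clf.predict_proba([sample])[0]
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return f"auto: {clf.classes_[best]}"
    return "escalate to human review"  # undecidable: don't block outright

# Borderline samples are escalated instead of preventively blocked,
# keeping false positives (and disrupted business continuity) down.
for sample in ([300, 95, 5.4], [260, 120, 4.9]):
    print(sample, "->", triage(sample))
```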
Looking beyond the hype
In order to stay ahead of the hackers, humans and machines need to work together (alongside multi-layered solutions) to build a robust cybersecurity defense. There are huge risks in falling for the hype that casts AI and ML as cybersecurity’s ‘silver bullets’. ML is one part of the solution, but it is not the sole answer.
The technology is simply not mature enough to be the only line of defense protecting your company. With a purely ML-based cybersecurity solution, it only takes one successful attack to open up your company’s endpoints to a whole battalion of cyber-threats.
In order to keep detection rates high and false positives low, a team of real humans needs to be on hand to evaluate items that diverge too far from the norm.
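“Too divergent from the norm” can itself be estimated by software, so analysts know what to inspect first. As a hedged illustration (an isolation forest is just one possible technique; nothing specific is prescribed here), a model trained only on known-benign activity can score how anomalous a new item is:

```python
# Flag items that diverge too far from the norm for analyst review
# (illustrative sketch; the technique choice and toy features are assumptions).
from sklearn.ensemble import IsolationForest

# Train only on feature vectors of known-benign activity.
baseline = [[120, 35, 2.1], [110, 30, 2.3], [130, 40, 2.0], [115, 33, 2.2]]
detector = IsolationForest(random_state=0).fit(baseline)

# score_samples: lower (more negative) means more anomalous.
for item in ([118, 34, 2.2], [470, 200, 7.9]):
    score = detector.score_samples([item])[0]
    flag = "send to analysts" if detector.predict([item])[0] == -1 else "normal"
    print(item, round(score, 3), flag)
```

Even then, the score only prioritizes the queue; the judgment call on each flagged item remains with the human analyst.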
What’s more, the cybersecurity industry has an obligation to make things clearer. Businesses cannot afford to be confused; the costs are way too high. As our research shows, the hype is muddling the message for those making key decisions on how best to secure their company’s networks and data.
In today’s unpredictable threat landscape, businesses need to fully understand the unique cybersecurity challenges of their organization and then clearly identify the solutions that best meet their needs. The fact is that every business is unique, so there can be no such thing as a universal solution.