Use cases for artificial intelligence (AI) and machine learning abound across the enterprise security landscape, and the array of ways these technologies can be successfully implemented continues to grow. Yet the oft-repeated refrain that AI is overhyped is also true.
There is some nuance to sift through, so let’s start with this important point – the current hype cycle for AI is different from the ones of the past. While many visions for AI in decades past proved misleading, at least in the short term, today we do have the tools and computing power to do many amazing things with AI, and that list is only going to grow. The problem is that many product vendors and organizations misunderstand or misrepresent their AI usage. For example, many cybersecurity products’ core functionality is not AI-based (though they might use machine learning in a narrow area). In many cases, companies are simply using heuristics and slapping on an AI label.
Why does this matter? When technologies are overhyped, people can grow disillusioned. In my days as a student and early-career professional, the quickest way to have a project nixed was to pitch it as an AI project. Let’s not go backward and instead focus on the many realistic ways AI and machine learning can create value for security teams and their organizations today.
ISACA’s recent white paper, AI Uses in Blue Team Security, details several areas in which machine learning is especially useful on the cybersecurity front, including network intrusion detection, phishing attack prevention and offensive cybersecurity applications. Additionally, AI and machine learning have proven extremely useful for data categorization and classification – a major priority for many enterprises in an era when data security and data privacy are increasingly top of mind. Frankly, humans are not very good at categorizing data – they tend to miss things – while machine learning tools are more likely to be thorough and painstaking. Conversely, if you need a yes-or-no answer to a question, you need a deterministic approach. The minute AI and machine learning are used, the results are by definition probabilistic – you hope they get it right a high percentage of the time, but anyone who has used Siri or Alexa knows that is certainly not a given.
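To make that distinction concrete, here is a minimal sketch contrasting the two approaches in a data-classification setting. Everything in it is a hypothetical illustration – the pattern, the signal list and the 0.7 threshold are my assumptions, not anything drawn from the ISACA paper:

```python
# Hypothetical sketch: a deterministic rule vs. a probabilistic score
# for flagging records that may contain sensitive data.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Deterministic: the pattern either matches or it does not.
    The same input always yields the same yes/no answer."""
    return bool(SSN_PATTERN.search(text))

def sensitivity_score(text: str) -> float:
    """Probabilistic stand-in for an ML classifier: it returns a
    confidence score, not a guarantee. A real model would be trained;
    this toy heuristic only illustrates the interface."""
    signals = ["confidential", "salary", "dob", "account"]
    hits = sum(1 for s in signals if s in text.lower())
    return min(1.0, hits / len(signals) + 0.1)

record = "Employee DOB and account details - confidential"
print(contains_ssn(record))   # False: a definitive answer
print(sensitivity_score(record) >= 0.7)  # True or False only relative
                                         # to a threshold we chose; the
                                         # model can be wrong either way
```

The deterministic check is auditable and repeatable; the score is useful precisely because it generalizes beyond hard-coded patterns, at the cost of certainty.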
For example, if you need to know the differences between the network behavior of two software versions, you don’t want to rely on the guesswork inherent in machine learning. In this situation, a deterministic approach like NCAST is more appropriate than AI. Controls are another prime example of where it makes sense to steer clear of AI and machine learning. A control should be deterministic – the user either has the needed credentials or does not. A better use of AI or machine learning here is intrusion detection – flagging when an identity may have been stolen to gain access, as in the sketch below. The bottom line: use the right tool for the right purpose.
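Here is a minimal sketch of that division of labor. The session model, risk signals and weights are all assumptions made for illustration – a real deployment would feed such signals into a trained model rather than hand-picked thresholds:

```python
# Hypothetical sketch: the control decision is deterministic, while the
# detection layered on top of it is probabilistic.
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    granted_roles: set[str]
    login_hour: int          # 0-23, local time
    new_device: bool
    geo_velocity_kmh: float  # distance/time since previous login

def authorize(session: Session, required_role: str) -> bool:
    """Control: a binary, repeatable decision. The session either
    carries the required role or it does not."""
    return required_role in session.granted_roles

def takeover_risk(session: Session) -> float:
    """Detection: a probabilistic risk score. These weighted signals
    are placeholders for what a trained model would learn."""
    score = 0.0
    if session.new_device:
        score += 0.4
    if session.login_hour < 6:          # unusual hour
        score += 0.2
    if session.geo_velocity_kmh > 900:  # "impossible travel"
        score += 0.4
    return min(score, 1.0)

s = Session("alice", {"finance-read"}, login_hour=3,
            new_device=True, geo_velocity_kmh=1200.0)
assert authorize(s, "finance-read")  # access control: a clear yes
if takeover_risk(s) >= 0.7:          # detection: flag for review, not
    print("alert: possible account takeover for", s.user)  # auto-deny
```

Note that the risk score never overrides the control; it feeds an alert queue, which is exactly the kind of fuzzy, high-volume triage where machine learning earns its keep.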
When considering the big-picture impact of AI on security teams, another sobering consideration is that attackers’ use of AI will likely outpace our effectiveness at using AI on the defensive side, for the simple reason that it’s always easier to find one way in than to defend every possible attack vector. The attacker’s job is easier, whether the defender is human or AI. The ISACA white paper details malicious uses of AI and machine learning, especially as they pertain to social engineering and phishing threats, and how security teams can best keep pace.
AI and machine learning are powerful tools for security teams, even if no technology could live up to the lofty and occasionally disingenuous hype surrounding them. Organizations should be proactive about looking for practical ways to implement AI and machine learning in their security programs, but they should avoid embellishing how they are doing so. Instead, they should accurately describe how and why they are using AI – or not using it. Claims that products and solutions are magically all-powerful thanks to AI are counterproductive and feed skepticism about the very real potential for AI to be a major asset on the security landscape.