The impact that AI is making on businesses and consumers continues to grow by the day. If a company isn't already using AI to improve how work gets done, to improve customer service, or to reduce costs, then it's trying to figure out how to use it.
Much of this explosive growth comes thanks to data scientists, the people who analyze the massive volumes of complex digital data that AI and machine learning require.
It wasn't all that long ago that data scientists were part of an exclusive club, found only deep in the trenches of Silicon Valley-esque startup hubs. They possessed new, in-demand skills and commanded extremely high salaries.
The environment today is much different. Data science skills have become more common. Universities offer a wide selection of courses on the subject, and the role of a data scientist is well embedded into the fabric of most industries.
The emergence of the next generation of IT security analysts is starting to follow a similar path to that of the data scientist. This emergence, however, is fueled by slightly different needs: the explosive growth in cybercrime and the unmanageable number of cybersecurity threat false positives currently drowning CISOs and their teams.
More than 230,000 new malware samples are launched every day. The average small and medium-sized business experiences 44 cyberattacks every day. The cost of damage directly related to cybercrime is adding up and is expected to reach $6 trillion by 2021.
Right now, the market is convinced that AI, machine learning, and behavioral analytics will help solve these problems. An unintended consequence of these emerging technologies, however, may be to make the lives of CISOs and their teams even more challenging.
Today, anything that AI identifies as an anomaly is considered a potential threat. The problem with this approach is that many of these threats are false positives. According to one recent survey, 37% of large enterprises receive more than 10,000 alerts each month. Of those alerts, 52% are false positives and 64% are redundant. Using current systems, companies are left to manually review thousands of AI-generated false positives every month.
Current systems lack the contextual data that would give security analysts the ability to thoughtfully assess threats.
Consider this example: an employee accesses an internal network server and data sources that he’s never accessed before; these activities are flagged as potentially malicious. The same employee is also viewing web content that no one in the organization has ever previously accessed. Malicious activity? Maybe. Without the proper context, we can’t be sure.
The employee could have been reassigned to a new team and be working on a completely new project that requires large amounts of external research. Regardless, IT has to manually process these false positives and is ill-equipped to paint a clear picture of the situation. This eats up precious time and resources.
The rising number of threats, the unmanageable number of false positives, and the lack of context are just a few of the factors behind a worldwide shortage of two million cybersecurity professionals.
In addition to the massive shortage, ISACA found that fewer than one in four candidates who apply for cybersecurity jobs are qualified. As with data science, you can't fake being a security analyst. There's no on-the-job training.
Herein lies the need for the new generation of IT security analyst. Armed with the right tools, this emerging role will improve existing security policies. Security analysts will build on the work of AI, machine learning, and behavioral analytics by making the data more consumable and by understanding risk thresholds based on context. With tools that help assemble and interpret the signals needed to hunt and assess threats, security analysts won't need deep backgrounds in data modeling or skills in querying databases.
Location, time of day, preferred or new devices, employee status, bandwidth usage, abnormal file creation, access to unusual content, and even external data on weather, traffic, social feeds, and other location-specific signals can together paint a complete picture for malicious threat detection. Any individual data point could be flagged by current systems as a threat, but when enriched with the right contextual data, a different story can emerge.
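To make the idea of contextual enrichment concrete, here is a minimal sketch of how an alert-triage layer might combine a detection model's anomaly score with contextual signals. Everything in it (the signal names, weights, threshold, and the contextual_risk function) is a hypothetical illustration under simple assumptions, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """A raw anomaly flagged by an AI/ML detection system (illustrative fields)."""
    user: str
    resource: str
    anomaly_score: float  # 0.0 to 1.0, as produced by the detection model

# Hypothetical contextual signals and weights an enrichment layer could apply.
# Positive weights raise risk; negative weights explain the anomaly away.
CONTEXT_WEIGHTS = {
    "new_device": 0.20,           # activity from a device the user has never used
    "off_hours": 0.15,            # access outside the user's normal working hours
    "unusual_location": 0.25,     # geography inconsistent with recent history
    "recent_team_change": -0.30,  # user was recently reassigned; new access is expected
    "offboarding_flag": 0.40,     # user is leaving the company; elevated risk
}

def contextual_risk(alert, context, threshold=0.6):
    """Combine the model's anomaly score with contextual signals.

    The simple additive weighting is only meant to show the idea: the same
    anomaly can be escalated or deprioritized depending on its context.
    """
    score = alert.anomaly_score
    for signal, present in context.items():
        if present:
            score += CONTEXT_WEIGHTS.get(signal, 0.0)
    score = round(max(0.0, min(1.0, score)), 2)
    decision = "escalate to analyst" if score >= threshold else "deprioritize"
    return score, decision

# Example: the reassigned employee from the scenario above. The raw anomaly
# score alone would cross the threshold, but the team-change context lowers it.
alert = Alert(user="jdoe", resource="internal-research-share", anomaly_score=0.7)
print(contextual_risk(alert, {"new_device": False, "recent_team_change": True}))
# -> (0.4, 'deprioritize')
```

In this sketch, the reassigned employee's raw anomaly score would have triggered an alert on its own, but the team-change context drops it below the threshold and it is deprioritized instead of landing in an analyst's queue.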
This new generation of security analysts will first address the problem of reducing false positives in real-time. Concurrently, security analysts armed with the right tools will transition from a defensive posture of responding to threats after they have occurred (sometimes several months afterwards), to playing offense and helping identify potential attacks in progress before they have had catastrophic impact on the organization.
The increase in attacks, combined with the increase in false positives, will continue to plague companies and governments. The combination decreases employee productivity and has a negative impact on customers and brands.
As was the case with data scientists, the market will, over time, make it easier to acquire the skills needed to succeed in security analyst roles. More education. More resources. Better tools. Until then, when you find a good security analyst, don't let them go. There aren't enough people, with the right tools, to do the job that needs to be done today.
We will know the market for skills is improving when security analysts transition from really smart threat detectives to aggressive, accurate threat hunters.