It’s often said that listening is a skill in itself, yet we have still to grasp the nuances of listening to network noise. From the chatter on social media to the deep recesses of the dark web, there is now a vast array of data that we need to tap into, monitor and interpret to get advance warning before threats are realized.
Data observation on this scale requires automation, but machine-based threat monitoring is only half the answer. Automation has bred a reliance on alert-based technology, and we need to wean ourselves off this reactive approach: alerts are being ignored or missed, or the response comes too little, too late.
It’s this automated response mechanism that forms the basis of most people’s understanding of a Security Operations Center (SOC). A technology-driven method of automated monitoring, the SOC tends to be used for event logging and the issuing of alerts and, more often than not, is simply deployed for compliance purposes to make auditing easier.
I’ve seen plenty of organizations try to stand up their own SOC, but is it of any value? Frequently it acts as little more than a placebo. And that ‘protective monitoring’ they claim to be doing? Often it amounts to a member of staff populating the incident log and running a cursory lessons-learned exercise.
Used in this way, the SOC is little more than a glorified alarm system. To take a basic example: if you don’t know who is accessing your infrastructure, why, and where that traffic originates, you will keep investigating false-positive alerts. That could be staff connecting over RAS, or a company portal used by a supplier in China; either way, the activity will be flagged as suspect and needlessly investigated.
According to a recent Infosecurity Magazine webinar poll, fewer than half of respondents had a SOC in place, and those that did were using it to perform incident response; in other words, they were reacting to incidents rather than proactively monitoring threats. Just 6% of respondents said they were looking to embrace SOC 3.0.
The next-generation SOC, or SOC 3.0, is a strategy as much as a technology: it qualifies issues and alerts according to their relevance to the business. Events are still captured and flagged, but now the data is scrutinized, sifted and segmented against criteria specific to the business. That includes the obvious factors, such as the region or sector the business operates in, but also internal changes (such as plans to adopt a new technology, or a corporate strategy such as expansion) and external factors (such as trends, political shifts or legislative changes that may affect the perception of the company brand).
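To make the idea concrete, here is a minimal sketch of what scoring alerts against business-specific criteria might look like. The event fields, profile attributes and weights are all hypothetical illustrations, not taken from any particular SOC product:

```python
# Hypothetical sketch: rank incoming events by relevance to a business profile.
from dataclasses import dataclass, field

@dataclass
class BusinessProfile:
    sector: str
    region: str
    technology_stack: set          # vendors the business actually runs
    watch_topics: set = field(default_factory=set)  # e.g. a planned migration

def relevance(event: dict, profile: BusinessProfile) -> int:
    """Score an event by how many business-specific criteria it matches."""
    score = 0
    if event.get("sector") == profile.sector:
        score += 1
    if event.get("region") == profile.region:
        score += 1
    if event.get("vendor") in profile.technology_stack:
        score += 2  # noise about your own stack matters most
    if profile.watch_topics & set(event.get("tags", [])):
        score += 1
    return score

profile = BusinessProfile("finance", "EU", {"Cisco"}, {"cloud-migration"})
events = [
    {"id": 1, "vendor": "Cisco", "sector": "finance", "region": "EU", "tags": []},
    {"id": 2, "vendor": "Juniper", "sector": "retail", "region": "US", "tags": []},
]
# Most relevant events surface first; irrelevant noise sinks to the bottom.
triaged = sorted(events, key=lambda e: relevance(e, profile), reverse=True)
```

The point is not the particular weights but the shift in posture: instead of every alert being equal, events are ranked by what they mean to this business.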
This information is used to compose a risk profile that is continually revised to determine how well the security controls of the business compare to those required to defend the business.
How does this work in practice? Automation harvests data that is then crunched by various algorithms to deduce and assess risk. We look at the technology stack the business operates: if it’s a Cisco house, then it’s Cisco noise you’re interested in, refined further by sector and region. When the heat intensifies around those criteria, the risk profile is adjusted. That gives you the ability to appraise your current security posture against the posture you need, and it’s here that SOC 3.0 will realize ROI.
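One way to picture the posture comparison is a sketch like the following, assuming hypothetical control scores (each control rated 0–10 for current strength) and a simple rule that raises the required level as threat “heat” against the business’s stack, sector and region intensifies. Both the scoring scale and the scaling rule are illustrative assumptions:

```python
# Hypothetical sketch: compare current control strength to a threat-adjusted
# requirement, surfacing the gaps that should steer security spend.

def required_level(baseline: float, heat: float) -> float:
    """Scale a control's required strength with threat intensity (0.0-1.0)."""
    return min(10.0, baseline * (1.0 + heat))

def posture_gaps(controls: dict, baselines: dict, heat: float) -> dict:
    """Return each control whose current score falls short of the
    threat-adjusted requirement, with the size of the shortfall."""
    gaps = {}
    for name, current in controls.items():
        need = required_level(baselines[name], heat)
        if current < need:
            gaps[name] = round(need - current, 1)
    return gaps

controls = {"patching": 6.0, "network-segmentation": 8.0, "mfa": 9.0}
baselines = {"patching": 5.0, "network-segmentation": 6.0, "mfa": 7.0}

# As heat rises, the same controls open up gaps of different sizes,
# pointing at where further investment would do the most good.
print(posture_gaps(controls, baselines, heat=0.5))
```

At heat 0.0 this business would show no gaps at all; at 0.5, patching opens the widest shortfall, which is exactly the kind of signal that lets spend follow risk rather than habit.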
Next-generation SOCs will see a seismic shift from the reactive to the proactive as we move into an era where we can predict the type and severity of an attack, and its impact if realized, and use that knowledge to inform security spend.
We’ll be able to use that tailored intelligence to make defenses more robust, or to compensate for any inadequacies through further investment, rather than blindly throwing money at controls that may be unnecessary or insufficient. And after all that noise, enjoy the silence.