It’s become a consistent refrain in security that defense no longer stops at the perimeter. A modern defense-in-depth posture demands much more, much of which revolves around understanding the full scale of your network, with its ever more nebulous borders, and being able to analyze and assess activity effectively to prevent attacks from escalating once perpetrated.
One of the metrics most frequently cited around advanced attacks and incident response is attacker ‘dwell time’ – how long an intruder moves undetected within a company’s IT infrastructure before discovery. Depending on the source, statistics on average dwell time vary wildly, but the essential message is the same: attackers are given free rein to cause damage for far too long.
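As a metric, dwell time is nothing more than the interval between initial compromise and detection. A minimal sketch of the calculation, using entirely hypothetical incident dates:

```python
from datetime import date

def dwell_time_days(compromised: date, detected: date) -> int:
    """Days an attacker went undetected: detection date minus compromise date."""
    return (detected - compromised).days

# Hypothetical incident timeline for illustration only.
print(dwell_time_days(date(2016, 1, 1), date(2016, 7, 16)))  # 197
```

The hard part, of course, is not the arithmetic but establishing the compromise date at all – which is usually only possible after the fact, during incident response.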
Statistics published by the Ponemon Institute and Arbor Networks in May stated that the average dwell time of a cyber-attacker on a retail organization is 197 days. Financial services, as you might expect, fared better, but still see average dwell times of 98 days. The same report found that 83% of financial services firms expect over 50 attacks a month; 44% of retailers expect the same.
Clearly, these are worrying numbers. But what lies behind the inability to detect infiltrators early? Is it that insufficient resources and attention are devoted to security practices that could have caught known bad activity sooner? Or are these attacks so sophisticated that stopping them any earlier is simply not realistic?
Most security professionals and vendors argue that better allocation of resources, and better practices around monitoring, will go a long way to reducing the potential severity of an advanced threat.
Spotting attacks early often boils down to an ability to identify ‘anomalies’ – a somewhat scientific way of saying unusual activity that could suggest something sinister. The trick, when faced with such a volume of information about network activity, is being able to pick out the anomalies within the sea of digital noise. False positives tend to abound, leading to investigations that waste time and resources that could be better spent targeting the real threats bombarding your systems.
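As a toy illustration of the principle – not any vendor’s actual method – a simple statistical baseline can flag values that deviate sharply from the norm. Everything here (the data, the threshold, the function name) is hypothetical:

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the series."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:  # a perfectly flat series has no outliers
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly connection counts; the spike at index 5 is the 'anomaly'.
hourly = [120, 130, 125, 118, 122, 900, 127, 121]
print(find_anomalies(hourly))  # [5]
```

A sketch this crude also illustrates the false-positive problem: a legitimate traffic surge – a marketing campaign, a backup job – would trip the same threshold, which is exactly why raw statistical alerts still need human context.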
So how can your organization achieve this? Much of the debate focuses on the extent to which your technology and human resources can be used in harmony. There’s clearly a balancing act; automation solutions will go a long way to giving analysts the information they need in a comprehensible dashboard, giving a neat visualization of traffic trends. (Though to what extent this actually constitutes ‘threat intel’ is a subject for another column.)
The same research quoted above found that both financial services and retail firms tend to allocate slightly more budget to technology than to staff. Given the number of security products available, and the often-quoted skills gap in the marketplace, this is perhaps unsurprising, but there’s certainly a case for making more of the human element when going after advanced threat attackers.
I recently spoke with Darren Anstee, chief security technologist at Arbor Networks, who advocates that people are the best assets that organizations have when it comes to security. In order to leverage people’s potential more effectively, he told me, companies need to equip them with the right skills and tools.
These skills aren’t necessarily just about technique – they involve a way of thinking about attacks, and anticipating how an attacker is likely to approach their target. This comes back to understanding your weak points and knowing what trends to monitor for. Then, armed with intelligence and context, your analysts can make the most of what’s available to them to stop attacks once they’re in, rather than obsessing over initial prevention, or leaving all detection to automated solutions that lack context and the ability to critically analyze what an indicator of compromise actually signifies.
Technology, Anstee argues, should help, not define, your security process. Sound advice if you are wondering how to make more of the assets at your disposal.