As businesses try to distance themselves from expensive public security failures, deniability is becoming the name of the game
One glaring downside of industry and regulatory frameworks is the incentive they create for poorly performing (in the security sense) organizations to deny responsibility for data breaches. This is further compounded by the increasingly litigious nature of our culture.
As businesses increasingly try to distance themselves from expensive public security failures and disclosures of incompetence, and as board members seek to maintain a clean professional rap sheet, deniability is becoming the name of the game.
The challenge for security is that deniability is beginning to have a seriously negative impact on the way that data is handled. It also means that consistently poor practices and persistent flaws are allowed to go unchecked in organizations.
A classic example of deniability and distancing working hand-in-hand is the frankly irritating and inaccurate labelling of APTs, or advanced persistent threats. This has become a catch-all term, seemingly created by vendor marketers to fudge why their products don’t/can’t work. It’s been gleefully adopted by corporate comms departments to deflect attention away from embarrassing security breaches.
Sure, an initial compromise might well be down to a very high-grade piece of malware, but the subsequent propagation and privilege escalation are usually the result of missing patches and re-used or simple passwords: not really the stuff of espionage thrillers, is it? And it’s certainly not what the APT evangelists spout.
Faced with the prospect of punitive legal action, many organizations will rejoice at having this glib, throwaway misnomer handed to them on a plate. No longer are they liable or to blame, because the rise of the ‘APT’ has made breaches impossible to defend against. While their PR and marketing people will cry ‘undefendable’, the irony is that such a stance is utterly indefensible.
Interestingly, while high-grade, zero-day malware is undeniably difficult to spot, the actions of the attacker once on the network should be much more obvious. This is another aspect that generally goes undiscussed: detecting and managing breaches and outages should be second nature for high-profile, target-rich organizations.
Considering that there are free tools out there which can be used to test reaction times and organizational resilience, there’s really no excuse. Chaos Monkey, for example, isn’t specifically an attack-training tool, but it easily lends itself to readiness and awareness exercises.
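You don’t even need Chaos Monkey itself to run such an exercise. Below is a minimal Python sketch of the same idea: pick a service at random, stop it, and time how long it takes for your monitoring and operations people to bring it back. The service names and the systemd-based checks are assumptions; substitute whatever fits your own estate.

```python
#!/usr/bin/env python3
"""Minimal resilience drill in the spirit of Chaos Monkey (illustrative sketch only)."""
import random
import subprocess
import time

# Hypothetical candidate services -- replace with ones you can safely disrupt.
CANDIDATE_SERVICES = ["nginx", "redis-server", "postgresql"]


def is_active(service: str) -> bool:
    """Return True if systemd reports the service as active."""
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", service], check=False
    )
    return result.returncode == 0


def run_drill() -> None:
    victim = random.choice(CANDIDATE_SERVICES)
    print(f"Drill: stopping {victim} -- start the clock")
    subprocess.run(["systemctl", "stop", victim], check=True)

    started = time.monotonic()
    while not is_active(victim):
        time.sleep(5)  # poll until monitoring/ops restores the service
    recovery = time.monotonic() - started
    print(f"{victim} restored after {recovery:.0f}s -- that is your reaction time")


if __name__ == "__main__":
    run_drill()
```

The number that matters is the recovery time: if nobody notices the outage until a user complains, an attacker moving around your network will enjoy the same head start.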
“[APT] has become a catch-all term, seemingly created by vendor marketers to fudge why their products don’t/can’t work”
Home Depot, which suffered a breach of 56 million customer records, cited “unique custom-built malware” that had sat on the network for five months, skimming credit card details. It’s also all too easy to shift the onus onto suppliers; US retailer Target quickly pointed the finger at HVAC contractor Fazio Mechanical Services, whose compromised credentials gave attackers their way in to plant malware on point-of-sale systems across some 1,800 stores and compromise the credit card data of 40 million customers. While the contractor was no doubt a huge factor, organizations need to take responsibility for how they govern and secure relationships with third parties.
So that’s third-party issues, but what about malware? Is it actually possible to defend against the majority of malware? In terms of ingress, maybe not 100% of the time, but how about preventing it from migrating across the entire information estate unhindered? Even with a highly sophisticated attack, that sort of migration shouldn’t be possible if basic security controls are in place.
Such a compromise is usually down to one or more of the following: weak password management (which allows an exploit to be propelled from one system to another and sweep through the enterprise), a lack of segregation across the network, minimal or no user education, and poor or inconsistent patching.
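The password problem, in particular, is cheap to test for. Here is a rough sketch, assuming you already hold an export mapping each host to its local admin password hash (the file name and column names are hypothetical): any hash shared by more than one host is a ready-made lateral-movement path for an attacker who compromises just one of them.

```python
"""Flag re-used local admin credentials across a host estate (illustrative sketch)."""
from collections import defaultdict
import csv


def find_reused_hashes(path: str) -> dict[str, list[str]]:
    """Group hosts by admin password hash and keep only hashes seen more than once."""
    hosts_by_hash: dict[str, list[str]] = defaultdict(list)
    with open(path, newline="") as f:
        # Hypothetical export format: one row per host, columns 'hostname' and 'admin_hash'.
        for row in csv.DictReader(f):
            hosts_by_hash[row["admin_hash"]].append(row["hostname"])
    return {h: hosts for h, hosts in hosts_by_hash.items() if len(hosts) > 1}


if __name__ == "__main__":
    for digest, hosts in find_reused_hashes("admin_hashes.csv").items():
        print(f"Hash {digest[:12]}... re-used on {len(hosts)} hosts: {', '.join(hosts)}")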
Many organizations fail to patch their systems regularly or consistently, presenting the hacker with a perfect window of opportunity. Not patching can be a symptom of the mentality that it has to equal downtime, so how can that be overcome? Perhaps the most effective method is to look at the problem afresh. If scheduling patch updates to minimize disruption (and mitigate any outage) is the issue, then why not work with two concurrent live environments? One is your ‘actual’ live environment, the other a perfect facsimile of that environment, updated at the same times, with the exact same changes.
Forget a disaster recovery zone: this practice enables full redundancy and failover, and if set up correctly, you should be hard pressed to tell which environment you’re in, as one is a complete mirror of the other. It will also ensure that you always have a ‘spare’ environment to test patches in. A word of caution here though: to avoid unintentionally migrating critical business assets, segregation must be properly managed.
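A rough sketch of that patch cycle is shown below; the environment labels, patch script and health-check endpoint are all assumptions. The point is the order of operations: patch the mirror first, verify it, switch traffic over, and only then bring the old live environment up to the same level.

```python
"""Sketch of patching across two mirrored live environments (assumed names and scripts)."""
import subprocess
import urllib.request

LIVE, MIRROR = "env-blue", "env-green"  # hypothetical environment labels


def patch(environment: str) -> None:
    # Placeholder for your real orchestration (Ansible, SCCM, etc.) -- assumed script name.
    subprocess.run(["./apply_patches.sh", environment], check=True)


def healthy(environment: str) -> bool:
    url = f"https://{environment}.internal.example/healthz"  # hypothetical endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


def switch_traffic(to: str) -> None:
    # Placeholder: repoint the load balancer or DNS at the given environment.
    print(f"Traffic now served by {to}")


def patch_cycle() -> None:
    patch(MIRROR)                      # patch the facsimile first
    if not healthy(MIRROR):
        raise RuntimeError("Mirror failed post-patch checks; live is untouched")
    switch_traffic(to=MIRROR)          # mirror becomes live...
    patch(LIVE)                        # ...then the old live environment catches up


if __name__ == "__main__":
    patch_cycle()
```

If the mirror fails its checks, nothing has happened to the environment your users are on, which removes the “patching equals downtime” objection.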
So, bearing in mind everything covered here (policing third-party relationships, effective patching, and segregation), is calling ‘APT’ on a breach still a justifiable defense? I’ll leave that up to you to decide, but maybe if we encouraged a culture of disclosure rather than deniability, these measures would be more widely explored and put into practice. Otherwise we all have to live with the consequences: more protestations over “undefendable” attacks and the continued proliferation of malware.
About the Author
Ken has been working in IT security for over 15 years. He writes for various newspapers and industry magazines, in an effort to get beyond the unhelpful scaremongering put about by many security vendors. He works for a firm of penetration testers, otherwise known as ethical hackers, who specialise in helping organizations understand and quantify risk to their business. Ken speaks widely on computer security, and takes great pleasure in highlighting vulnerabilities in software and hardware.