For over a decade, I have been analyzing and commenting on data breaches, either as someone sent in to perform post-incident root cause analysis and remediation or as an outside analyst providing commentary for the press.
It is always asserted that post-incident analysis is not about discovering who is to blame. In reality, however, it is almost inevitable that whatever is discovered will result in one or more people being held to account.
People want someone to blame. They want someone held accountable – and it does not matter if that person is not really the culprit – so long as it can be made to look as though there is the slightest possibility that they could be.
The clichéd tactic used to be to blame the intern – a person with neither the experience nor the financial means to mount a defense.
The narrative would be something along the lines of: “We had someone inexperienced make a minor error that resulted in an exceptional, unexpected, chance-in-a-million cascade failure. This accidentally took out all our operations and exposed millions of encrypted customer details,” as though it would be acceptable for a vast organization to allow an office junior to do the equivalent of accidentally switching off an entire company by flicking a switch on the wall.
This tactic fools no infosec professional. It is the same tactic as pretending that an employee clicking on a single phishing link could somehow be responsible for taking down a business empire. Fortunately, after the SolarWinds attempt to do that backfired (the breach was initially blamed on an intern using a weak password), that tactic has become less popular.
Whenever I investigate a breach or a full-scale business disruption, there are four fundamental questions that I want answered (captured as a simple checklist in the sketch after this list):
- Did the organization have the competence to know how to run security effectively?
- Did the organization provide the security function with enough resources?
- Did the organization give the security function enough empowerment?
- Did the management culture prioritize security considerations over short-term profit?
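For readers who like to see the checklist written down, here is a minimal sketch – purely illustrative, with hypothetical names and no pretence of being a real assessment tool – of how the four answers could be recorded and the implied governance gaps read back out:

```python
from dataclasses import dataclass


@dataclass
class GovernanceAssessment:
    """Hypothetical record of the four post-incident questions (illustrative only)."""
    competent_leadership: bool    # Q1: did they know how to run security effectively?
    sufficient_resources: bool    # Q2: was the security function adequately resourced?
    sufficient_empowerment: bool  # Q3: was security empowered to act?
    security_over_profit: bool    # Q4: did culture put security above short-term profit?

    def governance_gaps(self) -> list[str]:
        """Return the gaps implied by any 'no' answers."""
        answers = {
            "competence": self.competent_leadership,
            "resources": self.sufficient_resources,
            "empowerment": self.sufficient_empowerment,
            "management culture": self.security_over_profit,
        }
        return [area for area, answer in answers.items() if not answer]


# Example: an organization that funded security but never empowered it
print(GovernanceAssessment(True, True, False, False).governance_gaps())
# -> ['empowerment', 'management culture']
```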
The problem with post-incident analysis is that transparency is now rare. Ten years ago, the truth would be revealed – but now full disclosure of the sequence of security gaps that permitted something terrible to happen is treated like publicly acknowledging financial liability.
"After a major data breach from an organization, you are more likely to find out what happened from the external commentators or the attackers than from the victim"
After a major data breach at an organization, you are more likely to find out what happened from external commentators or from the attackers than from the victim itself.
Here is my #1 tip for unpacking what really happened whenever I have been faced with a company that is definitely NOT going to reveal the truth: I look at where the CISO reports.
I do this because the CISO reporting line will tell me almost everything I need to know about what really happened.
Step 1) Do They Actually Have a CISO (And Is There Only One of Them)?
You would think that the word “Chief” in the title of a Chief Information Security Officer would be a clue that this role is supposed to be a single point of accountability. Unfortunately, that is not always the case – some organizations have several CISOs – and when they do, it tells me there is no single point of accountability or control for security.
Slightly worse than having several CISOs is having none at all. This will again lead me to the same conclusion: that the breached organization answered “no” to all four of my questions.
Step 2) Does the CISO Report Into the CIO?
Is having a CISO report into a CIO a bad idea?
For many of us, that question reads much like: is sticking a metal object into a live power socket a bad idea?
The natural answer from most infosec professionals (to the first question) would be an unequivocal “yes – it is a terrible idea.” However, that answer may be considered contentious: according to numerous ISACA cybersecurity surveys that track the reporting lines of CISOs and security staff, over 30% of security professionals report up the CIO line.
So, why can a CISO reporting to a CIO be a significant clue after a data breach?
Consider the potential conflict of interest:
A good CIO wants to acquire as much data of value as possible, whereas a good CISO wants the footprint of valuable data to be as cost-effective as possible to secure.
If a CIO wants to rush out and collect terabytes of data, a CISO needs to ensure that data collection, processing and storage are correctly triaged and secured. When the CISO reports into the CIO, it means the CIO controls what rules the CISO can apply. That is great for optimizing data acquisition but lousy when it comes to consistently applying effective security.
Why is having the CISO report into the CIO line a problem? Because challenging the actions of the manager that you report into is more than a bit awkward.
While it is theoretically possible for a CIO to achieve their own balance between optimizing the value of data and achieving the appropriate security, the reality is that such balance requires a level of integrity more likely to be found in people who would decline to take on such a conflicted role.
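To make the heuristic concrete, here is a rough sketch – again illustrative only, with hypothetical names, and restricted to the two checks described above – of how the reporting-line red flags could be expressed as code:

```python
from typing import Optional


def reporting_line_red_flags(num_cisos: int, ciso_reports_to: Optional[str]) -> list[str]:
    """Illustrative sketch of the two reporting-line checks (hypothetical names only)."""
    flags = []

    # Step 1: is there exactly one accountable CISO?
    if num_cisos == 0:
        flags.append("no CISO: nobody is accountable for security")
    elif num_cisos > 1:
        flags.append("multiple CISOs: no single point of accountability or control")

    # Step 2: does the CISO report into the CIO line?
    if ciso_reports_to == "CIO":
        flags.append("CISO reports into the CIO: conflict between acquiring data "
                     "and securing it, and challenging your own manager is awkward")

    return flags


# Example: one CISO, but buried under the CIO
print(reporting_line_red_flags(1, "CIO"))
```

In practice, of course, the answers come from an org chart and a few awkward interview questions rather than from code.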
What do you think?
- Is having a CISO reporting into the CIO OK?
- Does an independent annual audit of the CISO function mitigate the risk?
- Is a CISO reporting into the CTO (technology line) better or worse?
- Should all CISOs report to the very top of an organization?
Leave your comments below – and I will be sure to read them all.