Once upon a time, accidentally opening a malicious computer file in the office might have caused you much embarrassment. Back then, viruses displayed strange messages, froze computer screens, or in the worst instances, deleted files for no particular reason.
These days, cyber threats have grown up and pose a far more potent, organization-wide risk. Now, in the worst instances, unwittingly falling victim to a piece of malware could cost you your job.
A recent report from Oracle, ‘Security in the Age of AI’, tells us that at board level, organizations see ‘human error’ as their top cybersecurity risk. Presumably this is why employee training ranks higher on their priority lists than additional investment in security technology.
This viewpoint is understandable; after all, over the past decade the corporate world has been steadily conditioned into believing it by a security industry that has always required a scapegoat to account for the threats that slip through the net. We pay top dollar for security technology, so if our organization suffers a breach, it must be a case of human error.
As this view has become accepted wisdom, we’ve seen more and more stringent cybersecurity policies baked into employees’ contracts, more instances of employees losing their jobs after a data breach, and the growth of a multi-billion dollar cybersecurity training industry. Blaming the humans for cybersecurity breaches is now big business.
It is undeniably the case that the vast majority of cyber-criminals still rely on exploiting human vulnerability to break through corporate defenses. Employees’ inability to distinguish a potential threat from, say, a run-of-the-mill web link, email, or on-screen message is precisely what lets an attacker into the organization’s network.
However, ‘vulnerability’ is not the same as ‘error’. In reality, we will always open emails we shouldn't, click on ads we shouldn't, or visit websites we shouldn't. This is human nature, not human error. We are a curious species, as well as a uniquely creative one.
We are experts at using our creativity to pique the curiosity of others, not to mention using it to mask our true intentions, to obfuscate and deceive. Furthermore, the prevailing clickbait culture we find ourselves marooned in today has been built entirely upon the premise that the more people can be persuaded to click on something, the more money there is to be made – whether through legitimate or nefarious means.
The upshot of these fundamental human truths is that even when employees are at their most diligent, there's still a good chance they can be outwitted. And in any case, employees’ default setting is unlikely to be ‘high cybersecurity diligence.’
Take the following example as a case in point: if you’re already behind schedule and you receive an email appearing to be from your boss urgently asking you to read a web article before your meeting, is your first thought to check whether the sender is legitimate, or is it to click through and read the article?
Outside of a dedicated internal cybersecurity team, every employee is a part-time cyber defender at best, and probably a time-poor one at that. They're attempting to add cybersecurity diligence to a long list of other priorities, and yet they’re facing threats concocted by full-time cyber attackers with a singular professional remit: to break into their organization. When analyzed in the cold light of day, it’s clear which side is the better equipped to succeed.
So, while it is undoubtedly sensible to educate employees on basic ‘common sense’ security practices that can easily be adopted in their everyday working lives, attempting to upskill employees beyond this is not going to work. Human vulnerability will not be eradicated. Trying to deny basic human nature will result in shoddy cybersecurity.
This leaves us with a choice: either we adopt the fatalistic attitude that a successful cyber-attack is therefore only a matter of time (which explains the massive growth of the cybersecurity detection and remediation industry), or we can consider other possibilities for enhancing organizational protection. For example, perhaps our current technology simply isn’t good enough.
This perspective is unlikely to win you many friends in security industry circles, but it’s high time we stop blaming users for breaches and reframe the debate. If we’re paying top dollar for security technology, shouldn’t it work?
People will always make mistakes. It is the job of the security industry to create and implement solutions to mitigate or eliminate the impact when a mistake is inevitably made.
These technologies do exist. As anyone who has worked in the realm of national security will attest, it’s perfectly possible to create secure operating environments with built-in technological safeguards that take human mistakes out of the equation. National security organizations have been successfully operating such high-security environments for decades.
Historically these high-security systems have been prohibitively expensive, and have also come at the cost of usability and employee productivity. As with all technology, over time the price comes down and the functionality increases. We’re seeing this right now in the world of national-security-grade cyber defense technology.
Today, the world’s biggest companies are the size of nation states. The high-end criminals trying to attack them share many of the same sophisticated capabilities as nation states. In some instances, companies are even finding themselves attacked by rogue nation states. Without a step-change in the basic level of security afforded to organizational IT systems and processes, these adversaries will win the day.
Cyber-attacks are now ranked by the World Economic Forum as the fifth biggest risk to businesses worldwide. It’s time for enterprises to start defending their most sensitive systems the way nation states protect their most sensitive information: better protecting employees’ cyber systems and taking the onus for cybersecurity management and remediation off their shoulders. Alongside this, it’s time for security vendors to be held to account for whether their solutions really protect.