Professor Steve Furnell, head of the School of Computing, Electronics and Mathematics at the University of Plymouth, looks at how security needs to become an implicit element of our technology culture, keeping pace with the technology rather than trailing behind it.
The recent spate of security incidents provides timely evidence that our adoption of technology appears to be outstripping our ability to protect it. If you’re wondering which particular incidents this refers to … well, you can probably take your pick.
No matter when you are reading this, there will almost certainly be a completely different set of ‘recent incidents’ to reflect upon. That, unfortunately, is the problem – the statement has held true for quite some time, and seems likely to continue to do so.
We face a fundamental problem that security practices don’t keep pace with the threats. Alas, there is nothing new in this – in fact, many of today’s threats can be traced back 20-30 years. However, they didn’t pose such a problem back then, and so practices didn’t change to address them.
What's changed over time – thanks mainly to the internet – is our exposure, the possible impact, and a more widespread recognition of the need to respond. However, this recognition has been far more gradual than the growth of the problems, and so security is often absent in both the technologies themselves, and in the minds of those developing and using them. While it may follow on eventually, this security lag allows significant windows of exposure.
Unfortunately, we don't have to look far to see evidence of practices not keeping up. For example, in prior research at Plymouth University, we surveyed almost 300 users about security on their personal systems and devices, and discovered that more were actually indicating explicitly bad practice than good practice.
Specifically, while 9% claimed that they chose reasonable passwords, installed updates promptly, and believed they had up-to-date anti-virus protection, 11% fell into the exact opposite group. Meanwhile, the practices amongst the rest were mixed (e.g. behaving well in terms of installing updates, but choosing weak passwords, etc).
Given that none of these were advanced aspects of security, and all could reasonably be expected to be standard elements of IT literate behaviour in this day and age, it is clearly disappointing that less than a tenth of respondents even claimed to get all of them right.
To consider a more specific case, mobile devices offer a classic example of security lag in action. To appreciate this, let’s look at a bit of the history in the context of malware risks. A decade or so ago, with basic 2G phones and PDAs, there wasn’t a real issue here, aside from some early proof-of-concept cases offering a sign of problems around the corner.
For malware to become a bigger issue, a few conditions needed to be met. Firstly, the devices needed full networking capabilities – at the time, phones had limited internet access, while many of the PDAs lacked connectivity beyond being cabled up to sync with a PC. Secondly, users needed to be able to install and run code – PDAs could do it, but the earlier phones couldn't. Thirdly, there had to be a sufficient population of users to make it worthwhile for attackers to divert attention from their traditional Windows-based targets.
So for quite some time, while mobile devices became immensely popular, their limited capabilities and diversity of platforms meant they were largely ignored in terms of malware. As a result, when related concerns were raised, it was easy for them to be dismissed as hype, and few wanted to hear about potential future problems.
Of course, time passed, and all of the preconditions were met, with Android devices in particular now ticking all of the boxes most handsomely. Android's popularity, combined with the openness of its app marketplace (as opposed to the policing that Apple applies to its App Store), has led to 99% of mobile malware targeting this platform. The scale is still nowhere near that on PCs, but it is real, it exists, and many users now find themselves exposed.
Drawing again upon survey work from Plymouth (this time involving a wider sample of over 1,200 users), we found that while over 90% claimed to have anti-virus protection on desktop devices, only 10% claimed to have it on smartphones (rising to 14% when specifically considering the ~700 users that had Android devices).
So, now we have a massive population of users that have become accustomed to using mobile devices without having to worry too much about security, who now need to be re-educated to recognise an issue that they have already accepted in the PC context.
Sticking with mobile devices, some similar comments can be made around authentication. While many users have gradually accepted the need to have some protection here, the method used is still very often the 4-digit PIN. While this may have been perfectly reasonable back in the days of the basic phones that made calls, and only stored text messages and contact details, things are rather different these days.
Is protection via a 4-digit code really commensurate with the apps, services and data on the devices (particularly given that the same content would typically be protected by at least a strong password when held elsewhere)? Apple's recent update in iOS 9 changed its baseline from 4 digits to 6, and while this is clearly progress of a sort, it is hardly a shift of the same magnitude as the change in what the devices now hold.
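The arithmetic behind that observation is simple enough to sketch. The snippet below (an illustration only – it deliberately ignores device-specific defences such as lockouts, escalating delays, or hardware throttling, which in practice matter far more than raw keyspace) shows how modestly the guessing space grows from 4 to 6 digits:

```python
# Illustrative PIN keyspace comparison. This says nothing about any
# particular device's lockout or rate-limiting behaviour - it only
# shows the raw number of combinations an attacker would face.

def pin_keyspace(digits: int) -> int:
    """Number of possible PINs of the given length (digits 0-9)."""
    return 10 ** digits

four_digit = pin_keyspace(4)   # 10,000 combinations
six_digit = pin_keyspace(6)    # 1,000,000 combinations

print(f"4-digit PIN: {four_digit:,} combinations")
print(f"6-digit PIN: {six_digit:,} combinations")
print(f"Increase:    {six_digit // four_digit}x")
```

A hundred-fold increase sounds substantial, but it is still a far smaller space than even a mediocre alphanumeric password would offer, which is the point being made above.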
So, at the user level, mobile device security still seems optional, and clearly lags behind where it should be. Moreover, it is also falling short in organisations. Indeed, some still kid themselves that they don't have a mobile security issue because they've not given the staff such devices, or they haven’t formally sanctioned a BYOD approach. Mobile devices are not even an emerging technology anymore; they are firmly established. As such, any organisation that lacks a policy (or at least a clearly stated position) on their usage has clearly missed something significant!
Looking beyond mobile security, it’s easy to cite other examples. For instance, successive security surveys reveal long-term recognition of significant breaches being linked back to (lack of) staff awareness. However, the self-same surveys show little attention towards education and awareness initiatives – the very things that one might consider relevant to addressing the problem. Another good example of lag is around patching latency.
It’s well established that exploitation of known vulnerabilities is a significant cause of incidents, and while there are certainly zero-day attacks to worry about, there are many cases in which quicker action would lessen the risk. For example, a recent study from NopSec Labs suggested that it takes an average of 176 days for vulnerabilities to be remediated by organisations in some sectors (compared to an average of 7 days for attackers to build an exploit).
Meanwhile, there are myriad end-user systems that remain open to compromise because updates have not been applied. It is easy to appreciate the viewpoint that such users might be coming from (particularly when dealing with their own personal technologies), because the system already does what they want, regardless of whether they update it.
So the fact that everything still appears to be working serves to make security look like a choice or an optional extra, rather than a necessity. Of course, the fact that many breaches will not be visible means that this mind-set can persist long after a system has actually been compromised.
Findings from Google from earlier this year revealed that some users actually view updates as a security risk, believing them to be a route by which malicious code might be installed on their system. The fact that there is such a misconception around such a fundamental element of security illustrates just how much distance we need to cover in bringing good practice in line with our desire to use new technology.
Despite all the prior experiences, we are still being given technologies that don't offer security until sometime later. Some of this links back to security being overlooked on the basis of rather casual risk assessment. It is easy to run into the assumption that certain devices won't be attacked because they don't do much, or can't offer much to an attacker, whereas in reality such things have a track record of being exploited simply because they are vulnerable. As with mobile malware, the key point again is that, at this stage, we shouldn't have to be relearning the same lessons.
Unfortunately, but unsurprisingly, there is no magic wand solution. Overcoming the historic lag needs a mixture of action (by developers) and expectation (by the rest of us). We are past the point where systems and devices should be provided without proper attention to security, and we ought to be similarly past the point where we would accept them.