This piece in InfoSecurity reminded me of a recent webinar I did with Jake Kouns of the Open Security Foundation. (An archive of which is available if you are interested.)
Insider Threat is such an emotive term. It's hard not to imagine suspicious Milton-esque characters lurking in the corridors and sabotaging the mail servers. In fact, while this can be the case, the rather more prosaic truth about insider attacks (if that's even the right word) is that they are often accidental, one-off incidents usually caused by a moment of all-too-human inattention or error.
The Open Security Foundation's breach-tracking website, datalossdb, publishes stats on a number of different types of breach, and one of the more interesting findings I had the chance to discuss with Jake was that for 2010, accidental insider breaches outnumbered malicious ones by almost three to one. So you're roughly three times as likely to see information exposed by someone just trying to do their job as you are by the disgruntled (and Swingline-stapler-obsessed) insider looking for revenge.
In a world in which organizations such as WikiLeaks seem only too eager to publish anything you might call private, it's understandable that organizational security teams are highly focused on controlling access to information and watching for suspicious behavior. They should be. But that shouldn't come at the expense of basic controls designed to act as a safety net when the guy sitting next to you (or, heaven forbid, you) sends the wrong file to the wrong person, or loses that thumb drive with the super-secret designs for that new, better mousetrap.
In reality, controls that focus on the data, and the way users interact with it, can help in both cases, as can newer technologies such as DLP and reemerging approaches such as encryption. Focusing on the data will help prevent both intentional and accidental breaches, and watching how users interact with information will provide insight whether a malicious insider is trying to steal it or a well-meaning user is doing something foolish. Of course, that assumes you can keep control of your data in the first place. Or even know where it is.
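To make the "focus on the data" point a little more concrete, here's a minimal sketch (in Python, using the cryptography package's Fernet recipe; the file name and throwaway key are purely illustrative, and in practice the key would live in your key-management system, not next to the data) of encrypting a file before it ever reaches a thumb drive, an email attachment, or a cloud bucket. Lose the ciphertext and you've had an incident; lose the plaintext and you've had a breach.

```python
# Minimal sketch of a data-centric control: encrypt the file before it
# leaves your control. Assumes the "cryptography" package is installed.
from cryptography.fernet import Fernet


def encrypt_file(src_path: str, dst_path: str, key: bytes) -> None:
    """Write an encrypted copy of src_path to dst_path."""
    fernet = Fernet(key)
    with open(src_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(dst_path, "wb") as dst:
        dst.write(ciphertext)


if __name__ == "__main__":
    # Illustrative only: a real deployment would fetch the key from a
    # key-management service rather than generating it alongside the data.
    key = Fernet.generate_key()
    encrypt_file("mousetrap-designs.pdf", "mousetrap-designs.pdf.enc", key)
```

The point isn't this particular recipe; it's that the protection travels with the data, so the control still works whether the exposure is malicious or just a moment of inattention.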
Want a prediction for 2011? Try this: if you thought keeping insider activity under control was tricky when you had the data sitting behind concrete walls in your data center, picture what the insider threat means when that data is three thousand miles away in someone else's cloud storage infrastructure. Insider? Whose? Yours or theirs?
Moving data out into the cloud introduces new and specific complexities to the way in which we think about, and prepare for, insider attacks. However, just as before, the closer we keep security approaches to the data itself, the better chance they have of thwarting or minimizing an unpleasant breach, whether intentional or not.