When it comes to insider data leakage, “we are our own worst enemy”. This is the view of Paul Henry, a security and forensic analyst with endpoint security firm Lumension. To illustrate this point, Henry provides an example of the confidential information that can be gathered from a simple Google search.
He professes that the fundamental causes of internal data leaks, especially the unintentional ones, are often too much access to sensitive data by those who don’t need it, and transmitting that data in a way that makes it highly vulnerable.
Henry enthusiastically tells Infosecurity to do a simple Google search for .doc and .pdf file types, using the search term “document is confidential and proprietary”.
Sure enough, Henry is spot on, as the query brings back more than 5000 hits leading to a vast array of documents, many branded in one way or another as ‘confidential’ or ‘proprietary’. “It’s just such a pathetic example of the problems we face today”, Henry remarks, adding that this sort of data leak is entirely preventable.
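The kind of query Henry describes can be reproduced with Google's standard search operators. The exact syntax he used isn't given in the article beyond the search phrase and file types, but a plausible form is:

```
filetype:pdf "document is confidential and proprietary"
filetype:doc "document is confidential and proprietary"
```

The `filetype:` operator restricts results to a given document format, and the quoted phrase matches documents containing that exact boilerplate marking.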
"If you have a disgruntled employee, there’s a chance that they will walk away with some form of information, which most do anyway"
– Patrick Walsh, eSoft
Data from the recent Verizon 2010 Data Breach Investigations Report supports Henry’s claims. The study, which examined 141 data breach incidents investigated by Verizon and the US Secret Service in 2009, found that 48% of breaches somehow involved those within an organization. After reviewing the causes of these incidents, the report cites the restriction and monitoring of privileged users as a primary focus to limit similar data leaks, as many employees are provided access to information that is not related to their work functions.
So, in the wide world of insider threats, which we will examine in further detail, it appears that, quite often, the causes have rather simple solutions.
‘Enemy’ Profile
Insider threats to organizational assets – whether customer data, intellectual property, or payment information – can be broken down into two types: malicious insiders, who act with intent to do harm; and non-malicious insiders, who violate policy or leak data without necessarily seeking to do so. It’s a point that Rich Baich tries to illustrate clearly.
A principal with Deloitte Security and Privacy Services, Baich says the term ‘insider threat’ often has a “broad wrapper” around it. He prefers to consider insider threats as those with malicious intent and lays out a framework for the types of employees who typically pilfer data.
"In a good economy you only have to worry about bad people doing bad things. In a bad economy, unfortunately, you have to worry about good people doing bad things"
– Paul Henry, Lumension
Baich says employees who are likely candidates include those recruited due to their particular skill set, title, or knowledge, much like a Cold War-style espionage scenario. He also includes employees who knowingly violate an organization’s infosec policy, but are not necessarily aiming to leak information. Finally, Baich highlights disgruntled employees, including those who feel under-paid, under-valued, or face termination.
There are actually three types of insiders if you ask Paul Smith, the CEO of PacketMotion, a provider of user activity management (UAM) products. These include the innocent, the not-so-innocent, and those who have gained legitimate user access and now masquerade within a system with ill intent.
This stereotype seems to be supported by the aforementioned Verizon study, which found that employees who commit data theft or leaks were often previously cited for minor violations of company policy.
In addition, Verizon found that nearly a quarter of employees (24%) who committed insider data theft had recently undergone some kind of job change, whether a firing, a resignation, being a new hire, or a role change. It cited the failure to terminate employee account privileges promptly as the primary cause of data leaks in these situations, which on its face appears entirely preventable.
Backing this up are results from a 2010 cybercrime survey sponsored by Deloitte that polled 523 private and public sector C-suite respondents. The study found that while technology can play a role in preventing data leakage, spending does not necessarily equal greater security.
Although the firms represented spent copious amounts on technology tools, the report concludes that many “neglect simple, inexpensive measures such as patch management, log analysis, privilege restrictions, password expiration, and termination of former employees’ access through a robust deprovisioning process”.
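One of the inexpensive measures the Deloitte report names, deprovisioning former employees' access, amounts to regularly reconciling active accounts against the current staff roster. A minimal sketch, assuming hypothetical rosters (in practice these would come from an HR system and a directory-service export):

```python
# Hypothetical data: in practice, pull these from the HR system and
# the directory service (e.g. an LDAP/Active Directory export).
current_staff   = {"alice", "bob", "carol"}
active_accounts = {"alice", "bob", "carol", "dave"}  # dave left last month

def stale_accounts(staff, accounts):
    """Accounts with no matching current employee: candidates for
    immediate deactivation in a deprovisioning review."""
    return accounts - staff

print(sorted(stale_accounts(current_staff, active_accounts)))  # ['dave']
```

Run on a schedule, a check like this catches exactly the "untimely termination of account privileges" failure mode the Verizon report flags.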
The Changing Environment
Data on whether the recent brutal economic conditions have factored into the insider threat question is, at best, incomplete. Reports like those from Deloitte and Verizon show no bona fide connection between an uptick in insider data breaches and the Great Recession, but most analysis concludes it may be too early to establish such a relationship.
Regardless, many of those who spoke with Infosecurity feel that the economy and insider misdeeds go, quite logically, hand-in-hand.
“We tend to have an implied trust for our employees”, Lumension’s Henry asserts. “In a good economy you only have to worry about bad people doing bad things. In a bad economy, unfortunately, you have to worry about good people doing bad things.”
Watch for people with motive, access, and capability, warns Chris Petersen, CTO and founder of log management provider LogRhythm. This includes IT admins, developers, engineers, and people who operate with what he calls a free-agent mentality.
The hard part about profiling likely internal threats, Petersen acknowledges, is “that you just don’t know” who the bad ones are.
“Anytime certain environmental factors change, your risk increases”, says Baich. Asked about the role the economy plays in promoting increased insider activity, he offers an equation: risk equals vulnerability times threat.
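Baich's rule of thumb is a simple multiplicative model. A toy sketch, with scoring scales that are illustrative assumptions rather than anything from the article:

```python
def risk(vulnerability: float, threat: float) -> float:
    """Baich's rule of thumb: risk = vulnerability x threat.

    Both inputs are illustrative 0-10 severity scores; the article
    gives only the relationship, not a scale.
    """
    return vulnerability * threat

# Same vulnerabilities, but a higher threat level (economic stress,
# social media exposure): overall risk rises proportionally.
steady_state = risk(vulnerability=4.0, threat=2.0)  # 8.0
downturn     = risk(vulnerability=4.0, threat=3.5)  # 14.0
```

The point of the multiplicative form is that risk climbs even when an organization's vulnerabilities stay constant, which is exactly Baich's argument about the current environment.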
Baich believes the vulnerabilities have not changed due to the economic crisis, but the threats have shifted with the advent of social media.
The role that social media plays in escalating insider threats is a concern that is echoed by Nick Levay, information security and operations manager at the Center for American Progress, a Washington, DC-based think tank.
Because CAP is involved in public advocacy as well, Twitter and Facebook are business-critical tools to his organization. Therefore, Levay must take a “college campus” approach to security and allow access to these riskier applications while, at the same time, providing a measure of protection against the Wild West of online apps that could allow the non-malicious insider to inflict serious damage within an organization’s network.
To achieve this mission, Levay says CAP is also exploring application whitelisting as a “primary weapon to secure endpoints”. Furthermore, in his opinion, the one piece of technology that is now largely impotent against insider data leaks, regardless of the attack vector, is anti-virus, which he calls “the worst tool in the toolkit these days”.
Not surprisingly, Lumension’s Henry concurs with this assessment. “Anti-virus, of and by itself, in today’s environment, is completely useless.” The forensic analyst says there are well over 30 million bad things floating around at any time, and he doubts organizations’ anti-virus will be monitoring for all those signatures.
Don’t throw away your anti-virus, however, Henry advises. He suggests you complement it with some sort of application control technology, such as whitelisting.
The Proof is in the Logs
To rip off a line from late-night infomercial mogul Ron Popeil, log management and audit products are not “set it and forget it” solutions. If you have the money to invest in these tools, then you must make sure you have the time and resources to review them as well.
And why is this the case? Because most data breach incidents, regardless of who is at fault, can be detected through the log data.
The joint Verizon-US Secret Service data set shows that 86% of all breaches investigated left evidence of the event within the organization’s log files. Rather than scrutinizing each line of the logs, the report recommends looking for abnormal fluctuations, in either direction, in the amount of log data being produced. Or as the report conveniently puts it: look for “haystacks” rather than “needles”.
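The "haystacks, not needles" heuristic can be sketched in a few lines: track how much log data each day produces and flag days that swing sharply from the norm, in either direction. The threshold and counts below are illustrative assumptions, not figures from the report:

```python
from statistics import mean, stdev

def volume_anomalies(daily_counts, threshold=2.0):
    """Flag days whose log volume deviates sharply from the norm.

    A sketch of the report's "haystacks" advice: instead of reading
    every line, look for abnormal swings (up or down) in how much
    log data is produced. The 2-sigma threshold is an illustrative
    choice, not from the report.
    """
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    return [
        (day, count)
        for day, count in enumerate(daily_counts)
        if sigma and abs(count - mu) > threshold * sigma
    ]

counts = [10_200, 9_800, 10_050, 9_950, 48_000, 10_100, 9_900]
print(volume_anomalies(counts))  # the spike on day 4 stands out
```

A sudden drop would be flagged just as readily as a spike, which matters: an insider quietly disabling logging looks like a haystack going missing.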
Levay, who uses a log management/SIEM platform from LogRhythm, contends that you have to examine audit log data to find the anomalies, such as people accessing the same account from two different IP addresses. “You have to log everything”, including both failed and successful logins, he warns. “Newsflash: when you have compromised people, it tends to be successful logins that you’re concerned with.”
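The kind of anomaly Levay describes, one account logging in from two different addresses, is straightforward to hunt for once successful logins are actually being recorded. A minimal sketch over hypothetical parsed login events (the record shape is an assumption; real SIEM schemas vary by platform):

```python
from collections import defaultdict

# Hypothetical parsed login events: (username, source_ip, success).
events = [
    ("alice", "10.0.0.5",    True),
    ("bob",   "10.0.0.9",    True),
    ("alice", "203.0.113.7", True),   # same account, second address
    ("carol", "10.0.0.12",   False),  # failed attempt: ignored here
]

def accounts_with_multiple_ips(login_events):
    """Find accounts with *successful* logins from more than one IP.

    Per Levay's point: with a compromised account, it is the
    successful logins, not the failures, that matter.
    """
    seen = defaultdict(set)
    for user, ip, success in login_events:
        if success:
            seen[user].add(ip)
    return {user: ips for user, ips in seen.items() if len(ips) > 1}

print(accounts_with_multiple_ips(events))  # alice used two addresses
```

Note that the filter keeps only successful logins, matching Levay's warning; the failed attempt by carol never enters the comparison.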
His advice is to determine who in your organization is likely to be targeted, and then focus in and audit their activities. But there is only so much time one can look at log data, Levay confesses, adding you can really only spend 15 minutes at a time on them without your “brain turning to mush”, likening it to the X-ray technician syndrome.
“If you find you’re the victim of a breach, the first thing you should probably do, unless it’s something you have dealt with a lot, is go out and get external help”, Levay advises. Unless your team has hands-on experience in this area, he says, you might as well investigate with a blindfold on. He humbly asserts: “You need to embrace your own ignorance”.
There is another reason why an outside analyst can be helpful to security professionals, who often need to justify their suspicions to management. “It forces you to compare what you think against what other people have found. It’s a confidence builder” if what you suspect can be confirmed by a third-party forensic analyst, says Levay.
Patrick Walsh, chief technology officer of network security firm eSoft, concurs. “I would start with an IT forensic analyst”, he says, because “folks will almost always leave some sort of trail”. Walsh believes this should be your first call if you suspect a data breach, regardless of whether you have an inkling of who was at fault.
Employees with a technical background are more likely to engage in insider theft, especially if they believe they can do so discreetly, says Lumension’s Henry. But so much data is logged and stored that you need to be highly skilled to pull it off.
Henry, who does computer forensics for a living, agrees that the proof of insider leaks, whether intentional or unintentional, is typically found within the log data. “It is rare that you don’t find that smoking gun”, he assures.
"Newsflash: when you have compromised people, it tends to be successful logins that you’re concerned with"
– Nick Levay, Center for American Progress
Upon suspecting an insider threat situation, Henry says most companies are ill-equipped to conduct the analysis themselves. “I have seen many a case where individual organizations have made an attempt to do some discovery work, and they’ve literally destroyed evidence in the process.” Henry warns that organizations doing in-house forensic investigations can alter data, rendering it inadmissible in legal proceedings.
He issues a useful warning: “If you suspect anything, immediately contact a professional, someone that is experienced in forensic analysis. Don’t pull the plug, don’t turn the PC off. In the past, the first step used to be to pull the plug.”
Henry contends that doing so, in today’s environment, could cause you to lose valuable data or evidence. “A lot of malware will actually break an executable and inject code into a running executable”, he explains, adding that the environment has “become too complex today to be handled by the average IT professional”.
Facing Facts
All of these insights and recommendations should be tempered in the face of one fact: nearly half of all data breaches involve organizational insiders in some respect, but they account for only 3% of lost records. This means that external threats still remain the primary source of headaches for those who work in security.
This was the conclusion of Verizon’s investigation, which added that inside data leaks are typically not a “large-haul technique”.
Most security people with experience in this area point to several educational measures that can drastically reduce, at the very least, compromises facilitated unintentionally by staff members. These include education on spear phishing campaigns, the use of strong passwords, locking inactive terminals, and other common-sense approaches.
But if Paul Henry is right, and we are our own worst enemy, then monitoring and detecting this type of activity will remain a necessary evil.
Of course, there is another face to this type of risk, as Patrick Walsh reminds us that you can’t figure out your insider threats without also looking at the people problem. “If you have a disgruntled employee, there’s a chance that they will walk away with some form of information, which most do anyway.”