The IT sector rewards whoever gets there first. In a technology-driven industry, new discoveries and developments can deliver significant profits, which is partly why IT companies are so often beset by patent infringement claims. Unfortunately, what applies to new products and technology concepts also applies to security flaws. Researchers can profit from newly found vulnerabilities, especially if they or their customers decide to bundle an exploit into widely distributed malware.
In the early days of security software, tools relied heavily on detecting vulnerabilities that were already known. A known virus could be fingerprinted, so that the software could recognize and quarantine it if it tried to infect a machine. That approach worked when most malicious code spread by floppy disk. But as broadband connections left computers perpetually online, attacks spread more quickly, giving researchers less time to dismantle and profile a piece of malware before it made its way into the wild.
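By way of illustration, classic signature matching reduces to a lookup: hash the suspect file and compare the digest against a database of known fingerprints. The sketch below is a minimal Python rendering of that idea; the KNOWN_SIGNATURES table is invented for illustration, and real engines match byte patterns inside files rather than whole-file hashes.

```python
import hashlib

# Invented signature database for illustration: digest -> malware family name.
KNOWN_SIGNATURES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824": "Example.A",
}

def fingerprint(path: str) -> str:
    """Hash a file's contents to produce its signature."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def scan(path: str) -> str | None:
    """Return the malware name if the file matches a known signature."""
    return KNOWN_SIGNATURES.get(fingerprint(path))
```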
Consequently, zero-day attacks – exploits based on vulnerabilities that have not yet been patched – are becoming commonplace, and an increasingly worrying phenomenon for corporate users. The SANS Institute identified zero-day attacks as the most important trend in its top 20 list of vulnerabilities in 2006.
Bids for bugs
How can companies defend against attacks when they aren’t sure what they will look like? One approach is for product vendors to try to get to the flaw before the black hats do, but in the past, researchers have been frustrated by some vendors’ lack of interest or slowness to act.
Now, Herman Zampariolo is hoping to make the whole vulnerability disclosure field more transparent. He set up WabiSabiLabi, an auction system in which researchers can sell their zero-day vulnerabilities to the highest bidder. “The payoff is that we’re closer to zero risk,” he says. He dismisses concerns that black hats could purchase the vulnerabilities too, arguing that he has put checks and balances in place to verify buyers. He believes it will make “a contribution to security”.
It’s a potentially profitable idea, but auctioning the flaws may not stop them reaching the wild before a patch is available. That battle is being fought on two levels. The most immediate skirmishes are in the trenches, where systems administrators try to stop their users being compromised. But another, more strategic war is being fought back in the planning room, where researchers make longer-term decisions about product development, reacting to what they find on the shady IRC servers where many of the initial clues about black hat activity can be picked up.
War games
"Ideally, you’d hope that we could turn the infinite cat and mouse game between black hats and white hats into a time-bound game, where the white hats won totally, by providing a system so adaptive that it could block any unknown attack that was thrown at it." |
“We are in an arms race. We are explicitly trying to innovate and get ahead of them,” says Michael Barrett, chief information security officer at PayPal. “Whenever we make change X, we know that they’re going to respond with attack Y.” However, while thinking about change X, security researchers have often already thought about defense Z two steps down the line, he adds. “Security researchers may seem surprised when a certain attack comes along, but usually they’re pretty predictable.”
The cat and mouse game between white hats and black hats brings to mind the kind of formalized game theory that pervaded organizations like the RAND Corporation when it tried to game out nuclear war scenarios as if there could ever be a winner.
Papers peppered with mathematical equations have been published specifically on game theory as it applies to security research.
Paul Midian, a principal consultant at security specialist Siemens Insight Consulting, says that IT security is an infinite game, as opposed to one that reaches closure and an eventual payoff.
Ideally, you would hope to turn that infinite cat and mouse game between black hats and white hats into a time-bound game that the white hats win outright, by providing a system so adaptive that it could block any unknown attack thrown at it.
“If you are going to model a computer attack scenario, then at the end of the game, either the security vendor has won (because the computer is unharmed), or he loses,” says Midian. This is the direction in which vendors have been attempting to move.
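Midian’s win-or-lose framing can be made concrete with a toy payoff matrix. The sketch below assumes a one-shot, zero-sum game with invented strategies and payoffs; it simply shows how a defender might pick the strategy with the best worst-case outcome.

```python
# Rows: defender strategy; columns: attacker strategy.
# Entry = defender's payoff: 1 if the computer is unharmed, -1 if compromised.
# All strategies and payoffs here are invented purely for illustration.
PAYOFFS = {
    ("signatures", "known_virus"): 1,
    ("signatures", "zero_day"): -1,
    ("adaptive_defense", "known_virus"): 1,
    ("adaptive_defense", "zero_day"): 1,   # assumes the anomaly is caught
}
DEFENSES = ("signatures", "adaptive_defense")
ATTACKS = ("known_virus", "zero_day")

def security_level(defense: str) -> int:
    """Worst-case payoff, assuming the attacker best-responds."""
    return min(PAYOFFS[(defense, attack)] for attack in ATTACKS)

best = max(DEFENSES, key=security_level)
print(best)  # -> "adaptive_defense" under these invented payoffs
```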
Behaving strangely
One development in this area has been behavioral analysis, explains Dean Turner, director of global intelligence networks at security tools vendor Symantec. “Thanks to things like polymorphism and variants, you had lots of things that obfuscated the threats,” he says. “You want to build technologies that are a little more proactive.”
Security vendors started with technologies like heuristics, which use rules built from what is already known about malware attack techniques to identify a virus even when its signature is not known. Heuristics are closely linked to behavioral analysis, in which security tools watch for behavior that is inappropriate on a particular system or known to be typical of malware: a program that tries to set up an SMTP email server on an end point, for example, or one that writes to the registry.
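A minimal sketch of how such a rule-based behavioral monitor might look, assuming a hypothetical stream of system events; the Event schema and the two rules are illustrative, not any vendor’s actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class Event:
    process: str
    action: str   # e.g. "listen", "registry_write" (hypothetical schema)
    detail: str   # e.g. a port number or a registry key

RULES = [
    # An end point starting its own SMTP server is a classic spam-bot sign.
    lambda e: e.action == "listen" and e.detail == "25",
    # Writes to auto-run registry keys are a common persistence trick.
    lambda e: e.action == "registry_write" and "\\Run" in e.detail,
]

def suspicious(event: Event) -> bool:
    """Return True if any heuristic rule matches the observed behavior."""
    return any(rule(event) for rule in RULES)

print(suspicious(Event("mailer.exe", "listen", "25")))  # True: flag it
```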
But Johannes Ullrich is not convinced. As chief research officer for the SANS Institute, he is responsible for the organization’s Internet Storm Centre, and has seen his share of malware in test environments. “Whenever we run a new piece of malware through virus checkers, they usually come up empty,” he says, adding that moving away from a signature-based system that requires a regular subscription would kill the anti-virus vendors’ business model. “I don’t find any current commercial anti-virus tools to be effective against relevant current threats.”
An alternative to behavioral analysis is anomaly detection, in which a tool profiles a system or network and then watches to see if the pattern of behavior changes. Geoff Sweeney, chief technology officer at anomaly detection software firm Tier-3, dislikes calling his Huntsman anomaly detection system an intrusion prevention system, because he says that IPS products can fail if they collect information about the wrong things.
“If you are really trying to solve the zero-day attack problem, then the only place you can start is to gather as much information as possible,” he warns. “Where most of these solutions fall flat is that they are trying to make decisions against a limited piece of data.”
Instead, anomaly detection often relies on passive network analysis, using all network traffic to create a baseline representing normal activity. The baseline must be updated regularly to reflect changing traffic patterns. Then, instead of applying discrete rules designed to trap specific exceptions, artificial intelligence can be used to see whether usage patterns deviate from normal, raising alarms and alerting staff when they do.
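In its simplest form, that baseline-and-deviation idea can be reduced to a statistical threshold. The sketch below assumes a single traffic feature (bytes per minute) and a fixed training window; production systems profile many features at once and retrain the baseline continuously.

```python
import statistics

class Baseline:
    """Profile 'normal' activity, then flag deviations from it."""

    def __init__(self, history: list[float]):
        self.mean = statistics.mean(history)
        self.stdev = statistics.stdev(history)

    def is_anomalous(self, observation: float, threshold: float = 3.0) -> bool:
        """Flag readings more than `threshold` deviations from the norm."""
        return abs(observation - self.mean) > threshold * self.stdev

# Train on known-good traffic (bytes per minute), then watch live readings.
baseline = Baseline([1200.0, 1150.0, 1300.0, 1180.0, 1250.0])
print(baseline.is_anomalous(9500.0))  # True: raise an alarm for staff
```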
What’s up doc?
As we move beyond discrete rules-based systems into tools that look at a broad baseline of what constitutes normal activity, other metaphors begin to open up. The most obvious one is medical. Doctors and patients both anxiously look for physical behavior that deviates from the norm as a sign of illness, and researchers have been thinking about security in the same way.
Computational immunology is a technique developed by Stephanie Forrest at the University of New Mexico. It uses algorithms to mimic the T-cells that form the basis of the human immune system. The computerized ‘cells’ are taught the patterns of system behavior deemed native and normal to the system; they then recognize (or ‘bind with’, continuing the medical metaphor) events that are foreign to it.
The significant difference between these anomaly detection approaches and those that went before them is that whereas earlier tools tried to match behavior against an index of known suspicious activities, anomaly detection focuses on negative detection: it uses the status quo as its reference and looks for system behavior that fails to match it. Researchers believe that applying this approach in a distributed manner enables systems to deal better with the noisy, changeable data inherent in the average network.
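That mechanism is often implemented as a ‘negative selection’ algorithm: generate candidate detectors at random, discard any that match known-normal (‘self’) behavior, and treat anything the surviving detectors later bind to as foreign. The sketch below assumes a toy bit-string encoding and an r-contiguous match rule; real implementations differ considerably.

```python
import random

def matches(detector: str, pattern: str, r: int = 3) -> bool:
    """r-contiguous rule: match if r consecutive bits line up exactly."""
    return any(detector[i:i + r] == pattern[i:i + r]
               for i in range(len(pattern) - r + 1))

def train(self_patterns: list[str], n_detectors: int, bits: int = 8) -> list[str]:
    """Keep only random detectors that bind to nothing in the self set."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = "".join(random.choice("01") for _ in range(bits))
        if not any(matches(candidate, s) for s in self_patterns):
            detectors.append(candidate)
    return detectors

SELF = ["00001111", "00110011"]      # encoded normal system behavior (toy data)
detectors = train(SELF, n_detectors=20)
sample = "11100000"                  # previously unseen behavior
print(any(matches(d, sample) for d in detectors))  # likely True: non-self
```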
The security of crowds
But another kind of distributed system may prove to be the most effective of all. The network effect – in which the value of a network grows with the square of the number of its nodes – has led to a revolution in distributed work and ‘crowdsourcing’. Distributed processing initiatives like the alien-spotting project SETI@Home carved up mammoth computing tasks among tens of thousands of individual personal computers. Sites like Wikipedia rely on the contributions of their users to make the whole better than the sum of its parts. Why can’t the same be done for security?
Californian anti-spam vendor Cloudmark uses an algorithm called Vipul’s Razor in its product. Incoming spam is fingerprinted to create a reference digest for the email. A network of thousands of users flags spam on receipt using an email client plug-in; the fingerprint of each flagged email is sent back to a central server, where a reputation system checks the user’s standing within the larger community, based on how many emails that user has flagged correctly versus incorrectly. “So when new threats come out, we typically don’t have to do any manual response. The automated trust system is tracking the reputation of each user,” says Cloudmark’s chief technology officer Jamie De Guerre.
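A hedged sketch of how such reputation-weighted flagging could work, using an invented scoring scheme and threshold rather than the actual Vipul’s Razor algorithm:

```python
import hashlib
from collections import defaultdict

reputation = defaultdict(lambda: 1.0)   # user -> trust score (assumed scheme)
votes = defaultdict(float)              # fingerprint -> reputation-weighted votes

def fingerprint(message: str) -> str:
    """Create a reference digest for an email body."""
    return hashlib.sha256(message.encode()).hexdigest()

def flag_spam(user: str, message: str) -> None:
    """A user's plug-in reports a message; weight the vote by their standing."""
    votes[fingerprint(message)] += reputation[user]

def adjust_reputation(user: str, was_correct: bool) -> None:
    """Raise trust for confirmed reports, cut it sharply for false flags."""
    reputation[user] *= 1.1 if was_correct else 0.5

def is_spam(message: str, threshold: float = 5.0) -> bool:
    """Treat a message as spam once enough trusted users have flagged it."""
    return votes[fingerprint(message)] >= threshold
```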
Another nascent system, called Herdict, is designed to help detect ‘badware’. Researchers at the Berkman Centre for Internet and Society at Harvard refer to badware as “unwanted applications that depend on naive human behavior to pose a security risk, and that are installed as a result of user action”.
The Herdict client sits on PCs distributed around the web. Whenever a PC runs a process, the client gathers data about its effect on system performance. This data is anonymized before being distributed across the Herdict network, enabling others to assess the likely impact of a piece of software before installing it.
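A minimal sketch of what such a client might do, with stubbed measurements, an invented record schema, and a hypothetical submit() endpoint standing in for the Herdict network:

```python
import hashlib
import json

def measure(process_name: str) -> dict:
    """Record a process's system impact (readings stubbed for illustration)."""
    return {
        "process": process_name,
        "cpu_percent": 87.0,        # placeholder for a real sampled value
        "crash_count": 1,
        "boot_time_delta_s": 12.4,
    }

def anonymize(record: dict, machine_id: str) -> dict:
    """Replace the machine identity with a pseudonymous hash."""
    record["reporter"] = hashlib.sha256(machine_id.encode()).hexdigest()[:16]
    return record

def submit(record: dict) -> None:
    """Publish to the wider network (stdout stands in for the real endpoint)."""
    print(json.dumps(record))

submit(anonymize(measure("toolbar_installer.exe"), machine_id="host-1234"))
```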
“The most profound security breaches happen when people choose to run a piece of code,” says Jonathan Zittrain, professor of internet governance and regulation at the Berkman Centre, adding that researchers tend to focus too heavily on attacks that users can do little about, while ignoring the end-user ignorance that often leads to widespread security infections. “We are trying to get a sense where they feel themselves part of a group of people trying to combat this, and being aware of the whole phenomenon.”
An abstract problem
Amid all this activity, what’s interesting is how abstract the battle is becoming. When a problem becomes big enough, we tend to apply metaphors to it. We use metaphors of illness and creeping disease to describe security risks in a way that may help us combat them more effectively. Similarly, just as we layer the metaphor of the market on top of problems such as global warming – witness the increasingly popular carbon credit exchanges – we do the same for security threats, in an attempt to mitigate them.
But as we talk in more abstract, conceptual terms about real-world threats, one danger remains: that we pile millions into new, sophisticated technologies while ignoring basic measures such as proper firewall configuration, and multilayered defenses at the gateway and end point.
Such measures can never be guaranteed to stop all unknown threats, but they will certainly shore up our defenses and make us feel just a little safer, as we continue whistling in the dark.