We've all been to the information security shows and conferences and been impressed by the white hat hackers who are quite capable of blowing your socks off with their cracking exploits. But what about artificial intelligence (AI) hackbots?
AI hackbots are autonomous, software-driven agents that can probe targets, learn and adapt to changing conditions, then present the user with a report on which IP addresses - and the IT resources connected to them - are hackable, and to what degree.
"The average IT security manager would view this stuff as science fiction, but it's actually science fact." |
Simon Perry |
This isn't science fiction. If we spin back the clock to 1995, when a freeware Unix package called SATAN (Security Administrator Tool for Analyzing Networks) was released, we can find the beginnings of this new genre of AI hackbot applications.
SATAN was condemned from the outset as a double-edged sword: its designers, Wietse Venema and Dan Farmer, intended it as a network analysis tool, but its capabilities made it very popular with hackers and crackers around the world.
SATAN allows almost anyone to input a series of IP addresses and let the package merrily probe away, returning a list of what operating systems - and their versions - are sitting on the end of the link. From there, by drawing on internal tables, SATAN attacks the IP-connected resources and probes for known security vulnerabilities. After a short while, the user is presented with a database of what has been scanned and can get down to some serious cracking.
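To give a flavour of what such a tool automates, here is a minimal, illustrative sketch of a scan-and-banner-grab loop in Python. It is not SATAN itself or its probing logic: the target addresses and port list are hypothetical placeholders, and the real package went much further, matching the fingerprinted services against tables of known vulnerabilities.

```python
# Illustrative sketch only: a crude scan-and-banner-grab loop of the kind a
# SATAN-style scanner automates. Target addresses and ports are placeholders.
import socket

TARGETS = ["192.0.2.10", "192.0.2.11"]   # documentation-range addresses
PORTS = [21, 22, 25, 80, 110]            # a handful of common services

def grab_banner(host, port, timeout=2.0):
    """Return the service greeting if the port answers, or None if it doesn't."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                return ""                # port open, but the service is silent
    except OSError:
        return None                      # closed, filtered or unreachable

if __name__ == "__main__":
    for host in TARGETS:
        for port in PORTS:
            banner = grab_banner(host, port)
            if banner is not None:
                print(f"{host}:{port} is open; banner: {banner!r}")
```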
Fast forward to the start of 2009 and crackers are starting to use Amazon's EC2 (Elastic Compute Cloud) service as a launch pad for a BitTorrent file harvester, as well as a botnet swarm controller. Worryingly, since Amazon's service can be purchased using a pre-paid debit card and accessed using a pre-paid mobile broadband dongle, the resulting activity is almost totally anonymous and untraceable.
Mission Lock-down
It's not just the crackers that are breaking new ground in terms of automated cloud-based illegal online activities.
The US Department of Energy's Oak Ridge National Laboratory in Tennessee has developed an application that uses online entities called Ubiquitous Network Transient Autonomous Mission Entities (UNTAME). These entities interact with each other over the internet and via IP-facing super-servers (daemons that run on Unix-like systems). Each UNTAME entity is aware of what the others are doing and can plan its actions accordingly.
In an interview with the Daily Beacon, Joe Trien, a programming leader with Oak Ridge's computational sciences and engineering division, said that the entities, known as cybots, form part of the UNTAME network, which is designed to operate as an advanced cyber-security protection system.
According to Oak Ridge, as each UNTAME entity detects a network intrusion or intruder, it shares information with the others and locks down the entire network resource against further incursions in a matter of seconds.
"The automated stuff is where the next generation of hacking is headed." |
Most enterprises, says Trien, have intrusion detection set up at key spots, but those systems don't communicate with each other. The UNTAME cybots, on the other hand, are designed to work with other cybots, continue their mission, or regenerate when necessary so that one can pick up where another left off.
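Oak Ridge has not published UNTAME's internals, but the cooperative behaviour Trien describes can be outlined in a few lines of Python. This is a conceptual sketch only: every class, method and address below is a hypothetical illustration, and the lock-down step stands in for a real control action such as pushing a firewall rule.

```python
# Conceptual sketch of cooperating defensive agents ("cybots"), loosely
# modelled on the behaviour described above. Not the UNTAME code: all names
# and addresses here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Cybot:
    name: str
    peers: list = field(default_factory=list)
    blocked: set = field(default_factory=set)

    def detect_intrusion(self, source_ip):
        """Called by a local sensor when suspicious traffic is seen."""
        print(f"{self.name}: intrusion from {source_ip}, alerting peers")
        self.lock_down(source_ip)
        for peer in self.peers:
            peer.receive_alert(source_ip)

    def receive_alert(self, source_ip):
        """Act on intelligence shared by another cybot."""
        if source_ip not in self.blocked:
            self.lock_down(source_ip)

    def lock_down(self, source_ip):
        """Stand-in for a real control action, e.g. pushing a firewall rule."""
        self.blocked.add(source_ip)
        print(f"{self.name}: blocking {source_ip}")

if __name__ == "__main__":
    a, b, c = Cybot("cybot-a"), Cybot("cybot-b"), Cybot("cybot-c")
    a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
    a.detect_intrusion("203.0.113.99")   # one detection locks down all three
```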
According to Simon Perry, principal analyst with IT analysis company Quocirca, this is not the stuff of Star Trek, but exists now. Oak Ridge, he says, is using the UNTAME cybot technology to monitor what is really happening on US government networks and stop network incursions in real time.
"The average IT security manager would view this stuff as science fiction, but it's actually science fact. Very few people understand this stuff," he says. The problem facing the information security industry, he adds, is that people do not realize the technical sophistication that is behind the current crop of attacks.
It also raises the question of why the authorities are putting so much effort into technologies such as the UNTAME cybot project. "Have the hackers developed the same level of technology?" he asks.
Talent Galore
There is, says Perry, no shortage of technical talent capable of developing a darkware version of UNTAME. What makes matters more complex is that the development of virtualization technology now allows crackers to test their AI superbots on their own system boxes, with no need to unleash them onto the internet. So what is the solution?
According to Perry, high levels of information security and patch management can only go so far in protecting an internet-facing resource. Companies must, he says, plan for a disaster scenario in which their system is brought down by an AI superbot.
The problem, Perry notes, is that too many companies treat disaster recovery as a technical project and fail to involve their management, PR and marketing functions. And yet this is vital if a company is to prepare an effective disaster recovery plan.
On the protective front, meanwhile, Perry says that companies must now start employing strong encryption for all of their data, both in transit across and into and out of their networks, and at rest. Only then, he notes, will firms be protecting their data from this new generation of attack threats.
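As a simple illustration of the data-at-rest side of that advice, the sketch below uses the third-party Python cryptography package to encrypt a record with an authenticated symmetric cipher. It is a minimal example under an assumed setup: key generation, storage and rotation, which are the hard parts in practice, are deliberately left out.

```python
# Minimal sketch of encrypting a record at rest with the third-party
# "cryptography" package (pip install cryptography). Key storage and
# rotation, the hard parts in practice, are deliberately omitted.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in reality, keep this in an HSM or KMS
cipher = Fernet(key)

record = b"customer=Example Ltd;card=XXXX-XXXX-XXXX-1234"
token = cipher.encrypt(record)         # authenticated symmetric encryption
print("stored ciphertext:", token[:32], b"...")

# Only a holder of the key can recover the plaintext
assert cipher.decrypt(token) == record
```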
Peter Wood, a member of the ISACA conference committee, is another believer in preparing for a worst-case scenario. High levels of security patch management, he says, are central to the effective protection of an internet-connected IT resource.
"I had an interesting chat with a cracker at a recent ethical hacking convention about the development of AI hacking technology. The technology is clearly being developed out there," he says.
The worrying thing is that most firms have little or no way of preparing for this new genre of attack methodologies. "The automated stuff is where the next generation of hacking is headed, especially when hackers can employ computational acceleration technologies from companies like Russia's Elcomsoft," he says.
Elcomsoft has developed a 'password recovery assistance' package that, by using up to four graphics cards installed in a single PC, can speed up a brute force or rainbow table password attack by up to 20,000 times. At that level of brute force, says Wood, it takes only around 15 minutes to crack an admin password on a typical server.
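Some back-of-the-envelope arithmetic shows why that acceleration factor matters. The baseline guess rate in the sketch below is an assumption for illustration, not a benchmark, but the shape of the result - months of work collapsing into minutes - is the point Wood is making.

```python
# Rough effect of a 20,000x speed-up on an exhaustive password search.
# The CPU guess rate is an assumed figure for illustration, not a benchmark.
charset = 26 + 26 + 10              # lower case, upper case, digits
length = 8
keyspace = charset ** length        # roughly 2.2e14 candidate passwords

cpu_rate = 10_000_000               # assumed guesses per second on one CPU
gpu_rate = cpu_rate * 20_000        # the claimed acceleration factor

for label, rate in [("single CPU", cpu_rate), ("GPUs, x20,000", gpu_rate)]:
    seconds = keyspace / rate       # worst case: the whole keyspace is tried
    print(f"{label}: about {seconds / 60:,.0f} minutes")
```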
It's against this backdrop of jaw-dropping hacker exploits that Wood argues against firms jumping into cloud computing and outsourcing their information security.
"It's just too early to do this sort of thing without raising all sorts of security spectres," he says, adding that “firms must protect the security of their data before they outsource or move to cloud computing”.
"Small firms also need to be aware of the security risks of moving to cloud computing and AI bot issues. The problem is that most firms don't have the budgets to deal with this issue. If you move your data into the cloud, you run the real risk of your data coming under attack. It really is that simple," he adds.
Dig Deep
According to Peter Sommer, a veteran IT security expert and visiting professor at the London School of Economics, whilst SATAN's uber-cracking software was considered a breakthrough in the mid-1990s, no one really knows the capabilities of today's SATAN-like cracking packages.
The issue of defending your IT resource against an AI superbot version of SATAN, however, is not, he says, rocket science. "You are looking at high levels of patch management and ensuring that your systems are as resilient against attack as possible. This can be supplemented by an advanced intrusion detection technology that spots anything untoward on a network as soon as it happens and locks down the network," he says.
Like Quocirca's Perry, Sommer says that firms must also have an effective disaster recovery plan in place. The reality, he says, is that some automated attacks will break through a company's information technology security defenses, and firms need to carefully plan for that eventuality.
Resilience, Sommer argues, is the key. Resilience against attack and the resilience to recover from an attack that breaks through. "Large companies and government departments also need to develop standby systems that can be used in the event of an automated threat such as a distributed denial of service attack," he says.
Major companies cannot hope to cut corners when developing the high degree of resilience needed against automated attacks, especially AI superbot-driven threats, Sommer says. They must be prepared to spend real money on what has become a very real information security problem.
"If you move your data into the cloud, you run the real risk of your data coming under attack." |
Peter Wood, First Base Technologies |
The internet attacks on Estonia in 2007, he remembers, were a classic example of what can happen when an entire country's internet systems come under attack.
"Estonia is an economy whose success is largely based on the internet, and when it came under attack, it showed what can go wrong to any large organization connected to the internet," he says.
Legal Deterrent
Securing your defenses and implementing a well-planned disaster recovery plan are only two facets of an effective strategy for dealing with AI superbots.
According to legal IT specialist Alistair Kelman, the third aspect to consider is the legal deterrent against crackers developing such applications in the first place. It has become, he says, a lot easier to prosecute crackers in a court of law for their actions and enforce any penalties the court hands down as a result.
"The international aspect of hacking used to make it very difficult to prosecute, but it's become a lot easier now that the courts say - `where is the harm taking place' - and consider the guilt of those concerned in the light of this," he says.
"It now doesn't matter where the bad people are geographically located. What matters is where the harm occurs and legal action can take place in that country," he adds. Pursuing a guilty decision, Kelman says, has become a lot easier with the Comity - a legal term for friendship - that now exists between the United States and the European Union.
In a post-9/11 world, the effective prosecution of hackers and crackers who program AI superbots and other malicious online entities has become much easier.
The courts, he continues, can now also decide that a given prosecution is an important one in terms of the law and its precedent. Then the case takes on a very important status in terms of a country's legal system.
Kelman is also concerned about the technology that hackers and crackers now have at their fingertips - and what they can do with that technology in terms of damage to internet-connected systems. "We're reaching the stage where commodity PCs are available that can perform quite astounding functions. Developing AI bots and other internet attack genres has become a matter of routine for many of the bad people on the internet," he says.
So how can companies protect their IT resources against the onslaught of an automated attack, whether from an AI superbot or otherwise? According to Kelman, just as the hackers are developing automatic attack methodologies, IT managers can protect their resources from automated systems by using human-oriented challenge-response systems that computers cannot currently handle.
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) tests can be installed at internet-facing login points to stop automated attack programs in their tracks.
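The sketch below is a toy illustration of that challenge-response idea: a login handler issues a question that is trivial for a person but has to be actively solved by an automated client. Real CAPTCHAs rely on distorted images, audio or behavioural signals rather than plain-text arithmetic, which a bot could answer easily; this is shown only to make the shape of the mechanism concrete.

```python
# Toy illustration of a human challenge-response gate on a login endpoint.
# Real CAPTCHAs use distorted images, audio or behavioural signals; plain
# arithmetic like this is trivially solvable and is shown only for shape.
import random

def issue_challenge():
    """Generate a question that is easy for a person to answer."""
    a, b = random.randint(2, 9), random.randint(2, 9)
    return f"What is {a} plus {b}?", a + b

def verify(answer, expected):
    """Accept the login attempt only if the challenge was answered correctly."""
    try:
        return int(answer.strip()) == expected
    except ValueError:
        return False

if __name__ == "__main__":
    question, expected = issue_challenge()
    reply = input(question + " ")       # an automated client has to solve this
    print("access granted" if verify(reply, expected) else "access denied")
```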
There is also, says Kelman, a fascinating book called Daemon, by Daniel Suarez, coming out in April on the subject of AI superbots and the threat they pose. "The book serves as a warning of what computers, in the wrong hands, are capable of. It is a must-read for any IT security manager," he says.