Imperva adds that automating query submission and result parsing allows an attacker to issue a large number of queries, examine all the returned results and extract a filtered list of potentially exploitable sites in a very short time and with minimal effort.
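In outline, such automation is little more than a nested loop over queries and result pages. The sketch below is purely illustrative: search() is a hypothetical stand-in for any programmatic source of result URLs (it returns nothing here), and the filter pattern is an arbitrary example, not a real signature.

```python
import re
from typing import Iterable, List, Set

def search(query: str, page: int) -> Iterable[str]:
    """Hypothetical stand-in for any programmatic source of result URLs;
    no real search-engine API is assumed, and this sketch returns nothing."""
    return []

# A deliberately crude example filter for "potentially interesting" URLs.
SUSPECT = re.compile(r"\.php\?\w+=\d+")

def harvest(queries: List[str], pages: int = 10) -> Set[str]:
    """Mirror the loop Imperva describes: many queries, every results
    page examined, and only matching URLs kept."""
    hits: Set[str] = set()
    for query in queries:
        for page in range(pages):
            hits.update(url for url in search(query, page)
                        if SUSPECT.search(url))
    return hits
```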
These searches, the security vendor asserts, are conducted through botnets rather than from the hacker's own IP address, so the attacker's identity remains concealed.
The last time this happened – a few years ago, Infosecurity notes – the search engine giant introduced a number of security features for IP addresses seen to be generating a large number of queries. These features include CAPTCHA routines and, in certain circumstances, a 'tar pitting' process whereby requests are deliberately slowed down.
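A minimal sketch of that escalation, assuming a simple per-IP counter and illustrative thresholds (Google's real limits are not public), might look like this:

```python
import time
from collections import Counter

query_counts: Counter = Counter()

# Assumed values for illustration only; the real thresholds are not public.
CAPTCHA_THRESHOLD = 100    # queries per window before a challenge appears
TARPIT_THRESHOLD = 500     # queries per window before replies are slowed
TARPIT_DELAY_SECONDS = 5.0

def on_query(ip: str) -> str:
    """Escalate from normal service to CAPTCHA to tar-pitting as a
    single IP's query volume grows."""
    query_counts[ip] += 1
    if query_counts[ip] > TARPIT_THRESHOLD:
        time.sleep(TARPIT_DELAY_SECONDS)  # tar-pitting: deliberately slow the reply
        return "results (delayed)"
    if query_counts[ip] > CAPTCHA_THRESHOLD:
        return "captcha challenge"
    return "results"
```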
Most Google users don't encounter these security features, but users of anonymising proxy services, such as the Tor network, will see them quite often.
According to Amichai Schulman, CTO of Imperva, hackers have become experts at using Google to create a map of hackable targets on the web.
“This cyber reconnaissance allows hackers to be more productive when it comes to targeting attacks, which may lead to contaminated web sites, data theft, data modification, or even a compromise of company servers”, he said, adding that these attacks highlight that search engine providers need to do more to prevent attackers from taking advantage of their platforms.
Interestingly, Imperva's research – which has been published in a downloadable report – suggests that hackers can now easily overcome Google's detection mechanisms by distributing their queries across different compromised machines using a botnet swarm.
During May and June, the company says that its researchers observed a specific botnet attack on a popular search engine. For each unique search query, the botnet examined dozens or even hundreds of returned results using the paging parameters in the query.
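The paging behaviour itself relies on nothing exotic: Google's public results URL accepts a start offset of ten results per page, so a single query can be walked many pages deep by varying one parameter. A brief illustration:

```python
from urllib.parse import urlencode

def paged_result_urls(query: str, pages: int) -> list:
    """Build result-page URLs for one query by stepping the `start`
    offset (ten results per page on Google's public results URL)."""
    return [
        "https://www.google.com/search?" + urlencode({"q": query, "start": page * 10})
        for page in range(pages)
    ]

# paged_result_urls("example query", 30) yields 30 result-page URLs
# for a single query, mirroring the deep paging Imperva observed.
```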
The volume of attack traffic was huge: nearly 550,000 queries – up to 81,000 a day at peak, and 22,000 a day on average – were issued during the observation period. The attacker, Imperva asserts, was able to take advantage of the bandwidth available to the dozens of hosts under its control in the botnet to seek out and examine vulnerable applications.
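Taken at face value, the reported figures are internally consistent, as a quick back-of-the-envelope check shows:

```python
total_queries = 550_000   # reported total over the observation period
daily_average = 22_000    # reported average queries per day
daily_peak = 81_000       # reported busiest day

print(total_queries / daily_average)  # ~25 days of observation,
                                      # consistent with a May-June window
print(daily_peak / daily_average)     # the peak day ran at ~3.7x average
```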
As a result of its findings, Imperva is advising that search engines should start looking for unusual and suspicious queries, such as those hunting for known sensitive paths like /etc or for database data files.
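A minimal sketch of that kind of query screening, using illustrative patterns rather than Imperva's actual rules, might look like this:

```python
import re

# Illustrative patterns only; these are not Imperva's actual rules.
SUSPICIOUS_PATTERNS = [
    re.compile(r"/etc/(passwd|shadow)"),       # sensitive system paths
    re.compile(r"filetype:(sql|mdb|db|bak)"),  # database and backup dumps
    re.compile(r'intitle:"index of"'),         # open directory listings
]

def is_suspicious(query: str) -> bool:
    """Flag queries that hunt for known sensitive paths or file types."""
    q = query.lower()
    return any(pattern.search(q) for pattern in SUSPICIOUS_PATTERNS)
```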
The data security specialist also advises that businesses need to be aware that, given how efficiently and thoroughly search engines index corporate information – including web applications – the exposure of vulnerable applications is bound to occur.
Whilst attackers are mapping out these targets, therefore, Imperva says it is essential that organisations prepare for exploits tailored to these vulnerabilities.
This can be achieved by deploying runtime application-layer security controls, such as a web application firewall, together with reputation-based controls to block attacks coming from known malicious sources.
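In its simplest form, a reputation-based control is a lookup performed before any application logic runs; the sketch below assumes a locally maintained blocklist, where a real deployment would consume a commercial or community reputation feed:

```python
# Blocklist uses documentation-range IPs purely as placeholders.
KNOWN_MALICIOUS = {"198.51.100.23", "203.0.113.7"}

def allow_request(client_ip: str) -> bool:
    """Reputation check applied before the application sees the request:
    traffic from sources already known to be malicious is dropped."""
    return client_ip not in KNOWN_MALICIOUS
```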