So we all know the story: you’re browsing a site and you’re asked to prove you’re a human - ‘Click all the pictures with cars in them’ or something similar. A ‘captcha’, in tech speak. Are these the solution to the bot problem? There are newer solutions that work like captchas (but on steroids) which purportedly can’t be solved by robots.
If a captcha is suspected of having been solved by a bot, that particular captcha is removed from the database. To get around these ever-harder challenges, we’re seeing the emergence of so-called captcha farms, many based in India and China.
They work like this: a bot network is performing a certain task, it gets hit with a captcha it cannot solve, so it passes the captcha to a human to solve and then continues its automated task. Imagine a shed full of people sitting around, waiting for captchas to solve. Are you a human? Please prove that you are a human by solving this puzzle. The human workers crack on and solve these captchas, all day, every day - humans proving that they are humans so bot networks can continue their work unrestricted.
In effect we’ve witnessed the birth of a Nigerian-fraudster-style economy, with people effectively working alongside malicious bots to overcome challenges designed for humans. The bots are passing this work off to a human (in a shed somewhere), which is quite crazy when you come to think about it - but it is happening. It’s hard to imagine a stranger job than sitting in a room all day proving that you’re a human on behalf of a computer - the whole thing’s upside down.
Then again, should we be surprised? Where there’s money in something, technology will get as complicated as it needs to. Take another example: the EA Sports game FIFA, part of which allows you to ‘build’ players. As an online game, you can buy and sell players to other users across the entire platform, but doing so involves real interaction with what happens on the screen - menus that need clicking at the right times.
So one gang came along and set up a farm of PlayStations with webcams watching the output on TVs. They then built code that tracked where the cursor needed to move, or which control-pad inputs were needed, in order to click through menus and make in-game purchases. In this way, they could play huge volumes of matches against each other, build up the ratings of certain players (to make them more valuable) and then sell those players in an online marketplace.
We are talking about a room full of consoles and computers with webcams attached, with enough intelligence to know how to interact with the game - a computer program was taught how to play this online football game, entirely autonomously.
If you consider how much this must have cost - probably hundreds of thousands of pounds, if not millions - then clearly the financial reward for doing so was there. That is the endgame: financial reward. Do it en masse, and for long enough, and the money is there for the taking; the sophistication knows no bounds.
Ever since they were created, captchas have been widely used to help prevent online fraud. However, the monetary incentives we touched upon previously have effectively kicked off an arms race with a seemingly never-ending loop: fraudsters develop automated solvers, and captcha services then modify their designs to break those solvers.
Google’s captcha offering, reCAPTCHA, seeks to require a minimum of effort on the part of a legitimate user whilst making things harder for computers by setting tasks more challenging than mere text recognition alone. This evolution towards a more automated model is driven by an “advanced risk analysis system” that evaluates each request and selects the difficulty of the captcha that will be returned.
Depending on that assessment, users might simply be asked to tick a checkbox, or to solve a challenge by identifying images with similar content.
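The idea of risk-driven challenge selection can be sketched in a few lines. Everything below is a hypothetical illustration: the signals, weights and thresholds are invented for this example, and real systems like reCAPTCHA use far richer telemetry and keep their scoring secret.

```python
def risk_score(request):
    """Score a request from 0.0 (likely human) to 1.0 (likely bot).

    The signals and weights here are illustrative assumptions only.
    """
    score = 0.0
    if not request.get("has_cookies"):               # no browsing history
        score += 0.4
    if request.get("requests_per_minute", 0) > 30:   # unusually fast client
        score += 0.4
    if request.get("headless_browser"):              # automation fingerprint
        score += 0.2
    return min(score, 1.0)

def select_challenge(request):
    """Map the risk score to a challenge of matching difficulty."""
    score = risk_score(request)
    if score < 0.3:
        return "checkbox"      # low risk: just tick "I'm not a robot"
    elif score < 0.7:
        return "image_grid"    # medium risk: identify matching images
    return "hard_puzzle"       # high risk: hardest available challenge

# Example: a fresh, fast, headless client gets the hardest challenge.
bot_like = {"has_cookies": False, "requests_per_minute": 120,
            "headless_browser": True}
print(select_challenge(bot_like))  # -> hard_puzzle
```

The key design point is that most legitimate users never see a hard puzzle at all: friction is concentrated on the requests that look automated.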
Indeed, one particular paper presents a comprehensive study of reCAPTCHA in which the authors explore how each aspect of a request influences the risk analysis process. Their extensive experimentation identified flaws that allow adversaries to influence the risk analysis with little effort, bypass restrictions and deploy large-scale attacks.
Whilst that study focused on reCAPTCHA specifically, the findings have wider implications: the future of captchas depends on exploring genuinely novel directions, rather than iterating on designs that automation has already caught up with.
So what do these novel directions look like? One of the most promising on the market is Arkose Labs. Their system takes a new approach by generating objects that cannot be recognised by current AI. An object is rendered in a 3D environment with the lighting and shadows constantly changing. The result is a grayscale image that, to a human, looks like a dog or some other animal, but to an AI looks like smoke or a cave (pictured).
The user is then asked to complete a puzzle with the shape - for example, turning it upright - which is easy if you know the object is an animal, and not so easy if you think it is smoke. Because this is a whole new paradigm, breaking it would require training a whole new AI system, which takes a lot of time and money.
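The verification step itself - checking whether the user has rotated the object to upright - is simple once the hard part (recognising the object) is done. This is a purely hypothetical sketch; the tolerance value and angle handling are assumptions, not Arkose Labs’ actual implementation:

```python
def is_upright(angle_degrees, tolerance=15.0):
    """Return True if the object's rotation is within `tolerance` degrees
    of upright (0 degrees), handling wrap-around (e.g. 350 equals -10)."""
    deviation = abs(((angle_degrees + 180) % 360) - 180)
    return deviation <= tolerance

# A user who recognises the animal and rotates it to 8 degrees passes;
# one who leaves it sideways at 90 degrees (seeing only "smoke") fails.
print(is_upright(8))    # -> True
print(is_upright(90))   # -> False
```

The security therefore lives entirely in perception, not in the check: a human passes effortlessly, while an attacker must first build a model that can even tell which way up the object is.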
No doubt the arms race will continue, and what is unsolvable today will be solved tomorrow. It will be down to human ingenuity to dream up puzzles that we can easily solve but robots cannot. For sure the problem will become harder over time, which should make the solutions all the more interesting.