As someone who has worked in cybersecurity for years, I've found it fascinating to watch the evolution of CAPTCHA. Whilst you're probably familiar with the acronym, you may not know that CAPTCHA stands for "Completely Automated Public Turing Test To Tell Computers and Humans Apart," coined by Luis von Ahn, Manuel Blum, Nicholas Hopper and John Langford of Carnegie Mellon University in 2000.
The Evolution of CAPTCHA
When first developed, CAPTCHA asked users to read distorted text and type the letters into a box prior to registering for accounts, posting blog comments, purchasing products and services, etc. Since bots and other computer programs were incapable of performing that task, CAPTCHA helped prevent fraud by keeping some bots out of the system.
User experience, however, was one of the areas in which early CAPTCHA clearly fell short. Requiring users to solve a puzzle before completing a desired action led to frustration and customer abandonment, and frequent failures to enter the correct CAPTCHA solution have been shown to hurt conversions and, in turn, revenue.
While newer CAPTCHAs that use images instead of text are easier for most humans to solve, they can be nearly impossible for individuals with visual impairments or other disabilities. Studies conducted by Stanford University, Webnographer and Animoto found an abandonment rate of approximately 15% when users were faced with a CAPTCHA challenge.
After Artificial Intelligence (AI) reached the point at which it could solve distorted text with 99.8% accuracy, Google introduced No CAPTCHA reCAPTCHA in 2014. Rather than solving a CAPTCHA, individuals are simply asked to check a box that says, "I'm not a robot," to validate that they are truly human. In 2017, Google introduced Invisible reCAPTCHA: users no longer need to click the "I'm not a robot" checkbox, as the verification is instead bound to an existing button on the page.
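Whether the widget is a checkbox or invisible, both variants ultimately hand the site a token that must be validated server-side against Google's documented `siteverify` endpoint. Below is a minimal sketch of that check in Python; the endpoint and the `secret`/`response` fields are the real reCAPTCHA API, but error handling, timeouts and the optional `remoteip` field are omitted here for brevity.

```python
import json
import urllib.parse
import urllib.request

# Google's documented server-side verification endpoint for reCAPTCHA.
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def parse_siteverify(body: str) -> bool:
    """The siteverify endpoint returns JSON with a boolean "success" field."""
    return bool(json.loads(body).get("success", False))

def verify_recaptcha_token(secret: str, token: str) -> bool:
    """POST the token the widget submitted with the form; True means human."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
        return parse_siteverify(resp.read().decode())
```

The key point is that the client-side checkbox alone proves nothing; a bot can submit the form without it, so the server must reject any request whose token fails this check.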
Security Issues
Despite these security and UX improvements, bad actors still have many ways to get around CAPTCHA systems. CAPTCHA-solving bots exist, as do CAPTCHA farms, where low-paid workers mass-solve CAPTCHAs for rates as low as 80 cents per 1,000 solved codes. A CAPTCHA attack system presented at Black Hat Asia in Singapore demonstrated a cracking success rate of more than 70% with an average running time of just 19.2 seconds.
Consider the Use Case
It's important to note that CAPTCHA is best used to prevent bots; it is not an effective tool for preventing fake account creation. Used strategically as part of the full user onboarding flow, however, reCAPTCHA can be effective. Tools like reCAPTCHA should be layered with other solutions, such as phone verification and phone number fraud risk scoring. Layered this way, reCAPTCHA keeps some bots from ever entering the account verification process, which improves the overall user experience and supports the growth and engagement of new users.
Adding on Phone Verification: More Than Bot Prevention
Looking beyond CAPTCHA, there are additional ways to prevent bots that provide many more benefits at the same time. Before a user can interact or transact with your web or mobile app, they should be required to verify their phone number.
With phone verification, the valid end-user proves that they are who they say they are by entering their phone number, receiving an SMS or voice verification code on their mobile device and then re-entering that code where prompted on the website or mobile application. Because there is a financial cost to obtaining phone numbers, the number of fraudulent accounts a user can create is limited.
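The send-code/re-enter-code flow described above can be sketched as follows. This is a minimal illustration rather than any particular provider's API: the five-minute expiry, the three-attempt limit and the in-memory store are all assumptions, and a real deployment would deliver the code through an SMS/voice gateway and persist state server-side.

```python
import secrets
import time

CODE_TTL_SECONDS = 300   # assumed policy: codes expire after 5 minutes
MAX_ATTEMPTS = 3         # assumed policy: limit brute-force guessing

# In-memory store for the sketch only.
_pending = {}  # phone_number -> [code, issued_at, failed_attempts]

def start_verification(phone_number: str) -> str:
    """Generate a random 6-digit code; a real system would SMS it, not return it."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone_number] = [code, time.time(), 0]
    return code  # returned here only so the sketch is testable

def check_verification(phone_number: str, submitted_code: str) -> bool:
    """Return True only for a correct, unexpired code within the attempt limit."""
    entry = _pending.get(phone_number)
    if entry is None:
        return False
    code, issued_at, attempts = entry
    if time.time() - issued_at > CODE_TTL_SECONDS or attempts >= MAX_ATTEMPTS:
        del _pending[phone_number]  # expired or locked out
        return False
    if submitted_code != code:
        entry[2] += 1  # count the failed attempt
        return False
    del _pending[phone_number]  # codes are one-time use
    return True
```

Note that the code is deleted on success: a verification code should never be reusable, otherwise an attacker who intercepts one SMS can replay it indefinitely.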
This process not only verifies that users are real humans, but it also connects a valid phone number to their account – something no fraudster wants to do. That phone number can then be used for two-factor authentication at any suspicious point in the account lifecycle. Additionally, when phone verification is layered with evaluation of phone data and intelligence, bad actors cannot use burner phones to create fraudulent accounts.
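A layered decision based on phone intelligence might look like the sketch below. The signal names (`line_type`, `risk_score`) and the thresholds are hypothetical illustrations of the kind of data a phone-intelligence provider reports, not a real API.

```python
def assess_phone_risk(line_type: str, risk_score: int) -> str:
    """Decide how to treat a signup based on phone data (illustrative only).

    line_type:  e.g. "mobile", "landline", "voip" (VoIP often backs burner apps)
    risk_score: 0 (low risk) to 100 (high risk), as a provider might report
    """
    if line_type == "voip" or risk_score >= 80:
        return "block"    # likely burner or disposable number
    if risk_score >= 50:
        return "step_up"  # require an extra challenge, e.g. a 2FA prompt
    return "allow"        # proceed with standard SMS/voice verification
```

The design choice here is to fail toward friction rather than rejection: only the clearest burner-phone signals are blocked outright, while borderline scores trigger a step-up challenge so legitimate users are not turned away.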
There are additional layers to consider – there has even been recent news about using shoes or household junk as extra factors of account authentication – but whatever you choose, it is important to provide layers of security that keep bots out of your system and protect your real users.