Give a person a phishing email and they might fall for it once – but teach AI to craft convincing emails in the voice of their acquaintances and you could upgrade from a Toyota to a Bugatti.
The question arrived today: How much of a cyber threat is AI?
Automation of phishing attacks has been a vector for cyber-criminals for years, but now AI is about to take cyber risks and threats to a new level. This article explores how AI can not only remove almost all of the human component from committing crime but execute attacks with far more proficiency than humans could.
This is very much in my wheelhouse because I am devoted to the three research areas where this all joins together:
- Cybersecurity
- PsyOps / scam psychology and
- Artificial intelligence
A demonstration is the best way of exemplifying the risk.
Most people have now heard of ChatGPT, but plenty of AI technologies are out there. Although they are eager to please, they will usually (at present) decline to actively participate in anything underhand. For example, Jasper.AI responded like this:
And here is the ChatGPT response when directly asked if it could write a phishing email:
But these safeguards are currently easy to circumvent using the method known as the ‘jailbreak.’
The jailbreak is any technique that enables an AI to circumvent or bypass its own policies, controls or other safeguards. At present, this is not difficult to do. As an example – in this case – I am going to pretend to be a cybersecurity researcher (comments welcome below!).
And now the AI is okay to help me out – but the first emails it produces are a bit slim on content because I have not given the AI much insight or direction, other than what I want to try and obtain from the target (in this case, $20 and his car keys in a padded envelope, thank you kindly).
I happen to know the identity of the test target’s boss and I obtain a very small sample of some text his CEO wrote (from the internet) and ask the AI to analyze this writing style.
The AI has a ‘tone analyzer’ and that tells me the writing style of my target’s boss is compassionate and caring… I know. Really? But apparently so.
I ask the AI to write the phishing email, using the boss’s tonality, but also add the tones of urgent and convincing – and that the postal address should be a post box at the White House.
This is what the AI wrote:
Weirdly and predictably – other than the oddity of the request – this language does sound like this CEO.
Once I had prepared the training data, the email was submitted and solved in under a second. Although it took me 20 minutes to prepare and obtain the necessary training data, an AI can quickly learn such patterns and reproduce them with little to no human input. In the hands of a criminal organization or a rogue nation-state, AI can go much further, industriously targeting specific individuals with spear-phishing campaigns and exploiting cracked email services. The only task for the first criminal is the initial overhead of training the AI to finesse the process and results and teach the AI about any failures.
Instead of sending millions of generic phishing emails, criminals and rogue states will soon be able to flood inboxes with content that can look more authentic than actual correspondence. This trend may develop within the next 12 months.
Before we reach some good news, there is more bad news. The barrage of emails is not the most devastating of the threats posed by the AI evolution.
The three nearest-term threats from any AI repurposed by criminals or rogue nation use are predicted to be:
- Forget using a small sample to mimic a person. In the example above, the AI only needed a single paragraph to mimic a writing style – but AI also can pull sufficient information from social media and public information and read any cracked email service, generating and mailing out overwhelming waves of very authentic prose. As an example, using resources equivalent to a fraction of ChatGPT, an AI could write a very personal email from a CEO to every single person in a global company, and each phishing email can be completely in the context of any actual correspondence that has gone before.*
- An AI operated by an adversary can rapidly analyze and understand any organization’s security posture in real time. Versions of vulnerability reports about your own organization that you could not dream of obtaining internally on vast budgets will soon be available (at very low cost) to adversaries and updated as gaps appear and disappear.*
- An AI can go much further and even adaptively formulate (and update) how best to perform a successful cyber-attack at any given moment and perform this attack seamlessly in any major language.*
(*All of these points were made to me in a conversation on the topic with ChatGPT)
Even for a small business, such attacks can be devastating. For example, I witnessed first-hand a local trader who had failed to implement multi-factor authentication on his work email and had his account compromised. The attacker digested all of the recent correspondence and then sent out rogue invoices (from the trader’s own email account) for 50% deposit payments to every customer booked to have work started. The attackers even deleted the sent items as soon as they left the email service – and monitored for replies.
For the very short term, AI remains under the control of organizations that are (at least) endeavoring to detect and prevent misuse – but that is unlikely to remain the case.
Powerful AI tools will soon be accessible to cyber-criminals, enabling the industrial automation of phishing and cyber-attacks.
The good news is that although threats tend to precede protection, the proper defenses will follow rapidly. The solution (right now) is that organizations need to ensure security is already as strong as it can be (multi-factor authentication is a must, proactive monitoring and threat alerts) – and communicate/train your customers and staff on the expected risks:
“With advances in AI, we anticipate the arrival of more convincing phishing emails that may be written to be more compelling than ever. A detectable, noticeable difference is that it will be asking you for some action it can leverage against you or our organization. No matter how compelling a message seems, remember that our organization will never write to you and ask you [insert your safeguards here – such as to reveal a password, test a multi-factor authentication approval…]”
Enterprises must stay vigilant and learn more about emerging platforms such as ChatGPT. My projection is that organizations will need to meet these new AI-based threats by deploying AI countermeasures of their own – and that many of those AI defensive technologies will be new to the market, very different and more capable than any of us are currently running.
In the meantime, make sure your cyber defenses are fit-for-purpose. Expect no mercy.
It is going to be an interesting year.