How to Protect Against Advanced AI Scams

We are all familiar with poorly worded emails from a stranger in a far-off land who is down on their luck and struggling to transfer a vast sum of money, offering a significant cut if you are only willing to help. Historically, these scams have been easy to spot from the suspicious sender address or the poor use of language and formatting.

However, with the advent of artificial intelligence (AI), the improved quality of these approaches creates a greater risk of being caught out. Taking the above email scam as an example, AI can easily be used to craft a perfectly worded message and even suggest plausible reasons why you should help.

It could also facilitate a more enduring and targeted campaign, building trust and rapport over time and increasing the likelihood that an unsuspecting person is conned or manipulated.

Beyond email scams, AI is already opening up new avenues for deception. There have been examples of AI successfully cloning people’s voices, which have then been used to try to coerce unsuspecting relatives out of money.

Few parents would think twice about transferring money in response to a call from their child saying they were in trouble. One horrifying real-life example of an AI scam saw a US mother receive a call from her ‘kidnapped daughter.’

The complexity and credibility of scams using AI are advancing at an unprecedented rate. In one audacious and successful example, a finance worker was scammed into transferring a large sum of money during a deepfake video conference call on which they were the only real person.

Governments are discussing legislation and promoting safety around AI, and this will help, particularly where perpetrators are caught and can be brought to justice. Some AI providers are taking a responsible and proactive approach, filtering and declining to respond to prompts that could be used for illegal means.

However, the sad reality is that legislation and the good moral compass of some AI developers are unlikely to stop scammers from using AI for illicit gain as the technology becomes more commoditized and easier to develop, operate and access.

How to Protect Against Advanced AI Scams

So how do people and organizations protect themselves? Traditional security controls and simple actions can be taken that will, at the very least, make criminals’ lives more difficult.

Checking a sender’s actual email address, rather than the displayed name, can often quickly identify illegitimate emails. Care is still needed, though, as perpetrators may register domains that are easily confused with legitimate ones, for example by substituting a 0 (zero) for an O or simply appending a number to a legitimate domain. The sketch below illustrates this kind of check.
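As a simple illustration, the sketch below flags sender domains that closely resemble, but do not match, a list of trusted domains. It is a minimal sketch under stated assumptions: the trusted-domain list is hypothetical, and a rough string-similarity ratio stands in for a proper lookalike-domain detector. Real deployments should lean on email authentication controls such as SPF, DKIM and DMARC rather than string matching alone.

```python
# A minimal sketch of a lookalike-domain check. The trusted-domain list is
# hypothetical, and a similarity ratio stands in for a real detector; the
# point is simply to inspect the actual address, not the display name.
from difflib import SequenceMatcher
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example.com", "examplebank.com"}  # hypothetical

def sender_domain(from_header: str) -> str:
    """Extract the domain from the real address, ignoring the display name."""
    _display_name, address = parseaddr(from_header)
    return address.rpartition("@")[2].lower()

def looks_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are not trusted but closely resemble a trusted one,
    e.g. a 0 (zero) substituted for an O, or a digit appended."""
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

# A friendly display name hiding a near-miss domain is flagged:
print(looks_suspicious(sender_domain('"Example Bank" <alerts@examplebank1.com>')))  # True
print(looks_suspicious(sender_domain('"Example Bank" <alerts@examplebank.com>')))   # False
```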

The old security adage of verifying ‘something you know’ can also be a useful way to confirm a person’s identity. For example, you could agree a phrase or word with family members that you would use to verify it is really you, or use a nickname that is not known outside the immediate family.
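In code form, the same idea amounts to checking a pre-agreed secret without ever storing or sending the phrase itself. The sketch below assumes a hypothetical code word and salt; hashing the phrase and comparing digests in constant time is one conservative way to express it, though in practice this verification usually happens in a person’s head rather than in software.

```python
# A minimal sketch of "something you know" verification. The code word and
# salt are hypothetical; hashing means the phrase itself is never stored,
# and compare_digest avoids leaking information through timing.
import hashlib
import hmac

def phrase_digest(phrase: str, salt: bytes = b"family-salt") -> bytes:
    """Hash the normalised phrase so only a digest is ever kept."""
    return hashlib.pbkdf2_hmac("sha256", phrase.strip().lower().encode(), salt, 100_000)

AGREED = phrase_digest("blue heron")  # hypothetical code word, agreed in person

def caller_knows_code_word(spoken_phrase: str) -> bool:
    """Constant-time comparison of the caller's phrase against the agreed one."""
    return hmac.compare_digest(phrase_digest(spoken_phrase), AGREED)

print(caller_knows_code_word("Blue Heron"))  # True: case is normalised away
print(caller_knows_code_word("grey heron"))  # False: wrong code word
```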

Companies should put policies and processes in place. A simple call-back mechanism or multiple levels of verification, for example, can significantly reduce the risk. These policies and processes should be backed up with employee education and be routinely tested.
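A call-back control can be sketched as a simple workflow: the pay-out stays blocked until the requester has been rung back on an independently held number and enough approvers have signed off. The staff directory, names and approval threshold below are hypothetical placeholders; the key point is that the call-back number is looked up independently, never taken from the request itself.

```python
# A minimal sketch of a call-back control for payment requests. The staff
# directory, names and approval threshold are hypothetical placeholders.
from dataclasses import dataclass, field

DIRECTORY_NUMBERS = {"finance-director": "+1-555-0100"}  # hypothetical directory

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    callback_confirmed: bool = False
    approvals: set[str] = field(default_factory=set)

def confirm_by_callback(req: PaymentRequest) -> None:
    """Ring the requester back on the directory number, not one they supplied."""
    number = DIRECTORY_NUMBERS.get(req.requester)
    if number is None:
        raise ValueError(f"No directory entry for {req.requester}; escalate instead.")
    # ... place a call to `number` and confirm the request verbally ...
    req.callback_confirmed = True

def may_release_payment(req: PaymentRequest, required_approvals: int = 2) -> bool:
    """Funds move only after the call-back and enough independent approvals."""
    return req.callback_confirmed and len(req.approvals) >= required_approvals

req = PaymentRequest(requester="finance-director", amount=25_000.0)
confirm_by_callback(req)
req.approvals.update({"controller", "cfo"})
print(may_release_payment(req))  # True once both controls are satisfied
```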

Finally, raising awareness of these types of scams, including talking to colleagues and family members about them, will greatly reduce the likelihood of people being caught out. The steps outlined above are not infallible, but they at least raise the bar for would-be AI scammers.
