If you’ve read the news lately, you’ve probably read about ChatGPT. The new synthetic chat tool has taken the world by storm, and people are discovering that the question isn’t what ChatGPT can do in terms of text generation but rather what it can’t. Need to write an essay on a book you haven’t read, summarize a 50-page document or even come up with wedding vows? ChatGPT is on the job, working more quickly and often producing better results than a human typically would.
With innovative technologies, we often say that if it can be used, it can be abused – and the more widely used it is, the more quickly bad actors will realize its potential and start using it to further their own goals. We’ve already seen some discussion of this with ChatGPT, as researchers have quickly discovered its promise at writing malicious code and phishing emails.
Though the options for potential misuse are endless, there is little evidence to date of ChatGPT playing a role in an actual cyber-attack or phishing incident. This raises the question: are we more concerned than we need to be about the current threat? Before spiraling into worry about the many ways cyber-criminals could abuse ChatGPT, it's important to step back and understand the risks of synthetic chat technology overall and the best practices for keeping your organization safe.
What Can Cyber-Criminals do with Synthetic Chat Technology?
ChatGPT offers threat actors the ability to scale existing attack techniques, many of which have traditionally required human effort, with the power of AI. Take spearphishing, for example, which involves tailoring malicious messages to specific targets to increase the likelihood that those targets engage with them.
Historically, spearphishing messages have been partially or entirely crafted by people. However, synthetic chat makes it possible to automate this process, and highly advanced synthetic chat like ChatGPT can make these messages seem just as convincing as, or more convincing than, a human-written message. It also opens the door for automated, interactive malicious communications. With this in mind, threat actors can quickly and cheaply scale up high-cost, highly effective approaches like spearphishing. These capabilities could be used to support cybercrime, nation-state operations and more.
Advances like ChatGPT may also have a meaningful impact on information operations, which have come to the forefront due to foreign influence in recent US presidential elections. Technologies such as ChatGPT can generate lengthy, realistic content supporting divisive narratives, which could help scale up information operations. This could look like using synthetic chat technology to craft plausible misinformation or closely mimic an individual’s cadence and tone to pose as that individual and extract information. And undoubtedly, researchers and adversaries will find many more uses for synthetic chat offensively and perhaps on the defense front down the line.
The Best Way to Defend Yourself – and Where to Start
When a new technology like ChatGPT explodes onto the scene, it often leaves security leaders scrambling to understand just how concerned they should be. And, even more importantly, what should they do right now to minimize the potential for damage?
To defend against the risks mentioned above, start with a formal assessment of the potential risk posed by attackers leveraging synthetic chat against your organization. Once you’ve completed a solid risk assessment, you can identify the appropriate defenses against those risks. Synthetic media intelligence (SMI) will be key, as it will provide direction based on ongoing developments in the world of synthetic chat technology and the needed changes in security posture to stay protected.
Many organizations will ultimately need a program, including part-time or dedicated staffing, to defend against this new class of threats. This will require building subject matter expertise, establishing specific policies and implementing manual and technical controls to support the team. Training and awareness for every member of the organization will also be key to ensuring attacks using synthetic media don't slip through the cracks. Automated monitoring, alerting and response are recommended so that related abuse is surfaced early and can drive downstream response or other defensive efforts.
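To make the monitoring-and-alerting recommendation concrete, here is a minimal, hypothetical sketch of a message-triage step. The `synthetic_risk_score` function, the phrase list and the threshold are all illustrative assumptions, not a real detection method: in practice, the scoring step would call a synthetic-text detection model or a vendor API, and the alert would feed a ticketing or response pipeline rather than being returned to the caller.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    """An alert raised when an inbound message looks like possible synthetic-chat abuse."""
    sender: str
    reason: str

# Placeholder heuristic only. A real deployment would replace this keyword
# list and scoring logic with an actual synthetic-text / phishing classifier.
SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent wire transfer",
    "reset your password",
)

def synthetic_risk_score(message: str) -> float:
    """Return a 0.0-1.0 risk score. Here: fraction of suspicious phrases present."""
    text = message.lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

def triage(sender: str, message: str, threshold: float = 0.3) -> Optional[Alert]:
    """Raise an Alert when the risk score meets the threshold; otherwise pass the message through."""
    score = synthetic_risk_score(message)
    if score >= threshold:
        return Alert(sender=sender, reason=f"risk score {score:.2f} >= {threshold}")
    return None
```

The design point is less the scoring logic than the shape of the control: every inbound message gets a machine-generated score, and anything over a tunable threshold is routed to a human or automated responder rather than silently delivered.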
As for whether to be concerned, my advice is not to panic but to focus on what you can do now to be proactive. Keep in mind that synthetic chat technology isn't brand new: similar tools existed before ChatGPT began dominating headlines, and none has had the massive impact security researchers are concerned about today. The volume of attacks leveraging synthetic media has been relatively low compared to other tactics, although it is increasing. Now is the time to take stock of your defenses, ensure you understand your organization's specific risks and prepare to face this capability in the field.