EmailGPT Exposed to Prompt Injection Attacks

Written by

A new vulnerability has been found in the EmailGPT service, a Google Chrome extension and API service that utilizes OpenAI’s GPT models to assist users writing emails within Gmail. 

The flaw discovered by Synopsys Cybersecurity Research Center (CyRC) researchers is particularly alarming because it enables attackers to gain control over the AI service simply by submitting harmful prompts. 

These malicious prompts can compel the system to divulge sensitive information or execute unauthorized commands. Notably, this issue can be exploited by anyone with access to the EmailGPT service, raising concerns about the potential for widespread abuse.

The main branch of the EmailGPT software is affected, with significant risks, including intellectual property theft, denial-of-service attacks and financial losses stemming from repeated unauthorized API requests. 

The vulnerability has been assigned a CVSS base score of 6.5, indicating a medium severity level. Despite multiple reported attempts to contact the developers, CyRC received no response within their 90-day disclosure period. Consequently, CyRC advised users to immediately remove the EmailGPT applications from their networks to mitigate potential risks.

Read more on these risks: Why we Need to Manage the Risk of AI Browser Extensions

Eric Schwake, Director of Cybersecurity Strategy at Salt Security, emphasized the gravity of the situation. He highlighted that this vulnerability differs from typical prompt injection attacks, as it allows direct manipulation of the service through code exploitation. He also called for organizations to perform audits of all installed applications, specifically focusing on those utilizing AI services and language models.

“This audit should identify any applications similar to EmailGPT that rely on external API services and assess their security measures,” Schwake added.

Patrick Harr, CEO at SlashNext, also commented on the news, underscoring the necessity of solid governance and security practices in AI model development. 

“Security and governance of the AI models is paramount as part of the culture and hygiene of companies building and proving the AI models either through applications or APIs,” Harr said.

“Customers and particularly businesses need to demand proof of how the suppliers of these models are securing themselves including data access before they incorporate into their business.”

What’s hot on Infosecurity Magazine?