More than 100,000 compromised accounts for OpenAI's ChatGPT have been found for sale on illicit dark web marketplaces.
The discovery comes from Singapore-based cybersecurity firm Group-IB, which described the findings in a blog post published today.
“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code,” commented Dmitry Shestakov, head of threat intelligence at Group-IB.
According to Shestakov, ChatGPT's standard configuration retains all conversations by default, so threat actors who obtain account credentials could unintentionally be handed a wealth of sensitive intelligence.
“At Group-IB, we are continuously monitoring underground communities to promptly identify such accounts,” Shestakov added.
According to the firm’s Threat Intelligence platform, the compromised credentials were found within the logs of the Raccoon information-stealing malware and traded on underground platforms over the past year.
The number of available logs containing compromised ChatGPT accounts peaked at 26,802 in May 2023, Group-IB wrote.
The Asia-Pacific region experienced the highest concentration of credentials being offered for sale, with the area accounting for 40.5% of stolen ChatGPT accounts between June 2022 and May 2023.
To mitigate the risks associated with compromised ChatGPT accounts, Group-IB recommended users update their passwords regularly and enable two-factor authentication (2FA) for added security.
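The 2FA recommended here typically means a time-based one-time password (TOTP) app. As a minimal illustration of the mechanism (not Group-IB's or OpenAI's tooling; the `totp` function below is a hypothetical sketch of RFC 6238 using only the standard library), each login code is an HMAC-SHA1 over a 30-second counter derived from the clock:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of `step`-second intervals since the Unix epoch.
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, stolen session credentials from infostealer logs alone are not enough to log back in.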
The company also highlighted the importance of threat intelligence to protect against such attacks.
“Using real-time threat intelligence, companies can better understand the threat landscape, proactively protect their assets, and make informed decisions to strengthen their overall cybersecurity posture,” the firm explained.
The latest Group-IB report comes weeks after Vulcan Cyber’s Voyager18 research team shed light on a new attack technique that abuses ChatGPT to spread malicious packages into developers’ environments.
Editorial image credit: Diego Thomazini / Shutterstock.com