Commercial large language models (LLMs) were used as part of a cyber-attack which targeted a municipal water and drainage utility provider in Mexico, cybersecurity researchers at Dragos have warned.
A “significant compromise” of the water infrastructure provider’s IT environment escalated into an attempted attack against the organization’s operational technology (OT) infrastructure, said a Dragos report, published on May 6.
The research suggested that attackers used Anthropic’s Claude AI and OpenAI’s GPT models to aid with planning and conducting the campaign.
The cyber-attack against the water facility in the Monterrey metropolitan area of Mexico took place between December 2025 and February 2026.
Dragos analyzed 350 artifacts associated with the attack, most of which were AI-generated malicious scripts used as offensive tooling during the intrusions. The researchers found that the adversary leveraged commercially available tools to aid with the campaign.
Attribution remains unclear, with no named threat actor publicly identified.
AI Exploited to Operate Attack Faster
Anthropic’s Claude AI was used as “the primary technical executor of the intrusion” and handled prompt-and-response interactions, intrusion planning and the development and deployment of malicious tools.
Meanwhile, OpenAI’s GPT models were used for what Dragos described as “analytical roles,” as well as processing collected data and generating outputs in Spanish.
The AI models were deployed to help the campaign operate faster and more efficiently and allowed the attackers to refine their techniques in real-time, based on what was working and what was not.
According to Dragos, Claude was also deployed to analyze vendor documentation around the SCADA systems at the water facility and was even used to generate lists of default and known login credentials for brute-force attacks against the systems.
While a breach of the OT system was ultimately unsuccessful, Dragos pointed out that the AI-assisted campaign should serve as a warning over how commercial AI models can be exploited by nefarious threat actors. In this case, the attackers seemed to have no prior experience with targeting OT.
“This investigation showed how commercial AI tools assisted an adversary with no prior objective in OT targeting to identify an OT environment and develop and refine a viable access pathway to OT infrastructure,” Jay Deen, associate principal adversary hunter at Dragos, wrote in the blog post.
“These findings demonstrate how the adoption of commercial AI tools as an intrusion aid has made OT more visible to adversaries already operating within IT,” he added.
To help counter cyber-attacks against OT, Dragos recommended that security teams put secure remote access policies in place and apply strong authentication controls to limit unauthorized progression into OT environments.
The research by Dragos builds on previous research by Gambit Security into the attacks against government and infrastructure operators in Mexico, which exposed the personal data of millions of people.
Infosecurity has contacted both Anthropic and OpenAI for comment.
