The rush to adopt AI in enterprise environments is not only creating new security vulnerabilities, but is also reviving old security failures, a top Mandiant executive has warned.
Speaking to Infosecurity during Google Cloud Next 26, Jurgen Kutscher, VP of Mandiant Consulting, part of Google Cloud, said that AI deployment in enterprises is often accompanied by a neglect of basic security controls.
“A lot of the old problems are new again,” Kutscher said. “We’ve seen enterprises really worried about new AI threats like large language model poisoning while forgetting the most basic security controls.”
Mandiant Red Team Reveals Cybersecurity Failings
Kutscher said Mandiant’s red team has uncovered security failures stemming from this neglect during simulated real‑world attacks, in which testers adopt the tactics of genuine adversaries to probe organizations’ defenses.
During red-team engagements, he has seen AI-enabled environments where an attacker could change data classifications, allowing them to bypass protections like data loss prevention (DLP) solutions.
Furthermore, Kutscher was “surprised” to find even simple mistakes such as unencrypted communication streams.
“For instance, we observed an unencrypted communication stream between the AI and the browser when working with a financial company,” he said, underscoring how basic hygiene was being overlooked.
In multiple engagements, Mandiant red teamers were able to social-engineer initial access and then rely on the AI to perform follow-on actions, including exfiltration and policy changes.
“Once we’re inside, we’ve had the AI do the rest for us, including data theft and everything. And I’m talking about authorized AI deployments, not even shadow AI cases, where employees have deployed AI workflows without the company’s oversight,” Kutscher said.
Kutscher urged organizations to build AI security governance processes as soon as possible.
He emphasized that creating policies and governance is easier than cleaning up uncontrolled AI usage after the fact. He recommended revisiting secure architecture and performing red-team validation to ensure critical assets are truly segmented.
While recognizing AI’s power for defense, Kutscher urged CISOs not to assume AI adoption absolves them of basic cybersecurity responsibilities.
“It’s possible that these mistakes partly come from the fact that CISOs aren’t always involved in the deployment of AI workflows, among many other reasons. I don’t want to speculate, but the lack of basic security controls around AI workflow deployments is there and it’s a significant risk,” he concluded.
