CISA and International Partners Issue Guidance for Secure AI in Infrastructure

US and international cybersecurity agencies have issued new guidance to help critical infrastructure operators safely incorporate AI into operational technology (OT) systems. 

Published on December 3, the guidance was developed collaboratively by the US Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate's Australian Cyber Security Centre, with input from international partners including the UK's National Cyber Security Centre (NCSC).

The document focuses on AI tools such as machine learning (ML), large language models (LLMs) and AI agents, while remaining applicable to traditional logic-based and statistical automation systems.

It addresses both the potential efficiency and cost benefits of AI and the unique security and safety challenges the technology introduces in OT environments.

Key Principles for AI Security in OT Environments 

According to the guidance, critical infrastructure operators are encouraged to:

  • Understand AI risks and promote secure development practices among personnel

  • Assess AI use in OT environments, including data security and integration challenges

  • Establish governance frameworks for ongoing model testing and regulatory compliance

  • Embed safety and security practices, maintaining transparency and incident response integration

The guidance also emphasizes protecting sensitive OT data. This includes engineering configuration information, such as schematics and asset inventories, as well as ephemeral data, such as process measurements, both of which may be exposed if used to train AI models.

The cyber agencies also noted that OT vendors are increasingly embedding AI directly into devices. In response, the guidance recommends that operators demand transparency regarding AI functionality, software supply chains and data usage policies.

Integration challenges include system complexity, cloud security risks, latency constraints and compatibility with legacy OT systems.

Operators should test AI tools in controlled environments, maintain human-in-the-loop oversight and update AI models regularly to prevent errors and maintain safety.

Oversight, Compliance and Safety

Human oversight remains central to AI-enabled OT systems, the report stressed. Monitoring AI outputs, detecting anomalies and maintaining fail-safe mechanisms are critical to ensuring operational reliability.

Operators are also urged to align AI integration with existing cybersecurity frameworks, conduct regular audits and adhere to evolving international AI standards.

"The integration of AI into OT presents both opportunities and risks to critical infrastructure owners and operators," CISA commented.

"By adhering to these principles and continuously monitoring, validating and refining AI models, critical infrastructure owners and operators can achieve a balanced integration of AI into the OT environments that control vital public services." 
