The Department of Homeland Security (DHS) has unveiled new resources aimed at addressing the emerging threats posed by artificial intelligence (AI).
These resources include guidelines designed to mitigate AI risks to critical infrastructure and a report focusing on AI misuse in the development and production of chemical, biological, radiological and nuclear (CBRN) threats.
These initiatives, unveiled on Monday, are part of DHS’s broader efforts to safeguard the nation’s critical infrastructure and promote the responsible use of AI.
Last week, DHS announced the establishment of the Artificial Intelligence Safety and Security Board. The board comprises technology and critical infrastructure executives, civil rights leaders, academics and policymakers, among others, and aims to advance responsible AI development and deployment.
“AI can present transformative solutions for US critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks,” said Secretary of Homeland Security Alejandro N. Mayorkas. “Our Department is taking steps to identify and mitigate those threats.”
He also highlighted DHS’s accelerated efforts in response to President Biden’s 2023 executive order on AI, citing the establishment of the AI Corps and the development of AI pilot programs across the department.
Read more on the directive: Biden Issues Executive Order on Safe, Secure AI
The safety and security guidelines released by DHS, in collaboration with the Cybersecurity and Infrastructure Security Agency (CISA), focus on addressing AI risks across critical infrastructure sectors.
Furthermore, DHS outlined a four-part mitigation strategy based on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. The strategy emphasizes governing AI risk, mapping individual AI use contexts and risk profiles, measuring and tracking AI risks, and managing identified risks to safety and security.
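To make the four-part structure concrete, the sketch below shows one way an operator might record an AI use case against the framework's govern, map, measure and manage functions. It is purely illustrative and not part of the DHS guidelines or the NIST framework itself; the `AIRiskEntry` class, its field names and the example values are all hypothetical.

```python
# Illustrative sketch only: a minimal risk-register entry organized around the
# four functions of the NIST AI Risk Management Framework referenced by the
# DHS guidelines (govern, map, measure, manage). Names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AIRiskEntry:
    system_name: str                  # AI system or use case being assessed
    govern: str                       # who owns the risk and under what policy
    map_context: str                  # where and how the system is used
    measure: list = field(default_factory=list)   # risks and metrics tracked
    manage: list = field(default_factory=list)    # mitigations applied


# Hypothetical entry for a critical infrastructure operator.
entry = AIRiskEntry(
    system_name="predictive-maintenance-model",
    govern="Reviewed quarterly by the operator's AI risk committee",
    map_context="Schedules inspections for substation equipment",
    measure=["false-negative rate on failure predictions", "model drift"],
    manage=["human review of generated work orders", "rollback plan"],
)

print(entry.system_name, "->", entry.manage)
```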
In addition, DHS collaborated with its Countering Weapons of Mass Destruction Office (CWMD) to analyze the risk of AI misuse in CBRN threat development.