President Biden has issued an Executive Order to establish new standards for AI safety and security.
The order follows previous actions the President has taken on responsible innovation, including work that led to 15 leading tech companies pledging to drive the safe, secure, and trustworthy development of AI.
The Executive Order (EO) aims to protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.
The announcement comes days ahead of the UK’s AI Safety Summit as the UK seeks to establish a favorable regulatory environment for AI development. Biden’s EO is set to support and complement the UK’s efforts, as well as Japan’s leadership of the G-7 Hiroshima Process, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
New Standards for AI Safety and Security
The order acknowledges that with the growth of AI capabilities, there are implications for citizens’ safety and security. The US government has outlined the following actions that should be taken to protect Americans from the potential risks of AI systems:
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the US government.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Protect against the risks of using AI to engineer dangerous biological materials.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
- Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
- Order the development of a National Security Memorandum that directs further actions on AI and security.
Casey Ellis, CTO and Founder of Bugcrowd, said, “Overall, the order reflects a proactive approach to manage AI's promise while mitigating its potential risks.”
Meanwhile, Kunal Anand, CISO and CTO at Imperva, warned that while the EO is a "good first step," it is important to create regulatory frameworks that can adapt to the fast-paced development of AI technologies.
"To promote fair competition and establish trust among all stakeholders, the process of creating new regulatory standards must be transparent. Any new regulations should be unbiased and not be influenced by established players. They must be clearly justified and serve a clear public interest rather than being used to protect incumbents. These frameworks should also promote technological advancement without being too rigid or prescriptive in order to balance innovation and address legitimate societal concerns by adopting flexible regulatory practices," he said.
Most commentators have acknowledged that the EO is a positive step towards regulating the use and development of artificial intelligence.
However, Hitesh Sheth, President and CEO of Vectra AI, warned that the new regulations ought not to stifle innovation.
"As the US government works with international partners to implement AI standards around the world, it will be important for these regulations to strike a balance between advocating for transparency and promoting continued innovation - rather than simply creating artificial guardrails. There’s no doubt that AI advancements and adoption have reached a state where regulation is required – however, governments need to be cognizant of not halting the ground-breaking innovation that’s taking place that can transform how we live for the better,” he said.
Alongside the safety and security focus, the Executive Order also highlights the need to protect privacy. The President has called on Congress to pass bipartisan data privacy legislation in this area.
A recent ISACA survey of 23,000 digital trust professionals showed that 68% were concerned about privacy violations relating to AI. Meanwhile, 77% were concerned about the spread of disinformation/misinformation.
AI and Civil Rights
Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing, the White House noted.
The EO aims to prevent AI algorithms from being used to exacerbate discrimination. Some large language models (LLMs) have been found to exhibit social bias stemming from the data they were trained on.
“Technology is a mirror of global society,” said Andre Lutermann from Microsoft Deutschland’s CTO Office during a presentation at ISACA’s Digital Trust event in Dublin, Ireland. He noted that there is a need for responsible AI principles and guardrails on what AI should and should not do. This is all part of the LLM training effort.
Supporting the Workforce
AI has long been understood to have an impact on jobs. The EO sets out plans to produce reports on AI’s labor market impacts and develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers.
ISACA’s study found that, overall, AI could be a major job creator in the area of digital trust. Chris Dimitriadis, Global Chief Strategy Officer at the association, said: “We believe that the number of jobs will increase because with every new emerging technology, especially AI, you see the introduction of risks and with that you see an emerging need for digital trust professionals in order to help society and industries enjoy the benefits of AI in a secure and safe manner.”
Some 70% of those who took part in the ISACA Generative AI 2023 Survey said AI will have a positive impact on their jobs. However, 81% of them said they will need additional training to retain their job or advance their career.
Within the EO, the Biden administration also wants to ensure the responsible and effective government use of AI. One part of this is to accelerate the rapid hiring of AI professionals.
Image credit: Consolidated News Photos / Shutterstock.com