AI Seoul Summit: 16 AI Companies Sign Frontier AI Safety Commitments

In a “historic first,” 16 global AI companies have signed new commitments to safely develop AI models.

The announcement was made during the AI Seoul Summit, the second global summit on AI safety, co-hosted virtually by the UK and South Korea on May 21-22.

The Frontier AI Safety Commitments’ signatories include some of the biggest US tech companies, such as Amazon, Anthropic, Google, IBM, Microsoft and OpenAI.

They also include AI organizations from Europe (Cohere and Mistral AI), the Middle East (G42 and the Technology Innovation Institute) and Asia (Naver, Samsung and Zhipu.ai).

AI Risk Thresholds to Be Decided in France

These organizations vowed to publish safety frameworks setting out how they will measure the risks of their frontier AI models, such as the risk of the technology being misused by bad actors.

The frameworks will also outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed. 

In the most extreme circumstances, the companies have also committed to “not develop or deploy a model or system at all” if mitigations cannot keep risks below specific agreed-upon thresholds.

The 16 organizations have agreed to coordinate with multiple stakeholders, including governments, to define those thresholds ahead of the AI Action Summit in France in early 2025. 

Professor Yoshua Bengio, a world-leading AI researcher, Turing Award winner and the lead author of the International Scientific Report on the Safety of Advanced AI, said he was pleased to see leading AI companies from around the world sign up to the Frontier AI Safety Commitments.

“In particular, I welcome companies’ commitments to halt their models where they present extreme risks until they can make them safe as well as the steps they are taking to boost transparency around their risk management practices,” he said.

An Emerging Global AI Safety Governance Regime

These commitments build on a previous agreement made with leading AI tech companies at Bletchley Park during the first AI Safety Summit in November 2023, as well as on other existing commitments such as the US Voluntary Commitments and the Hiroshima Code of Conduct.

Under the Bletchley Declaration, eight of the current 16 AI companies agreed to “deepen” access to their future AI models before they go public.

Read more: 28 Countries Sign Bletchley Declaration on Responsible Development of AI

While the first list was Western-centric, the newly expanded list of signatories includes key developers of generative AI ‘frontier’ models: the French ‘OpenAI killer’ Mistral AI; the Abu Dhabi-based Technology Innovation Institute, which developed Falcon, one of the largest large language models (LLMs); and the Chinese firm Zhipu.ai.

Commenting on the news, UK Prime Minister Rishi Sunak said: “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety. It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

Ya-Qin Zhang, chair professor and dean of the Institute for AI Industry Research at Tsinghua University in China, strongly welcomed the commitments.

“These commitments by a diverse group of Chinese, American, and international firms represent a significant step forward in the public transparency of AI risk management and safety processes,” Zhang said.

Bengio concluded: “This voluntary commitment will obviously have to be accompanied by other regulatory measures, but it nonetheless marks an important step forward in establishing an international governance regime to promote AI safety.”

Committing to AI Safety Beyond “Empty” Promises

However, there is some criticism of the agreement.

Jamie Moles, a senior technical manager at ExtraHop, commented: “This ‘commitment’ once again feels like a classic case of empty promises. A safety framework sounds great in theory, but the vagueness of the principles - secure, trustworthy, ethical - are a far cry away from the harmful uses of AI that we’re seeing day-to-day.”

“Companies need to ditch the grand pronouncements and start engaging in real dialogue with cybersecurity experts,” he said.

“AI can be used for many great causes, especially in cybersecurity, but if we don’t set out clear restrictions and regulations with which businesses can be held accountable, AI will be used for malicious purposes as we are already seeing so often.”

Ivana Bartoletti, global chief privacy and AI governance officer at Wipro and a Council of Europe expert on AI and human rights, also expressed mixed feelings about the AI Seoul Summit and the commitments of AI developers.

“The pre-summit report is a welcome departure from the usual speculation and alarmism. However, this is not enough, and we need to advance on the governance front as well. Having multiple AI safety institutes is commendable, but we need to clarify their function,” she said.

Read more: UK and US to Build Common Approach on AI Safety