More than a dozen of the world’s top AI companies made new safety commitments at a global summit in Seoul on Tuesday (May 21), according to a statement from the U.K. government.
The agreement with 16 tech firms – including ChatGPT-maker OpenAI, Google DeepMind, and Anthropic – builds on the consensus reached at the first global AI safety summit at Bletchley Park in Britain last year.
The announcement came as South Korea and the U.K. hosted a global AI summit in Seoul during a time when the rapid pace of artificial intelligence innovation leaves governments scrambling to keep up. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” U.K. Prime Minister Rishi Sunak said in a statement released by the country’s Department for Science, Innovation and Technology.
In an historic first, tech companies from across the globe have committed to developing AI safely.
From @OpenAI to @Meta, 16 companies have signed up to the fresh ‘Frontier AI Safety Commitments' 👉 https://t.co/KqcBbvKLSu #AISeoulSummit
— Department for Science, Innovation and Technology (@SciTechgovuk) May 21, 2024
Under the agreement, AI firms that have not already shared how they assess the risks of their technology will publish those frameworks, according to the statement. These will include what risks are “deemed intolerable” and what the firms will do to ensure that these thresholds are not crossed.
“Ensuring AI safety is crucial for sustaining recent remarkable advancements in AI technology, including generative AI, and for maximizing AI opportunities and benefits, but this cannot be achieved by the efforts of a single country or company alone,” added South Korea’s Interior and Safety Minister Lee Sang-min.
Which companies have agreed to AI safety commitments?
Apart from ChatGPT-maker OpenAI, Google DeepMind, and Anthropic, the firms that have agreed to the safety rules include Microsoft, Amazon, IBM, Meta, France’s Mistral AI, and China’s Zhipu.ai. In addition, South Korea’s Naver and Samsung Electronics, the UAE’s G42 and Technology Innovation Institute, Canadian company Cohere, Inflection AI, and Elon Musk’s xAI are also involved.
The companies have pledged that, under severe conditions, they will “not develop or deploy a model or system at all” if they cannot sufficiently mitigate risks to meet certain thresholds, according to the statement. These thresholds will be determined before the upcoming AI summit scheduled to be held in France in 2025.
The Seoul summit is taking place shortly after OpenAI announced the dissolution of a team focused on addressing the long-term risks of advanced AI.
“The field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science,” said Anna Makanju, OpenAI’s Vice President of Global Affairs, in the same statement.
Seán Ó hÉigeartaigh, University of Cambridge Director of AI Futures and Responsibility, wrote on X, “Great to see a wider range of companies commit to responsible scaling policies.”
Great to see a wider range of companies commit to responsible scaling policies – good job summit team. Great comments from Ben Garfinkel, Yi Zeng, and Beth Barnes too. https://t.co/yhydDgN2XV
— Seán Ó hÉigeartaigh (@S_OhEigeartaigh) May 21, 2024
The two-day summit is being held partly virtually, with a mix of closed-door sessions and sessions open to the public in Seoul.
Later on Tuesday (May 21), South Korean President Yoon Suk Yeol and the U.K.’s Sunak will co-chair a virtual meeting of world leaders.
Featured image: Canva