The UK and the US have signed a first-of-its-kind agreement on AI safety, with a focus on safety evaluations.
The two countries will work together to develop ‘robust’ methods for evaluating the safety of artificial intelligence (AI) and the models that underpin it. The collaboration is the first of its kind, marking an important development in the evolution of artificial intelligence.
UK tech minister Michelle Donelan described AI as “the defining technology challenge of our generation”.
“We have always been clear that ensuring the safe development of AI is a shared global issue,” she continued. “Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives.”
Specifically, the agreement builds upon earlier commitments made at the AI Safety Summit held at Bletchley Park in November last year. The event was attended by leaders in the field of AI, including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and tech billionaire Elon Musk. It also saw the UK and US each create a dedicated AI Safety Institute to evaluate the safety of both open and closed-source AI systems.
What are some of the concerns around AI safety?
The majority of AI tools currently in use are what’s known as ‘narrow AI’: tools that complete simple tasks that could also be done by a human. The real safety concerns, however, are centered on more advanced AI, where the technology could take on more complex tasks beyond data analysis or offering learned responses to prompts.
The goal of the agreement between the UK and the US is not to slow the progress of AI but to ensure that safety principles are built into it as it grows. In particular, AI companies need to acknowledge the potential for human biases to become embedded in language models as they learn from us.
Featured image: generated by Ideogram