Most Britons are worried about the safety of new AI systems, according to a recent poll. The findings come just as leaders from 80 countries, along with tech executives, academics, and other experts, prepare to meet for a two-day global summit in Paris to discuss where AI is headed and how to respond to its rapid and sometimes disruptive growth.
The survey, seen by Time magazine, found that 87% of British people think AI companies should be required to prove their systems are safe before releasing them. On top of that, 60% believe AI models smarter than humans shouldn’t be developed at all. Only 9% said they trust tech CEOs to act in the public’s interest when it comes to regulating AI. The poll was run by YouGov for Control AI, a non-profit that focuses on AI risks.
Only last month, UK Prime Minister Keir Starmer announced that AI would be “unleashed across the UK to deliver a decade of national renewal,” despite the wider public’s concerns.
Against this backdrop, 75% of Brits in the survey said there should be laws banning the development of AI systems that could escape their environments. Meanwhile, 63% supported the idea of prohibiting AI that can make itself smarter or more powerful.
In the UK, where YouGov surveyed 2,344 adults on January 16-17, there is still no clear set of rules for AI. Before the last general election in 2024, the Labour Party had promised to introduce new AI regulations, but since taking power it has repeatedly pushed the idea back. Instead, the government has focused on reviving the country’s struggling economy, in part by expanding the use of AI.
British politicians and organizations warn about AI safety risks
In a statement posted on X, Control AI revealed that a number of politicians had backed its campaign to regulate “superintelligent” AI in Britain. The group wrote: “Assembling this coalition is a significant milestone on the path to getting dangerous AI development under control.
“The Labour government clearly promised in its manifesto it would introduce ‘binding regulation on the handful of companies developing the most powerful AI models.’ The public wants it, parliamentarians call for it, humanity needs it. Time to deliver.” In a separate release, 16 British lawmakers from both major political parties stated: “Superintelligent AI systems would…compromise national and global security.”
The organization’s CEO Andrea Miotti added: “While AI companies are gunning for superintelligence, most politicians are asleep at the wheel. Our campaign brings together UK lawmakers to address this threat.”
ReadWrite has reached out to YouGov and Control AI for more information.
Featured image: Canva