A controversial AI safety bill in California was vetoed by the state’s governor over the weekend (Sept. 29).
Senate Bill 1047, passed by the State Assembly at the end of August, set out to impose safeguards on large AI models and prevent “critical harms” to humanity. It would also have held the creators of such models in California liable for damages above $500 million.
It was met with a mixed response from the industry. OpenAI, the creator of ChatGPT, was one of the key players to speak out against it. Its chief strategy officer called for “a federally-driven set of AI policies, rather than a patchwork of state laws”.
Many were surprised that businessman Elon Musk came out in favor of SB 1047, backing the bill shortly before the Assembly vote.
“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote on X.
Why did Gov. Newsom veto the AI safety bill?
Despite the Assembly’s overwhelming support for SB 1047, Governor Gavin Newsom has now vetoed it.
In his veto message, published over the weekend, he highlighted that California is home to 32 of the world’s top 50 AI companies and that the state therefore has a serious responsibility to get its approach right.
Newsom said that the situation around AI and its regulation is “rapidly evolving”.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” he wrote.
“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
The governor went on to call the bill “well-intentioned” and agreed that action must be taken before a major incident occurs.
He concluded: “I am committed to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation. Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right.”
Featured Image: Brandon Starr on Flickr