200 AI researchers urge OpenAI, Google, Meta to allow safety checks

More than 200 of the world’s leading researchers in artificial intelligence (AI) have signed an open letter calling on major AI companies such as OpenAI, Meta, and Google to allow outside experts to independently evaluate and test the safety of their AI models and systems.

The letter argues that strict rules put in place by tech firms to prevent abuse or misuse of their AI tools are having the unintended consequence of stifling critical independent research aimed at auditing these systems for potential risks and vulnerabilities.

Prominent signatories include Stanford University’s Percy Liang, Pulitzer-winning journalist Julia Angwin, Renée DiResta from the Stanford Internet Observatory, AI ethics researcher Deb Raji, and former government advisor Suresh Venkatasubramanian.

What are the AI researchers concerned about?

The researchers say AI company policies that ban certain types of testing and prohibit copyright violations, the generation of misleading content, and other abuses are being applied in an overly broad manner. This has created a “chilling effect” in which auditors fear having their accounts banned, or facing legal repercussions, if they stress-test AI models without explicit approval.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter states.

The letter lands amid growing tensions, with AI firms like OpenAI claiming that The New York Times’ efforts to probe ChatGPT for copyright issues amounted to “hacking.” Meta, meanwhile, has updated its terms to threaten revocation of access if its latest language model is used to infringe intellectual property.

Researchers argue companies should provide a “safe harbor” allowing responsible auditing, as well as direct channels to responsibly report potential vulnerabilities found during testing, rather than having to resort to “gotcha” moments on social media.

“We have a broken oversight ecosystem,” said Borhane Blili-Hamelin of the AI Risk and Vulnerability Alliance. “Sure, people find problems. But the only channel to have an impact is these ‘gotcha’ moments where you have caught the company with its pants down.”

The letter and accompanying policy proposal aim to foster a more collaborative environment for external researchers to evaluate the safety and potential risks of AI systems impacting millions of consumers.

Featured image: Ideogram

About ReadWrite’s Editorial Process

The ReadWrite Editorial policy involves closely monitoring the tech industry for major developments, new product launches, AI breakthroughs, video game releases and other newsworthy events. Editors assign relevant stories to staff writers or freelance contributors with expertise in each particular topic area. Before publication, articles go through a rigorous round of editing for accuracy, clarity, and to ensure adherence to ReadWrite's style guidelines.

Sam Shedden
Executive Editor

Sam Shedden is an experienced journalist and editor with over a decade in online news. A seasoned technology writer and content strategist, he has contributed to many UK regional and national publications including The Scotsman, inews.co.uk, nationalworld.com, Edinburgh Evening News, The Daily Record and more. Sam has written and edited content for audiences whose interests include media, technology, AI, start-ups and innovation. He has also produced and set up email newsletters on numerous specialist topics in previous roles, and his work on newsletters saw him nominated as Newsletter Hero of the Year at the UK's Publisher Newsletter Awards 2023. He has worked…
