OpenAI employees warn of AI’s potential existential threats to humanity in letter

A coalition of current and former employees at OpenAI, the company behind ChatGPT, has issued a warning about the existential threats posed by advanced artificial intelligence, including the potential for human extinction.

In a detailed letter released yesterday (June 4), the group of 13 current and former employees of firms including OpenAI, Anthropic, and Google’s DeepMind outlined a series of risks posed by AI, even while acknowledging its potential benefits.

The letter states, “We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.” However, it also highlights concerns: “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

Neel Nanda, the Mechanistic Interpretability lead at DeepMind and formerly of Anthropic, was among the signatories. “This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers,” he wrote on X. “But I believe AGI will be incredibly consequential and, as all labs acknowledge, could pose an existential threat. Any lab seeking to make AGI must prove itself worthy of public trust, and employees having a robust and protected right to whistleblow is a key first step.”

Lack of accountability and regulation of AI

The signatories state that while AI companies and global governments recognize these dangers, current corporate and regulatory measures are insufficient to prevent them. “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” they argue.

The letter also criticizes the lack of transparency at AI companies, claiming they hold “substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.” It notes that these companies are under little obligation to disclose such critical information: “They currently have only weak obligations to share some of this information with governments, and none with civil society.”

The workers expressed a pressing need for more government supervision and public accountability. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the group stated. They also highlighted the limitations of existing whistleblower protections, which do not fully cover the unregulated risks posed by AI technologies.

OpenAI in hot water

The open letter comes amid a shake-up for leading AI companies, particularly OpenAI, which has been rolling out AI assistants with advanced features capable of engaging in live voice conversations with humans and responding to visual inputs like video feeds or written math problems.

Scarlett Johansson, who voiced an AI assistant in the film “Her,” has accused OpenAI of modeling one of its products after her voice despite her explicit refusal of such a proposal. Although OpenAI’s CEO tweeted the word “her” during the launch of the voice assistant, the company has since denied using Johansson’s voice as a model.

In May, OpenAI also dissolved a specialized team that had been created to investigate the long-term risks of AI, less than a year after its inception. Last July, the company’s head of trust and safety, Dave Willner, resigned.



Suswati Basu
Tech journalist


Get the biggest tech headlines of the day delivered to your inbox

    By signing up, you agree to our Terms and Privacy Policy. Unsubscribe anytime.

    Tech News

    Explore the latest in tech with our Tech News. We cut through the noise for concise, relevant updates, keeping you informed about the rapidly evolving tech landscape with curated content that separates signal from noise.

    In-Depth Tech Stories

    Explore tech impact in In-Depth Stories. Narrative data journalism offers comprehensive analyses, revealing stories behind data. Understand industry trends for a deeper perspective on tech's intricate relationships with society.

    Expert Reviews

    Empower decisions with Expert Reviews, merging industry expertise and insightful analysis. Delve into tech intricacies, get the best deals, and stay ahead with our trustworthy guide to navigating the ever-changing tech market.