OpenAI’s advanced AI system Q* raises safety concerns

According to a recent report in The Guardian, OpenAI, the company behind ChatGPT, was working on a groundbreaking system, codenamed Q*, before the temporary dismissal of CEO Sam Altman. The advanced AI model reportedly demonstrated the ability to solve basic math problems it had not previously encountered, marking a significant leap in AI capabilities.

The rapid development of Q* raised safety concerns among OpenAI researchers, leading to a warning to the board of directors about its potential threat to humanity. This alarm was part of the backdrop to the recent turmoil at OpenAI, which saw Altman briefly ousted and then reinstated following staff and investor pressure.

OpenAI’s Q* and the race toward AGI

The development of Q* feeds into the broader debate about the pace of progress toward artificial general intelligence (AGI). AGI refers to a system capable of performing a wide range of tasks at or above human intelligence, one that could potentially operate beyond human control. OpenAI is at the forefront of this race, sparking concern among experts about the implications of such rapid advancement.

Andrew Rogoyski from the University of Surrey’s Institute for People-Centred AI commented on the significance of a math-solving LLM. He noted that an LLM’s intrinsic ability to perform analytical tasks of this kind would represent a major advance in the field.

OpenAI’s mission and governance

OpenAI was founded as a nonprofit and now operates a commercial subsidiary overseen by the nonprofit’s board, with Microsoft as its largest investor. The organization’s mission is to develop “safe and beneficial artificial general intelligence for the benefit of humanity.” Recent governance changes at OpenAI reflect this commitment to safety and responsible AI development.

The future of AI safety

The controversy surrounding Altman’s temporary removal highlighted the tension between rapid AI development and safety. Emmett Shear, Altman’s brief successor, clarified that the board’s decision was not due to a specific disagreement over safety. However, the incident underscores the challenges and responsibilities facing AI developers in balancing innovation with ethical considerations and human safety.

Featured image from Sanket Mishra via Pexels

