OpenAI CEO Sam Altman says the company is now confident it knows how to build artificial general intelligence (AGI), a milestone that many tech firms are currently racing to reach. In a blog post published on Sunday (Jan. 5), he also predicted that AI agents deployed by companies could begin handling certain tasks on their own as early as this year.
The company has often stressed that its core mission is to create AGI that “benefits all of humanity.” This week, Altman went further to say: “We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
However, last month, Altman tried to manage expectations around the technology, suggesting that it might end up being “much less” significant than people imagine. As ReadWrite reported on Monday, the tech entrepreneur revealed that the company has started working on what he described as “superintelligence in the true sense of the word.”
He added: “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.”
Altman and OpenAI face criticism over AGI claim
AI agents are the latest buzz in the world of AI, designed to let models take actions on behalf of users. AGI, meanwhile, remains a nebulous and contentious concept within the AI community despite its popularity. Consequently, critics of the company and Altman didn’t waste any time sharing their opinions on social media. SimularAI’s research lead, Xin Eric Wang, wrote: “OpenAI seems to be trying too hard to convince people they have a solution to AGI—ironically, this is a sign that they don’t.”
Vocal critic and cognitive scientist Gary Marcus argued that AI may have reached “a period of diminishing returns of pure LLM scaling” and pointed to the “lack of distinct, accessible, reliable database-style records,” which leaves AI systems prone to hallucinations. As a result, some remain unconvinced by Altman’s claims.
8. As I noted in 2001, the lack of distinct, accessible, reliable database-style records leads to hallucinations. Despite many promises to the contrary, this still routinely leads to inaccurate news summaries, defamation, fictional sources, incorrect advice, and unreliability.
— Gary Marcus (@GaryMarcus) January 6, 2025
Despite the skepticism, OpenAI continues to tout progress with its AI models. Back in December, the company introduced o3, its latest simulated reasoning (SR) model, which reportedly performed impressively on challenging math benchmarks and even caught the attention of some AI experts. However, it hasn’t been made available for public testing yet.
Featured image: TechCrunch via Wikicommons / Canva