Microsoft might soon unveil its first in-house processor tailored for artificial intelligence tasks, according to The Independent’s reporting earlier today. The tech giant aims to cut costs and reduce its reliance on Nvidia, the leading AI chip supplier.
The Independent suggests the artificial intelligence-focused chip could debut at next month’s developers’ conference.
A shift in the AI landscape
Microsoft’s processor will allegedly target data center servers and power AI features within the company’s suite of productivity applications. Select teams from both Microsoft and OpenAI, which has received significant Microsoft funding, have reportedly been testing the chip.
OpenAI, not to be left behind in the AI chip race, is also purportedly exploring the possibility of designing its own AI chips. According to Reuters reporting also published today, there are whispers that OpenAI is evaluating an acquisition to jumpstart its GPU development.
However, when it comes to companies building AI chips, none rival Nvidia’s dominance. Since developing the world’s first graphics processing unit (GPU) in 1999, Nvidia has come to supply the majority of the world’s GPUs. Reuters even pegged the company’s share of the market for the high-end chips required for AI modeling at a whopping 80%.
Despite today’s extensive reporting, Microsoft, OpenAI, and Nvidia have yet to comment on the record. As of press time, ReadWrite has not received a response to its requests for comment.
The broader AI chip ecosystem
The AI chip market has witnessed a surge in activity, with Amazon building its Inferentia chips and Google developing (and soon possibly manufacturing) its Tensor Processing Units. If Microsoft’s endeavors materialize, it will join these tech giants in the reinvigorated AI chip arena, further intensifying the competition.
OpenAI CEO Sam Altman has been outspoken in his concern over the scant supply of AI chips and the cost impact it is having on startups and individuals interested in AI. In March, TrendForce estimated that training OpenAI’s GPT model in 2020 required the power of 20,000 Nvidia A100 GPUs. The market intelligence firm also projected that figure would rise to 30,000 GPUs to support ChatGPT’s commercialization.
Major financial firms have also expressed worry about chip supply. Last month, Switzerland’s largest financial institution, UBS, highlighted the potential risks Microsoft faces due to GPU constraints. Analysts at the Swiss bank pointed to chip shortages as a potential limiter on Microsoft’s 2024 AI revenue streams. UBS analysts offered a more optimistic view yesterday, however, stating the bank now has “even higher confidence” that Microsoft will be able to satisfy its near-term capacity needs.
The AI arms race, ignited by OpenAI’s launch of ChatGPT a year ago, has driven demand for AI chips that is outpacing supply. In response, Nvidia and its now-closest competitor, AMD, which recently announced its own high-end AI chip, are both ramping up production.
While Microsoft has committed to continue purchasing Nvidia GPUs, the development of its own processor could be an industry game-changer. So long as Microsoft and OpenAI remain on good terms with the major GPU manufacturers, bringing AI chip design in-house could mitigate future supply risks to their businesses while also broadening access to AI chips.