NVIDIA doubles down on AI infrastructure: $100 billion to help OpenAI build 10GW of data centers

NVIDIA commits $100 billion to bind OpenAI to its platform.

On September 22, Nvidia announced a strategic partnership with OpenAI to build and deploy at least 10 gigawatts (GW) of AI data centers, using millions of Nvidia graphics processing units (GPUs) to train and deploy OpenAI's next-generation AI models.

Nvidia said the first 1GW system will come online in the second half of 2026, built on its latest Vera Rubin platform, and that the full 10GW build-out will contain as many as 4 million to 5 million GPUs, roughly equal to Nvidia's total annual GPU shipments, making this one of the largest investment moves in Nvidia's history.


The motivation for this cooperation is clear. OpenAI is currently the most prominent AI research organization; its product ChatGPT has more than 700 million weekly active users, and as its capabilities are iterated and upgraded, its demand for compute for training and inference keeps rising. To support the construction and deployment of next-generation AI models, OpenAI urgently needs large-scale, efficient computing infrastructure. With its long-standing leadership in GPUs, AI acceleration systems, and supercomputing platforms, Nvidia is OpenAI's most natural compute partner.

At the same time, Nvidia faces a growing trend of AI companies developing their own chips, and the continued expansion of OpenAI and partners such as Microsoft, Oracle, and SoftBank at the AI hardware level poses potential supply-chain risks. Against this backdrop, binding itself deeply to OpenAI through investment lets Nvidia both stabilize a core customer relationship and rebalance industrial initiative, becoming a capital participant embedded in the middle and upper reaches of the AI technology stack.

In terms of cost, public estimates put the overall cost of building a 1GW-class data center at between US$50 billion and US$60 billion, of which approximately US$35 billion would go to purchasing NVIDIA GPUs and systems. By that estimate, the overall investment in the 10GW project will reach hundreds of billions of dollars, and Nvidia's US$100 billion investment commitment will be injected in stages as the project advances.
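As a rough back-of-the-envelope check of these figures (an illustrative sketch only, using the per-GW cost range, NVIDIA hardware share, and GPU counts cited above; none of this reflects confirmed deal terms), the scaling works out as follows:

```python
# Illustrative arithmetic based on the public estimates cited in this article.

GW_TOTAL = 10                      # planned total capacity (GW)
COST_PER_GW = (50e9, 60e9)         # estimated build cost per 1GW data center (USD)
NVIDIA_SHARE_PER_GW = 35e9         # estimated spend on NVIDIA GPUs/systems per GW (USD)
GPUS_TOTAL = (4e6, 5e6)            # estimated GPUs across the full 10GW build-out

total_cost = tuple(c * GW_TOTAL for c in COST_PER_GW)
nvidia_hardware_spend = NVIDIA_SHARE_PER_GW * GW_TOTAL
gpus_per_gw = tuple(g / GW_TOTAL for g in GPUS_TOTAL)

print(f"Total 10GW build cost: ${total_cost[0]/1e9:.0f}B - ${total_cost[1]/1e9:.0f}B")
print(f"Implied spend on NVIDIA hardware: ${nvidia_hardware_spend/1e9:.0f}B")
print(f"GPUs per 1GW site: {gpus_per_gw[0]:,.0f} - {gpus_per_gw[1]:,.0f}")
```

On these assumptions the 10GW project implies a US$500-600 billion total build cost and roughly US$350 billion of NVIDIA hardware purchases, which is why the US$100 billion commitment covers only part of the overall investment.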

Nvidia CEO Jensen Huang emphasized that this cooperation marks another leap in the two companies' decade-long collaboration. OpenAI began working in depth with Nvidia as early as its first-generation AI models; from the original DGX supercomputing platform to today's 10GW-scale infrastructure plans, the relationship between the two parties has remained at the forefront of the industry.

After the news was released, the capital market responded quickly and positively. Nvidia's share price rebounded after an early dip, rising more than 4.5% around midday and closing up nearly 4% at a record high. Analysts note that the two giants' "investment-procurement-binding" model effectively builds a highly collaborative, closed-loop business ecosystem: NVIDIA shifts from chip supplier to system-level platform provider for the AI computing economy, while OpenAI gains the most advanced hardware support without the technical and operational risks of building its own supply chain.

It is worth noting that Nvidia has been making frequent moves this year: it invested US$5 billion in Intel for AI chip cooperation, put nearly US$700 million into British data center startup Nscale, and spent US$900 million to acquire the team and technology of key AI networking company Enfabrica.

David Bader, director of the Data Science Institute at the New Jersey Institute of Technology, pointed out that this cooperation comes at a critical stage in the transition of global AI infrastructure from scattered deployments to centralized construction. Behind it lies not only a matching of supply and demand but also a push toward vertical integration across the entire AI value chain. From chips and systems to deployment and commercial applications, NVIDIA is transforming from an underlying hardware manufacturer into a co-builder of the whole AI ecosystem, while OpenAI is moving toward becoming a super-platform that integrates model development, product operations, and infrastructure.

Both Nvidia and OpenAI said they will announce more details of the cooperation in the coming weeks. That also means several variables remain to be resolved, including regulatory approval, the pace of project implementation, the participation of other partners, and whether OpenAI will simultaneously pursue alternatives such as self-developed chips.



Disclaimer: The views in this article are from the original Creator and do not represent the views or position of Hawk Insight. The content of the article is for reference, communication and learning only, and does not constitute investment advice. If it involves copyright issues, please contact us for deletion.