Amid the rapid development of artificial intelligence, the "Stargate" project jointly launched by OpenAI, SoftBank, and Oracle has become the focus of global attention. The project plans to invest up to $500 billion over the next four years to build multiple hyperscale data centers to meet the growing demand for AI computing power. The first data center will be located in Abilene, Texas, and is expected to deploy 64,000 NVIDIA GB200 AI accelerator chips by the end of 2026.
This huge investment and computing power deployment not only demonstrates the firm confidence of American technology giants in the future of AI, but also marks the official start of a global AI computing power arms race. As the computing demands of AI models keep climbing, enterprises are accelerating their deployments in an effort to seize the initiative in this new round of competition. China Exportsemi interprets the project below from a technical point of view.
1. A striking display of project scale and technical strength
The scale of the Stargate project is unprecedented. Even the first batch of 16,000 GB200 chips for the Abilene data center could cost on the order of a billion dollars to procure. Nvidia has not disclosed specific pricing for the GB200, but the unit price of its slightly less powerful B200 chip is estimated at between $30,000 and $40,000, so the GB200 is likely to cost more. On that basis, the full deployment of 64,000 chips implies several billion dollars for the chips alone, a staggering level of hardware investment.
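As a sanity check on those figures, the procurement math can be sketched in a few lines. The price band below is an assumption extrapolated from the B200 estimate, not disclosed GB200 pricing:

```python
# Back-of-envelope procurement cost for the Abilene deployment.
# Unit prices are public estimates, not NVIDIA's disclosed pricing.
def deployment_cost(num_chips: int, unit_price_usd: int) -> int:
    """Total accelerator hardware cost for a fleet of identical chips."""
    return num_chips * unit_price_usd

# Assumed price band: $30k (low B200 estimate) to $70k (a plausible
# GB200 premium -- an assumption, since GB200 pricing is undisclosed).
low = deployment_cost(64_000, 30_000)
high = deployment_cost(64_000, 70_000)
print(f"Estimated chip spend: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")
# prints: Estimated chip spend: $1.92B to $4.48B
```

Even at the upper bound, the chip bill lands in the low single-digit billions, which suggests the bulk of the $500 billion budget is earmarked for facilities, power, networking, and later phases rather than the accelerators themselves.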
The chips will be deployed in phases to ensure infrastructure stability and optimal operation. The first 16,000 GB200 chips are scheduled for installation in the summer of 2025, with the fleet gradually expanding to the full 64,000. This phased construction strategy allows the build-out to be adjusted flexibly as AI computing needs change, while also reducing the technical and financial pressure of the initial deployment.
It is worth noting the cooperation model between OpenAI and Oracle in this project. Oracle will be responsible for procuring, operating, and managing the supercomputer infrastructure, while OpenAI will focus on AI model development and on data center design and optimization. This division of labor lets each party play to its strengths, improves overall operational efficiency, and helps ensure that computing resources can fully meet the needs of future AI models.
In addition, the location of the Abilene data center was carefully chosen. The region offers a stable power supply and relatively low energy costs, and its proximity to major technology hubs supports low-latency access to the data center. This site-selection strategy will provide strong support for the future expansion of AI computing power.
Figure: A $500 billion investment and 64,000 chips: "Stargate" opens the AI computing power arms race
2. The fierce competition in the arms race of AI computing power
The rapid development of AI models has made the demand for high-performance computing (HPC) resources higher than ever. In response to this demand for computing power, major tech giants have accelerated their investment in AI infrastructure. The Stargate project is not an isolated case, but part of an arms race for AI computing power, and technology companies around the world are actively increasing their efforts.
1. Elon Musk's xAI: A $5 billion deal has been reached with Dell Technologies to build a supercomputing center in Memphis dedicated to training large AI models.
2. Meta: Announced plans to deploy computing power equivalent to 600,000 NVIDIA H100 chips by the end of 2024 to support its AI applications.
3. CoreWeave: The cloud service provider focused on AI computing has deployed more than 250,000 NVIDIA GPUs in 32 data centers to support AI training and inference tasks.
4. Microsoft and Google are also actively expanding their AI computing infrastructure to support the computing power needs of AI platforms such as Azure AI and Google DeepMind.
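To put these fleet sizes on a common footing, one can roughly compare aggregate throughput. The per-chip figures below are approximate dense-BF16 numbers used purely as assumptions for a back-of-envelope comparison, not official benchmarks:

```python
# Rough fleet-level compute comparison in dense BF16 throughput.
# Per-chip PFLOPS values are approximate public figures, treated
# here as assumptions for illustration only.
H100_PFLOPS = 0.99               # H100 SXM, dense BF16 (approx.)
B200_PFLOPS = 2.25               # one Blackwell GPU, dense BF16 (assumed)
GB200_PFLOPS = 2 * B200_PFLOPS   # a GB200 superchip pairs two Blackwell GPUs

meta_fleet = 600_000 * H100_PFLOPS       # Meta's H100-equivalent target
stargate_fleet = 64_000 * GB200_PFLOPS   # Abilene at full build-out

print(f"Meta target   : ~{meta_fleet / 1_000:.0f} EFLOPS dense BF16")
print(f"Abilene fleet : ~{stargate_fleet / 1_000:.0f} EFLOPS dense BF16")
```

On these assumptions, the single 64,000-chip Abilene site approaches half of Meta's announced H100-equivalent target, which is exactly what makes the arms-race framing apt.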
From the above cases, it can be seen that the competition for AI computing power has become the core battlefield of global technology competition. Not only do companies need to invest heavily in hardware procurement, but they also need to optimize their software architecture and improve energy efficiency to reduce long-term operating costs.
However, this fierce competition also brings new challenges:
1. Return on investment cycle – Tens of billions of dollars are being poured into computing power, but the commercial monetization path for AI applications is still being explored, and whether long-term profitability can be achieved remains an open question.
2. Technology iteration risk – AI computing architectures are evolving rapidly; today's GB200 may soon be superseded by newer chips, so ensuring that the investment does not become obsolete is a problem every enterprise must weigh.
3. Energy consumption and sustainability – AI data centers consume a lot of power, and how to reduce their carbon footprint while pursuing performance has become a common challenge for major enterprises.
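The energy point can be made concrete with a rough estimate. The per-module power draw and PUE value below are assumptions, not disclosed project figures:

```python
# Illustrative power and energy estimate for the Abilene fleet.
# Per-module draw and PUE are assumptions, not project disclosures.
CHIP_POWER_KW = 2.7   # assumed draw per GB200 superchip (1 Grace + 2 Blackwell)
PUE = 1.2             # assumed power usage effectiveness (cooling, overhead)
NUM_CHIPS = 64_000

it_load_mw = NUM_CHIPS * CHIP_POWER_KW / 1_000   # IT load in megawatts
facility_mw = it_load_mw * PUE                   # total facility draw
annual_twh = facility_mw * 8_760 / 1_000_000     # MW x hours/year -> TWh

print(f"IT load       : {it_load_mw:.0f} MW")
print(f"Facility draw : {facility_mw:.0f} MW")
print(f"Annual energy : {annual_twh:.2f} TWh")
```

On this estimate, a continuous draw on the order of 200 MW, comparable to a mid-sized city's, illustrates why power availability and cost dominate site selection for AI data centers.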
3. The far-reaching impact of AI infrastructure construction
Stargate is not only a data center project, but also the prototype of a global AI infrastructure ecosystem. According to the plan, the project will build data centers in regions including Pennsylvania, Wisconsin, Oregon, and Salt Lake City, Utah. This multi-site strategy helps to:
1. Diversify risk – Avoid a single data center being impacted by disasters or policy factors.
2. Optimize resource allocation – Optimize investments based on power, network, and land costs in different regions.
3. Reduce data latency – Ensure faster AI computing services for customers around the world.
From a technical point of view, the "Stargate" project not only represents the most cutting-edge trend in the development of AI computing, but also reveals the future competitive direction of global technology companies. Potential impacts include:
1. Advancing AI models: More computing power will support more complex and intelligent AI applications, such as more advanced natural language processing, autonomous driving systems, simulation technologies, and more.
2. Promote the upgrading of the semiconductor industry: The surge in demand for AI chips will accelerate the technological breakthroughs of global semiconductor companies and promote the continuous optimization of the supply chain.
3. Promote the deep integration of cloud computing and AI: In the future, cloud computing platforms will be more AI-driven, forming a more powerful computing service ecosystem.
Conclusion
The launch of the "Stargate" project has undoubtedly injected unprecedented impetus into the development of the AI industry. Faced with explosive demand for AI computing power, global technology companies are investing in infrastructure at an unprecedented rate in an attempt to take the lead in the new round of technological revolution. However, huge capital outlays, accelerating technological change, and energy and sustainability challenges also leave this sector's outlook uncertain.
In the future, who will win the AI computing power race? Can Stargate become a true "gateway to the future of AI"? The answer has yet to be revealed, but one thing is certain: competition in the global AI industry has only just begun.