OpenAI and Microsoft plan $100 billion data center project for AI supercomputer

Microsoft and OpenAI are reportedly working on a massive data center to house an AI-focused supercomputer with millions of GPUs. The Information claims that the project could cost more than $115 billion and that the supercomputer, codenamed Stargate, will be located in the United States.

The report says Microsoft will foot the bill for the data center, which could be “100 times more expensive” than some of the largest data centers in operation today. Stargate would be the largest in a series of data center projects the two companies hope to build over the next six years, with executives aiming to have it up and running by 2028. OpenAI and Microsoft are building these supercomputers in phases, the report says, and Stargate will be a phase 5 system. The phase 4 system will cost less and could be operational as early as 2026 in Mount Pleasant, Wisconsin.

The system may require so much power that Microsoft and OpenAI are considering alternative energy sources such as nuclear power. Sources believe building a data center of this scale will be challenging, in part because current designs call for packing far more GPUs into a single rack than is typical today to improve chip efficiency and performance, which in turn requires developing new ways of cooling the equipment.

The companies are also reportedly using this design phase to reduce their dependence on Nvidia hardware.

Given that there is no concrete plan yet, much could still change. The Information also notes that it has not yet been decided exactly where the computer will be sited, or whether it will be built as a single data center or as multiple data centers located close to one another.

Earlier this year, it was reported that OpenAI CEO Sam Altman plans to create his own AI chips and is looking to raise up to $7 trillion to build factories to manufacture them. Last year, Microsoft introduced its 128-core Arm processor for data centers and its Maia 100 accelerator, designed specifically for AI workloads. There have also been reports that Microsoft is developing its own networking equipment for AI data centers.

With AI still gaining momentum, Nvidia GPUs are in high demand, so it makes sense that companies like Microsoft and OpenAI want alternative hardware options, especially given the tensions around Taiwan, where Nvidia's chips are manufactured.