Sam Altman wasn’t exaggerating when he declared at Davos that “compute is the currency of the future.” Now, OpenAI appears ready to mint that currency at planetary scale.
West Texas, July 22, 2025 – OpenAI has formally announced that its next-generation AI supercomputer, known as Stargate, will be built in partnership with Oracle, signalling a major step forward in its infrastructure ambitions. The project, which will eventually deploy more than 2 million specialized AI chips, represents one of the most expansive supercomputing builds in history and underscores the growing race to support frontier models like GPT-5 and beyond.
The upcoming Stargate I facility, the first in a potential series, is expected to come online as early as 2025, with scaling through 2028. Oracle, which has steadily become a major player in AI infrastructure via its Oracle Cloud Infrastructure (OCI), will host and manage this deployment in collaboration with OpenAI.
Projected buildout phases for Stargate I–IV (2025–2028), powered by NVIDIA GB200s and Oracle Cloud Infrastructure.
| Phase | Year | Key Milestone |
| ----- | ---- | ------------- |
| I | 2025 | Stargate I goes live (initial capacity) |
| II | 2026 | Expansion with 500K–750K additional GPUs |
| III | 2027 | Deployment of Blackwell-Next architecture |
| IV | 2028 | Full-scale rollout exceeding 2M AI chips |
“This is a big deal. OpenAI needs massive computing infrastructure, and Oracle is one of the few that can help deliver it,” said Oracle Chairman Larry Ellison during a recent earnings call.
OpenAI and Oracle’s roadmap is believed to span multiple Stargate-class facilities across the United States, forming a distributed supercomputing backbone.
While Stargate I is the first to break ground, industry sources familiar with the planning process indicate that as many as ten hyperscale sites are under consideration, strategically placed to optimize for power availability, fibre connectivity, and regional resilience.
The facility will reportedly be powered by NVIDIA's GB200 Grace Blackwell Superchips, the most powerful AI chips available to date. Unveiled at GTC 2024, each GB200 pairs a Grace CPU with two Blackwell GPUs, each of which is itself built from two reticle-limited dies. Configured into NVIDIA's rack-scale systems and SuperPODs, these platforms deliver exascale-class performance, the kind of compute OpenAI's next leap will demand.
According to NVIDIA CEO Jensen Huang, “Blackwell is the engine to power this new industrial revolution, the generative AI era.”
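To put "exascale" in context at this chip count, a rough back-of-envelope sketch helps. The per-GPU throughput and utilization figures below are illustrative assumptions chosen for scale, not disclosed specifications.

```python
# Illustrative back-of-envelope: aggregate training throughput of a very large GPU fleet.
# The per-GPU figure and utilization are assumptions for scale only, not NVIDIA specs.

PFLOPS_PER_GPU = 10          # assumed low-precision throughput per Blackwell-class GPU (petaFLOPS)
UTILIZATION = 0.4            # assumed sustained utilization during training
NUM_GPUS = 2_000_000         # chip count cited for the full Stargate buildout

sustained_pflops = PFLOPS_PER_GPU * UTILIZATION * NUM_GPUS
sustained_exaflops = sustained_pflops / 1_000   # 1 exaFLOPS = 1,000 petaFLOPS

print(f"Sustained throughput: ~{sustained_exaflops:,.0f} exaFLOPS")
# => Sustained throughput: ~8,000 exaFLOPS
```

Even with conservative assumptions, the fleet lands orders of magnitude beyond today's largest publicly described training clusters, which is the point of the buildout.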
Achieving this scale hinges on an equally ambitious supply chain. NVIDIA's GB200 chips are manufactured using advanced packaging techniques that require collaboration across TSMC, ASE, and other back-end packaging and test partners.
Scaling to millions of units will test not only wafer availability, but also high-bandwidth memory supply and liquid cooling integration at industrial scale. Analysts have warned that fulfillment bottlenecks, particularly in CoWoS packaging and AI-grade HBM modules, could become the critical path for deployment.
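Why packaging can become the critical path is easier to see with a simple throughput calculation. The figures below are purely illustrative assumptions about CoWoS allocation and packages per wafer, not reported numbers.

```python
# Illustrative back-of-envelope: how packaging capacity gates the rollout timeline.
# All figures are assumptions for illustration; actual CoWoS yields and allocations are not public here.

PACKAGES_PER_WAFER = 15        # assumed GB200-class packages per CoWoS wafer
WAFERS_PER_MONTH = 20_000      # assumed monthly CoWoS wafer allocation for this program
TARGET_CHIPS = 2_000_000       # chip count cited for the full buildout

chips_per_month = PACKAGES_PER_WAFER * WAFERS_PER_MONTH
months_to_target = TARGET_CHIPS / chips_per_month
print(f"~{chips_per_month:,} chips/month -> ~{months_to_target:.0f} months to reach 2M")
# => ~300,000 chips/month -> ~7 months to reach 2M
```

Shift either assumption down and the timeline stretches quickly, which is why analysts keep pointing at CoWoS and HBM rather than wafer starts.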
While Microsoft remains OpenAI’s exclusive cloud provider for commercial inference, including support for current GPT deployments, Stargate represents a dedicated infrastructure path for model training and experimentation.
As confirmed in OpenAI’s announcement: “Microsoft will continue to provide cloud services for OpenAI, including through Stargate.”
Interestingly, this strategic expansion includes a second partner, Japan’s SoftBank Group, which plans to invest alongside OpenAI to accelerate global AI compute capacity.
SoftBank, already active in semiconductor and AI ventures through Arm and its Vision Fund, is reportedly coordinating with CoreWeave, a U.S.-based cloud provider specializing in GPU compute, to deploy complementary facilities. CoreWeave itself has seen explosive growth, having recently raised billions from private equity to build AI-optimized data centers.
In total, the Stargate initiative is expected to create over 100,000 jobs in the U.S. across construction, data center operations, high-voltage electrical contracting, precision cooling, AI infrastructure, and engineering roles.
The sheer energy footprint of Stargate signals both the intensity of AI workloads and the scale of investment required to support them. OpenAI has laid out plans to add an estimated 4.5 gigawatts of new data center capacity, enough to match the electricity use of a mid-sized nation. A large share of this will be realized through its deepened collaboration with Oracle, which pushes the total capacity of Stargate developments beyond 5 gigawatts.
Back in January, the company joined U.S. officials at the White House to unveil a broader ambition: a $500 billion commitment aimed at delivering 10 gigawatts of AI-ready infrastructure within just four years. Such figures place Stargate not merely as a technical undertaking but as a transformative force in America’s digital and energy landscape.
Early estimates suggest that a single Stargate site at full buildout may demand between 600 MW and 1 GW of continuous power, on par with the output of a small nuclear plant. Industry insiders believe the first site will secure multi-phase energy contracts across wind, solar, and grid-connected sources to ensure resilient uptime.
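Estimates like these can be sketched with a quick calculation. The per-chip power, cooling overhead (PUE), and site chip count below are hypothetical assumptions for illustration, not figures from OpenAI or Oracle.

```python
# Illustrative back-of-envelope: facility power draw for one Stargate-class site.
# Per-chip power, PUE, and chip count are assumptions for illustration, not disclosed figures.

CHIP_POWER_KW = 1.2      # assumed power per GB200-class superchip, incl. networking share (kW)
PUE = 1.2                # assumed power usage effectiveness (cooling + overhead multiplier)
CHIPS_AT_SITE = 500_000  # hypothetical chip count for an initial site

facility_mw = CHIP_POWER_KW * CHIPS_AT_SITE * PUE / 1_000  # kW -> MW
print(f"Estimated facility draw: ~{facility_mw:,.0f} MW")
# => Estimated facility draw: ~720 MW, within the 600 MW to 1 GW range cited above
```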
While the specific location remains undisclosed, multiple industry sources, including The Information, suggest West Texas is the likely location. The region offers ideal access to energy, land, and fibre connectivity, as well as favourable permitting conditions.
Although neither OpenAI nor Oracle has officially confirmed the site, the White House's involvement indicates the project's national significance. In a closed-door roundtable earlier this year, President Trump was briefed on the compute needs of frontier AI models and was "personally supportive" of efforts to ensure America leads in AI infrastructure, according to sources familiar with the meeting.
OpenAI CEO Sam Altman has previously argued that training future models may require “energy and capital inputs at a scale unlike anything in tech’s history.” Stargate seems to embody that vision.
As he said at Davos this year, “To make AI safe and useful at planetary scale, we need infrastructure that matches the ambition, and we're building exactly that.”