Every leap in artificial intelligence brings us closer to a physical ceiling. The smarter our models become, the more power they demand, and the infrastructure required to sustain that demand is already under strain.
In Northern Virginia, home to the world’s largest cluster of data centers and often called “Data Center Alley,” AI projects are already being delayed. Not for lack of GPUs or skilled professionals, but because the power grid is nearing its breaking point.
According to a North American Electric Reliability Corporation (NERC) report, on July 10, 2024, a lightning arrester, a device that protects transmission lines from power surges, failed on a 230 kV line in the Eastern Interconnection. The resulting fault caused roughly 60 data centers, together drawing about 1,500 megawatts of power, to drop offline simultaneously, and the sudden loss of that much load pushed the grid toward failure.
Emergency systems barely managed to prevent a full-scale blackout. But the message was clear: This wasn’t a fluke. It was a warning.
As Brad Smith, Vice Chair and President of Microsoft, put it:
“America’s advanced economy relies on 50-year-old infrastructure that cannot meet the increasing electricity demands driven by AI.”
On June 24, 2025, Deloitte published a forecast that highlighted the urgency: by 2035, AI data centers in the U.S. could demand more than 123 gigawatts of electricity, a 30-fold increase compared to just 4 gigawatts today. That’s more power than some industrialized nations consume in total.
"These are major investments that require planning well beyond just a few years.”
Kelly Marchese, Principal, Deloitte Consulting LLP
If we can’t power the next wave of intelligence, we can’t deploy it. And for regions like Northern Virginia, that reality is no longer in the future. It’s already here.
From Models to Megawatts: Why AI’s Appetite Is Different
Unlike traditional IT workloads, which are often scattered and relatively lightweight, AI workloads are dense, continuous, and increasingly pervasive, redefining what our physical infrastructure needs to look like.
Training a single large language model like GPT-4 or Gemini can consume millions of kilowatt-hours of electricity. But the real shift begins once these models are deployed and used at scale. Every time AI powers a chatbot, a fraud detection engine, a copilot, or a real-time recommendation system, it adds a new layer of persistent, high-intensity compute demand. These are not occasional queries. They are constant, 24/7 workloads.
At Computex 2025, Nvidia CEO Jensen Huang captured the sense of change in a single sentence:
“We’re no longer building servers. We’re building AI factories.”
It’s not a mere metaphor. These facilities are often stacked with thousands of high-performance GPUs like Nvidia’s H100 or AMD’s MI300X, and each GPU alone draws 700 to 800 watts under sustained load.
Now multiply that across a row, or an entire data hall, and a modern AI deployment can exceed 100 megawatts of capacity. That’s roughly the power demand of a medium-sized manufacturing facility or a steel plant.
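To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Only the per-GPU wattage comes from the figures above; the server overhead and facility overhead multipliers are illustrative assumptions, not data from any specific deployment:

```python
# Back-of-envelope estimate of an AI data hall's total power draw.
# Only the per-GPU wattage comes from the text above; the server
# overhead and PUE multipliers are illustrative assumptions.

GPU_WATTS = 750         # sustained draw per H100/MI300X-class GPU
SERVER_OVERHEAD = 1.3   # CPUs, memory, NICs, fans (assumed multiplier)
PUE = 1.4               # cooling and facility overhead (assumed)

def facility_megawatts(num_gpus: int) -> float:
    """Estimate total facility power in megawatts for a GPU fleet."""
    it_load_watts = num_gpus * GPU_WATTS * SERVER_OVERHEAD
    return it_load_watts * PUE / 1_000_000

# A fleet of 100,000 GPUs lands well past the 100 MW mark:
print(f"{facility_megawatts(100_000):.0f} MW")  # roughly 136 MW
```

Even with conservative assumptions, a fleet of tens of thousands of accelerators pushes a single deployment past the capacity of many regional substations.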
But computing is only half the story. The other half is heat. AI hardware runs hot: high utilization rates, dense chip packaging, and prolonged inference loads generate enormous thermal pressure.
According to the U.S. Department of Energy, 30–40% of a typical data center’s energy usage goes into cooling systems, and that number climbs even higher in AI-focused environments.
Traditional air cooling systems are already reaching their limits. Liquid and immersion cooling promise better thermal efficiency, but they introduce added costs, complexity, and infrastructure challenges, especially in legacy environments.
This is no longer a niche concern for hyperscalers. As AI adoption spreads across financial services, healthcare, logistics, and energy sectors, scaling AI is not just a software decision; it’s a real estate, electricity, and thermodynamics decision too.
The Grid Isn’t Ready, and Time Isn’t on Our Side
If AI is the engine of digital transformation, then electricity is the fuel that powers it. And right now, the tank is far from full.
According to the U.S. Department of Energy’s “Grid 2030” vision, the United States operates one of the world’s most congested, aging, and complex power grids, much of which was designed in the mid-20th century for a centralized, predictable energy model.
It wasn’t built to support dozens of 100+ MW data centers clustered in fast-growing tech corridors. Nor was it built to flexibly accommodate AI workloads that can double or even triple a region’s power demand in just a few years.
As Deloitte highlights, the gap between AI’s energy demands and existing infrastructure is reaching a breaking point. With AI data center power consumption projected to surge from 4 GW to over 123 GW by 2035, utilities, municipalities, and data center developers are working to adapt, but not fast enough.
Addressing the issue, Martin Stansbury, a principal in Deloitte & Touche LLP’s US Energy practice, said, “There is an opportunity in infrastructure development to support the national strategic priorities of AI and energy dominance.”
These challenges do vary by region, but the pattern is clear. Back in July 2022, in Northern Virginia, Dominion Energy, an integrated energy utility, warned that it might not be able to support new data center projects in Loudoun County due to transmission capacity issues.
In markets like Georgia and Ohio, where data center load has surged from 100 MW to 600 MW and is projected to reach 5,000 MW by 2030, major tech companies have faced multi-year delays just to get grid access approved for new AI campuses.
At the same time, transmission infrastructure, which is vital for powering these facilities, faces years-long delays. High-voltage lines require federal, state, and local approvals, a process that can stretch over 7 to 10 years.
The result is a growing disconnect: AI projects are being greenlit in cloud roadmaps and capital budgets but stalled at the infrastructure layer.
This isn’t just an American problem. In Dublin, one of Europe’s major data center hubs, regulators have imposed a moratorium on new data center connections, citing the rising strain on the electricity grid.
Even Frankfurt, long considered a stable hyperscale hub, is now under pressure from both city planners and regulators to slow data center expansion.
All of this points to a single, unavoidable truth: the limiting factor for AI may no longer be innovation; it’s infrastructure.
Innovation Under Pressure: Rewiring the Stack to Survive
AI isn’t just pushing the limits of software. It’s forcing the entire digital infrastructure stack to evolve rapidly under pressure.
To navigate the crunch, companies are getting creative. Some, like Crusoe Energy Systems, are deploying modular data centers near stranded energy sources, such as remote hydropower or oil and gas sites where excess gas is flared.
Others are investing in on-site generation, co-locating with solar and battery farms to bypass overloaded regional grids, from Google’s Mesa campus, backed by a 260 MW solar farm and a 1 GWh battery system, to Meta’s large-scale renewable energy deals totaling 1,800 MW.
With projects like Fermi America’s “Hypergrid”, combining nuclear, solar, and gas into an integrated compute-energy complex, companies are increasingly building their own energy sources.
These efforts help them avoid overloaded regional grids, secure reliable power, and keep up with AI’s rising energy demands.
But energy isn’t just about supply; it’s also about how efficiently it’s used. And that’s pushing the conversation toward cooling systems. In traditional facilities, air cooling alone can’t keep up with the heat AI hardware dissipates. Hyperscalers and industry giants are now experimenting with liquid and immersion cooling systems, which can reduce power usage and physical footprint but come with added complexity in design, operations, and capital cost.
In power-constrained regions like Singapore, these shifts are already visible. With land and energy limits forcing a pause on new data center builds, operators are now required to meet strict energy-efficiency standards, specifically a Power Usage Effectiveness (PUE) of 1.3 or lower, before receiving deployment approval.
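For context, PUE is simply the ratio of total facility power to the power that actually reaches IT equipment. A quick sketch, using the DOE’s 30–40% cooling figure cited earlier and otherwise illustrative numbers, shows why a 1.3 cap is demanding:

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt reaches compute; real facilities
# also spend power on cooling, conversion losses, and lighting.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# If cooling and overhead take ~35% of total draw (the midpoint of
# the DOE's 30-40% range), only 650 of every 1,000 kW reach IT gear:
print(round(pue(1000, 650), 2))   # 1.54 -- far above Singapore's cap

# A 1.3 cap means holding overhead to roughly 23% of total draw:
print(round(pue(1000, 770), 2))   # ~1.3
```

That gap is a big part of why operators in constrained markets are turning to liquid cooling: it attacks the overhead term of the ratio directly.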
Meanwhile, space-based data centers, once a sci-fi concept, are now in early development. Startups like Lonestar and Thales Alenia Space are exploring ways to use radiative cooling in orbit and uninterrupted solar power, bypassing terrestrial limitations entirely.
But not all innovation is limited to the physical layer. AI itself may offer part of the solution. Chipmakers like Nvidia and AMD are racing to increase performance per watt, while hyperscalers optimize model architectures and hardware scheduling to reduce energy overhead at scale.
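A toy comparison, with invented numbers rather than benchmarks of any real chip, shows why performance per watt, not raw throughput, is becoming the metric that matters in power-capped facilities:

```python
# In a power-constrained facility, efficiency gains translate directly
# into capacity gains. All figures below are invented for illustration.

def tokens_per_joule(tokens_per_sec: float, watts: float) -> float:
    """Work delivered per unit of energy, which a fixed power budget rewards."""
    return tokens_per_sec / watts

current_gen = tokens_per_joule(10_000, 700)   # ~14.3 tokens per joule
next_gen = tokens_per_joule(16_000, 800)      # 20.0 tokens per joule

# Under the same fixed megawatt budget, the efficiency ratio is the
# capacity ratio: ~40% more tokens served from the same grid hookup.
print(round(next_gen / current_gen, 2))   # 1.4
```

In other words, a chip that is 40% more efficient lets the same grid connection serve 40% more inference, which is why the per-watt race matters as much as the raw-speed race.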
Ultimately, the companies that will thrive in the AI decade aren’t just the ones with the smartest models; they’re the ones with the most resilient infrastructure strategies.
Final Thoughts: Intelligence Needs Infrastructure
AI is no longer just a software revolution. It’s a physical transformation, one that’s pushing the boundaries of power, cooling, land use, and regulatory timelines across the globe.
The Deloitte forecast isn’t a distant warning; it’s a present-day wake-up call. A 30× increase in power demand by 2035 doesn’t just challenge utilities and cloud providers. It challenges every enterprise, every city, and every boardroom that’s investing in AI to drive growth, productivity, and competitive advantage.
If the next decade of intelligence is going to deliver on its promise, smarter algorithms or better chips won’t be enough.
We’ll need to build a physical world that can support them efficiently, reliably, and sustainably.
The question isn’t whether AI will scale.
It’s whether everything else can keep up.