AI’s Bottleneck Is Power. The US and China Feel It Differently.
The US needs more generation. But China needs more efficiency.
Recently I shared a Macquarie estimate on X (x.com/ruima/status/1989…) that captures the scale difference in U.S.–China AI power capacity planning:
Current forecasts through 2030 suggest that China will only need AI-related power equal to 1–5% of the power it added over the past five years, while for the U.S. that figure is 50–70%.
In other words, China has already added at least twenty times the power its AI sector is expected to require over the next five years; the U.S., by contrast, must expand capacity along a dramatically steeper slope.
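The "at least twenty times" figure follows directly from the upper end of Macquarie's range. A quick sanity check, taking AI demand at 5% of the capacity added over the past five years (the worst case for China in the 1–5% range):

```latex
\text{AI need} \le 0.05 \times \text{capacity added}
\;\Rightarrow\;
\text{capacity added} \ge \frac{1}{0.05} \times \text{AI need} = 20 \times \text{AI need}
```

At the 1% lower bound, the multiple rises to one hundred times; twenty is simply the conservative floor.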
This aligns with a point I made back in August—which unexpectedly went viral—that China has effectively “solved” the domestic AI power problem, at least for the near term. And the broader industry conversation has only moved further in that direction. As Microsoft CEO Satya Nadella (techspot.com/news/11011…) and others have noted, the real bottleneck for AI is increasingly electricity and energized data center capacity, not the number of GPUs a company has on order.
Still, the underlying story is more nuanced than any single chart or quote suggests. The piece below—translated from our friends at Weijin Research—offers a useful, grounded look at how the U.S. and China are encountering very different power constraints as AI scales.
I’m keeping their article intact below. For those who want the key points, here is the summary.
TL;DR: Clear Power Gap Between US and China
The power gap is material: in 2023 the U.S. added about 51 GW of new capacity, while China added 429 GW—more than eight times as much.
China already generates over 9,000 TWh of electricity annually, more than double U.S. generation. For large-scale AI, physical power availability—not chip supply—is becoming the primary constraint.
To read the whole post, please visit