Mark Zuckerberg: AI Data Center GPU Shortage Is Easing, Future Bottleneck Is Power Supply

TapTechNews May 13th news, Meta CEO Mark Zuckerberg recently said in an interview on the Dwarkesh Patel YouTube channel that the GPU shortage in AI data centers is easing, and that the future bottleneck will be power supply.

Zuckerberg first noted that, for a time, even tech companies with ample funds found it difficult to buy as many AI GPUs as they needed, but this situation has now begun to ease.

Currently, the overall power consumption of a newly built single data center can reach 50 to 100 MW, or even 150 MW in some cases.

Zuckerberg believes, however, that data centers with a capacity of 1 GW will not appear any time soon, since that would be equivalent to dedicating the entire output of a nuclear power plant to AI training (TapTechNews note: for reference, a single unit of China's Hualong One nuclear power plant has a capacity of about 1.2 GW).
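As a rough back-of-envelope illustration of that comparison (the capacity factor and continuous-load figures below are illustrative assumptions, not from the interview), a sustained 1 GW draw would consume most of the annual output of a single 1.2 GW reactor:

```python
# Back-of-envelope comparison of a hypothetical 1 GW AI data center
# with one ~1.2 GW nuclear unit. All inputs are illustrative assumptions.

HOURS_PER_YEAR = 8760

datacenter_load_gw = 1.0        # assumed continuous draw of the data center
datacenter_energy_twh = datacenter_load_gw * HOURS_PER_YEAR / 1000

reactor_capacity_gw = 1.2       # approximate capacity of one Hualong One unit
reactor_capacity_factor = 0.9   # assumed typical capacity factor
reactor_energy_twh = (reactor_capacity_gw * reactor_capacity_factor
                      * HOURS_PER_YEAR / 1000)

print(f"Data center demand:  ~{datacenter_energy_twh:.1f} TWh/year")
print(f"One reactor output:  ~{reactor_energy_twh:.1f} TWh/year")
# Output: roughly 8.8 TWh/year of demand versus roughly 9.5 TWh/year of supply,
# i.e. one such data center would absorb nearly a full reactor's annual output.
```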

Zuckerberg stated that, in general, countries regulate the energy industry more strictly, which means that approvals for the supporting energy facilities required by large data centers (including power plants, substations, and transmission systems) come more slowly, and the construction of these facilities also takes longer.

The growth of AI data centers cannot sustain its current pace in the long term and will eventually hit power bottlenecks: unlike AI, the energy industry cannot turn capital investment into results quickly, and additional power supply is delivered far more slowly than the data centers themselves are built.

Digital infrastructure investment firm DigitalBridge holds a similar view, stating on a recent earnings call that it will exhaust its power quota within the next 18 to 24 months.
