Driven by AI, data centers are multiplying, with daunting forecasts for electricity use ahead. A new wave of efficiency improvement is needed to accompany this growth in computing.
Data centers are now the largest source of load growth for many utilities, contributing to rising electricity costs. These facilities accounted for approximately 4% of U.S. electricity sales in 2023 (the latest available data), with projections ranging from 6.7% to 12.0% by 2028. About 70% of the projected growth in data center loads is expected to come from hyperscale facilities (those deploying services at massive scale, including AI).
Achieving higher levels of energy efficiency is critical for preventing power outages and containing rising power costs. Opportunities to improve efficiency include optimizing chip design in servers, refining software and algorithms, implementing heat recovery systems, and enhancing cooling and electrical systems. Our new white paper, available today, examines the numerous opportunities to improve efficiency and leverage demand flexibility.
Data center efficiency is commonly measured using a metric called power usage effectiveness (PUE), the ratio of a facility's total energy use to the energy used by its IT equipment. PUE captures auxiliary systems, such as cooling, but not the server component of data centers, and it has helped drive substantial improvements in data center cooling and power system efficiency. To address server energy use, new metrics are needed that capture all data center energy use in relation to the computations performed, particularly for AI. One potential pathway is a new metric developed by Google called compute carbon intensity (CCI), which uses data on electricity use, the computational work conducted (measured in floating point operations—FLOPs), and emissions per kWh. Such a metric could be used internally by data centers to improve design and performance, and externally to require or reward good performance.
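To make the two metrics concrete, here is a minimal sketch in Python. The PUE ratio follows the standard definition (total facility energy over IT equipment energy); the CCI-style function is an illustrative assumption about how carbon per unit of compute might be calculated from the three inputs named above, not Google's published methodology, and all numbers are hypothetical.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (zero overhead for cooling, power
    distribution, lighting, etc.).
    """
    return total_facility_kwh / it_equipment_kwh


def cci_style_metric(total_facility_kwh: float,
                     work_flops: float,
                     grid_kg_co2_per_kwh: float) -> float:
    """Carbon emitted per unit of useful computation (kg CO2 per petaFLOP).

    Illustrative only -- the unit choice and scope here are assumptions,
    not Google's published CCI definition.
    """
    emissions_kg = total_facility_kwh * grid_kg_co2_per_kwh
    petaflops_of_work = work_flops / 1e15
    return emissions_kg / petaflops_of_work


# Hypothetical facility: 12 GWh total, 10 GWh to IT equipment.
print(round(pue(12_000_000, 10_000_000), 2))  # 1.2
```

A facility that lowers only its PUE can still waste energy in inefficient servers; a carbon-per-compute metric like the second function would reward improvements anywhere in the stack, which is the point the paragraph above makes.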
Complementing energy efficiency are opportunities to reduce loads during periods of peak electric demand using demand flexibility. A study from Duke University researchers estimates that curtailing data center loads for just 0.25% of their uptime would free up enough capacity to accommodate 76 GW of new load (a large nuclear plant is about 1 GW). In one test, for example, a new software platform reduced an Oracle data center's power consumption by 25% during peak grid demand hours.
Utilities and states could implement policies to help spur efficiency and demand flexibility, such as creating energy or other targets (mandatory, voluntary, or tied to electric rates) and creating programs that provide incentives for energy-saving improvements.
To mitigate the impact of new data centers on energy demand and costs, power system planners would be wise to review new facility proposals and weight each by its likelihood of being built. Many new data centers have been proposed, but developers typically seek permitting and interconnection for many more sites than they expect to build. Several experts estimate that only 10–20% of proposed data centers will actually be built; others estimate a somewhat higher percentage.
Given the rapid construction of new data centers to support aggressive AI plans, data center energy use is expected to grow substantially in the 2020s. However, if efficiency and demand flexibility are employed, this growth rate can be slowed in the 2030s. We’ve seen a version of this before: Strong growth in data center energy use from 2000 to 2010 was followed by much slower growth in the 2010s due in significant part to more efficient IT devices, improved system software, and more efficient cooling systems. A new wave of efficiency improvements is not only possible but a must.