
Scholars argue that energy disclosures increase efficiency and competition without slowing AI development.
Consumers frequently consider energy efficiency data in their decision making, yet accurate information on the electricity used by publicly available artificial intelligence (AI) models remains elusive. Some industry advocates claim that AI’s potential to increase workplace efficiency and solve environmental issues means that development should be prioritized over regulation, including required disclosures on energy use.
In a forthcoming article, Michael P. Vandenbergh, Ethan Thorpe, and Jonathan Gilligan, three scholars at Vanderbilt University, challenge the assumption that energy disclosures would slow AI development. They argue that increased transparency would inform discerning consumers, create a more competitive market, and make data centers and AI programs more efficient, decreasing energy demands and benefiting the environment in the process. And they contend that, despite the current deregulatory trend, voluntary disclosures could move the industry toward transparency in ways that benefit AI developers themselves.
AI computations use processing power from data centers that require large amounts of water for cooling and electricity for operation. Vandenbergh, Thorpe, and Gilligan predict that these demands will only continue to grow. A small one-megawatt data center annually draws about 7 million gallons of water for direct cooling and indirectly requires approximately 10 million gallons to cool the power plants that supply it with electricity. In 2023, U.S. data centers consumed 176 terawatt hours of electricity—enough to power 16.3 million homes—resulting in 61 million metric tons of carbon dioxide emissions.
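As a rough sanity check on these figures (a back-of-envelope sketch, not a calculation from the article itself), 176 terawatt hours spread across 16.3 million homes works out to roughly 10,800 kilowatt hours per home per year, consistent with typical U.S. household consumption, and 61 million metric tons of carbon dioxide over that much electricity implies a grid intensity of about 0.35 kilograms of CO2 per kilowatt hour:

```python
# Back-of-envelope check of the data center figures cited above.
# Illustrative arithmetic only; the inputs come from the summary, not new data.

KWH_PER_TWH = 1e9  # kilowatt-hours in one terawatt-hour

electricity_kwh = 176 * KWH_PER_TWH  # 2023 U.S. data center consumption
homes = 16.3e6                       # homes that much electricity could power
co2_metric_tons = 61e6               # associated CO2 emissions

kwh_per_home = electricity_kwh / homes
kg_co2_per_kwh = (co2_metric_tons * 1000) / electricity_kwh

print(f"{kwh_per_home:,.0f} kWh per home per year")  # ~10,798
print(f"{kg_co2_per_kwh:.2f} kg CO2 per kWh")        # ~0.35
```

Both implied values fall within the range of typical U.S. residential usage and grid carbon intensity, which suggests the cited figures are internally consistent.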
Vandenbergh, Thorpe, and Gilligan cite estimates that AI is currently responsible for approximately 15 percent of the total demand on data centers, but they also predict that this share could triple by 2028. They discuss advances such as multi-step "reasoning" models, AI chatbots, and image and video generation, all of which increase energy consumption.
Although significant uncertainties exist surrounding AI’s total power consumption, Vandenbergh, Thorpe, and Gilligan describe why even less is known about the energy required to perform individual AI tasks. They explain that underlying this uncertainty is an evolving web of variables—many of them proprietary to each AI model—including the program code, the chip executing the code, the data center hosting the process, and the overall size and complexity of the model. And all of these variables affect energy efficiency. They state that rapid advances in technology may also render research on energy consumption obsolete. Proprietary algorithms that route requests to specialized models and highly efficient chips that perform increasingly complex computations greatly affect energy consumption.
Looking beyond existing research, Vandenbergh, Thorpe, and Gilligan also compare the results obtained from four AI footprint calculators—programs that estimate the energy used for a given task—but find them to be highly unreliable. Despite controlling input variables and standardizing output, they observe that footprint calculators vastly differ in their estimates, with the highest energy estimate being 58 times greater than the lowest for the same AI model. Although these tools are available to consumers and policymakers trying to understand AI energy consumption, Vandenbergh, Thorpe, and Gilligan conclude that the inability to validate estimates renders AI footprint calculators unlikely to compensate for a lack of industry transparency.
Vandenbergh, Thorpe, and Gilligan also question the claims of those who argue against disclosure regulations, whether on the ground that the technology's utility outweighs even the most severe environmental consequences or that gains in workplace efficiency produce a net benefit for the environment. Vandenbergh, Thorpe, and Gilligan find it plausible that certain tasks, such as coding, are more efficient when performed by AI rather than by employees. But they also note that without information on actual energy consumption, it is impossible to know whether a programmer coding on a desktop in an air-conditioned office uses more electricity than an AI model tasked with the same job.
The authors contend that transparency “need not become a barrier to rapid development” and point to likely benefits for consumers, industry, and the environment. Publishing data on energy use and environmental impacts would allow conscientious consumers to choose models that reflect their preferences, and would provide some companies with a comparative advantage, making the AI market more competitive overall. Vandenbergh, Thorpe, and Gilligan explain that helping consumers understand the energy tradeoffs between different models could also help them decide between large general-purpose AI programs and smaller task-specific ones.
Public disclosures and consumer selectivity would also create benefits for data center operators, who could process the same number of queries at a reduced energy cost. Vandenbergh, Thorpe, and Gilligan predict that savings could be passed on to companies that prioritize developing efficient AI models. And this process would reduce the energy required and the carbon emitted by AI, lessening its environmental impact.
Vandenbergh, Thorpe, and Gilligan concede that in the current “deregulatory era,” transparency is unlikely to result from government pressure. President Donald J. Trump’s AI Action Plan calls for broad deregulation and reflects the hesitancy of lawmakers and administrators to impose restrictions on a lucrative and cutting-edge industry. But given the huge variability in energy use across different AI models, Vandenbergh, Thorpe, and Gilligan remain cautiously optimistic that some developers may have enough incentives to make disclosures on their own, potentially prompting an industry shift.
Vandenbergh, Thorpe, and Gilligan conclude that greater energy transparency is “necessary to accelerate the net contributions of AI,” regardless of whether it results from government regulation, consumer demand, or industry self-interest. They argue that transparency “can align the needs of users, model developers, and data center operators” all while reducing AI’s sizeable environmental footprint.


