Compiling the Future of U.S. Artificial Intelligence Regulation

Experts examine the benefits and pitfalls of AI regulation.

Recently, the U.S. House of Representatives, voting along party lines, passed H.R. 1, known as the “One Big Beautiful Bill Act.” If enacted, H.R. 1 would impose a ten-year moratorium on state and local regulation of artificial intelligence (AI) models and research.

Over the past several years, AI tools—from chatbots like ChatGPT and DeepSeek to sophisticated video-generating software such as Alphabet Inc.’s Veo 3—have gained widespread consumer acceptance. Approximately 40 percent of Americans use AI tools daily. These tools continue to improve rapidly, becoming more usable and useful for average consumers and corporate users alike.

Optimistic projections suggest that the continued adoption of AI could generate trillions of dollars in economic growth. Unlocking the benefits of AI, however, undoubtedly requires meaningful social and economic adjustments in the face of new employment, cybersecurity, and information-consumption patterns. Experts estimate that widespread AI implementation could displace or transform approximately 40 percent of existing jobs. Some analysts warn that without robust safety nets or reskilling programs, this displacement could exacerbate existing inequalities, particularly for low-income workers and communities of color, and widen the gap between more and less developed nations.

Given the potential for dramatic and widespread economic displacement, national and state governments, human rights watchdog groups, and labor unions increasingly support greater regulatory oversight of the emerging AI sector.

The data center infrastructure required to support current AI tools already consumes as much electricity as the world’s eleventh-largest national market, rivaling the consumption of France. Continued growth in the AI sector necessitates ever-greater electricity generation and storage capacity, creating significant potential for environmental impact. In addition to electricity use, AI development consumes large amounts of water for cooling, raising further sustainability concerns in water-scarce regions.

Industry insiders and critics alike note that overly broad training parameters and flawed or unrepresentative data can lead models to embed harmful stereotypes and mimic human biases. These biases lead critics to call for strict regulation of AI implementation in policing, national security, and other policy contexts.

Polling shows that American voters desire more regulation of AI companies, including limiting the training data AI models can employ, imposing environmental-impact taxes on AI companies, and banning AI implementation outright in some sectors of the economy.

Nonetheless, there is little consensus among academics, industry insiders, and legislators as to whether—much less how—the emerging AI sector should be regulated.

In this week’s Saturday Seminar, scholars discuss the need for AI regulation and the benefits and drawbacks of centralized federal oversight.

  • In an article in the Stanford Emerging Technology Review 2025, Fei-Fei Li, Christopher Manning, and Anka Reuel of Stanford University argue that federal regulation of AI may undermine U.S. leadership in the field by locking in rigid rules before key technologies have matured. Li, Manning, and Reuel caution that centralized regulation, especially of general-purpose AI models, risks discouraging competition, entrenching dominant firms, and shutting out third-party researchers. Instead, they call for flexible regulatory models that draw on existing sectoral rules and voluntary governance to address use-specific risks. Such an approach, they suggest, would better preserve the benefits of regulatory flexibility while maintaining targeted oversight of the areas of greatest risk.
  • In a paper in the Common Market Law Review, Philipp Hacker, a professor at the European University Viadrina, argues that AI regulation must weigh the significant climate impacts of machine learning technologies. Hacker highlights the substantial energy and water consumption needed to train large generative models such as GPT-4. Critiquing current European Union regulatory frameworks, including the General Data Protection Regulation and the then-proposed EU AI Act, Hacker urges policy reforms that move beyond transparency requirements toward sustainability by design and consumption caps tied to emissions trading schemes. Finally, Hacker proposes these sustainable AI regulatory strategies as a broader blueprint for the environmentally conscious development of emerging technologies, such as blockchain and the Metaverse.
  • The Cato Institute’s David Inserra warns that government-led efforts to regulate AI could undermine free expression. In a recent briefing paper, Inserra explains that regulatory schemes often target content labeled as misinformation or hate speech, efforts that can lead to AI systems reflecting narrow ideological norms. He cautions that such rules may entrench dominant companies and crowd out AI products designed to reflect a wider range of views. Inserra calls for a flexible approach grounded in soft law, such as voluntary codes of conduct and third-party standards, to allow for the development of AI tools that support diverse expression.
  • In an article in the North Carolina Law Review, Erwin Chemerinsky, the Dean of UC Berkeley Law, and practitioner Alex Chemerinsky argue that state regulation of a closely related field, internet content moderation more broadly, is constitutionally problematic and bad policy. Drawing on precedents including Miami Herald v. Tornillo and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, the Chemerinskys contend that many state laws restricting or requiring content moderation violate First Amendment protections for editorial discretion. They further argue that federal law preempts most state content moderation regulations. The Chemerinskys warn that allowing multiple state regulatory schemes would create a “lowest-common-denominator” problem in which the most restrictive states effectively control nationwide internet speech, undermining the editorial rights of platforms and the free expression of their users.
  • In a forthcoming chapter, John Yun of the Antonin Scalia Law School at George Mason University cautions against premature regulation of AI. Yun argues that overly restrictive AI regulations risk stifling innovation and could generate long-term social costs that outweigh any short-term benefits gained from mitigating immediate harms. Drawing parallels with the early days of internet regulation, Yun emphasizes that premature interventions could entrench market incumbents, limit competition, and crowd out potentially superior market-driven solutions to emerging risks. Instead, Yun advocates applying existing laws of general applicability to AI and maintaining regulatory restraint similar to the approach adopted during the internet’s formative years.
  • In a forthcoming article in the Journal of Learning Analytics, Rogers Kaliisa of the University of Oslo and several coauthors examine how the diversity of AI regulations across countries creates an “uneven storm” for learning analytics research. Kaliisa and his coauthors analyze how comprehensive EU regulations such as the AI Act, sector-specific U.S. approaches, and China’s algorithm disclosure requirements impose different restrictions on the use of educational data in AI research. They warn that strict rules, particularly the EU’s ban on emotion recognition and biometric sensors, may limit innovative AI applications and widen global inequalities in educational AI development. Kaliisa and his coauthors propose that experts engage with policymakers to develop frameworks that balance innovation with ethical safeguards across borders.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.