
Effective AI governance demands strong federal standards that preserve state authority.
Artificial intelligence (AI) has burst upon us at a pace and scale never before seen in modern history. It dominates news media, business, finance, entertainment, and political attention. Unfortunately, the public policy debate has been poorly grounded in fact and filled with grand promises of economic paradise or projections of doom. Balancing AI policy requires more care, caution, and public accountability.
Two dominant policy inflection points demand our attention. First, we must decide whether, and how far, federal regulation should preempt state and local laws. Second, data centers, the engine of AI, require management and governance directionally similar to the regulatory infrastructure that evolved for railroads, electricity, and modern communications. The nuanced details of this cluster of issues require open, cooperative, and public debate between the states and the federal government. That starts with bipartisan agreement that these topics are too important to be settled through backroom legislative deals, questionable presidential executive orders, or amendments slipped quietly into unrelated legislation.
The concept of using an executive order to repeal state laws is wrong on multiple levels. States possess police powers over public safety and consumer protection, and courts apply a presumption against preemption unless the U.S. Congress has made its intention clear and manifest. Here, Congress has taken no action justifying preemption, and nullifying state laws without congressional approval is illegal. Under recent U.S. Supreme Court precedent, including the major questions doctrine, the executive branch lacks authority to act on questions of vast economic and political significance unless Congress has clearly and specifically conferred that authority by statute. Justice Ketanji Brown Jackson, echoing Youngstown Sheet & Tube Co. v. Sawyer, has even stated concerning the role of a President that “the Constitution limits his functions in the lawmaking process to the recommending of laws he thinks wise and the vetoing of laws he thinks bad.”
Such a move also raises familiar federalism problems. There are arguments that it, like the Patient Protection and Affordable Care Act’s offer of generous Medicaid matching funds to the states, might run afoul of constitutional bars on coercing or commandeering the states. There are also serious questions about how an executive order could be reconciled with the Tenth Amendment, which reserves to the states all powers not delegated to the federal government by the Constitution.
Unless and until Congress enacts comprehensive AI rules, the idea of using an untethered, unauthorized executive order to repeal state laws should be consigned to the landfill of well-intended bad ideas.
Most U.S. politicians are AI boosters. Many political leaders frame AI as an existential struggle with China for industrial dominance. Others promise everything from healthier, longer lives to better living conditions and faster business growth.
Yet consistent public opinion surveys, together with recent local and state actions regulating AI (on electricity and water use, children’s privacy, job protection, and the safety of large AI systems), reveal either skepticism about or outright hostility toward the technology. This disconnect serves our country poorly. We ought to lead in AI adoption, development, and use, but we must also lead in sensible, straightforward, and balanced regulation.
The best way forward is to share responsibility between the states and the federal government according to a few basic principles. The federal government should have the predominant role in national security, defense, and cybersecurity. This includes infrastructure security, the prevention of biological and chemical weapons, nuclear materials and facilities, and transnational crimes such as bank fraud, human trafficking, and terrorism. Federal primacy also extends to national security infrastructure, including dams and water systems, the electrical grid, and transportation systems such as federal highways, rail, aviation, and interstate bus transit.
We need strong, explicit federal regulatory standards to assure public safety. In the short term, state measures such as California’s Senate Bill 53, introduced by State Senator Scott Wiener (D-San Francisco) and signed by Governor Gavin Newsom, offer a good start. Its legally binding safety checkpoints are far less onerous than current European law and more narrowly focused on large AI systems than the earlier bill Governor Newsom vetoed. Because this approach has the force of law, it is preferable to voluntary industry standards. To be clear, these state AI safety laws are stopgaps that could be preempted by a directionally similar federal law on this topic.
However, states and localities have constitutionally protected authority over health and public safety that must be preserved in key areas. This level of government is more than a laboratory of democracy for testing different policy solutions. States and localities have the authority and experience to regulate electricity rates, water usage, and other matters that affect local environments and economies. With trillions of dollars in capital expenditure planned over the next decade, the federal government should not preempt rules governing the operation of AI data centers within individual states.
Consider the concrete realities of AI data centers. These massive facilities consume extraordinary amounts of electricity and water for cooling. They affect local power grids, water supplies, property values, and employment patterns. States have decades of experience managing these exact issues through their utility commissions and environmental agencies. Virginia, for example, has become a data center hub precisely because state regulators worked with industry to develop workable frameworks. Preempting state authority in this area would strip away the very expertise and local knowledge needed to balance economic development with community interests.
Other strong examples of areas where state law should not be preempted include common law liability for fraud, negligence, intentional infliction of emotional distress, and other established tort doctrines. These traditional state law causes of action do not impose conflicting technical standards; rather, they provide remedies for harms that states have addressed since the country’s founding. The sheer number and variety of these cases suggest why they cannot and should not be exclusively federal claims.
The Trump Administration and some industry players have proposed that federal standards amount to no more than voluntary compliance with industry-developed guidelines. Self-regulation of this kind is insufficient and provides no accountability mechanism. History teaches that voluntary standards work only when backed by the credible threat of regulation; the 2008 financial crisis demonstrated what happens when industries are left to regulate themselves without meaningful oversight.
Instead, the U.S. Congress must enact detailed legislation and subject its implementation to congressional oversight. A bill to achieve this result would require input and buy-in from multiple relevant committees following legislative hearings and committee markups. This approach appeals to legislators’ institutional interests in maintaining their role as AI policy arbiters rather than deferring entirely to executive branch discretion. It preserves Congress’s constitutional role in technology governance while ensuring the United States remains competitive in AI development.
The stakes are enormous. AI promises genuine benefits in healthcare, scientific research, education, and countless other fields. But realizing these benefits while managing the risks requires getting the governance structure right from the start. We cannot afford to repeat past mistakes where technology raced ahead of regulation, leaving citizens vulnerable and requiring costly interventions after damage was done.
The path forward requires rejecting false choices between innovation and regulation. We can have both robust AI development and strong safeguards. We can maintain federal leadership on national security and safety standards while respecting state authority over local impacts and traditional areas of state competence. We can encourage industry innovation while insisting on real accountability through laws with teeth, not voluntary guidelines.
Most importantly, we need transparency and public participation in these decisions. That means open hearings, public comment periods, and genuine deliberation by elected representatives accountable to their constituents. The technology may be new, but the principles of democratic governance remain timeless.
AI will transform our economy and society. Whether that transformation serves the broad public interest or primarily benefits a narrow set of corporate interests depends on the choices we make now about governance structures. We have an opportunity to get this right—to create a framework that promotes innovation while protecting citizens, that respects both federal and state roles, and that provides real accountability through law rather than empty promises of self-regulation. We must seize this opportunity.
