
The constitutional prohibition on extraterritorial regulation restricts democratic experimentation in the AI era.
States have long claimed the mantle of “laboratories of democracy,” experimenting with novel policies that might later inform national approaches to complex challenges. This metaphor, coined by Justice Louis Brandeis nearly a century ago, has become so embedded in American political discourse that we rarely question its constitutional foundations or practical limits.
Yet as the regulation of artificial intelligence (AI) emerges as a key challenge, our collective failure to honestly reckon with the constitutional constraints on state authority to tinker with the trajectory of this transformative technology threatens both individual liberty and the general welfare. The U.S. Constitution does not grant states unlimited license to experiment when that experimentation extends beyond their borders—a reality that becomes particularly acute when state AI laws effectively govern the conduct of out-of-state actors who have no voice in those states’ political processes.
The private sector, particularly startups and technology companies, may serve as a superior laboratory of democratic experimentation in the AI era, offering more nimble, accountable, and constitutionally sound approaches to governance than extraterritorial state regulation. This argument requires careful qualification. Private governance tends to succeed only in competitive markets and may necessitate new forms of public–private coordination. Mechanisms such as regulatory sandboxes, in which entrepreneurs can deploy products under flexible standards and information-sharing obligations, offer one promising model. Even with these caveats, private governance deserves serious consideration as both a constitutional imperative and a pragmatic path forward. Further research into the optimal design of such hybrid governance models remains essential, but the constitutional case against unchecked state regulatory expansion is clear enough to warrant immediate attention. To understand why private governance may offer a superior alternative, we must first examine the constitutional limits that constrain state regulatory authority, limits that current AI legislation threatens to breach.
Even in the wake of the federal AI Action Plan, there is no denying that states have taken the lead in regulating AI. From the Responsible AI Safety and Education Act in New York to the Colorado AI Act, states across the country are weighing or implementing AI legislation. As a result, concerns have spread about a developing patchwork of state laws. Much of this debate has turned on the law and policy of federal preemption. Others have focused on how such a patchwork could hinder national security by slowing AI advances. A different set of actors worries that discordant state laws may interfere with economic growth. Little attention, however, has been paid to whether certain state AI laws may violate the Constitution’s prohibition on extraterritorial legislation, which derives from the Dormant Commerce Clause, the Import-Export Clause, the Privileges and Immunities Clause, and the Full Faith and Credit Clause.
No state has the authority to project its laws into another. States convinced that their laws will serve the interests of their sister states, the nation, and even the globe do not receive an exception. Nor is there an exception for regulatory experimentation under the guise of states acting as laboratories of democracy if that experiment includes out-of-state participants. In short, a state’s regulatory authority is not unbounded. The Founders deliberately removed certain authorities from the states to avoid the immense fragmentation and chaos wrought by conflicting state laws under the Articles of Confederation.
Certain kinds of state AI laws may run afoul of this prohibition. As noted above, legislators in New York and Colorado have passed or proposed laws that will likely require leading AI labs to change their model training processes. The nature of training frontier AI models means that such changes will necessarily have national, and indeed global, effects. Labs cannot train models tailored to the policy requirements of each state, a handful of states, or even a single state. Training is not a modular process: it unfolds over several months and does not easily lend itself to jurisdiction-specific differentiation.
Laws related to AI agents, AI tools capable of autonomously taking action on behalf of users without oversight, may present similar concerns. Commonly discussed tasks for AI agents include booking travel, coordinating commercial projects involving multiple individuals and institutions, and even representing individuals in sensitive decision-making contexts, such as voting. The potential for AI agents to achieve these tasks hinges on their encountering few to no barriers as they move across the Internet. States opting to enact disparate requirements around when and how agents must disclose their agentic status or otherwise navigate the web would frustrate that potential. Although the U.S. Supreme Court has cautioned against state laws that deliver few intrastate benefits relative to their negative impacts on interstate commerce, it is unclear how the Court will apply that logic to a technically complex and emerging field.
The constitutional foundations of the prohibition on extraterritoriality remain unsettled. Scholars and courts have drawn on a range of provisions—from the Commerce Clause to the Privileges and Immunities Clause—to justify limits on a state’s power beyond its borders, but there is little consensus on the precise source of this restriction. This uncertainty is compounded by the Supreme Court’s sparse and often imprecise case law on extraterritoriality, which has left states, litigants, and lower courts with little clear guidance. The practical difficulties are even greater: Drawing a definitive line between permissible state regulation and unconstitutional overreach is no easy task, particularly in a digital economy where nearly every law affecting in-state actors may also ripple across state or even national boundaries. Some commentators have gone so far as to argue that the Constitution forbids any state law with effects outside its territory, though that position has not been widely adopted.
Muddled as these essential principles of our constitutional order may be, a lack of clarity is distinct from a lack of enforceability. Horizontal federalism, the constitutional mandate that states exist on equal footing, is unquestionably a part of our constitutional order, even if it is derived from disparate constitutional provisions such as the Privileges and Immunities Clause and Commerce Clause. The ban on extraterritorial state laws is as much a part of the Constitution as vertical federalism and the associated limits on the federal government’s power to command state governments.
Prohibition of state action under the extraterritoriality principle, however, does not mandate a congressional response—even on issues that are the exclusive domain of Congress pursuant to enumerated powers granted under Article I of the Constitution. Congress does not have an obligation to exercise its full powers, regardless of whether some actors think it should. Put differently, there is no exception to the extraterritorial principle when states believe that Congress is failing to confront a matter of national concern. A state savior complex—where legislators justify extraterritorial overreach as necessary national leadership—is not countenanced by the Constitution.
This savior complex frequently appears when state legislators defend their AI proposals. In New York, for example, the sponsor of the Responsible AI Safety and Education Act contended that the bill would “keep everyone in the city, state, country, and world safe from some pretty extreme risks.” When state legislators explicitly frame their actions as necessary to protect not just their own constituents but all Americans from AI’s most catastrophic risks, they are tacitly admitting an attempt to regulate beyond their borders.
California legislators have likewise indicated an intention to step into the shoes of Congress with an eye toward protecting the whole of the nation. These legislators cast themselves as reluctant heroes, forced to exceed traditional state boundaries because Congress refuses to act on urgent national concerns. Yet constitutional structure cannot bend to accommodate even well-intentioned legislative heroism.
When states assume the role of national savior, they effectively disenfranchise millions of Americans who had no say in selecting those supposed legislative heroes, a result fundamentally incompatible with democratic governance. Private policy frameworks, by contrast, not only avoid these institutional drawbacks of state experimentation but also amplify many of the benefits that make regulatory experimentation valuable in the first place.
There are constitutional limits on states’ ability to claim the “laboratories of democracy” mantle in the AI era given the underappreciated—and often breached—prohibition on extraterritorial regulation. We have seen how certain state AI laws risk not only imposing compliance burdens nationwide but also eroding the very federalist structure that safeguards both liberty and democratic accountability.
If state-led AI governance is constrained by these constitutional boundaries, then we must look elsewhere for lawful and effective experimentation. The private sector could serve as a nimbler, more accountable, and potentially more innovative locus for policy testing—particularly when paired with public–private mechanisms such as regulatory sandboxes. Companies, from garage startups to tech giants, can run governance experiments at scale, generate robust data, and adapt in real time, and there are ways to harness those advantages without sacrificing transparency, competition, or the public interest.
This essay is the first of two parts. The second part will appear tomorrow.