
Private governance and regulatory sandboxes are the key to democratic accountability, efficiency, and innovation in AI regulation.
In a prior essay, I traced the constitutional guardrails that cabin state authority to regulate artificial intelligence (AI)—limits rooted in the extraterritoriality principle and designed to preserve both liberty and democratic accountability in a federal system. That essay identified a growing tension: As AI’s inherently borderless nature strains those limits, state lawmakers face strong incentives to legislate beyond their borders, often justifying such overreach as necessary “national leadership.” But constitutional structure does not bend to expediency. The question, then, is not whether states should fill the regulatory void left by the U.S. Congress, but what can fill that void lawfully and effectively.
The answer is private governance. When situated in competitive markets and reinforced through targeted public–private mechanisms such as regulatory sandboxes (permissive regulatory frameworks that allow innovators to deploy new products subject to flexible oversight), private governance can outperform state-led experimentation in both agility and accountability. Here, “the garage”—shorthand for the places where entrepreneurs launch and operate their startups—is more than a romanticized birthplace of innovation; it is a scalable, data-rich, and consumer-responsive policy laboratory.
The private sector not only has the capacity to address AI’s harms and benefits—it may actually serve as a superior laboratory for democratic experimentation. With respect to maintaining individual liberty and furthering the general welfare, this path is preferable to any implicit acceptance of extraterritorial state laws. Extraterritorial laws are incompatible with individual liberty because a non-resident made to comply with another state’s laws has virtually no means of holding that state’s officials accountable. Americans can no longer vote with their feet and move to a political community that aligns with their values and priorities if, regardless of where they go, one state’s laws effectively govern them. In contrast, private governance suffers no such flaws, assuming a healthy market.
A dynamic market characterized by regular entry and exit of small, medium, and large firms presents users with an array of different governance regimes. Users can vote with their dollars to reward the companies that develop reliable and effective rules, standards, and norms. States can promote a healthy market by, for example, increasing the supply of innovators through high-quality education, devising incubators that get tinkerers out of the garage and into the market, and adopting related measures that support a vibrant AI economy. This agenda presents no risk of infringing on the autonomy of non-residents. If successful, it can reduce market concentration and the lock-in effects that limit users’ ability to switch products.
Beyond undermining individual liberty, extraterritorial state laws also threaten the general welfare in ways that private governance avoids. The resulting patchwork of state regulation risks quashing the innovation essential to the scientific discoveries, educational gains, and knowledge diffusion that characterize the general welfare. Even if states worked closely to harmonize the text of their laws, there is no guarantee that subsequent judicial opinions and administrative enforcement of those laws would remain consistent.
The inevitable uncertainty as to what each state requires of each actor will have economic consequences. The scale and duration of those consequences are unknown. Yet even small compliance tasks, such as updating a privacy policy, can generate legal fees that eat up roughly ten percent of a startup’s monthly budget. These are the very startups relied upon to create jobs during economic downturns, introduce new technology, and otherwise sustain America’s innovation ecosystem. Extreme care is warranted to ensure that these economic engines can operate at full speed.
Private governance, with the admittedly large caveat of requiring a strong market, may even create a new competitive edge for smaller, younger firms to take on incumbents. These firms can, for instance, develop different data-sharing practices, apply different standards for acceptable use, and so on. These choices can have significant market effects and give nascent companies a strategic advantage over others—an advantage that will surely narrow as other firms adopt similar provisions.
This private policy innovation allows for regulatory experimentation that would likely be missed at the state level. Although states hold themselves out as “laboratories of democracy,” they can neither collect nor act on nearly as much data about the pros and cons of any one policy as a startup, let alone a tech giant, can. States often lack robust mechanisms for retrospective review of their regulations. They may also lack the political will or institutional interest to amend regulations that fall short of their intended purpose. Finally, some of their “experiments” never come to an end: regulatory capture and institutional inertia favor zombie laws over regular refreshes of the code, so ineffective laws stay on the books.
Tech companies do not suffer from those same issues. They methodically and relentlessly quantify and analyze business decisions. They can also quickly reform policies in light of consumer preferences and technological capabilities. Google, for one, does not have to go through notice-and-comment rulemaking or any equivalently burdensome and slow process to revise or issue a policy.
In addition to avoiding these drawbacks of expansive and potentially unconstitutional state regulation, private policy frameworks carry many of the same benefits as idealized state policy experiments. First, they operate at a large scale. Even a small startup may boast more users than Wyoming has residents. When companies tinker with various safeguards and frameworks, they can collect meaningful data on their effectiveness. This is especially true for companies with millions, if not billions, of users. Indeed, an experiment by Google involving individuals from around the world, with different political, economic, and cultural backgrounds, probably generates more robust insights than one conducted solely on the residents of Maine.
Second, tech companies thoroughly document the policy changes in question. Although a private company may not maintain records as detailed as a state agency’s when evaluating a policy shift, companies have increasingly deliberative processes for determining when and how to better police themselves and their users. There is a reason OpenAI has an entire product policy team tasked with constantly evaluating which checks and balances are essential to a product’s success. Finding the right balance likely involves working with some of the same stakeholders, such as unions and civil society groups, that legislators and regulators would usually consult.
Finally, approaches may differ wildly from one app to the next, just as states may approach policy issues from myriad angles. States, however, seem to be losing their creativity: on some of the most important issues of the day, they appear willing to adopt model legislation without first having “kicked the tires.”
Of course, this is a rosy description of private policy experimentation. In practice, some companies are content to keep their tinkering under wraps, away from public scrutiny. They may even have an incentive not to disclose an approach that works tremendously well from a public policy perspective but proves untenable from a business point of view. These are solvable problems. As states such as Utah, along with the federal government, lean more heavily into regulatory sandboxes, it may soon become the norm for states to reward good private actors for running myriad policy experiments and sharing the results in a timely and transparent fashion.
These limitations of pure private governance point toward a hybrid approach that captures the benefits of market-driven experimentation while addressing transparency concerns. Regulatory sandboxes, which allow for a high degree of public-private coordination, represent precisely this kind of solution. Rather than prohibiting new products from entering the market until a regulatory ecosystem is firmly in place, sandboxes permit companies to begin selling their goods and services subject to a unique form of oversight. Although sandboxes can take many forms, they share a few common elements. In exchange for reduced regulatory consequences for certain harms, participating companies agree to increased information sharing with the government. Companies may also have to provide consumers with means to quickly report any issues with their goods or services. States can take a more hands-on, experimental approach to these initiatives by running policy trials among participants. For instance, they could direct companies to test various forms of privacy disclosures and user controls over personal information.
The constitutional case against extraterritorial state AI regulation demands immediate attention from courts, policymakers, and legal scholars. Although some regulations may qualify as sufficiently intrastate, it is important to clearly distinguish whether and when state AI laws exceed the state’s jurisdictional authority before legislators enact too many new laws. This is not an easy task. An interdisciplinary set of lawyers, lawmakers, technologists, and AI lab officials should collaborate to analyze when certain state mandates will effectively require labs to upend their model development processes.
In the interim, state legislators should redirect their energies from direct AI regulation toward market-supporting activities that enhance private experimentation without constitutional overreach. This includes investing in education in science, technology, engineering, and mathematics, expanding university research programs, creating business incubators that help entrepreneurs transition from garage tinkering to viable companies, and streamlining regulations that unnecessarily burden startup formation. Such policies could meaningfully influence AI development while respecting jurisdictional boundaries.
Regulatory sandboxes represent the most promising hybrid approach, allowing states to facilitate private experimentation while maintaining appropriate oversight. Utah’s fintech sandbox and the federal government’s emerging AI initiatives provide models worth expanding. These frameworks should prioritize transparency requirements and data sharing to address legitimate concerns about corporate accountability while preserving the agility that makes private governance superior to traditional regulation.
The stakes extend beyond AI to any emerging technology where innovation outpaces regulatory capacity. As biotechnology, quantum computing, and other frontier fields develop, the temptation for states to project their laws nationally will only grow. Establishing clear constitutional boundaries now will prevent a future where the most restrictive state effectively sets national policy, undermining both federalism and innovation.
The garage-based startup remains America’s most dynamic laboratory of democracy—nimble, accountable to users, and capable of rapid iteration based on real-world data. Preserving this ecosystem requires vigilant protection of constitutional boundaries, even when well-intentioned legislators claim to act for the greater good. True democratic experimentation flourishes best when guided by constitutional limits, not despite them.
This essay is the second of two parts. The first part can be found here.