
The federal government must implement regulatory sandboxes to stay at the forefront of AI development.
As President Donald J. Trump continues to explore an executive order on artificial intelligence (AI)—one that will likely call for the U.S. Department of Justice to challenge state laws that exceed their respective constitutional authority to regulate AI—it is worth analyzing the affirmative steps the U.S. Congress can take to facilitate evidence-based AI governance. One of the most promising steps forward comes via regulatory sandboxes.
A regulatory sandbox formalizes what has long been the best approach—and America’s usual approach—to governing novel technologies: trial and error. Abandoning that approach would be unwise; after all, “one could well ask whether any technology, including the most benign, would ever have been established if it had first been forced to demonstrate that it would do no harm.”
The introduction of every new technology presents risks and results in harms, and society should attempt to mitigate both. Such mitigation, though, must be proportionate to the technology’s benefits. In the case of AI—despite a bias among academics toward publishing negative AI scholarship and among journalists toward reporting on technologies’ alleged downsides—its demonstrated benefits warrant a trial-and-error approach through regulatory sandboxes.
Although there is no single way to design a regulatory sandbox, sandboxes typically permit such trial and error by allowing participants to deploy their products subject to few or no additional legal limitations. Sandbox participation, however, is not a free pass to operate recklessly. Participation is usually contingent on increased oversight, information-sharing obligations, and adherence to specific consumer protections.
Although some states have applied to AI governance the “try-first” mentality called for by President Trump, many others are instead imposing a “no trials until there are no errors” approach—conditioning AI development and deployment on vague and subjective factors. The patchwork already being sewn by these states is not without consequence. As Representative Ted Lieu (D-Calif.) pointed out earlier this fall during a congressional hearing on AI governance, imposing even two different AI training standards on labs would make compliance impossible: The financial and computational costs of training frontier AI tools prevent labs from training models pursuant to two standards, let alone 50. If the United States cannot remain at the frontier of AI, it will find itself at an economic and national security disadvantage. Our adversaries will not pause their AI initiatives to allow us to learn from our regulatory mistakes.
Absent federal preemption, labs will not be able to make full use of the regulatory sandboxes available in a handful of innovative states. As has been documented in myriad contexts, technology companies often conform their nationwide practices to the jurisdiction with the most onerous provisions. Congress can pursue legislation to ensure that the innovative spirit of Utah and Texas, which have embraced a try-first mentality, is not undermined by California and New York, which have legislated out of fear of speculative risks.
One approach would be to create a legal safe harbor for AI labs that opt to participate in a federal regulatory sandbox or a state sandbox that meets certain qualifications. This safe harbor would prevent one or two states from undermining the capacity of existing sandboxes, or a future federal sandbox, to uncover the full extent of AI’s benefits and risks.
Alternatively, Congress could move forward with a temporary moratorium on certain state laws, such that lab participation in existing state sandboxes would become feasible.
Neither route would foreclose future regulation of AI. Instead, both would inform future AI discourse and ensure that biased scholarship and media coverage do not cloud legislators’ judgment. Attempting to put AI back in the bottle, or to significantly limit its utility by conditioning its use on manifold procedural checks, will not lead to a full understanding of its risks and benefits. Notably, learning through trial and error is exactly the sort of evidence-based governance called for by the AI experts empaneled by California Governor Gavin Newsom.
Time is of the essence. The longer that extraterritorial state AI laws remain on the books, the more likely it is that labs will alter their behavior to comply with those particular regulations—perhaps forever altering the pace and direction of AI innovation.