How to Regulate, or Not Regulate, AI

AI regulations should be guided by humility and continuous learning.

This past fall, the European Commission announced its intention to revise the Artificial Intelligence Act, which had not yet come into full force.

Astute political commentators noted the hand of the Trump Administration behind the decision. The Administration had apparently put significant pressure on the European Union to loosen its regulatory grip, especially on very large U.S. artificial intelligence (AI) companies. Numerous civil society organizations balked. Having worked hard to get the AI Act passed, groups such as European Digital Rights, Access Now, and Witness feared that the result of their intense lobbying was up for grabs. In press conferences and media appearances, they warned that changing the AI Act in response to U.S. pressure could undermine hard-won protections and expose Europeans to serious societal risks.

None of this is surprising. AI regulation, like regulation more generally, reflects shifting political priorities and power dynamics. Civil society organizations had seen the AI Act—despite its shortcomings—as a significant, if imperfect, victory that they did not want to slip away. At the same time, the Trump Administration has consistently opted to put short-term business interests over long-term strategic ones. In the end, regulation is always the product of a political give-and-take across time: Some rules endure, others get adjusted as the general political climate changes.

And yet, one element of the debate over revising the AI Act makes us uneasy: the conviction, on both sides, that their path forward is the righteous one.

Despite its global popularity, data-driven generative AI is fairly new. Although experts understand how it works, for most users AI is a black box: they ask it for information and advice, turning to this technical tool much as they would to a human expert such as a doctor, a lawyer, or an engineer. Increasingly, AI is shaping all kinds of human decision-making. But to what long-term effect? We do not know.

Much has been written about the weaknesses of AI, including discriminatory bias and shocking hallucinations. Does this necessitate regulatory action, even though advice originating from humans is frequently saddled with similar shortcomings? Are these the most important problems AI can cause? Or are there others? And what is the baseline we are comparing AI to when gauging its inherent weaknesses? All of this ultimately leads to one fundamental question: What are we aiming to protect against with AI regulation, and what are we hoping to facilitate with it?

The mechanisms we apply should be at least as tentative as the goals of AI governance. What regulatory tools are most effective in the AI context? Are we regulating outcomes or focusing on process? Or both? And if both, how do we shape the interface between the two? Equally important is identifying the institutional structures needed for effective AI governance.

In the early days of steam trains, authorities worried primarily that passengers would suffocate at speeds above 20 miles per hour, while neglecting the far more dangerous problem of exploding pressure vessels. Are we going to make a similar mistake as we regulate AI?

Some, especially Silicon Valley libertarians, argue that it is far too early to enact rules governing AI. They suggest we wait until the fog lifts and we understand the problems AI causes. But that is a bit like letting pollution run its course before we begin to clean up. Not acting is not just a pause but a decision: It shapes the trajectory of society. These dynamics are visible in the United States, where President Donald J. Trump recently issued an executive order seeking to preempt state-level AI regulation in the name of national competitiveness. We cannot simply wait our way out of this challenge. It seems we are caught between a rock and a hard place: We can act now and risk passing ineffective rules aimed at the wrong goals, or we can wait and thereby relinquish our chance to govern this technology and its use in our society.

This binary contrast between wrong-headed governance and dangerous inaction is, of course, a fallacy. It implies that we have only one shot at regulating AI, and that if we miss, everything is lost. That is wrong. Regulations are human acts that can be changed once we have gained a better view and decided that adjustments are in order. Which brings us back to the beginning: Putting the politics of AI regulation aside, modifying how we govern AI once we have a better sense of what is needed and how to achieve it is not an embarrassing bug but a crucial feature of regulating any domain of constant change.

We suggest in our recent book that effective AI regulation must be grounded in humility and learning. Humility is necessary to acknowledge that the long-term effects and consequences of AI are not yet fully understood. Learning, in turn, requires regulatory systems that can incorporate new evidence and experience over time and adjust their course accordingly. Although this may sound straightforward, implementing such an approach is difficult in practice. In our modern, complex societies, regulatory consensus is often hard-won, and governance institutions tend to privilege stability over revision. As a result, rules are more likely to persist than to be revisited, even when emerging evidence suggests the need for change.

To make our suggestion operational, regulatory learning needs to be embedded as a continuous process across the entire regulatory lifecycle. This shifts the focus from one-off rulemaking to cumulative evidence-generation and iterative adjustment. It also requires equipping regulatory bodies with the capacity not only to learn from new information but to learn how to learn, meaning they can reflect on, refine, and improve their own learning processes over time. In this sense, regulatory institutions function as adaptive learning organisms whose methods and tools evolve alongside the technologies they govern.

The real governance challenge that AI forces us to confront may be quite different from our initial impulses. Rather than quickly addressing a few obvious goals, real—and really smart—AI regulation may require a rewiring of our regulatory processes and institutions, not only to facilitate but also to embrace with humility the constant need to observe, learn, and adjust.

Viktor Mayer-Schönberger

Viktor Mayer-Schönberger is professor of internet governance and regulation at the University of Oxford.

Urs Gasser

Urs Gasser is a professor of public policy, governance, and innovative technology and the dean of the TUM School of Social Sciences and Technology.