
AI regulation in the EU needs structural changes promoting adaptability and innovation.
Italian economist Mario Draghi has called for a pause on the European Union’s Artificial Intelligence (AI) Act. Draghi’s concern is clear: The EU is moving too fast, without grasping the drawbacks its own rules may create. The instinct to slow down is understandable. But pauses do not solve design flaws; they expose them. Unless the EU builds in mechanisms to adjust the law in light of experience, a pause that merely delays rigid rules is no solution. The AI Act needs built-in adaptation mechanisms: the capacity to track its effects, both good and bad, and to change course as those effects emerge.
The problem starts with how the AI Act presents itself. It claims to be “future-proof,” a law fit for all of tomorrow’s technologies. That is an illusion.
The Act offers a broad definition of AI. But anyone who works with machine learning knows that the field does not move in straight lines. It evolves in unexpected directions. A definition that looks broad in 2025 may be irrelevant by 2027. By mistaking breadth for resilience, the Act risks freezing innovation in a framework built for yesterday’s technology. Or it risks irrelevance.
The problem is also structural. The AI Act began in 2020 with a risk-based logic, with obligations tied to the sector of deployment. But then ChatGPT and other large language models emerged, cutting across sectors and applications, and the risk-based approach collapsed. Policymakers had to bolt on a capabilities-based regulatory layer to catch AI systems with general-purpose power. The lesson here is blunt: No matter how broad the design of a regulation, technology can break its architecture.
Worse still, the power to fix the Act’s weaknesses is limited. There is no system for continuous monitoring of the Act’s effects, so revision is limited to yearly reviews. Delegated acts give Brussels the flexibility to adjust technical elements, but, in practice, the process is slow and often politicized. And the European Commission plays gatekeeper in deciding how the AI Act should be revised, which creates bottlenecks. With hundreds of technological advances each month, no single authority can react quickly enough. Innovators may find themselves waiting for guidance that arrives long after the technology has moved on.
The consequences are serious. European firms already trail their American and Chinese counterparts in AI. A slow, rigid regime will only widen the gap. Instead of building a competitive edge in trustworthy AI, the EU could open a trust gap, leaving a region where innovation is slower and competition weaker.
It does not have to be this way. A more durable response would be to make the regulation adaptive. That means building sensors into the law: collecting data on outcomes, requiring machine-readable reporting, and setting clear thresholds, such as the number of reported incidents or the compliance costs borne by small companies, above which the law would be revised (a simple sketch of this trigger logic follows below). With these mechanisms switched on, the Act could observe its effects, learn from them, and evolve accordingly. It would stop being a static map and start working like a GPS, recalculating when conditions change while still pointing to the destination.
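To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what threshold-triggered review could look like. The thresholds, field names, and report structure are all hypothetical assumptions, not anything the AI Act specifies; the point is the pattern, where regulators ingest machine-readable outcome reports and a revision clause fires automatically when a threshold is crossed.

```python
from dataclasses import dataclass

# Hypothetical revision thresholds; in practice these would be set in the law itself.
INCIDENT_THRESHOLD = 50           # serious incidents reported per quarter
SME_COST_THRESHOLD_EUR = 100_000  # average annual compliance cost for small firms

@dataclass
class QuarterlyReport:
    """A machine-readable outcome report, as the adaptive approach would require."""
    serious_incidents: int
    avg_sme_compliance_cost_eur: float

def review_triggers(report: QuarterlyReport) -> list[str]:
    """Return the revision clauses whose thresholds this report crosses."""
    triggers = []
    if report.serious_incidents > INCIDENT_THRESHOLD:
        triggers.append("incident threshold exceeded: reassess risk classification")
    if report.avg_sme_compliance_cost_eur > SME_COST_THRESHOLD_EUR:
        triggers.append("SME cost threshold exceeded: review compliance obligations")
    return triggers

# Example: a quarter with many incidents but moderate compliance costs.
report = QuarterlyReport(serious_incidents=72, avg_sme_compliance_cost_eur=40_000.0)
for trigger in review_triggers(report):
    print(trigger)  # each line signals that a built-in revision clause has fired
```

The code itself is trivial by design; what matters is the architecture it stands for: thresholds declared in advance, data arriving in a standard format, and review triggered automatically rather than waiting for a yearly report.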
Draghi is right to worry. But whether we pause or press ahead, one illusion must go: There will be no future-proof AI Act. Technology moves too fast and in directions no one can predict. The real choice is not between slowing down and pressing ahead but between rigidity and adaptability. Only a future-responsive law, one that monitors its effects and evolves, can both protect rights and give the EU a real shot at becoming a leader in innovation. The choice is stark: Either EU regulation evolves with the technology, or the technology evolves elsewhere.
