Regulating the Safety of Autonomous Vehicles

Experts examine how regulators can ensure the safe and accountable deployment of autonomous vehicles.

Autonomous vehicle technology is advancing faster than the regulatory systems designed to oversee it. Recent high-profile developments—including California’s decision to suspend Cruise’s driverless taxi operations following a series of safety incidents—have intensified scrutiny of how autonomous vehicle (AV) companies test and deploy their technologies. Meanwhile, as firms such as Waymo expand their services in cities including Phoenix and Los Angeles, policymakers and residents weigh tradeoffs between innovation, mobility benefits, and public safety.

AVs promise substantial public health benefits: Approximately 40,000 people die in traffic crashes in the United States each year, according to the National Highway Traffic Safety Administration, and most serious crashes involve human error that automation could reduce. But automated systems also introduce new risks and uncertainties. Their real-world performance can be difficult to assess. And many communities lack meaningful insight into how companies validate safety, share data, or respond to system failures. As AV pilots expand nationwide, growing concerns about safety, transparency, and equitable deployment place increasing pressure on regulators at all levels of government.

Oversight of AVs is divided across multiple agencies. The National Highway Traffic Safety Administration issues Federal Motor Vehicle Safety Standards, investigates potential defects, and can order recalls of unsafe vehicles and equipment. The Federal Motor Carrier Safety Administration oversees commercial autonomous trucking, while the Federal Communications Commission allocates the radio spectrum used for connected-vehicle communications. States regulate licensing, insurance, and road rules, and cities handle on-the-ground issues such as emergency response and traffic flow. This division of authority produces divergent approaches: California, for example, imposes strict testing requirements on AVs, while Arizona and Texas maintain more permissive frameworks.

With federal, state, and local governments responsible for different aspects of AV oversight, gaps and inconsistencies emerge, fueling debates over the future of AV regulation. Scholars and policymakers disagree on whether federal performance standards should be mandatory, how much safety data AV companies should disclose, and how liability should be allocated when autonomous systems fail. A central question is whether voluntary federal guidance—such as the U.S. Department of Transportation’s Automated Vehicles 4.0 framework—provides sufficient public protection, or whether regulators at any level should adopt binding rules requiring companies to meet defined safety benchmarks before deployment.

In this week’s Saturday Seminar, experts discuss emerging regulatory strategies for strengthening the safety and oversight of AVs.

  • In an article in IEEE Transactions on Intelligent Vehicles, Lars Ullrich of the Friedrich-Alexander University Erlangen-Nürnberg and several coauthors argue that existing regulatory and safety-assurance frameworks are inadequate for AI-based automated vehicles. The Ullrich team explains that traditional automotive safety standards cannot address the black-box nature of machine-learning systems, making it difficult for regulators to verify performance or predict failures. Thus, regulators in the United States, the European Union, and China rely on fragmented rules even as AI development accelerates, Ullrich and his coauthors stress. The Ullrich team argues that policymakers need ongoing, data-driven tools to monitor AI systems throughout a vehicle’s lifecycle, ensuring they operate safely.
  • The safety challenges of AVs stem both from the shortcomings of underlying AI technology and from the uncertainty of deploying complex AI in real traffic, argue Marcus Nolte of the KTH Royal Institute of Technology and coauthors in an article. The Nolte team distinguishes between safety risks that arise in complex traffic environments and AI-specific risks, such as black-box models and deficient data inputs. To address both kinds of risk, Nolte and his coauthors recommend that programmers and regulators focus on how AVs should act in specific driving scenarios, connecting clear descriptions of where a vehicle may operate, and the capabilities it therefore needs, to the data and models used to train the AI technology.
  • In a working paper, Milin Patel and Rolf Jung of the Kempten University of Applied Sciences, and Marzana Khatun of Robert Bosch Engineering explain that the usefulness of the Safety of the Intended Functionality (SOTIF) framework, codified as the international standard ISO 21448 for automated driving systems, is questionable because AI-driven systems can behave unpredictably in real-world conditions. The Patel team argues that existing studies of SOTIF in automated driving rely on narrow case examples, depend on highly specific datasets and simulations, and overlook ethical considerations and the role of human drivers. To address these concerns, Patel and his coauthors call for a unified and scalable SOTIF approach that can better incorporate human factors.
  • In an article in the European Transport Research Review, David Fernández Llorca of the European Commission’s Joint Research Centre and several coauthors argue that integrating AI into AVs requires regulators to reconsider how the EU AI Act aligns with motor-vehicle safety rules. The Llorca team explains that the Act classifies certain AI systems as “high risk,” triggering strict risk management and transparency obligations, including for AI used as a safety component in AVs. Yet Llorca and his coauthors note that AVs blur the line between operational and safety components, making it difficult to determine which AI systems qualify as high-risk. The Llorca team urges clearer definitions so that developers know which systems must meet heightened requirements and can strengthen compliance.
  • In a recent article, Tina Sever of the University of Ljubljana and Giuseppe Contissa of the University of Bologna argue that current automated driving regulations in the United States and several European countries have not kept pace with the shift toward fully autonomous vehicles. Sever and Contissa find that most existing regimes assume a human driver as the primary decision-maker, creating uncertainty around responsibility, system failure, and compliance when no driver is present. Sever and Contissa conclude that effective AV regulation requires modernizing statutory definitions of driving-related concepts, clarifying liability among manufacturers, software providers, and operators, and ensuring international uniformity of driving safety standards.
  • In a research paper, Anton Kuznietsov of the Technical University of Darmstadt and several coauthors explain that deep-learning-based driving systems often function as black boxes, limiting transparency and undermining public trust. The Kuznietsov team describes how explainable AI can help developers understand AI behavior, detect failures, and assess safety. Kuznietsov and his coauthors propose a framework to integrate explainable AI across the three core components of an autonomous driving system: perception, how the AV interprets its surroundings; planning, how it selects appropriate actions; and control, how it executes those actions. Their unified approach would help ensure that each stage of autonomous driving is certifiable and understandable, the Kuznietsov team contends, as the sketch below suggests.
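
To illustrate the perception–planning–control decomposition that the Kuznietsov team describes, the following Python sketch pairs each stage’s output with a plain-language rationale, the kind of stage-level explanation that explainable-AI tooling aims to surface. It is a minimal, hypothetical toy for illustration only: every function name, data field, and threshold below is an assumption of this sketch, not the framework proposed in the paper.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Explained:
        """A stage's output paired with a human-readable rationale."""
        output: Any
        rationale: str

    def perceive(sensor_frame: dict) -> Explained:
        # Hypothetical perception stage: interpret the surroundings.
        distance = sensor_frame.get("lidar_min_distance_m", float("inf"))
        obstacle = distance < 10.0
        return Explained(
            {"obstacle_ahead": obstacle},
            f"Nearest lidar return at {distance} m; obstacles are flagged within 10 m.",
        )

    def plan(world: Explained) -> Explained:
        # Hypothetical planning stage: choose an action and record why.
        action = "brake" if world.output["obstacle_ahead"] else "cruise"
        return Explained(
            action,
            f"Selected '{action}' because obstacle_ahead={world.output['obstacle_ahead']}.",
        )

    def control(decision: Explained) -> Explained:
        # Hypothetical control stage: map the chosen action to actuator commands.
        commands = {
            "brake": {"throttle": 0.0, "brake": 0.8},
            "cruise": {"throttle": 0.3, "brake": 0.0},
        }
        command = commands[decision.output]
        return Explained(command, f"Mapped '{decision.output}' to {command}.")

    if __name__ == "__main__":
        percept = perceive({"lidar_min_distance_m": 7.2})
        decision = plan(percept)
        actuation = control(decision)
        for stage, result in (("perception", percept), ("planning", decision), ("control", actuation)):
            print(f"{stage}: {result.rationale}")

In a design of this kind, a reviewer could in principle audit the rationale emitted at each stage rather than the model’s internals, which is the intuition behind making each step of the pipeline separately certifiable.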

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.