Regulating Artificial Intelligence Across Borders

Lee Tiedrich discusses both global and domestic regulatory approaches to general-purpose AI.

In a discussion with The Regulatory Review, Lee Tiedrich offers her perspective on the evolving international and domestic landscapes of artificial intelligence (AI) regulation.

Tiedrich identifies some of the challenges facing policymakers in the United States and abroad, including at international organizations, who are tasked with developing AI policies. She contrasts the United States' deregulatory approach with the European Union's more prescriptive AI legal framework. She is optimistic about AI's potential to drive rapid and remarkable scientific progress across sectors, while recognizing the need to address the risks posed by AI, particularly general-purpose AI models.

Lee Tiedrich serves as a senior advisor and contributor to the International Scientific Report on the Safety of Advanced AI. She also co-chairs the Global Partnership on Artificial Intelligence’s Responsible AI Strategy for the Environment and Intellectual Property committees.

She is the inaugural AI interdisciplinary fellow at the University of Maryland and maintains a faculty appointment at the Duke Pratt School of Engineering. Tiedrich spent nearly thirty years in the private sector at Covington & Burling LLP, where she led the firm's global AI initiative. She also served as a Senior AI Advisor at the National Institute of Standards and Technology. Tiedrich holds both a law degree from the University of Pennsylvania Carey Law School and a degree in electrical engineering from Duke University.

The Regulatory Review is pleased to share the following interview with Lee Tiedrich.

The Regulatory Review: In your experience working both in the United States and with multilateral organizations, what are the biggest challenges that policymakers face when deciding whether and how to regulate AI?

Tiedrich: AI has tremendous promise to enhance society’s prosperity and security and to advance social good. But if not developed or deployed appropriately, AI can cause tremendous harm. Policymakers face the grand challenge of developing regulatory approaches that enable AI innovation and unlock its benefits while safeguarding against harms and risks. AI’s rapid development and myriad applications—including in health care, government, agriculture, and entertainment—compound the challenge. The regulatory approach should anticipate technological change and reflect that AI’s benefits and risks typically vary among use cases. To develop suitable and enduring approaches, policymakers should consider the role of both hard law, such as statutes and regulations, and soft law, including standards, guidelines, and tools, which typically can adapt more nimbly to address new developments.

Unsurprisingly, policymakers across jurisdictions have responded differently. This increases the need for cooperative efforts that can effectively encourage sufficient regulatory interoperability to enable responsible AI products and services to flow across borders.

TRR: The European Union adopted the Artificial Intelligence Act in 2024. What do you see as its strengths and weaknesses? What are the main challenges facing European regulators in implementing this law?

Tiedrich: In terms of strengths, the Act has a tiered approach that regulates AI systems based on their level of risk. It bans a limited set of AI systems that pose unacceptable risks. For high-risk AI systems, it imposes pre-market and post-market regulations. It also establishes transparency obligations for lower-risk AI systems, such as chatbots, so users know when they are interacting with AI. The Act includes obligations for general-purpose AI systems and does not regulate minimal- and no-risk systems.

The Act's implementation challenges highlight some of its weaknesses. The Act's required technical standards have not yet been finalized. Also, many stakeholders, including some small and medium-sized businesses, have criticized the AI Act as too burdensome. In response, the EU recently introduced the Digital Omnibus AI Proposal, seeking to delay the effective date of some of the Act's provisions and streamline certain requirements.

TRR: How would you compare the approach that the United States has taken with respect to AI regulation with the approach that the EU has pursued?

Tiedrich: In contrast to the EU, the United States has focused on maintaining its AI dominance to advance economic competitiveness, national security, and human flourishing through a "minimally burdensome" AI framework. This is reflected in several recent executive orders and other executive branch actions, including the AI Action Plan, which underscores the importance of winning the AI race. The plan contains three pillars: accelerating AI innovation; building AI infrastructure; and leading in international AI diplomacy and security. This deregulatory approach seeks to "dismantle unnecessary" federal AI regulations and streamline permitting for data centers, energy production, and other infrastructure. It also aims to limit state AI regulation, including through judicial challenges, facilitate the export of the U.S. AI tech stack, and prevent AI misuse by malicious actors.

These policies contemplate many actions, including building an AI evaluation ecosystem, supporting the workforce, and accelerating government adoption of truthful and ideologically unbiased AI systems.

TRR: The International AI Safety Report is the result of the collaboration of over 100 AI experts and represents the largest global collaboration on AI safety to date. As one of the participating experts, could you describe the process of creating the report?

Tiedrich: At the 2023 International AI Safety Summit, participants acknowledged the rapid development of frontier AI and that its risks were not sufficiently understood. Consequently, the 30 participating nations agreed to collaboratively develop an independent and inclusive report examining advanced AI risks. Turing Award winner Yoshua Bengio chairs this UK-commissioned project. I am a senior adviser, along with Nobel Prize winner Geoffrey Hinton and others. There is also a writing team and an expert advisory panel appointed by the nations and multilateral organizations participating in the 2023 summit.

Following the release of the July 2024 interim report, the first International AI Safety Report was published in January 2025 and presented at the Paris AI Action Summit. The report focuses on what general-purpose AI can do, its risks, and mitigation techniques. Because AI advances so quickly, key report updates were published later in 2025. The 2026 International AI Safety Report was released in February 2026. I am participating in a panel at the AI Impact Summit in India launching the new report.

TRR: The EU’s Artificial Intelligence Act and the International AI Safety Report both focus on general-purpose AI. What is general-purpose AI? What types of systems fall within and outside of that category?

Tiedrich: As explained in the AI Safety Report, general-purpose AI refers to AI that can perform a range of tasks, such as generating computer code, text, video, or images across different subject areas. The Act embraces this concept and highlights the generality of these systems. The Act imposes obligations on all general-purpose AI systems, with additional requirements for those posing systemic risk.

General-purpose AI systems include large language models, such as ChatGPT and Gemini; image generators, such as Stable Diffusion; video generators, such as Sora; and sector-specific applications that perform many functions, such as AlphaFold.

General-purpose AI differs from “narrow AI,” which is tailored for a specific domain or function. Narrow AI includes AI designed specifically to detect financial fraud, AI-enabled grammar checkers, such as Grammarly, AI-enabled resume screening tools, and spam filters.

TRR: Why has AI, and more specifically general-purpose AI, emerged as such a central focus of policymaker efforts?

Tiedrich: This focus reflects policymakers’ urgency to solve the grand challenge to unlock AI innovation and realize its benefits while mitigating risks, all in a manner that reflects their priorities and norms. Since AI has developed far faster than the law, time remains of the essence.

This task demands tremendous attention given the stakes. AI could help achieve the UN Sustainable Development Goals, significantly grow the economy, and enhance global security. But this necessitates navigating many issues. From an infrastructure perspective, it requires adequate access to chips, data centers, energy, and other resources. Many countries embrace "AI sovereignty" to try to prevent over-reliance on foreign capabilities. AI also needs data, which raises privacy, cybersecurity, intellectual property, and other issues. Other concerns include protecting the workforce, preventing unlawful concentrations of market power, increasing AI literacy, research, and talent, and protecting people's rights and well-being. General-purpose AI presents added challenges given its many applications.

TRR: The AI Safety Report describes many of the risks associated with general-purpose AI, including scams, child sexual abuse material, privacy violations, and non-consensual intimate imagery. To what extent do you think that these and other risks can be effectively addressed in the United States under existing laws?

Tiedrich: The United States lacks broad federal privacy and AI legislation, although it criminalizes child sexual abuse material and has some narrower federal AI laws, including the Take It Down Act addressing non-consensual intimate imagery. When AI harms arise and no such laws apply, parties can try to pursue redress under state laws or federal laws not specific to AI, including anti-discrimination laws.

This situation presents challenges. The lack of uniformity among state laws can increase difficulties when enforcing rights across jurisdictions. It also raises the cost and complexity of compliance, motivating the executive branch’s efforts to limit state AI regulations.

Uncertainty exists on how to interpret some federal laws in AI contexts. For example, pending lawsuits question whether copyright law’s fair use exception permits the scraping of third-party data without consent for AI model training. Although there are some court decisions, uncertainty still lingers. Views differ about whether policymakers should provide more clarity.

TRR: Is there a role for tech companies to play in informing or supporting AI regulation and policy in the US and more broadly? If so, what is it?

Tiedrich: Stakeholders, including tech companies, civil society, and academia, have important roles. They can respond to requests for public input as policymakers consider legislation, regulation, or other actions, including the AI Action Plan's implementation.

There are other ways to engage, too. Several tech companies are providing discounted AI services to the federal government through OneGov deals to help implement the AI Action Plan and expedite government AI adoption. At least two dozen companies have entered into government collaborations to support the Genesis Mission to leverage AI for science, national security, and energy innovation. Companies can also participate in the American AI Export Program for U.S. AI technology.

Some tech companies have implemented voluntary codes of practice, such as the EU AI Codes of Practice and the G7 Hiroshima AI Code of Conduct and reporting framework, which further support AI policy. Companies can also lead through self-regulation as the AI policy landscape evolves.

TRR: Before becoming a lawyer, you obtained a degree in electrical engineering. How has your engineering background shaped the way that you approach legal and regulatory challenges around AI governance?

Tiedrich: My engineering background reinforces my approach of pursuing multi-disciplinary solutions to AI governance challenges. For AI policy to achieve its goals, it must be capable of being operationalized across many organizations and must work in practice. Developing these solutions requires the legal and policy community to coordinate closely with technical and other experts. Oftentimes, these approaches include developing technical standards or other guidance, which can help inform, implement, or interpret laws or regulations applicable to AI.

My multi-disciplinary background, including my U.S. National Institute of Standards and Technology experience, helps me facilitate this critical and often challenging collaboration. Technical experts frequently approach issues differently than legal and policy experts. Nomenclature can vary across disciplines, too. It requires significant effort to convene different experts, forge common understandings, and craft holistic AI governance solutions. My decades of experience across sectors and disciplines—including engineering, public policy, and law—helps me unite diverse experts and address these challenges.

Our Spotlight interview with Lee Tiedrich was conducted on February 11, 2026.