The United States Regulates Artificial Intelligence with Export Controls

The United States shifts from unregulated cyberspace to cross-border controls.

With a Biden Administration regulation issued early this year, the United States became one of the first countries to regulate artificial intelligence (AI) through export controls.

The move is consistent with similar efforts by other countries, including Canada, the European Union, the United Kingdom, and Australia, to place limits on where their most advanced technology may be exported. But the U.S. rule is innovative for its specific focus on AI. The Carnegie Endowment for International Peace, a foreign policy think tank, described the rule as an “ambitious act of economic and technological policymaking.”

The U.S. Department of Commerce published the final rule in January 2025, with the aim of enabling U.S. companies to “safely export AI technology abroad,” as described by Michael C. Horowitz at the Council on Foreign Relations.

Broadly, the rule enables U.S. companies to export AI chips and capabilities abroad, and exempts U.S. allies from regulations that limit the quantity of exportable chips. By contrast, under the rule, the United States would not export AI chips to “countries of concern,” meaning countries subject to U.S. arms embargoes. The rule was reinforced by the current Administration when President Donald J. Trump issued an executive order intended to “identify and eliminate loopholes in existing export controls.”

Historically, the regulation of AI in the United States has been limited by the fear, often voiced by industry, that excessive regulation might stifle the technology just as it is taking off.

But, in recent years, the United States has shifted toward controls on emerging technology. Late last year, the Biden Administration issued a final rule amending export controls for semiconductors. Gregory C. Allen at the Center for Strategic and International Studies writes that the 2024 rule aims to prevent China from accessing advanced AI chips and to restrict China’s ability to obtain or domestically produce alternatives.

This year’s Commerce Department rule follows similar measures adopted by the U.S. government to restrict China’s access to artificial intelligence and advanced semiconductor technologies, which has been a government policy focus since 2022.

Digital sovereignty, or “the ability to have control over your own digital destiny,” as Sean Fleming at the World Economic Forum puts it, is an issue of growing importance. As scholars argue, the competition for control over the infrastructure, data, and design of technology reflects broader debates over sovereignty by countries wishing to control their own affairs.

In August of last year, the European Union passed a regulation seeking to “harmonize” the development, commercialization, and use of AI technologies. According to the European Commission, the rules aim to “foster trustworthy AI” in Europe. The regulation classifies AI applications into different risk categories based on their potential impact on individuals and society.

Unlike the U.S. rule issued this year, the European approach does not target particular countries and instead regulates AI applications according to their risk category. Scholars argue that the European regulation has the potential to become a global benchmark for the governance and regulation of AI.

By contrast, China takes a state-run approach, with a top priority to “retain control of information.” China is also encouraging domestic innovation in generative AI. In a report to Congress, the U.S.–China Economic Security Review Commission reported that China is investing in nonstate actors, including corporations, to further its technology development goals and policy objectives. The Commission was established in 2000 to report to Congress on the national security implications of the U.S.–China relationship.

China’s government-led approach to digital sovereignty and regulation of AI has contrasted with the traditional approach in the United States, where advocates for an unregulated cyberspace, especially within big tech, play a dominant role.

But the United States’ new rule limiting the export of AI reflects a fundamental shift in how the United States approaches the regulation of the internet and technology, bringing it closer to a government-led approach.

Google has stated that it builds compliance with regulations into product development and prefers regulations to be stable and predictable. For this reason, whatever approach big players in AI technology take moving forward, companies such as Google would prefer that the approach be coordinated, as noted by the Digital Watch Observatory. The Centre for International Governance Innovation, a Canadian think tank, states that corporations, as non-state actors, may play a role either in new efforts at multilateral cooperation or in standard-setting efforts.

Ultimately, the United States’ unilateral rulemaking effort may push other countries to enact similar measures, pulling the world closer together on AI regulation, as the Brookings Institution points out. But, as scholars argue, universal collaboration on AI regulation remains unlikely.