Examining the New Artificial Intelligence Executive Order

President Biden issues an executive order directing federal agencies to regulate emerging AI technologies.

In the thick of months-long discussions on regulating artificial intelligence (AI), President Joseph R. Biden relaxed one weekend by watching a movie. He picked out the new Mission: Impossible film, which features a rogue AI villain.

Rather than providing a chance for the President to unwind, though, the film reportedly gave President Biden “plenty more to worry about,” according to deputy White House chief of staff Bruce Reed.

Soon after, President Biden issued an executive order focused on ensuring the safety, security, and trustworthiness of AI. The order is the first of its kind concerning the use of AI in the United States.

Building on a set of existing voluntary commitments by firms leading the AI field, the new executive order establishes eight “guiding principles and priorities,” including safety and security, privacy, and advancement of civil rights.

The order focuses on both private industry and the federal government, including research conducted or funded by the government.

The order takes a limited approach to the direct regulation of private industry. Under the order, the U.S. Department of Commerce must establish detailed reporting requirements, and companies developing certain types of AI models must comply with them.

To ensure “responsible and effective” government use of AI, the executive order directs the Office of Management and Budget to develop guidance for agencies on risk management, AI testing, and the mitigation of bias and discrimination in AI models.

Vice President Kamala Harris also explained that the administration seeks to “promote global order and stability” in the regulation of AI. To this end, the order directs the U.S. Department of State and the Commerce Department to “lead an effort to establish robust international frameworks” for managing AI.

The order’s AI safety and trustworthiness agenda focuses on research. For example, it enlists federal agencies, including the Commerce Department, U.S. Patent and Trademark Office, U.S. Copyright Office, and the Federal Trade Commission, to research and develop guidance for AI safety, AI-generated content, and markets trading semiconductors, which provide the processing power necessary for artificial intelligence.

The order also seeks to address equity and civil rights goals by directing various federal agencies, such as the U.S. Department of Labor and the U.S. Department of Housing and Urban Development, to research AI-generated biases and inequities in labor markets and housing markets. For example, if an AI model is trained using data from employers that disproportionately hire men, the model may recommend against hiring women. After conducting research, agencies must develop guidelines and policies to address these issues.

The order also empowers many of these agencies to use their existing rulemaking and enforcement powers to achieve the order’s policy goals. Some practitioners note that this “call for regulatory attention” will affect businesses in many regulated sectors, not just technology firms. An online clothing retailer, for example, could be required to identify and understand the risks associated with its use of AI in email marketing.

Several industry stakeholders responded positively to the executive order. The Vice Chair and President at Microsoft, Brad Smith, praised the order as “another critical step forward in the governance of AI technology.” Another commentator remarked that the issue of AI regulation “has a lot of facets,” and that the executive order is “trying to move all the facets forward.”

Trade association NetChoice, however, called the order a “red tape wishlist.”

Some commentators have critiqued the order for the topics that it does not include. One expert claims that the order “largely ignores” the U.S. Department of the Treasury and other financial regulators. Another stakeholder reports that the order does not cover licensing requirements for advanced models, reporting requirements for training data, or the application of intellectual property law to AI-created works.

For the future of AI regulation, observers are focused on Capitol Hill, where, according to some, congressional action must come next.

Tom Wheeler, a visiting fellow at the Brookings Institution, explained that effective AI oversight will require action exceeding the President’s executive powers. In addition, executive orders are vulnerable to rescission by future presidents, making them less stable than legislation.

President Biden, sharing this concern, expressed a desire for bipartisan legislation in his remarks introducing the order. Many experts, however, are pessimistic about the prospect of federal legislation in the near term because of the existing polarization in the U.S. Congress.

Although the executive order signals some government action on AI, the future of AI regulation in the United States remains uncertain.