Biden’s Artificial Intelligence Legacy

Regulators discuss proposals to promote responsible use of artificial intelligence.

President Biden recently signed Executive Order 14,110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

The order outlines eight key areas of interest for the implementation and regulation of artificial intelligence (AI) across U.S. society and national security, including safety and security, responsible competition, "supporting American workers," the furtherance of equity, strong consumer protections, and the protection of civil liberties.

One major concern highlighted by previous administrations is the so-called mosaic effect, where portions of de-identified or anonymous information can be stitched back together by hackers or criminals—a concern that has also been raised with respect to AI technologies and consumer protections.

Many institutions within the United States already use AI-based algorithms, including in criminal sentencing, education, and employee recruiting. But some authors have argued that a range of pitfalls could emerge as society relies increasingly on AI for decision-making. One scholar, for example, argues that AI "cannot incorporate emotion, morality, or value judgments" into its algorithms. Other scholars have likewise met the idea of a hard shift to quantitative decision-making with skepticism because AI currently lacks the flexibility and sentimentality inherent in human decision-making.

Some scholars, however, have sharply criticized the executive order. Several have commented on its sheer scope and the difficulty of enforcing its various provisions, especially given the shortage of workers with the appropriate subject matter expertise to staff the federal government. Others have noted that the order could create barriers to entry for new companies or workers in fields affected by AI.

Scholars have also discussed issues of agency coordination in implementing the order. One author describes a so-called "regulatory crossfire" that would arise if the entire executive branch implemented the order at the same time, with different departments and independent agencies claiming authority over the same conduct.

Others have pointed out the risks associated with a laissez-faire approach, recommending a path forward where Congress would set “clear guardrails” for agency action while simultaneously allowing for innovation among key companies in the AI field.

As an alternative way of describing the Biden Administration’s approach, Cary Coglianese, a professor at the University of Pennsylvania and director of the Penn Program on Regulation, has called it a “people and processes” or “management approach” to AI governance, according to which both private and public sectors will need to contribute to the oversight of an ever-changing landscape of AI technologies.

In this week’s Saturday Seminar, scholars discuss Executive Order 14,110 and the future of U.S. AI regulations.

  • Executive agencies should act to bring AI use-case data collection and reporting into alignment with federal law, the Government Accountability Office (GAO) argues in a recent report. GAO notes that AI capabilities and government uses of AI are expanding rapidly, and 75% of the agencies GAO surveyed were able to provide comprehensive information on their use of AI. However, inaccurate and incomplete data collection by agencies, the GAO team argues, still hampers the federal government's ability to manage AI usage. Although 10 of the 19 agencies listed in the report have agreed to implement GAO's recommendations, the GAO team urges all agencies to improve their development and implementation of AI by complying with federal law.
  • Members of the U.S. Congress should provide legislative guidance to regulators of foundation AI models, U.S. Marine Corps Judge Advocate Steven Arango recommends in a forthcoming article in the Georgia State University Law Review. Arango explains that foundation models are powerful AI models that can be put to multiple uses and integrated with other AI systems. The versatility of foundation models means they can be repurposed for nefarious uses, Arango asserts. He urges regulators to limit the risk of misuse by restricting widespread access to foundation models and requiring developers to build in technical safeguards preventing malicious uses. Only a proactive legislative response, Arango argues, is capable of effectively containing the risk posed by the rapid development of foundation AI models.
  • In an article published by the Cato Institute, Jennifer Huddleston argues that the best method to regulate emerging AI technology is a "light touch approach," such as the one taken to regulate the Internet. Huddleston contends that the federal government must undertake a significant review of all currently enforced regulations to determine which rules do or do not apply to AI technologies. She also suggests that new regulations should be designed to prevent overlapping regulation by states. Too much state regulation, she argues, will lead to a "patchwork" of laws that stifles innovation in AI technology. Huddleston notes, however, that AI use by state governments would be best regulated by the states themselves.
  • In an article published by The Brookings Institution, Darrell M. West argues for a multi-step approach to ensuring responsible AI use by the federal government. West urges regulators to develop a set of codes outlining ethical standards for AI use, much like those developed to restrict the use of chemical and biological weapons. West also proposes that regulators develop tools to combat biases in AI algorithms that could otherwise present users with "false or dangerous" information. Protecting individuals against AI biases involving sensitive personal information, such as demographic data, should also be a high priority of future regulation, he argues.
  • In a recent Congressional Research Service report, analysts Laurie Harris and Chris Jaikaran discuss Executive Order 14,110. Harris and Jaikaran explain that, in recognition of potential AI risks such as fraud, discrimination, and the spread of disinformation, the order directs over 50 federal agencies to take steps to promote AI safety and security, consumer protection, and privacy, among other areas of concern. For example, the authors note that the order instructs the U.S. Department of Homeland Security to establish an Artificial Intelligence Safety and Security Board and to report on any discovered vulnerabilities of government AI systems. Harris and Jaikaran conclude that these efforts represent a significant step toward regulating the potential harms of AI.
  • In a recent article issued as part of the Texas A&M University School of Law Legal Studies Research Paper Series, Professor Hannah Bloch-Wehba observes that under the current system of algorithmic governance, private technology companies are responsible for algorithms used in many areas of public governance, such as law enforcement, immigration, and national security. The use of algorithms in state-sanctioned activities allows private companies to exert significant power over the lives of ordinary citizens, creating problems of democratic accountability, explains Bloch-Wehba. Democratizing algorithmic governance, she argues, requires increased civilian oversight. Bloch-Wehba concludes that bottom-up solutions, such as alliances across labor and social movements, could become effective tools for solving the democratic accountability problem created by the entanglement of public and private power.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.