Recent Developments in Artificial Intelligence Law and Policy

A House bill on a national artificial intelligence initiative joins a wave of related AI developments.

Editor’s Note: Each week in our Saturday Seminar, members of The Regulatory Review staff highlight important papers, reports, or commentary on a specific regulatory topic. This week’s Saturday Seminar, however, features guest contributors Lee Tiedrich, B.J. Altvater, and James Yoon of Covington & Burling LLP, who discuss recent changes in artificial intelligence regulation.

***

Members of the House Committee on Science, Space, and Technology introduced the National Artificial Intelligence Initiative Act (NAIIA) on March 12, 2020. Chairwoman Eddie Bernice Johnson (D-Texas) and Ranking Member Frank Lucas (R-Okla.) sponsored this bipartisan bill with Representatives Jerry McNerney (D-Calif.), Pete Olson (R-Texas), Dan Lipinski (D-Ill.), and Randy Weber (R-Texas). This development should interest the growing number of digital-health companies and other entities seeking to capitalize on the benefits of artificial intelligence (AI).

The NAIIA is a new effort to increase coordination among U.S. agencies to support AI innovation. The bill sets forth a light-touch approach to AI by, for example, encouraging the development of voluntary standards for AI trustworthiness and for risk assessment. This approach to regulation is consistent with that set forth in President Donald J. Trump’s Executive Order, “Maintaining American Leadership in Artificial Intelligence.” It also is aligned with the Trump Administration’s draft guidance document, “Regulation of Artificial Intelligence Applications” (Draft AI Guidance), which sets forth 10 high-level principles for agencies to consider when proposing new regulatory or non-regulatory approaches to private sector use of AI technology. Eighty-one individuals and entities submitted comments on the Draft AI Guidance last month. A recently released annual report from the Trump Administration provides a comprehensive overview of how the Draft AI Guidance and other efforts fit into the broader national AI strategy.

Although the NAIIA and the Trump Administration have taken a light-touch approach to AI regulation, other legislative proposals at both the state and federal levels, such as California’s Automated Decision Systems Accountability Act of 2020, take a more proactive approach. In addition, on April 8, 2020, the Federal Trade Commission (FTC) released a blog post, “Using Artificial Intelligence and Algorithms,” that, among other things, highlights existing FTC guidance applicable to AI and algorithms and sets forth five principles for using AI and algorithms.

The United States is not the only jurisdiction considering how to regulate AI. For example, the European Commission recently released a white paper that outlines a proposed framework for regulating AI. The white paper reflects European Commission President Ursula von der Leyen’s commitment, made at the launch of the new Commission, to develop a legislative proposal for a “coordinated European approach to the human and ethical implications of AI.”

Summary of NAIIA’s Key Provisions

The NAIIA would support AI innovation in the United States through a variety of means, including federal funding. Some key elements of the bill include:

  • Establishing the National Artificial Intelligence Initiative, which would—in part—support AI research and development, educational programs, agency coordination, public-private partnerships, and the creation of standards. The Initiative would be housed within the Office of Science and Technology Policy.
  • Requiring the Director of the Office of Science and Technology Policy to establish an interagency committee with representatives from the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and the U.S. Department of Education, among other agencies. The committee would oversee the planning, management, and coordination of the National Artificial Intelligence Initiative.
  • Establishing a National Artificial Intelligence Advisory Committee composed of members from academia, civil society, and other key stakeholders to advise on the National Artificial Intelligence Initiative.
  • Directing NSF to fund a study on AI’s impact on the workforce across sectors and to provide recommendations on research gaps that, if addressed, could yield better information about those impacts.
  • Directing NSF to create and oversee a network of AI research institutes, with every agency given the authority to award financial assistance to establish and support such institutes.
  • Authorizing $4.8 billion (over fiscal years 2021 through 2025) for NSF to award grants for AI-related research and education activities.
  • Authorizing $391 million (over fiscal years 2021 through 2025) for NIST to develop voluntary standards for trustworthy AI systems, establish a risk assessment framework for AI systems, and develop guidance on best practices for public-private data sharing.
  • Authorizing $1.2 billion (over fiscal years 2021 through 2025) for the Education Department to award grants for education programs and research on AI systems, including research on societal, ethical, safety, education, workforce, and security implications of AI systems.

Recent Federal, State, and Local Efforts to Regulate AI

The introduction of the NAIIA follows other federal and state legislative proposals and local ordinances addressing AI, some of which are summarized below. Some of these proposals take a light-touch approach to regulation, similar to the NAIIA, while others set forth a more prescriptive approach. With Congress focused on the COVID-19 crisis and the upcoming national elections, AI legislation is a lower congressional priority for now. Bills such as the NAIIA, however, provide insight into how the debate might take shape after the crisis and the upcoming election.

  • The Artificial Intelligence Initiative Act would “establish a coordinated federal initiative to accelerate AI research and development.”
  • The Growing Artificial Intelligence Through Research Act would direct the President to establish and implement the “National Artificial Intelligence Initiative” to create a “comprehensive…research and development strategy” and increase “coordination among federal agencies.”
  • The AI in Government Act of 2019 would create an “AI Center of Excellence” to advise and “promote the efforts of the federal government in developing innovative uses of AI” and also require the Director of the Office of Management and Budget to issue guidance to federal agencies on developing AI governance plans.
  • The Consumer Online Privacy Rights Act would require companies to conduct annual impact assessments if they engage in algorithmic decision-making, or assist others in such decision-making, for three purposes: to determine eligibility for “housing, education, employment, or credit opportunities;” to facilitate advertising for these opportunities; or to “determine access to, or restrictions on the use of, any place of public accommodation.”
  • The Data Protection Act of 2020 would create a new federal agency tasked with, among other things, requiring impact assessments of “high-risk data practices,” including “systematic or extensive evaluation of personal data that is based on automated processing…on which decisions are based that produce legal effects concerning an individual.” The proposed new agency also would regulate “consumer scoring” and other practices that determine consumer eligibility for rights, benefits, or privileges in certain contexts such as employment.
  • A California bill would require the Secretary of the Government Operations Agency to appoint an AI working group that would report to the legislature on “the uses, risks, benefits, and legal implications associated with the development and deployment of artificial intelligence by California-based businesses.”
  • A Washington state bill would direct the chief privacy officer of a public agency to “adopt rules…regarding the development, procurement, and use of automated decision systems by a public agency.”
  • States and municipalities are actively addressing facial recognition technologies by introducing or enacting legislation or ordinances aimed at regulating uses of such technologies.
    • Washington state recently enacted Senate Bill 6280, which creates a legal framework by which agencies may use facial recognition technologies to the benefit of society—for example, by assisting agencies in locating missing or deceased persons—but prohibits uses that “threaten our democratic freedoms and put our civil liberties at risk.”
    • California enacted Assembly Bill 1215, which creates a three-year moratorium on law enforcement agencies’ use of any biometric surveillance system in connection with police-worn body cameras.
    • Maryland’s state legislature recently passed House Bill 1202, which would prohibit the use of facial recognition technologies during job interviews without the applicant’s consent.
    • In addition to these examples of state action, Arizona, Massachusetts, New Hampshire, Vermont, San Francisco, and Somerville are just a few of the other state and local governments that have recently considered regulating facial recognition technologies.

Lee Tiedrich

Lee Tiedrich is a partner at Covington & Burling LLP and co-chair of the firm’s global and multi-disciplinary Artificial Intelligence Initiative.

B.J. Altvater

B.J. Altvater is an associate at Covington & Burling LLP.

James Yoon

James Yoon is an associate at Covington & Burling LLP.

This article is for general information purposes and is not intended to be and should not be taken as legal advice.