Assessing Algorithms for Public Good

Requiring algorithmic impact assessments would promote responsible decision-making and inform future policies.

This summer, the Biden Administration announced that leading AI developers, including Google, Meta, and OpenAI, agreed to minimum safeguards to promote safe and trustworthy AI. This announcement shows that both the public and private sectors are invested in understanding AI’s vulnerabilities and how to address them.

So far, algorithmic accountability has occurred—to mixed effect—through investigative research and reporting, voluntary standards, scattered audits, government enforcement actions, private litigation, and even hacking. But a comprehensive regulatory regime has yet to emerge, in part because it is challenging to design rules that can account for the wide variety of AI technologies and contexts that present different levels of risk.

At the same time, doing nothing is increasingly untenable for regulators. The dangers of AI are easy to see: AI systems can lead to people being denied a home loan, a job opportunity, or even their freedom. And the recent release of ChatGPT and other generative AI tools has heightened public concern about AI’s potential to undercut or eliminate jobs.

At this stage of AI development, algorithmic impact assessments (AIAs) can be a key tool to promote accountability and public trust in algorithmic systems while maintaining flexibility to support innovation.

The U.S. Congress has expressed interest in AIAs, proposing legislation to require them since at least 2019, most recently last summer in a bipartisan, bicameral data protection bill. AIAs would compel AI developers to consider and document key aspects of an algorithmic system before releasing it into the world.

Requiring AIAs in the United States would be a timely and politically viable intervention because it would support two important goals: responsible decision-making in organizations and informed policymaking in government.

First, AIAs can help organizations make responsible AI decisions by providing a structure for gathering information about social impacts during the technical design process.

Historically, AI developers worked separately from auditors and ethicists and often did not consider fairness, accountability, and transparency until late in the development process, if at all. That practice has begun to shift. AIAs would reinforce this shift toward earlier consideration of AI risks by requiring developers to document a new system’s purpose, training data, input data, outputs, risks, and steps to mitigate those risks. A follow-up evaluation can also occur after deployment to account for any new impacts from technology updates or changing environmental conditions.
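To make those documentation fields concrete, here is a minimal sketch of how an organization might structure an AIA record internally. The class, field names, example values, and release gate are illustrative assumptions, not drawn from any statute or legislative proposal.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    """Hypothetical internal record of the facts an AIA might require."""
    system_name: str
    purpose: str        # what the system is intended to do
    training_data: str  # provenance and description of the training data
    input_data: str     # what the deployed system consumes
    outputs: str        # the decisions or predictions the system produces
    risks: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> mitigation step

    def ready_for_release(self) -> bool:
        """A simple gate: all fields are filled and every risk maps to a mitigation."""
        filled = all([self.purpose, self.training_data, self.input_data, self.outputs])
        return filled and all(risk in self.mitigations for risk in self.risks)

# Invented example values, for illustration only.
aia = AlgorithmicImpactAssessment(
    system_name="loan-screening-model",
    purpose="rank mortgage applications for manual review",
    training_data="ten years of anonymized application records",
    input_data="applicant financial history",
    outputs="priority score from 0 to 100",
    risks=["disparate impact on protected groups"],
    mitigations={"disparate impact on protected groups": "quarterly fairness audit"},
)
print(aia.ready_for_release())  # True: every field filled, every risk mitigated
```

The same record could be re-validated after deployment, with the risks and mitigations updated as the system or its environment changes, which is the post-deployment evaluation described above.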

From the outset, organizations should also consider potential uses, capabilities, and downstream effects unrelated to the AI system’s intended purpose. Meta’s platforms, for example, were built to support social connection, but they have led to documented instances of damage to public safety, mental health, and democracy.

Current law, most notably Section 230 of the Communications Decency Act, immunizes platform companies such as Google and Twitter from liability for many harms associated with the use of their platforms, making it important to find other ways to promote greater company responsibility in AI. Familiar and well-documented harms from AI over the past two decades should lead companies to consider downstream effects and make responsible AI design choices.

Second, AIAs could support transparency that informs future policymaking. When organizations must document critical design decisions and publish the results, regulators and policymakers can learn more about how algorithmic systems are designed and what risks they pose.

To make AIAs informative for policy, regulators will need to establish clear definitions and standards. Even what counts as “artificial intelligence” or an “automated decision-making system” remains under debate. Regulators must adopt a definition broad enough to cover the relevant technologies and use cases but narrow enough to avoid burdening innovation and small businesses.

Regulators will also need to define which impacts companies must look for and track. That means deciding what kinds of impacts matter and which interests AIAs should help protect.

For instance, Canada already requires AIAs for governmental AI use, in a questionnaire format that highlights risks related to human rights, health, economic interests, and sustainability. The United States offers a similar voluntary AIA questionnaire intended to help evaluate the disproportionate impacts of new technology systems on different groups, as well as any impacts on rights, freedoms, economic interests, health, well-being, healthcare, and the environment.
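Canada’s tool works by scoring questionnaire answers and mapping the total to a tiered impact level. That general pattern can be sketched in a few lines; the questions, weights, and thresholds below are invented for illustration and do not reproduce the actual Canadian instrument.

```python
# Sketch of a questionnaire-to-risk-level scoring pattern, loosely modeled on
# Canada's Algorithmic Impact Assessment tool. All values here are hypothetical.
ANSWERS = {
    "affects_human_rights": True,
    "affects_health": False,
    "affects_economic_interests": True,
    "affects_environment": False,
}
WEIGHTS = {
    "affects_human_rights": 4,
    "affects_health": 3,
    "affects_economic_interests": 2,
    "affects_environment": 2,
}

def impact_level(answers: dict[str, bool], weights: dict[str, int]) -> int:
    """Sum the weights of 'yes' answers and bucket the total into a level 1-4."""
    score = sum(weights[q] for q, yes in answers.items() if yes)
    for level, threshold in enumerate((2, 5, 8), start=1):
        if score <= threshold:
            return level
    return 4

print(impact_level(ANSWERS, WEIGHTS))  # -> 3 with the sample answers above
```

In a real regime, the weights and thresholds would themselves be policy choices, since they encode which interests count most and how much risk triggers stricter oversight.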

Once an AIA is complete, it can provide public transparency about the AI system and how it works. Some AIA proposals require less detailed answers but call for full publication of the resulting documentation. Others require organizations to provide detailed answers in confidence to protect trade secrets, while still requiring public notice of the existence of AI systems that may affect people’s lives. Regulators must weigh these tradeoffs when designing AIAs.

Ultimately, the time has come for the government to act on the risks of AI. Algorithms already shape and determine human experiences, and their effects on our lives will only grow as organizations overcome earlier restraints and race to deploy algorithmic systems that affect millions of people.

Although AIAs are not a silver bullet, not requiring them is no longer an option. The United States should develop a comprehensive AI accountability regime that pairs AIAs with other regulatory interventions, such as watermarks, consumer-friendly labels, and pre-market reviews. Such a regime would be an important, viable step toward controlling AI risks and generating the information that supports responsible AI decision-making. Regulatory action is necessary now to protect the public.