Redesigning Automated Legal Guidance

Agencies should take care when using chatbots, virtual assistants, and preset menus to share legal information.

Federal agencies perform many functions. One of these is to help members of the public understand and apply the law. Increasingly, agencies provide this aid through automated legal guidance tools, such as chatbots, virtual assistants, and other automated systems.

For instance, when individuals have questions about their immigration status, they can turn to Emma, the U.S. Citizenship and Immigration Services’ computer-generated virtual assistant. When they have questions about their student loans, they can ask Aidan, the U.S. Department of Education’s virtual assistant for federal student aid. And when they have questions about how personal and business activities affect their U.S. federal tax liability, they can consult the Internal Revenue Service’s Interactive Tax Assistant.

There are a number of explanations for federal agencies’ increasing use of automated legal guidance tools. Under the Plain Writing Act of 2010, agencies are required to communicate complex legal rules and procedures to the public in “plain language.” Yet the formal law—comprising statutes, regulations, and judicial decisions—is often so complex that it is difficult, if not impossible, for most members of the public to comprehend.

Agencies also often lack the resources to explain legal matters entirely through human customer service representatives. Furthermore, agencies face pressure to provide service comparable to that of the private sector, where automated customer service tools have become commonplace. Automated tools appear to help agencies respond to these pressures by translating complex legal material into more accessible terms.

As a result, several federal agencies are now using automated guidance tools to respond to tens of millions of inquiries about the law each year. Other agencies are considering introducing these tools as a supplement to, or replacement for, human customer service representatives. Despite the breadth of this shift, scholars who have studied technology and artificial intelligence at government agencies have not focused on agencies’ use of automation to explain the law.

To address the growing use of automated legal guidance tools by federal agencies, the Administrative Conference of the United States (ACUS) charged us with examining federal agency use of automated legal guidance and offering recommendations for reform.

In our study, we reviewed the use of automated guidance tools by all federal agencies and conducted in-depth research into two principal models of automated legal guidance. The first of these—a decision tree “answer” model—requires users to click through topics online to find answers to their questions. In contrast, the second model—the natural language “sorting” model—allows users to type their questions in plain English and then uses artificial intelligence to sort the questions into categories and provide corresponding information.
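
To make the distinction concrete, here is a minimal sketch of the two models in Python. The topics, menu options, keywords, and canned answers are invented for illustration, and the simple keyword matcher stands in for the trained classifiers that real natural language tools use; nothing here reflects any agency’s actual implementation.

```python
# Illustrative sketch only: invented topics and answers, not any agency's tool.

# Decision tree "answer" model: the user clicks through a fixed menu of
# topics until reaching a prewritten answer at a leaf of the tree.
DECISION_TREE = {
    "question": "What do you need help with?",
    "options": {
        "filing status": {
            "question": "Were you married on the last day of the year?",
            "options": {
                "yes": "You may be able to file a joint return.",
                "no": "You may qualify as single or head of household.",
            },
        },
        "deductions": "See the standard deduction amounts for your filing year.",
    },
}

def walk_tree(node, choices):
    """Follow a sequence of menu clicks down to a canned answer."""
    for choice in choices:
        node = node["options"][choice]
        if isinstance(node, str):   # reached a leaf: a fixed answer
            return node
    return node["question"]        # more clicks needed

# Natural language "sorting" model: free-text input is classified into a
# category, and the tool returns that category's prewritten information.
CATEGORY_KEYWORDS = {
    "filing status": {"married", "single", "spouse", "household"},
    "deductions": {"deduction", "deduct", "expense", "expenses"},
}
CATEGORY_ANSWERS = {
    "filing status": "Here is general information about filing status...",
    "deductions": "Here is general information about deductions...",
}

def sort_question(text):
    """Assign a free-text question to the best-matching category."""
    words = set(text.lower().replace("?", "").split())
    best = max(CATEGORY_KEYWORDS, key=lambda c: len(words & CATEGORY_KEYWORDS[c]))
    if not words & CATEGORY_KEYWORDS[best]:
        return "Sorry, I don't understand. Could you rephrase your question?"
    return CATEGORY_ANSWERS[best]

print(walk_tree(DECISION_TREE, ["filing status", "yes"]))
print(sort_question("Can I deduct my home office expenses?"))
```

Note that both paths end at prewritten text: the two models differ in how a user reaches an answer, not in how the answer itself is produced, which is one reason both can oversimplify the underlying law in the ways described below.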

We explored how these different models of automated legal guidance give answers that dovetail with or deviate from the underlying law. To learn how agency officials themselves think about such tools, we also interviewed federal agency officials who have direct responsibility for well-developed automated legal guidance tools in use by the federal government, or supervisory responsibility for guidance in the agencies that have developed such tools. In addition, we interviewed U.S. Government Accountability Office officials who work with agencies to develop such tools.

We found that automated legal guidance tools offer many benefits to agencies and to the public. They allow agencies to respond to public inquiries more efficiently than human customer service representatives, help the public navigate complex legal regimes, and, for certain inquiries, provide accurate responses based on the underlying law. Automated legal guidance tools also allow agencies to reveal their views to the public in an easily accessible format.

But automated legal guidance tools also have drawbacks. In their attempt to offer simple explanations about complex law to the public, automated legal guidance tools can give advice that deviates from the underlying law. This outcome occurs in both decision tree “answer” models and natural language “sorting” models. For instance, both models can portray unsettled law as unambiguous, add administrative gloss to the law, and omit discussion of statutory and regulatory exceptions and requirements. These deviations can mislead the public about how the law applies to their personal circumstances.

At present, agencies’ automated legal guidance tools also give users little notice of the underlying laws on which the guidance relies, and few or no warnings about the guidance’s limited legal authority or users’ inability to rely on it as a legal matter. We also found that no federal agency publishes an archive of changes made to its automated legal guidance.

The potential for automated legal guidance to mislead members of the public, coupled with the inability of the public to rely on such guidance in a meaningful way, may exacerbate equity gaps between members of the public who have access to reliable advice through legal counsel and those who do not.

Interviews with federal agency officials also revealed that they are not adequately apprised of some of these drawbacks. We heard little concern from officials about reliance issues because they took the position that members of the public were not, or should not be, relying on automated legal guidance. Officials held this view in part because they regarded automated legal guidance as merely providing “information” rather than serving as a source of law. This reaction was common even though millions of individuals turn to automated legal guidance each year to obtain answers about the law from federal agencies.

Automated legal guidance has an important role to play in advising members of the public about the law and, in any event, will be used by agencies in the future to explain the law to the public. Agencies must, however, be mindful of the potential drawbacks of such guidance, especially as uses of automated legal guidance expand.

Based on our report, the full ACUS assembly adopted 20 recommendations earlier this year on agency use of automated legal guidance in the following topic areas: design and management, accessibility, transparency, and reliance.

These recommendations include, among other things, a call for agencies to consider whether and when a user’s good faith reliance on guidance from automated legal guidance tools should serve as a defense against penalties for noncompliance. The recommendations also encourage agencies to allow users to obtain a written record of their communications with automated legal guidance tools, including date and time stamps.
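
As a rough illustration of the written-record recommendation, the sketch below logs each exchange in a session with an ISO 8601 date and time stamp and exports a plain-text transcript. The class and field names are hypothetical, invented for this example rather than drawn from any agency system.

```python
# Hypothetical sketch of a timestamped session transcript a user could
# download; names and structure are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TranscriptEntry:
    speaker: str       # "user" or "assistant"
    text: str
    timestamp: str     # ISO 8601 date and time stamp

@dataclass
class SessionTranscript:
    session_id: str
    entries: list = field(default_factory=list)

    def record(self, speaker: str, text: str) -> None:
        """Append one exchange, stamped with the current UTC date and time."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.entries.append(TranscriptEntry(speaker, text, stamp))

    def export(self) -> str:
        """Return a plain-text record the user can keep as documentation."""
        return "\n".join(
            f"[{e.timestamp}] {e.speaker}: {e.text}" for e in self.entries
        )

session = SessionTranscript(session_id="demo-123")
session.record("user", "Do I qualify for head of household status?")
session.record("assistant", "Here is general information about filing status...")
print(session.export())
```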

In addition, agencies should explain the limitations of the advice that users receive when the underlying law is unclear or unsettled. To the extent practicable, agencies should provide access through automated legal guidance tools to the legal materials underlying the tools, including relevant statutes, rules, and judicial or adjudicative decisions. More generally, agencies should design and manage automated legal guidance tools in ways that promote fairness, accuracy, clarity, efficiency, accessibility, and transparency.

It is critical that agencies follow these best practices when implementing automated legal guidance tools. As our study revealed, automated legal guidance can enable agencies to convey complex law to the public efficiently. But it may also cause the government to present the law as simpler and clearer than it is—a phenomenon that current agency practices threaten to make worse, including by making automated legal guidance seem more personalized than it is.

In the end, our report and the ACUS recommendations provide agency officials with a guide to maximize benefits and minimize costs as they introduce automated legal guidance to help members of the public learn about and follow the law.

Joshua D. Blank

Joshua D. Blank is a professor and director of strategic initiatives at the University of California, Irvine School of Law.

Leigh Osofsky

Leigh Osofsky is a professor and associate dean for research at the University of North Carolina School of Law.

This essay is part of a three-part series on the Administrative Conference of the United States, entitled Using Technology and Contractors in the Administrative State.