Scholars examine the benefits, challenges, and best practices of evidence-based policymaking.
Each day, people’s lives are affected by regulation. But how can government officials know whether their regulatory policies are working? What kind of information do they need to design effective policy interventions?
At a workshop organized by Colleen V. Chien of Santa Clara University School of Law and held in Washington, D.C. earlier this year, a group of policy experts from federal agencies, academic institutions, and non-profit organizations addressed these exact questions. The event was co-sponsored by the Administrative Conference of the United States, the Partnership for Public Service, the Columbia Center for Constitutional Governance, the Santa Clara University High Tech Law Institute, and the Penn Program on Regulation. The workshop sparked important conversations about the benefits of well-designed policy experiments, which allow the effects of regulations and other public policies to be accurately measured and analyzed.
Following the workshop, The Regulatory Review has been pleased to work with Professor Chien to involve leading scholars and practitioners from across the country in this series of essays. Each essay offers insight on how government can conduct meaningful policy pilots and why it should prioritize evidence-gathering and rigorous evaluation of regulatory policy.
The contributors to this series are: Omri Ben-Shahar, a professor at the University of Chicago Law School; Reeve T. Bull, research director of the Administrative Conference of the United States; Colleen V. Chien, a professor at Santa Clara University School of Law; Adam Chilton, a professor at the University of Chicago Law School; Miguel F. P. de Figueiredo, a professor at the University of Connecticut School of Law; Christopher L. Griffin, Jr., a visiting professor and research scholar at the University of Arizona James E. Rogers College of Law; Christian Grose, a professor at the University of Southern California; Jed Herrmann, vice president for state and federal policy implementation at Results for America; Sally Katzen, a professor at New York University School of Law and former administrator of the Office of Information and Regulatory Affairs; John A. List, a professor at the University of Chicago; Aditi Prabhu, attorney-advisor at the U.S. Environmental Protection Agency; Todd Rubin, attorney-advisor at the Administrative Conference of the United States; Neel U. Sukhatme, a professor at Georgetown University Law Center and visiting scholar at the U.S. Patent and Trademark Office; and Abby K. Wood, a visiting professor at the University of Chicago Law School.
November 18, 2019 | John A. List, University of Chicago
In recent years, citizens and lawmakers have become increasingly enthusiastic about the adoption of evidence-based policies and programs. And yet these programs, when expanded, have not always delivered the dramatic societal impacts promised. The entire science-based community, from scholars to funders to policymakers, must join forces to tackle the weakest link in successful evidence-based policy: the scale-up effect.
November 19, 2019 | Colleen V. Chien, Santa Clara University School of Law, and Neel U. Sukhatme, Georgetown University Law Center
We all want policies that work. But too often academics seeking opportunities to study important problems and policymakers seeking analytical muscle to evaluate policies cannot find one another. We propose a way to connect them.
November 20, 2019 | Adam Chilton, University of Chicago Law School
Randomization is a key ingredient of rigorous causal inference. But convincing government agencies to randomize policies does not necessarily ensure that researchers will reach consensus about their effectiveness.
November 21, 2019 | Abby K. Wood, University of Chicago Law School, and Christian Grose, University of Southern California
Rigorous policy evaluation often involves randomization, and both federal and state governments have used randomization for a variety of purposes for approximately two centuries. But in the particular context of random government audits, transparency of process is crucial—especially when non-compliance can have reputational effects.
November 22, 2019 | Christopher L. Griffin, Jr., University of Arizona James E. Rogers College of Law
Regulatory agencies use either clinical or actuarial judgment to set priorities and develop internal policy. Scholars have long debated the relative merits of each. Yet lawyers are less accustomed to probing this distinction. They need to become more aware of it so they can find ways to improve regulatory decision-making.
November 25, 2019 | Sally Katzen, New York University School of Law
There are important competing interests beneath the Paperwork Reduction Act’s incredibly detailed provisions. Too many requests can impose an intolerable burden on the public, and too cumbersome a process for approving those requests in advance can reduce the availability of valuable information for government decision-makers. There is a way forward, but it requires changes from both the agencies and the White House Office of Management and Budget.
November 26, 2019 | Reeve T. Bull, Administrative Conference of the United States
Government regulators can make much more effective use of trial and error than they currently do, learning from existing variations and explicitly designing rules to allow for variation that will promote ongoing improvement.
November 27, 2019 | Aditi Prabhu, U.S. Environmental Protection Agency
Nobody doubts that policy pilots are one way that agencies can collect and analyze data while developing new regulations and examining those that are already on the books. There are, however, barriers unique to the regulatory context that can make successfully developing, implementing, and defending a pilot program challenging for government agencies.
December 2, 2019 | Omri Ben-Shahar, University of Chicago Law School
The cost of IRB expansion is undeniable: more burden on researchers, slowdown of research, fewer studies, and inevitably less progress. I developed a model webtool to screen exempt research that could reduce this burden without increasing risks to subjects.
December 3, 2019 | Todd Rubin, Administrative Conference of the United States
By requiring agencies to create evaluation plans for their programs, the Evidence Act empowers agencies to think rigorously. By publishing these evaluation plans and soliciting input from the public, agencies can refine their research methodologies and develop better-informed regulatory programs.
December 4, 2019 | Colleen V. Chien, Santa Clara University School of Law, and Miguel F. P. de Figueiredo, University of Connecticut School of Law
Although the idea of rigorous evidence for policy has few detractors, a lack of clear policies for evidence can pose real practical obstacles to getting the work done. Rigorous piloting and evaluation in many cases represent a departure from the status quo, and policies that mandate, clear the way, or specify resources for piloting can play a crucial role in evidence-based policy.
December 5, 2019 | Jed Herrmann, Results for America
More widely implementing approaches for building and using evidence would help ensure that the federal government gets the best results for its funds. The alternative, after all, is policy based merely on hunches, inertia, and unproven notions. Better use of evidence means better lives for Americans.
December 6, 2019 | Colleen V. Chien, Santa Clara University School of Law
The embrace of performance-based or results-based policymaking has shifted attention away from policy processes and toward policy outcomes that are discoverable through trial and error or disciplined experimentation. As such, the policy environment for regulatory learning is improving.