By taking three feasible steps, the administration could help build a regulatory culture more supportive of ex post evaluation.
In 2003, Cass Sunstein published a book called The Cost-Benefit State, and in it he argued that policymaking in the United States had transformed into a system that emphasizes the use of economic analysis to inform government decisions before new regulatory policies are adopted. That book appeared a little more than two decades after President Reagan issued Executive Order 12,291, which established the White House review process that continues in largely the same form today, calling upon federal agencies to use cost-benefit analysis to look carefully at significant regulations before adopting them.
Eight years after publishing The Cost-Benefit State, Cass Sunstein found himself heading the White House regulatory review office that oversees agencies’ economic analyses of new regulations. At that time, President Obama issued Executive Order 13,563 directing agencies to apply economic analysis to look back at significant regulations after they have adopted them. The President said that agencies “must measure, and seek to improve, the actual results of regulatory requirements” and he directed agencies to develop plans to evaluate their existing stock of regulations.
My question is simply this: Is there a way to ensure that two decades from now a scholar of Sunstein’s stature will write a book entitled The Evaluation State, arguing that policymaking in the United States has transformed itself still further to emphasize, in addition to conducting economic analysis before adopting new rules, the importance of systematically evaluating regulatory policies after they have been adopted?
The answer is yes. Steps can and should be taken now to move the government toward a policymaking culture in which agencies not only continue to conduct prospective analysis of regulations through a cost-benefit lens, but also take seriously the need for rigorous retrospective review of regulation.
Today, much of the rhetoric around retrospective review focuses on getting rid of unneeded and burdensome rules – the bad regulations. That is certainly one reason for engaging in retrospective review. But it’s only one of three reasons. The second reason is to revise and refine the so-so regulations. Retrospective review helps in determining where and how we could fix things so that regulations that are doing some good do more good and where rules that are doing some bad do less bad. The third reason to support retrospective review, and I think this is underappreciated, is to identify the good regulations. These are the rules that not only need to be kept in place, but perhaps expanded and replicated too. By identifying the good, the bad, and the so-so regulations, retrospective review helps policymakers learn.
Learning calls to mind another mantra about regulation today: the need for smart regulation. Yet if regulation is to grow smarter, that will require more learning. It will require careful study of which policies have worked and which ones haven’t. What are the conditions under which regulations succeed? What are the conditions under which they don’t? In our cost-benefit state, we also need to know more about how accurate all those prospective economic analyses really are. The only way to find out is to look after the fact, see how things actually turned out, and compare the results against the prospective estimates. Smart regulation, in short, depends on retrospective review both to identify the good, bad, and middling regulations and to help us improve our prospective tools.
Unfortunately, retrospective review is undersupplied today. Admittedly, administrations have periodically called upon regulatory agencies to look back at existing regulations and eliminate rules that are outmoded. The Clinton Administration, through its National Performance Review, called upon agencies to look at their existing stock of rules. The second Bush Administration asked members of the public to identify regulations that might be problematic. And so far the Obama Administration has done much the same with what it is calling a regulatory “lookback” initiative.
But each of these initiatives has been ad hoc, episodic, fleeting, and unsystematic. Interestingly, all of these words could have described the state of play with respect to prospective regulatory analysis in the decades prior to President Reagan’s Executive Order 12,291. What changed with the Reagan order was that prospective regulatory review became institutionalized. The Reagan order marked what might be called a cultural transformation when it came to having government agencies look ahead before making decisions. Now we need a similar cultural transformation when it comes to looking back.
Fortunately, that’s a stated goal of the Obama administration. When announcing the administration’s regulatory lookback initiative a few years ago, Cass Sunstein made clear that this would not be “a one-time endeavor.” He said he wanted it to be an effort to build “a regulatory culture of regular evaluation.” Even after issuing Executive Order 13,563, President Obama signed yet another order, Executive Order 13,610, that calls for action to “institutionalize regular assessment of significant regulations.” In addition, the 2013 draft report from OIRA to Congress also calls for creating and fostering a culture of evaluation.
But how can the government do this?
We can’t continue to have government rely just on ad hoc, episodic efforts. We do indeed need to think about institutionalizing. Toward this end, big changes through new legislation and new institutions are worthy ideas that ought to be entertained. For example, the proposed Regulatory Improvement Act of 2013 would create a new independent agency designed specifically to assess existing regulations. That said, in today’s political climate anything that requires legislation will face an obvious and steep uphill battle – especially on an issue as controversial as government regulation.
So allow me to offer three simple but concrete and entirely feasible steps that could be taken right now, without new legislation, without any additional funding, to move in the direction of establishing a regulatory culture of evaluation.
First, OIRA could issue guidelines to agencies about how to do good evaluations. The Obama Administration has launched its expansive regulatory lookback, but far too many of the looks back have been simply glances in the rearview mirror. Anecdotal, expert-based assessments of whether regulations are working might be better than nothing, but they are no substitute for the kind of rigorous empirical evaluations that will help policymakers really learn. One step forward, then, would be for OIRA to create guidelines akin to Circular A-4, which lays out guidelines for agencies to follow in conducting high-quality prospective benefit-cost analysis. We need the equivalent of A-4 for retrospective analysis.
The second feasible step is to get more explicit about planning for retrospective analysis. Every time an agency issues a significant regulation, it should be required to accompany that new regulation with a plan for evaluating it down the road. Under the current executive orders, OIRA has the authority to demand that of agencies. And it actually isn’t a huge demand anyway, because much of the work that goes into preparing the prospective regulatory impact analysis can be readily used to answer the core questions of an evaluation plan, namely: How will the public know if the new rule is successful? Should the rule be evaluated in five years, ten years, or after some other period of time? What are the key sources of data that can be used to evaluate the rule’s impact? What research designs and strategies could be used down the road in a systematic evaluation? These questions, if answered in a regulatory plan, would get agencies thinking at the outset about what they would do later, looking back. Even without requiring that agencies implement each of their evaluation plans, the process itself would help institutionalize a learning process, and it would provide a basis down the road for others to pressure agencies to conduct evaluations, or for outside researchers and institutions like the National Science Foundation or the National Academies to support such evaluation research.
Finally, OIRA should follow the example set by John Graham, who as administrator of OIRA under the second Bush Administration established a system of regulatory prompts – letters by which OIRA asked agencies to consider adopting new regulations. These prompts provide a model for how OIRA could promote retrospective review. Rather than leaving it entirely up to agencies, OIRA should prompt agencies to conduct evaluations in high-priority areas where learning is especially needed. Evaluation would be especially valuable for rules that fall into one or more of the following categories: (1) close-call rules – those whose predicted net benefits were very small in the prospective analysis; (2) rules for which the prospective analysis contained a great deal of uncertainty over the costs or the benefits; and (3) rules that raise common concerns across agencies or involve common issues in estimating benefits or costs. A good example of a common issue that cuts across different regulations would be the impact of regulation on employment. With respect to all three categories, OIRA should be well-positioned to identify rules that especially merit retrospective review.
By issuing evaluation prompts, requiring evaluation plans, and developing clear guidelines for retrospective review, the federal government can today move closer to achieving the current administration’s laudable vision of institutionalizing evaluation. To be clear, I do not expect that these three incremental proposals will, by themselves, cure everything that ails regulatory policy. But improving evaluation is an absolutely necessary, even inherent, precondition for smart regulation.
By taking even small steps today toward the institutionalization of evaluation, the practice of retrospective review can improve and deepen over time. Building a culture of retrospective evaluation is clearly a long-term proposition. It is not something we should expect to happen overnight. But I do hope that 20 years from now, scholars and others will look back at this moment and say that it marked a turning point in the reliance on retrospective evaluation and ultimately in the establishment of a smarter regulatory state.