Achieving Regulatory Success Through Failure

Scholar argues that regulators need “permission to fail” when adapting to private sector innovation.

“What chance does Gotham have, when the good people do nothing?”

Although that question is posed in the film Batman Begins, it resonates in the real world as well. And according to law professor Hilary J. Allen of American University, when good people—in this case, regulators—do nothing, there can be dire consequences for both the regulatory state and the public interest.

In a recent article, Allen argues that if a regulatory agency succumbs to the fear of failure in the face of technological innovation and remains stagnant, the failure to keep up with industry might become an “irremediable failure of inaction” that undermines the goals of the regulatory state.

Allen urges that as private industry becomes increasingly complex technologically, administrative agencies must be able to innovate to keep pace. To that end, Allen theorizes that regulators should be given some “permission to fail” in their oversight of new technologies. Without some recognition that failure might occur if regulators act, Allen warns, agencies may choose not to act at all and will become increasingly unable to comply with their statutory mandates, harming regulatory efficacy and, in turn, the public interest.

To prevent these negative effects from occurring, certain types of permission to fail should be better incorporated into our regulatory system, according to Allen. Allen suggests that this permission might look like loosening strict cost-benefit requirements, promoting transparency in regulatory innovation processes, or changing the public messaging around how efficiency and effectiveness failures should be perceived.

Importantly, Allen does not argue that all types of regulatory failures should be accepted. Instead, Allen identifies the need to establish baselines around the kinds of failures that should be more or less tolerable in a well-functioning regulatory state. In this way, Allen suggests that some types of failure—such as failures of efficiency and effectiveness—should be more excusable than others, especially as compared to the failure of inaction.

To show what these acceptable failures might look like, Allen draws inspiration from the private sector’s “fail fast” approach. Failing fast refers to the notion that, in a rapidly changing world, the relevant metrics shift as soon as you figure out what you need to measure to solve a problem; building in tolerance for short-term failures therefore creates opportunities to troubleshoot and shortens the learning curve.

In the fail-fast approach, a firm—or, as Allen argues, a regulatory body—should create a good-enough solution to a problem and implement it rather than waste time waiting for the perfect one. Holding out for perfection, Allen explains, can ultimately lead to a failure to act at all.

And according to Allen, this has happened before. To illustrate this point, Allen points to the 2008 financial crisis, which Allen contends resulted in large part from regulatory failure to act.

In the years leading up to the crisis, the Federal Reserve, the U.S. Securities and Exchange Commission, and the U.S. Department of the Treasury all refused to “clamp down on financial excesses” created by the newfound availability of subprime mortgages. In a 2011 report, the Financial Crisis Inquiry Commission found that, although financial regulators had “ample power” to take action to prevent the crisis, they chose not to exercise it. Allen accordingly argues for “more grace” around some types of regulatory failures and calls for “closer scrutiny of failures of inaction,” such as those implicated in the 2008 market crash.

Distinguishing further between acceptable and unacceptable failures, Allen argues that innovations that result in failures of equity, legitimacy, and credibility should always be avoided.

For example, Allen claims that if a reporting system purports to pull data from regulated entities but regulators instead use it to overstep their authority and go on “fishing expeditions,” the legitimacy of the regulation will be undermined.

Failures of efficiency and effectiveness, however, should be treated with some leniency, according to Allen. She points out that failures such as “trial and error, cost overruns, and abandoning failed projects” are “hallmarks of the innovation process.” Such failures do not undermine trust in the regulatory process and should not be automatically condemned.

As the subprime lending market of 2008 illustrates, Allen claims that development in the private sector often happens quickly and in ways too complicated for traditional regulatory tools to address. Regulators cannot be expected to succeed the first time they innovate to keep up with a fast-moving private sector, Allen argues, so some level of risk is inherent in regulatory intervention. Indeed, Allen concedes, regulators will fail; it is simply a matter of when and how.

Thus, according to Allen, building certain types of permission to fail into our regulatory system can head off the kinds of crises that result from expecting regulators to be perfect. If regulators are encouraged to do the best they can with the information they have, Allen concludes, they will be less likely to shirk their duties and better able to protect the country from disaster.