The Fear of Playing the Fool

Tess Wilkinson-Ryan discusses the role of human psychology in legal and regulatory systems.

In a discussion with The Regulatory Review, Tess Wilkinson-Ryan, the Golkin Family Professor of Law at the University of Pennsylvania Carey Law School, offers her thoughts on the intersection of law, regulation, freedom of choice, and the fear of getting fooled.

Decision-making is a critical component of everyday life. When making choices—both large and small—innumerable considerations come into play. Wilkinson-Ryan analyzes these considerations through the lens of human psychology.

In this Spotlight, she shares her perspectives on how fear, in particular, shapes our social, legal, and regulatory worlds. She also considers how current legal and regulatory systems could improve through policies that acknowledge decision-making pathways.

A constant flood of legally mandated disclosures and uninterpretable contract provisions, for example, might be well intended, but it can distract consumers with unnecessary and even at times harmful information. Wilkinson-Ryan's answer is often the simpler one: “Tell us less, and don’t write it down!”

Wilkinson-Ryan has published prolifically at the intersection of law and psychology. Her experimental research, highlighted in her new book, Fool Proof, focuses on using psychology and behavioral economics to understand the moral intuitions that shape legal choices. Her recent research has focused on topics such as mortgage borrowing and default, retirement planning, contract precautions, and cognitive and emotional responses to breaches of contract. Wilkinson-Ryan earned her J.D., Ph.D., and M.A. from the University of Pennsylvania.

The Regulatory Review is pleased to share the following interview with Tess Wilkinson-Ryan.

 

TRR: In addition to studying and teaching law, you have an extensive background in psychology. Does the regulatory field currently incorporate human psychology to a sufficient degree? What might a legal and regulatory system that better fits with human psychology look like?

One of the biggest dilemmas for the psychology of regulation lies with disclosure rules. Disclosure has real normative appeal because it ostensibly enables individuals to make better, more informed choices without being too heavy-handed about what they can and cannot choose.

The problem is that conveying information is really hard. Sometimes, the meaning that ordinary people will derive is actually counterproductive from the perspective of the policymakers.

To take an example close to my heart: Disclosing the cost of the funds in a retirement account makes lots of sense—people need to know the cost of things. But most people have no easy way to conceptualize the difference between a fund with a fee ratio of 0.1 percent versus one with 1.8 percent. Those are both really small numbers!

My colleague Jill Fisch and I ran a study that tried to make fund disclosures more effective by layering on another disclosure about the importance of fund fees for returns over time. The additional disclosure seemed to steer some people away from high-fee funds. But the study also suggested that many people who would select expensive retirement funds—thus forgoing a lot of money at retirement—didn’t know what to do with the fee information disclosed to them. In fact, I think seeing the little fractional percentages actually made it seem like fees could safely be ignored. I really do think a lot of disclosures are either useless or actively doing harm.
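To see why those “little fractional percentages” matter so much, it helps to run the compounding arithmetic. The sketch below is purely illustrative and not from the Fisch and Wilkinson-Ryan study: it assumes a hypothetical $10,000 balance, a 7 percent gross annual return, and a 30-year horizon, and compares the two fee ratios mentioned above.

```python
# Illustrative only: compound growth of a retirement balance under two fee ratios.
# The principal, gross return, and horizon are assumed for the example.

def future_balance(principal: float, gross_return: float,
                   expense_ratio: float, years: int) -> float:
    """Future value when an annual fee is deducted from the gross return."""
    net_return = gross_return - expense_ratio
    return principal * (1 + net_return) ** years

low = future_balance(10_000, 0.07, 0.001, 30)   # 0.1 percent fee
high = future_balance(10_000, 0.07, 0.018, 30)  # 1.8 percent fee

print(f"0.1% fee after 30 years: ${low:,.0f}")
print(f"1.8% fee after 30 years: ${high:,.0f}")
print(f"Difference:              ${low - high:,.0f}")
```

Under these assumptions, a fee gap that looks trivial on a disclosure form compounds into a difference of tens of thousands of dollars by retirement—roughly $74,000 versus $46,000 here.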

 

TRR: In your recently published book, Fool Proof, you discuss individuals’ fear of getting “suckered” by others and how this fear affects decision-making. In your opinion, what is the connection between “playing the fool” and freedom of choice?

What is interesting to me is that the line between someone who was duped and someone who took a knowing risk and failed is pretty fuzzy. One of my favorite stories from Fool Proof is about my sister, who was taking a long bike ride in Vermont, where she lives. She was really thirsty as she reached the halfway point and needed to rehydrate. She stopped at a little touristy “Ye-Olde-Vermont” type store that sold Gatorade, but with a huge upcharge. She almost didn’t buy it! She thought, “I’m no fool, I know Gatorade shouldn’t cost $6.” But then she reframed the situation and realized that she was making a very rational choice to spend $6 on a drink that, in the moment, was probably worth ten times as much, since she was otherwise in the middle of nowhere and still had to bike home. To me, it’s always worth reframing and asking: Is this really a betrayal in a way that matters to me? Or am I making a reasonable choice in light of my own values and needs?

 

TRR: What do you suggest are the ways that society has benefited, and been harmed, by our current mode of decision-making influenced by the fear of being suckered?

The case for trying to avoid being a fool is pretty uncontroversial. It makes lots of sense for people to internalize the costs of self-protection and not just post their Social Security number on Twitter or answer every spam email. If nearly everyone were utterly unwary all the time, there would be a massive transfer of wealth to the cynical and, in turn, a huge administrative burden for all of the institutions charged with sorting out remedies.

But the fear of being duped has real costs, especially for cooperation. This comes out a ton in debates around social programs such as subsidized housing, vouchers for groceries, free school lunch, and universal health insurance. People who favor ending hunger or relieving poverty, for example, may nonetheless oppose anti-poverty policies if they are anxious about being taken advantage of. This anxiety can scare people off giving, and it can also yield less effective giving. If a food pantry runs a campaign for donations, the same $10 donated as cash does more good than $10 worth of, say, rice. But cash donations can be harder to solicit, in part, because people feel foolish if they give cash that is then spent on something the donor thinks is frivolous. Means-testing is also a way of trying to exploitation-proof a social program, but it’s really expensive, deters uptake, and doesn’t yield a financial benefit in a world of progressive taxation.

 

TRR: If you had the ability to make instantaneous change, how would you alter the regulatory and legal system to counteract the fear of feeling duped that you discuss in Fool Proof?

I would implement something like the statute proposed by my colleague Dave Hoffman: rules that discourage written forms and disclosures for a wide swath of currently unreadable contracts. The barrage of consumer disclosures—the boilerplate that accompanies our every right-swipe, scroll, and click to agree—has distorting effects on things that really matter. It makes it seem like all of our rights and obligations are just formalities, governed by unreadable fine print. And it kind of dilutes the meaning of assent or informed consent. So: less disclosure. Tell us less, and don’t write it down! A slogan for our time.

 

TRR: In recent months, artificial intelligence has been used by practitioners to draft contracts and to produce various other legal work. Do you see the use of AI—at least as a replacement for human interaction—as a solution to the problem of distrust, or as cause for further concern?

There is some evidence that people are more sanguine about getting a bad deal randomly from a computer rather than intentionally from another human. Because computers do not have intentions, it would be weird to think of the harm caused by some AI outcome as being disrespectful. On the other hand, as three psychologists pointed out in an article from some years ago, more than zero Americans die every year from violent altercations with vending machines, which is just to say, sometimes people do feel swindled by a machine.

The other thing I would say is that all contracts involve people. There might be AI drafting or blockchain or whatever, but there are people on either end of that transaction, and also governments with authority to regulate the transactions. I think part of the appeal of cryptocurrency, for example, is that it purports to require no trust in private or state actors. It really seems, however, that the last year of turmoil should remind us that there is unavoidably going to be some amount of trust at stake when humans are involved—and humans are always involved.

The Sunday Spotlight is a recurring feature of The Regulatory Review that periodically shares conversations with leaders and thinkers in the field of regulation and, in doing so, shines a light on important regulatory topics and ideas.