The Good-Faith Assumption Online

Scholar argues that online platforms should foster positive user experiences through self-regulation.

Disinformation, cyberbullying, and online harassment—these bad-faith activities are likely what first comes to mind when many people consider what characterizes today’s online environment.

Despite the prevalence of bad-faith activities, many online platforms assume that users act in good faith. Wikipedia’s project page on assuming good faith, for example, explains that “it is the assumption that editors’ edits and comments are made in good faith—that is, the assumption that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful.”

Is it time, though, for policymakers and online platforms to revisit the good-faith assumption?

At least one law professor says “no.” Policymakers should not intervene, and platforms should maintain their good-faith assumptions, argues Eric Goldman in a recent article.

Goldman, a professor at Santa Clara University School of Law, explains that governmental intervention could be counterproductive in curbing bad-faith activities. He argues instead that online platforms should self-regulate and proposes several tools they could use to foster a positive online community.

Goldman identifies two characteristics of the early internet that allowed online platforms to assume users act in good faith: platform users were demographically homogeneous, and there were relatively few of them. Goldman argues that users’ lack of diversity made it easier for platform designers to anticipate and discourage bad-faith activities. Users were also less likely to engage in bad-faith activities because, with so small a user population, less money and fame were at stake.

Over the past three decades, however, platform users have diversified and the number of users has surged, making it difficult for online platforms to continue assuming users’ good faith, argues Goldman.

Goldman acknowledges that bad-faith actors are prevalent in today’s online communities. They spread disinformation for political or financial gain. Furthermore, they exploit the anonymity of the online environment to engage in unlawful conduct such as cyberbullying or harassment.

Goldman highlights the fact that policymakers around the world are increasingly requiring online platforms to manage user-generated content in the wake of rampant online harassment and disinformation. The United Kingdom’s Online Safety Act, for example, imposes a duty of care on online platforms to prevent harmful behavior. Similarly, the European Union’s Digital Services Act requires online platforms to mitigate harms arising from user-generated content by establishing a reporting system.

Goldman contends that such regulations are problematic because they fail to encourage good-faith activities. Goldman notes that because bad-faith activities inevitably occur online, such regulations require online platforms to view every user as a potential threat capable of creating liability. As a result, online platforms must harden their content moderation practices, driving good-faith actors away while failing to manage bad-faith actors, explains Goldman.

In contrast, Goldman argues that the U.S. regulatory framework—Section 230 of the Communications Decency Act—is more appropriate than its European counterparts because it allows online platforms to manage their user-generated content through self-regulation.

The Section 230 regime ensures that platforms will not be held liable for the user-generated content that they disseminate or their decisions to take down offensive or harmful user-generated content—such as obscenity, violence, and harassment—so long as the platforms do so in good faith.

Such a design allows online platforms to assume users’ good faith without imposing liability for the inevitable bad-faith activities that come along, argues Goldman. Furthermore, Goldman explains that Section 230 provides online platforms with incentives to experiment with different site designs to attract good-faith users and combat bad-faith activities, knowing that they are immune from liability.

In light of the Section 230 regime, which allows online platforms to explore various tools to attract good-faith actors through self-regulation, Goldman argues that online platforms’ design choices should be driven by platforms’ business objectives, not legal concerns. Online platforms have strong incentives to respond to users’ interests because they profit from users’ engagement. Consequently, users ultimately benefit when platforms are able to determine the best solution for the community, contends Goldman.

Goldman proposes several self-regulatory mechanisms that he argues would allow online platforms to detect and deter bad-faith actors.

First, online platforms can embrace a “trust-and-safety by design” approach. Trust and safety refers to the set of business practices protecting platform users from harmful content and behavior. Under this approach, platforms’ in-house trust-and-safety and content review teams work in the initial stages of platform development to minimize the prevalence and impact of malicious actors after the platform’s launch, explains Goldman.

Second, online platforms can opt for a user-driven approach. YouTube, for example, allows users to report problematic content posted by other users. Such an approach, however, may sometimes do more harm than good because users could weaponize the reporting system to remove legitimate content, cautions Goldman.

Third, Goldman argues that online platforms could channel users toward positive behavior with structural design choices. Instagram, for example, encourages users attempting to post content that may violate community guidelines to reconsider their post by sending them notifications reminding them of the rules.

Goldman notes that market mechanisms can also serve as examples of promising structural design. For example, online platforms that pay users for content could discourage bad-faith submissions by only paying users with positive reputations or by introducing obstacles in the content submission process to make illegitimate submissions less profitable.

Finally, Goldman underscores the importance of platforms recruiting people from diverse backgrounds for their development teams. Goldman notes that a homogeneous development team can have significant blind spots. In contrast, diversifying such teams and taking differing perspectives seriously during the development process can help platforms produce more comprehensive, effective content moderation plans, explains Goldman.

Goldman concludes by conceding that although self-regulation is preferable to governmental intervention, it is imperfect. He acknowledges that online platforms cannot lay out perfect structural designs from the start and urges them to revise their designs continually based on new developments, evidence, and experience.