Getting Disinformation Right

In this week’s Saturday Seminar, scholars explore the challenges and potential for regulating disinformation.

Disinformation is everywhere. The rapid spread of content across digital platforms, coupled with those platforms’ limited verification of what users post, has allowed false information to proliferate online. As soon as one account spewing disinformation is taken down, another, often a bot, seemingly pops up in its place. Even as established social media sites such as Twitter and Facebook attempt to curb the spread of fake news, a slew of alternative platforms has emerged, further complicating the disinformation landscape.

Amid this vast, seemingly unstoppable spread of disinformation, an underlying question remains: Can regulators step in to separate truth from fiction?

Calls for greater regulation of disinformation have increased in recent years, most recently in the context of elections and the COVID-19 pandemic. During elections, deliberately false campaigns have sought to confuse and mislead voters and to sway turnout. Similarly, amid the pandemic, thousands of social media users shared inaccurate information about the virus. The uncertainty even prompted heads of state to make factually inaccurate and harmful statements regarding the disease.

Although the U.S. State Department has declared disinformation a national security priority in international affairs, the threat also looms large at home. So far, U.S. lawmakers have lagged behind entities such as the European Union, which plans to crack down on disinformation through the recently passed Digital Services Act. That legislation allows governments to request that social media companies remove any content deemed illegal.

As deliberately false information gains traction alongside major political events, key political players have begun calling for greater regulation of social media platforms such as Twitter and for reform of Section 230 of the Communications Decency Act to hold those platforms more accountable. Free speech concerns, however, pose a potential barrier: any effort to regulate disinformation raises the question of whether the government should be able to determine what is and is not appropriate to post online.

In this week’s Saturday Seminar, scholars dissect the ways in which disinformation threatens democracy and what can be done to stop its pervasive effects.

  • In a report released by the Congressional Research Service, Valerie C. Brannon of the Library of Congress analyzes how the government could regulate online misinformation without running afoul of the First Amendment. The U.S. Supreme Court permits regulation of speech only in limited categories, such as defamation, fraud and false commercial speech, campaign speech, and broadcast media. Brannon considers the constitutional limits on expanding the scope of false statement regulation and how courts might respond to more protective regulations of false speech.
  • The United States must consider the impact of emerging technologies on disinformation and democratic ideals, urges Samantha Lai in an article published by the Brookings Institution. In response to growing concern over electoral disinformation, government agencies have created new research centers and units aimed at combating disinformation, but Lai contends that they must take greater action to quell the problem. Lai suggests extending protections against voter intimidation to the online space through new legislation, bolstering online protection through a federal privacy framework, and expanding accountability mechanisms for big technology companies that spread disinformation.
  • Online campaigning techniques, which include the targeted use of personal data and the intentional propagation of disinformation, pose a serious threat to democratic political processes, contends Kate Jones of the University of Oxford Faculty of Law in a research paper issued by the International Law Programme. Jones interprets the right to privacy as incorporating a right to safeguard one’s personal information and opt out of psychological profiling. Thus, according to Jones, the current practice of “micro-targeting” voters with tailored advertisements without their knowledge or consent is problematic. Jones concludes that states should implement robust data protection laws to prevent the extensive harvesting of personal data by commercial or political entities.
  • In a memorandum, Joseph V. Cuffari, Inspector General of the U.S. Department of Homeland Security (DHS), recommends that the agency create a unified strategy to counter disinformation on social media. Although DHS has attempted to improve national cybersecurity and build public awareness, Cuffari found that these efforts were targeted mainly toward specific missions, such as preserving election integrity, and thus lacked the cohesion necessary to effectuate broader, cross-cutting solutions. A unified, holistic approach to countering disinformation would be more successful, Cuffari argues, because each DHS component faces different limits on its authority and cannot individually address the broad range of potential threats.
  • In a report issued by the Coalition to Fight Digital Deception, experts argue that artificial intelligence (AI) tools employed by social media platforms can exacerbate the spread of disinformation. As reliance on AI increases, so do the potential harms to users of those platforms, according to the report’s authors. Algorithms that drive targeted advertising and content moderation can maximize efficiency, but they can also amplify harmful and inaccurate content if they contain bias. To address these potential harms, the authors contend that the United States should bolster its regulatory scheme for online content, particularly by closing the legislative gap surrounding AI and disinformation. Legislation should demand greater transparency and accountability from technology companies and grant the Federal Trade Commission greater authority to regulate and assess the AI systems of social media platforms.
  • In an article published by the Griffith Law Review, Corinne Tan of Nanyang Business School explores how regulatory tools can help mitigate the harms of disinformation posted on Facebook and Twitter. Tan acknowledges the First Amendment concerns that arise in the United States and notes that such concerns often allow social media platforms to escape liability. Tan argues that co-regulation between social media platforms and governments offers a middle ground between the self-regulation practiced by sites such as Twitter and Facebook and direct regulation through legislation. This model could encourage more informed and collaborative decision-making while avoiding overt government intervention.