Combating Election Disinformation

Experts recommend policies to contain the spread of online disinformation.

Intelligence officials warn that Russia is working to influence the U.S. election. Although Russia has modified its techniques since 2016, its goal remains the same: to create chaos and division.

The U.S. Senate Select Committee on Intelligence concluded that, in 2016, Russian operatives published propaganda and fabricated news stories on American social media sites to help elect then-candidate Donald J. Trump. Other U.S. intelligence officials reported that Russian leaders ordered the disinformation campaign as part of a broader plan to undermine American trust in government institutions and the electoral process.

After the 2016 election, Russia did not stop its efforts to interfere in U.S. politics. Rather, Russia increased and refined its disinformation techniques. According to intelligence officials, Russian disinformation poses an ongoing national security threat to the United States.

This election cycle, Americans—and sometimes even U.S. leaders—appear to have generated much of the disinformation that Russian trolls are spreading online. For example, President Trump has falsely claimed that there is evidence of widespread fraud in mail-in voting and has suggested that his loss would indicate a rigged election. Russian trolls are working to spread those falsehoods.

Online platforms such as Facebook and Google exacerbate the problem of disinformation by using algorithms that personalize content based on predicted user preferences. These algorithms, which shape users’ online experience, prioritize information that is not necessarily accurate but is likely to engage readers. Research shows that, on average, a factual story online takes six times longer than a fake story to reach 1,500 people.

Although some online platforms have changed their own policies to slow the spread of disinformation, the U.S. government has taken little action to contain this national security threat. The First Amendment, which protects political speech and news, limits regulatory solutions to the problem of disinformation. Yet some experts argue that lawmakers can do more to combat online disinformation without violating the Constitution.

In this week’s Saturday Seminar, scholars propose measures to protect American elections from online information warfare.

  • Holding social media companies liable for the content that users post could raise constitutional concerns and set a dangerous precedent for freedom of speech, Jill Goldenziel of Marine Corps University and Manal Cheema of the University of Virginia write in the University of Pennsylvania Journal of Constitutional Law. They note, however, several measures that social media companies could implement to regulate content on their platforms more effectively without raising First Amendment concerns. For example, the government could require social media companies to verify all users to weed out bots. In addition, the government could require that social media companies notify and educate users about spotting fake news and flagging posts that instigate election interference.
  • Dawn C. Nunziato of George Washington University Law School argues that the United States should take a more active role in regulating online forms of speech. In an article published in Notre Dame Law Review, she explains that the United States follows the “marketplace of ideas” model of free speech, which favors self-regulation of ideas over government interference. As a result, the online marketplace of ideas is rife with misinformation and vulnerable to foreign influence. Nunziato advocates legislation, such as the proposed Honest Ads Act, to increase transparency and accountability within the online marketplace of ideas.
  • Russian interference in the 2016 presidential election revealed the vulnerability of the U.S. democratic election process in a digital world, Joseph Thai of the University of Oklahoma College of Law claims in a paper published in Oklahoma Law Review. To protect voters against the influence of false information on the internet, Thai argues that online platforms should be required to identify and disclose speech affiliated with foreign states. Thai also suggests adding media literacy to the K–12 curriculum to prepare future voters to evaluate the information they access online.
  • Writing in the Fordham International Law Journal, Waseem Ahmad Qureshi observes that international law lacks provisions to regulate information warfare, the deployment of harmful information such as propaganda and disinformation against an enemy. Qureshi argues that one approach to regulating information warfare is to draw on European models that have successfully combated online hate speech and propaganda domestically. In addition, Qureshi suggests that the international community collaborate to create an information warfare “code of conduct” similar to those governing other international regulatory concerns, such as space exploration.
  • Darin E.W. Johnson of Howard University School of Law explains that Russian disinformation operatives exploited systemic racism in the United States to sow further racial division and decrease minority voter turnout in 2016. In an article published in the Columbia Journal of Race and Law, he argues that racism makes the United States vulnerable to foreign information warfare and should be viewed as a threat to U.S. national security. Johnson claims that this framework would place responsibility on leaders to mitigate racism through executive and legislative action. He concludes that addressing institutional racism would reduce Russia’s ability to manipulate American voters.
  • The Foreign Agents Registration Act (FARA) requires foreign agents to register with the U.S. Department of Justice. But the Act is vague and has rarely been enforced in recent decades, explains Nick Robinson of the International Center for Not-for-Profit Law. He writes in the Duke Law Journal that some experts recommend increasing FARA enforcement to combat foreign disinformation and election interference. Robinson claims, however, that ambiguities in FARA make it vulnerable to “politicized abuse.” Robinson worries that officials could use the overbroad Act to punish legitimate speech. He argues that if lawmakers seek to use FARA to combat disinformation, they should amend the Act to ensure that it applies only to agents working “at the direction or control of a foreign government” or when the activity targets “core democratic processes.”

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.