Disinformation and the Threat to Democracy

Scholars argue for strengthening the regulation of online political advertising.

Three days before the 2016 presidential election, Facebook users started circulating a story from The Denver Guardian about a supposed murder-suicide committed by a Federal Bureau of Investigation (FBI) agent who was investigating presidential candidate Hillary Clinton. The author of the story suggested that this act was actually a “‘hit job’ by the Clintons in retaliation for the FBI email leaks so close to the presidential election.” The story peaked at over 100 shares per minute.

There was only one problem: Both The Denver Guardian and the story it “broke” are fake.

Fake stories like the one purportedly published by The Denver Guardian spread rampantly over the course of the 2016 presidential campaign. In a recent article, two legal scholars argue that a “lack of transparency” surrounds “fake news.”

In response, the scholars—Abby K. Wood of the University of Southern California Gould School of Law, and Ann Ravel of the University of California, Berkeley School of Law—propose three ways to make online political advertisements more transparent, including a requirement that “platforms store and make available ads that run on their platforms, as well as the audience at whom the ad was targeted.”

Wood and Ravel define “disinformation”—colloquially known as “fake news”—as fabricated news articles used intentionally to spread false or misleading information. Additionally, they contend that “fake news” is political advertising because it aims to “persuade, mobilize, or suppress voters and votes.” Individuals or groups spread disinformation through “hoax websites, partisan blogs, and satirical articles misconstrued as factual when taken out of context.”

The problem, they write, is that well-crafted disinformation is persuasive because it looks like trustworthy journalism. Disinformation distorts the information environment and may lead citizens to vote for candidates and initiatives that actually run counter to their own political preferences. Although disinformation can be corrected, Wood and Ravel note that once it is published, the damage has been done.

They argue that the best chance to regulate disinformation advertising “will be related to transparency.” Current Federal Election Commission (FEC) transparency requirements focus on “public communications” such as television, print, and billboard advertisements. Because online advertising differs from these traditional media, Wood and Ravel argue for a new regulatory framework that accounts for those differences. A new framework is essential because the current definition of “public communications” explicitly excludes Internet advertisements. Due to this exclusion, the FEC does not require disclaimers for political messaging on the Internet unless a fee is paid. And even when fees are paid, the FEC’s enforcement of the disclaimer requirement has been lax. The authors argue that this system has allowed fake stories, such as the “article” by The Denver Guardian, to be disseminated with no indication of who authored them.

The authors suggest three transparency-related changes to the regulation of online advertising. The first would require advertisers using social media platforms to save and post “every political communication” on their website or user page on whichever platform was used. Among other information, this “repository” should include each advertisement’s cost, the issues and candidates mentioned, the number of people targeted, and the targeting strategy used. The repository would allow users to flag disclaimer violations after they occur, assisting the FEC’s enforcement efforts and reducing the incentives to produce disinformation.

The second suggestion is for the FEC to “close the loophole” that allows political advertisements posted for free to avoid disclosure requirements. The authors argue that regardless of the price paid for the advertisement, the “public has a right to know who paid for its creation or distribution.” The disclaimers should, at the very least, contain the same information required of television or radio advertisements. The FEC reopened its rulemaking on this issue when Facebook disclosed that it had sold roughly $100,000 worth of advertising to 470 “inauthentic accounts” operated out of Russia.

The third suggestion involves eliminating donor anonymity for limited liability companies (LLCs) and 501(c) organizations. Wood and Ravel note that even if the FEC closes the disclosure loophole, voters cannot “follow the money” unless the requirements extend to corporations “making independent expenditures.” This matters because donors seek anonymity by making “independent expenditures” through LLCs or 501(c) organizations, and it is uncertain how many online advertisements are run by groups exempt from disclosure requirements. With this loophole closed, voters could “follow the money” and learn about “candidates and policies that matter to them.”

Few routes exist for the regulation of disinformation. Because disinformation can be categorized as “political speech,” it is protected by the First Amendment. To regulate this type of speech, the government must show that the regulation is “necessary to serve a compelling state interest and is narrowly drawn to achieve that end.” Wood and Ravel contend that ensuring elections remain “fair and honest” satisfies the “compelling government interest” requirement, although the U.S. Supreme Court has yet to agree.

Although Wood and Ravel admit that their proposal is merely a starting point and will not “solve the problem of online disinformation” entirely, they argue that it is a “necessary and important step in the right direction.” Private companies also have a role to play. Ahead of the 2018 U.S. midterm elections, for example, Facebook reportedly banned “false information about voting requirements and fake reports of violence or long lines at polling stations” to reduce the effect of “fake news.”