The Limits of Deplatforming

Scholars warn of the potential difficulties of using deplatforming to curb disinformation online.

When it comes to regulating online speech and protecting against disinformation, taking away the soapbox may not be enough to silence the speaker.

In a new paper, three experts argue that removing harmful social media platforms from the internet may push users to seek out other, more extreme outlets. And they caution that there is a causal connection between the rise in harmful content online and an increase in real-world extremist incidents.

In their paper, Saharsh Agarwal, a professor at the Indian School of Business, Uttara Ananthakrishnan, a professor at the University of Washington Foster School of Business, and Catherine Tucker, a professor at the Massachusetts Institute of Technology Sloan School of Management, explain that content moderation on social media sites often consists of self-regulation, with each individual site exercising significant editorial control over its users’ content. They note, however, that not all websites have similar incentives to moderate their content and prevent the rise of disinformation. Some platforms even trumpet their loose moderation to attract users away from mainstream social media sites.

Agarwal, Ananthakrishnan, and Tucker focus their analysis on Parler, a right-wing social media platform. They trace the rise of Parler to just after the 2020 presidential election, when mainstream social media sites increased their content moderation in an effort to combat election disinformation. Although Parler had over 15 million users at its peak in January 2021, it was also filled with posts spreading disinformation, conspiracy theories, and racist ideology.

Agarwal, Ananthakrishnan, and Tucker explain that, soon after this peak, Parler was deplatformed, which means providers of technology infrastructure, such as Google and Apple, withdrew support services for the app. Google and Apple removed Parler from their app stores after finding evidence that Parler was used by right-wing extremists to plan the attempted insurrection at the U.S. Capitol on January 6, 2021. Soon after, Amazon Web Services stopped hosting Parler on its servers, taking the app offline.

This move, however, was not the end of the extremist community that Parler had cultivated. According to Agarwal, Ananthakrishnan, and Tucker, many Parler users soon migrated to Telegram, an encrypted messaging app.

Agarwal, Ananthakrishnan, and Tucker acknowledge that a majority of Parler users did not shift over to Telegram, but they observe that the heaviest users of Parler were the ones most likely to move. Not only did these users end up exposed to even more disinformation on Telegram than they had encountered on Parler, but Telegram users who had never used Parler also experienced an increase in exposure to disinformation on the messaging app.

Agarwal, Ananthakrishnan, and Tucker attribute this increase to Telegram’s encryption protections, including users’ ability to hold private, untraceable conversations that delete themselves automatically. These features have earned the app a reputation as a safe haven for extremist groups, including ISIS. The app also allows users to join public channels with up to 200,000 members, making it easy to spread disinformation among a large group of like-minded users.

As a result, Agarwal, Ananthakrishnan, and Tucker caution that deplatforming an extremist community may only drive that community deeper underground, where its activities grow even more extreme and more difficult for governments to track.

Nevertheless, the deplatforming of Parler was at least partially effective. When Parler returned online five weeks after being deplatformed by Amazon Web Services, it failed to regain its previous user base, according to Agarwal, Ananthakrishnan, and Tucker.

The migration of Parler exiles to Telegram, they suggest, nonetheless exemplifies the limits of current approaches to content moderation. Apps such as Parler that tout their lax content moderation to attract extremist users cannot be trusted to self-regulate, Agarwal, Ananthakrishnan, and Tucker argue. Nor, they contend, can internet infrastructure providers be relied on to deplatform every extremist community. Although Apple, Google, and Amazon acted against Parler, these companies later declined to take similar action against Telegram despite public outcry.

Agarwal, Ananthakrishnan, and Tucker explain that the deplatforming of Parler can teach both regulators and policymakers that taking action against individual social media sites will not eliminate online extremism. Because private firms and infrastructure providers cannot be relied on to curb the spread of disinformation and extremism, regulators must act to create uniform guidance for content moderation on digital platforms, Agarwal, Ananthakrishnan, and Tucker conclude.