Tearing Down Terrorist Content Online

Scholar urges policymakers to impose liability on platforms that spread terrorist content.

In a recent appearance before the U.S. Senate, Federal Bureau of Investigation Director Christopher Wray offered a warning: “The greatest terrorist threat we face here in the United States is from what are, in effect, lone actors.” Wray explained that terrorist organizations have successfully used social media to radicalize such individuals, increasing the likelihood that they will commit terrorist acts.

In light of such concerns, should social media companies be liable for terrorist content on their platforms? In a recent article, Michal Lavi, a postdoctoral fellow in Israel, argues that the answer is “yes.”

Some plaintiffs in the United States have filed civil lawsuits against social media companies under a federal antiterrorism law that prohibits supporting terrorists, Lavi observes. Families of victims have argued that these companies aided terrorists by posting, displaying, or hosting propaganda that led to Americans’ deaths.

These lawsuits, however, face an uphill battle. Plaintiffs must overcome hurdles in proving causation and demonstrating that they have standing to have their claim considered by a federal court. In some cases, courts have refused to accept a threat or fear of terroristic harm as an “actual or imminent” injury that justifies standing.

In addition, no law specifically requires platforms to prevent terrorist content from appearing on their services, and the First Amendment provides strong protection for free speech, which leads lawmakers to favor less content moderation online.

Plaintiffs must also convince courts to find social media platforms liable despite the immunity granted to these companies by Section 230 of the Communications Decency Act. In passing Section 230, the U.S. Congress prioritized free speech and innovation, opting to shield platforms from liability for speech posted by their users. Courts may find liability when a company creates or develops content itself, but they have more often dismissed cases by interpreting Section 230’s protections broadly.

Pushing back against Section 230’s sweeping protections, Lavi argues that giving immunity to platforms only enables terrorist recruitment and the incitement of violence. Lavi urges lawmakers to amend Section 230 to establish liability for platforms that host terrorist content.

Social media companies already regulate speech and moderate content according to their own policies and terms of service, Lavi explains. But according to Lavi, these companies—including Twitter, Facebook, and YouTube—fail to moderate terrorist content adequately. They have enforced their own policies inconsistently when removing terrorist hashtags, degrading statements, graphic photos, and violent videos. Lavi notes that attempts to detect and remove terrorist content using technology have also fallen short.

Any attempt to impose liability on platforms for terrorist content that incites violence must balance public safety with free speech, Lavi acknowledges. If platforms ramp up efforts to remove terrorist content, they may take down legitimate speech as well, subjecting users to collateral censorship. Increasing content moderation could also chill user autonomy, civic participation, and the free exchange of ideas, Lavi concedes.

Striking the right balance between public safety and free speech, Lavi explains, entails imposing liability subject to specific limitations. As one such limit, Lavi suggests that platforms should be liable only when they know about terrorist speech and terrorists’ accounts but fail to act.

As a starting point, Lavi describes terrorist content as posts that “seek to cooperate, legitimize, recruit, coordinate, or indoctrinate on behalf of groups listed on the State Department’s list of designated terrorist organizations.” Lavi predicts that a liability rule confined to this narrow definition could likely survive the strict scrutiny that courts apply to prevent undue limits on the constitutional right to free speech.

Platforms should also be responsible for their algorithms and any content personalization that leads susceptible users to terrorist content, Lavi emphasizes. Social media companies apply algorithms to determine the recommendations, content, and advertisements that every user sees.

Algorithmic targeting can incite and reinforce violent extremism in many ways, Lavi explains.

Even if users have not searched for inciting content, algorithmic targeting may invite some users to engage with violent ideas and then recommend increasingly extreme content and connections. Algorithms may also radicalize users by siloing them in echo chambers that limit a free marketplace of ideas, reinforce prior dispositions toward radical beliefs, and strengthen extremist messages.

“Algorithms are also never truly neutral,” Lavi argues. She contends that digital intermediaries should not escape liability merely because algorithms make recommendations. Humans still intervene when they program, instruct, and update the algorithms that later make recommendations and manipulate users.

Policymakers could also implement a “safety by design” requirement for new technologies, subjecting platforms to a basic duty of care to keep users safe. Imposing such a duty, as Lavi describes it, would require engineers to design algorithms that avoid recommending content inciting users to terrorist activity.
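Lavi frames this proposal in legal terms rather than technical ones, but one way to picture a design-stage duty of care is a recommendation pipeline that screens out items flagged under a designated-organization policy before any engagement-based ranking occurs. The short Python sketch below is purely illustrative: the flagged_designated_org field, the predicted_engagement score, and the pipeline itself are assumptions, not details drawn from the article.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float   # hypothetical score from a ranking model
    flagged_designated_org: bool  # hypothetical flag from a policy classifier

def recommend(candidates: list[Post], limit: int = 10) -> list[Post]:
    """Safety-by-design sketch: drop flagged content before ranking,
    so engagement optimization never surfaces it to users."""
    safe = [p for p in candidates if not p.flagged_designated_org]
    return sorted(safe, key=lambda p: p.predicted_engagement, reverse=True)[:limit]
```

The point of the ordering is that the safety filter sits ahead of the engagement objective, rather than being applied only after harmful items have already been recommended.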

Even after a platform deploys an algorithm, the algorithm may still recommend terrorist content because of programming choices or unexpected machine-learning interactions. Lavi recommends that regulators require algorithmic impact assessments to catch algorithms that make harmful recommendations, giving companies an opportunity to correct them.
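The article does not specify how such an assessment would work in practice. As one hedged illustration, an impact assessment might periodically audit a sample of the recommendations a platform actually served, estimate how often flagged content slipped through, and trigger review when the rate exceeds a tolerance. The sampling approach and the tolerance value below are assumptions for illustration only.

```python
import random

def audit_recommendations(served_items, sample_size=1000, tolerance=0.001, seed=0):
    """Hypothetical impact-assessment check.

    served_items: list of (item_id, flagged_designated_org) pairs taken from
    recommendations the platform actually delivered to users.
    Returns the estimated rate of flagged items and whether it warrants review.
    """
    rng = random.Random(seed)
    sample = rng.sample(served_items, min(sample_size, len(served_items)))
    flagged = sum(1 for _, is_flagged in sample if is_flagged)
    rate = flagged / len(sample) if sample else 0.0
    return rate, rate > tolerance
```

A platform failing such a check would then be expected to adjust the algorithm and rerun the assessment, which corresponds to the opportunity for correction that Lavi describes.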

Although the U.S. Department of Homeland Security seeks to combat disinformation, conspiracy theories, and false narratives on social media, government resources may not be enough. Recognizing social media platforms’ ability to either facilitate or counteract the spread of terrorist content, Lavi concludes that imposing liability on platforms creates a much-needed incentive to stop terrorist propaganda before it can circulate widely.