Regulating Beyond the Christchurch Call

The international response to the Christchurch attack shows the complexity of restricting online extremist content.

For 16 minutes and 55 seconds on a quiet Friday afternoon in March 2019, a gunman used a helmet camera to live-stream on social media his attack on Muslim worshippers at two mosques in Christchurch, New Zealand. Before Facebook pulled down the video, it had been viewed 4,000 times. Within the next 24 hours, Facebook removed about 1.5 million copies of the video. As New Zealand Prime Minister Jacinda Ardern later described, the attack had been clearly “designed to be broadcast on the internet.”

The shooter’s chilling use of social media to broadcast his attack has prompted a number of countries to take action against online extremist content. Although many people would agree with Prime Minister Ardern’s argument that social media should not have the “freedom to broadcast mass murder,” governments’ responses to the attack have highlighted the complexities involved with preventing the dissemination of online extremist content while not infringing on liberties such as freedom of expression.

Two months after the Christchurch attack, Prime Minister Ardern and French President Emmanuel Macron brought together 17 governments and a number of tech industry actors—including Facebook, Google, Twitter, and YouTube—for a one-day summit in Paris. The summit produced the non-binding Christchurch Call, which set out broad commitments to “eliminate terrorist and violent extremist content online.” The United States was a key absentee from the Paris summit, with the White House citing freedom of expression concerns.

In a recent article, Peter Thompson, a senior lecturer at New Zealand’s Victoria University of Wellington, argues that, although the Christchurch Call is an important first step toward a multilateral regulatory framework to control online extremist content, it needs to be complemented by robust domestic regulations that “hold social media and digital intermediaries accountable.”

Thompson notes that the size and scope of social media, coupled with the “practical difficulties of brokering international regulatory frameworks,” have historically led governments to leave tech companies as the default regulatory agents. Although political momentum toward social media regulation was underway before the Christchurch attack, Thompson contends that the attack “hardened” governments’ resolve to regulate social media and digital intermediaries.

For example, the U.K. government recently released a paper proposing to establish a statutory duty of care requiring digital intermediaries “to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services.” A new independent regulator would be established to oversee and enforce compliance with this duty of care. The main penalties for non-compliance would be fines, but the U.K. government is also considering the power, in cases of serious breaches, to disrupt business activities or block websites.

Despite the proposal’s noble intentions, legal commentators have criticized it. Andrew Murray, a law professor at the London School of Economics and Political Science, labeled the duty’s scope as “overwhelming.” Murray argues that platforms and other intermediaries will take an unduly cautious approach to removing content in order to comply with the broad duty. Furthermore, Murray contends that the duty would effectively outsource speech regulation to private corporations.

The Christchurch attack compelled the Australian government to pass legislation targeting “abhorrent violent material.” Passed within days of the attack, the Act makes it a criminal offense to fail to notify the police within a “reasonable time” about material depicting abhorrent violent conduct that is reasonably believed to have occurred, or to be occurring, in Australia. The Act also criminalizes a social media company’s failure to “ensure the expeditious removal” of abhorrent violent material.

In a forthcoming article, Harvard doctoral candidate Evelyn Douek raises points similar to Murray’s, arguing that the Australian legislation encourages tech companies to over-censor in order to avoid the threat of liability. Douek also argues that the Act demands perfect enforcement, overlooking the limits of platform content moderation, and outsources to companies important judgments about what speech should and should not be allowed in the public sphere.

New Zealand’s immediate regulatory response to the Christchurch attack was for the chief censor to classify the video and accompanying manifesto as “objectionable,” which imposed penalties for possession or distribution of the materials. The New Zealand government has not yet advanced any specific regulation, although, as Douek notes, Prime Minister Ardern has said that she does not intend to follow Australia’s hardline approach.

These various regulatory responses, and the criticisms of them, highlight the potential risks of taking a wide-sweeping approach to criminalizing harmful online content.

Douek argues that governments should engage with more targeted issues when designing regulatory frameworks. Potential checks on live-streaming abuses could include strengthening account validation, limiting the ability for new users to live-stream, restricting audience sizes, or monitoring accounts for community standard violations. Other options include directly inducing companies to dedicate more resources to anticipating and responding to events like the Christchurch attack, and obligating companies to be more transparent about their practices for dealing with problematic content.

Without endorsing a particular approach, Thompson similarly raises potential regulatory responses to policing extremist content. Options include pre-vetting for live-streaming access, take-down notices, advertising restrictions for non-compliant social media, and algorithmic oversight and accountability. Like Douek, Thompson highlights that there are major tensions between pre-emptively restricting extremist content and impeding other democratic freedoms.

Despite regulatory design difficulties, Thompson believes that the Christchurch Call provides a basis for progressing deliberations on regulatory measures. Although the Christchurch Call is non-binding and only makes cursory reference to broader questions of human rights and civic accountability, Thompson argues that the pledge legitimizes future regulatory interventions in the digital media sector.

Thompson contends that a “consistent, international multilateral framework of regulation” must be the ultimate goal of the Christchurch Call. Yet, given the inherent difficulty of regulating at the international level, Thompson believes that parallel domestic action is required, even though regulations stemming from different jurisdictions may be uneven.

The question of what such domestic regulation should look like is harder to answer: effective regulation requires a nuanced balancing of interests, targeting extremist conduct without unduly infringing freedom of expression.