Response to the Christchurch Call

The new Christchurch Call to Action is largely vague and symbolic, but it may be worthwhile because it prompts internet companies to expand their collaboration against harmful content online, especially content that promotes terrorism. The Call is too narrow in the specific methods it proposes, however: content removal isn’t sufficient on its own.

The document was issued on May 15 at a Paris meeting led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron in response to the Christchurch massacre that was livestreamed on Facebook in March. The Call outlines a series of voluntary commitments for both governments and “online service providers” for the purpose of “eliminat[ing] terrorist and violent extremist content online.” Companies are to publish clearer rules regarding terrorist content, consequences for violating those rules, and reports on the quantity and nature of terrorist content that they remove. The companies that signed – including Amazon, Google, Microsoft, Facebook, and Twitter – also agreed to provide efficient appeals processes for users who have had content removed, and to develop methods to direct users away from terrorist content. The 17 governments that signed are to help develop ethical industry standards and – in sunny but vague terms – strengthen the “resilience and inclusiveness of [their] societies.”

The Call asks companies to remove terrorist content immediately and permanently, without impinging on freedom of expression. The agreement is voluntary and nonbinding, however, and it imposes no punishment for failure to remove even a single piece of illegal content.

This is a better approach than laws that try to force companies to take down all harmful content, since that simply isn’t possible. No matter what laws are passed or policies adopted, neither humans nor software can detect all harmful content quickly and accurately. Even once harmful content is identified, it’s extremely difficult to get it offline completely – there are just too many ways and places to post it. For example, Germany’s 2017 Network Enforcement Act (or NetzDG, an abbreviation of its German name) requires internet companies to remove content that is “evidently illegal” under 22 provisions of the German penal code within 24 hours of becoming aware of it – or face fines as high as 50 million euros. Yet according to a 2018 report, the law hasn’t advanced the goal of eliminating extremist content.

The Christchurch Call, on the other hand, focuses on building a framework for governments and companies to exchange knowledge about effective interventions against online harms and to coordinate responses to global events. The idea of industry collaboration on removing terrorist content is not new. Tech companies already share databases of ‘hashes’ – digital fingerprints of images and videos – for previously identified child pornography and terrorist content. But those efforts limit the information exchanged to hashes, which, as the sketch below illustrates, can only be part of the solution.
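To make the idea concrete, here is a minimal sketch of hash-based matching. Production systems rely on perceptual hashes (for example, Microsoft’s PhotoDNA) that tolerate re-encoding and minor edits; the cryptographic SHA-256 hashing below is a deliberately simplified stand-in, and the database contents and function names are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of previously identified
# terrorist content, as exchanged between companies. Real deployments use
# perceptual hashes (e.g., PhotoDNA) that survive re-encoding; SHA-256
# here is a simplified stand-in that only catches exact byte-for-byte copies.
SHARED_HASH_DATABASE = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example entry
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital fingerprint of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def is_known_terrorist_content(file_bytes: bytes) -> bool:
    """Check an upload against the shared database before it is published."""
    return fingerprint(file_bytes) in SHARED_HASH_DATABASE

# Screening an upload at submission time.
upload = b"...uploaded image or video bytes..."
if is_known_terrorist_content(upload):
    print("Block upload and queue for review")
else:
    print("Publish")
```

The sketch also makes the limitation visible: a hash can only match content that has already been identified and cataloged, so novel or even slightly altered material passes through unflagged – which is why hash exchange can only be part of the solution.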

The Call encourages companies to expand this collaboration to include other forms of knowledge and expertise. While this commitment is left open-ended, potential areas for knowledge sharing could include, for example, best practices for user reporting of questionable content or notification of removal, effective processes for engaging with external researchers and experts, or joint development of algorithms for detecting terrorist content. While some of this expanded knowledge sharing may already be happening informally, formalizing it could diversify the sources of knowledge and ensure that smaller companies benefit as well.

That said, the Call could spur overreach by governments and companies. While the document itself is nonbinding, it could be the basis for establishing future laws that are binding, and companies are likely to begin adopting new rules in light of the agreement. After the Call’s release, Facebook announced restrictions on livestreaming, and Canada announced a new ‘Digital Charter’ to guide the development of new internet regulations. As Article 19 pointed out in a statement, the Call could have “unintended consequences for freedom of expression and human rights, and prevent terrorists and other perpetrators of crimes from being held to account.”

Policy changes of this scale can affect billions of people, and the Call is much too vague. For example, it does not offer a clear definition of terrorism or violent extremism, and it groups companies together under the single label “online service providers.” Failing to clearly define terrorism and extremism could allow some governments to use the Call to pressure companies into removing political dissenters from their platforms by labeling them terrorists. Failing to distinguish the many roles that different companies play in supporting the internet – from managing physical infrastructure to hosting websites to building social media applications – paints these companies with a single, broad brush, overlooking the distinct ways in which different kinds of companies enable the spread of extremism.

Indeed, academics and representatives of human rights organizations last week gave Ardern a set of useful recommendations for making the Call’s provisions more specific. For example, governments should not outsource speech regulation to private companies, and they should not shut down the internet entirely in the midst of public unrest, violence, or protests. They also argued that the process for developing the Call should have involved civil society at an earlier stage and drawn on a more diverse range of individuals and groups – for example, experts from the Global South and from countries deeply affected by terrorism.

The Christchurch Call is also too vague in the solutions it proposes, and companies have so far responded with essentially one: deleting content. Accurate and comprehensive deletion is impossible, as noted above, and deletion isn’t the only effective response, as we have written before. Governments and platforms can play a role in alternative approaches as well. For example, Jigsaw, a research arm of Google, has developed what it calls the “Redirect Method,” which uses ad-targeting algorithms to deliver counter-messaging to young people who appear to be at risk of extremist radicalization.

Governments can also identify users whose language clearly influences killers but who take care to keep their own speech and behavior within the law. If authorities are confident that they have correctly tied a person’s true identity to their online personas, they can inform that person that they have been unmasked. Anecdotal evidence suggests this can improve behavior – as happened when a Huffington Post journalist uncovered the identity of a Texas man who had influenced the man who committed the massacre at a Pittsburgh synagogue last October. After the journalist contacted him, the Texas man’s online behavior improved significantly. None of these interventions involves removing content or users, but all have the potential to improve online discourse. We hope the Christchurch Call will foster cooperation among governments, companies, and external experts to share ideas and develop innovative responses to extremist speech beyond attempting to remove it all.

Cover photo is available under the Creative Commons CC0 1.0 Universal Public Domain Dedication