Civil Society Puts a Hand on the Wheel: Diverse Responses to Harmful Speech

This essay was published by the Berkman Klein Center for Internet & Society as part of a collection of essays on harmful speech online.

A single response to harmful speech online – deletion or ‘takedown’ in industry parlance – is by far the most discussed and the most demanded, but other responses deserve notice, especially those that convince people to post less harmful speech.

Takedown focuses on the offending content alone, not on those who post it or on those harmed by it, so it does little to convince people to stop thinking, saying, or re-posting harmful speech, and it doesn’t remedy the harm done to people who see the content before it’s removed. Takedown is also an ever-expanding game of Whac-A-Mole: it can’t keep up with the staggering rate at which new content appears online, except perhaps by becoming automated, algorithmic prior censorship, which tends to be overbroad and would impinge on freedom of speech. Finally, takedown is a method that can be practiced only by Internet companies, which make and apply their own internal rules for it while governments pressure them to take down more content.

Meanwhile, alternative responses to harmful speech online are being invented and tested by Internet users and nonprofit organizations. Some of these efforts have persuaded people, albeit on a small scale so far, to stop posting harmful content. Other civil society responses sidestep both content and content producers, to succor the targets of harmful speech instead. Still others use humor to defuse harmful speech. Of the intriguing new civil society responses to harmful speech, a few of the most promising are described below.

A common thread among these methods is to call out harmful speech offline, where people may be more susceptible to social pressure against it. If a young man or boy threatens to rape you, for example, you may get a speedy apology by telling (or merely threatening to tell) his mother. When Oliver Rawlings, then 20, sent a highly offensive sexualized tweet to the University of Cambridge classics scholar Mary Beard, one of her Twitter followers offered Beard the mailing address of his mother. Rawlings apologized instead, and Beard later invited him to lunch in Cambridge. Alanah Pearce, an Australian video game critic, found the mothers of several of her trolls on Facebook and forwarded rape threats their sons had sent her. One of the mothers required her son to hand-write a letter of apology and pressed his school to teach online safety.

Not everyone relies on mothers: many outraged Internet users contact employers instead, demanding that people be fired. Such demands have succeeded, but this type of effort often spills over into vigilantism and excessive punishment. In many cases the response to harmful speech is as vitriolic and relentless as the speech it denounces.

In light of these dangers, some campaigns against harmful speech have been thoughtfully calibrated to teach, while also protecting people from angry outrage. In Brazil, for example, the selection of Maria Julia Coutinho as the first black weather forecaster for the popular television news program Jornal Nacional in 2015 was met with a surge of online racism. The black women’s rights organization Criola responded with a campaign called Mirrors of Racism, reproducing racist comments from Facebook on large billboards with the slogan “Virtual Racism. Real consequences.”

Criola’s director Jurema Werneck said the organization geolocated the authors of the racist comments by studying their social media presences, and put the billboards in their own neighborhoods, so they would see their own words called out near their homes. But Criola chose not to name them. “We omitted names and faces of the authors – we had no intention of exposing them. We just wanted to raise awareness and start a discussion, in order to make people think about the consequences before posting this kind of comments on the internet,” Criola wrote. The authors of the racist posts were thus protected from public attack, but were still exposed to the silent shaming of seeing their own words emblazoned in large letters and called out as harmful speech.

As a result of the campaign, 83% of the commenters deleted their accounts, according to Criola, and one came forward to apologize. “I could see just how racist I had really been, even though that wasn’t really my intention,” said Lucas Arruda, standing in front of a billboard bearing his post: “Cheguei a casa fedendo a preto” (I got home stinking of black people).

In some cases, the goal of a response to harmful speech is not to discourage that speech per se, but to alleviate the harm felt by the people at whom it is aimed. HeartMob is a web platform that allows targets of harassment to recruit online ‘bystanders’ (other people who happen to be online at the same time) to help them during an attack by giving particular types of aid, such as sending the target supportive messages, documenting the abuse, or reporting the content to an Internet platform. Emily May, who created HeartMob after being inspired by activism to diminish street harassment of women, points out that bystanders are not always present on the street, but they are always available online. Trollbusters, a platform similar to HeartMob, recruits “virtual S.O.S. teams” to send messages of support to the social media feed of a target of online harassment, in an attempt to drown the abuse out.

Blockbots are one more tool that targets of harassment use to protect themselves, in this case by ensuring they never see the abusive content at all. Blockbots are applications that allow Twitter users to block lists of other users, and to share those lists easily. When a user subscribes to a list, the bot uses the Twitter API to block the accounts on the list automatically, sparing the subscriber the tedious process of blocking each account individually. This innovation proved so useful for targets of harassment that in 2015 Twitter built the option to import and export block lists into the platform itself.
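To make that mechanism concrete, the sketch below shows the core blockbot operation in Python. It is a minimal illustration, not the code of any actual blockbot: it assumes the tweepy library (4.x method names) and the Twitter v1.1 blocks endpoints of that era, and the credentials, the apply_block_list helper, and the account IDs are all hypothetical placeholders.

```python
# A minimal sketch of the blockbot idea described above: given a shared list
# of account IDs, block each one on behalf of the subscribing user.
# Assumptions: tweepy 4.x and the Twitter v1.1 blocks endpoints; credentials
# and IDs below are placeholders, not real values.

import tweepy

def apply_block_list(api: tweepy.API, shared_list: list[int]) -> None:
    """Block every account on a shared list, skipping any already blocked."""
    already_blocked = set(api.get_blocked_ids())  # GET blocks/ids
    for user_id in shared_list:
        if user_id in already_blocked:
            continue  # don't repeat a block the subscriber already has
        try:
            api.create_block(user_id=user_id)  # POST blocks/create
        except tweepy.TweepyException as err:
            print(f"could not block {user_id}: {err}")

# The subscriber authorizes the bot with their own credentials, and the bot
# applies the curated list on their behalf.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
shared_list = [1111111111, 2222222222]  # placeholder account IDs
apply_block_list(tweepy.API(auth), shared_list)
```

Real blockbots were full web services that also handled subscriptions, list curation, and API rate limits, but the essence is this loop: a shared, community-maintained list driving automatic block calls, so no subscriber has to block each harasser by hand.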

These efforts use fundamentally different methods to diminish harm, from magnifying racist messages and displaying them, literally, in the sunlight, to leaving them hidden in the dark corners of Twitter. Each of them deserves study to determine its actual capacity to reduce harmful speech or the damage it does. Researchers must be careful, though, to protect themselves from targeted abuse in the course of their work.


(CC BY 3.0)