Twitter Bans Religious Dehumanization

Earlier this week, Twitter announced a new rule against language that dehumanizes others on the basis of religion. The change is a step in the right direction, but to truly mitigate offline harm, the company must define dehumanizing speech by its likely effect on others, not only by its literal content.

This new rule represents an important shift in how Twitter administrators think about harm. Until now, the company has focused on regulating speech that harms individuals on Twitter, such as harassment targeted at a specific person. The new rule, which has been in development since last August, acknowledges that speech on Twitter that targets groups can lead to harm offline as well. Dehumanization, one of the hallmarks of Dangerous Speech, is a good place to start for such a shift because its potential for offline harm is fairly intuitive. But the way Twitter defines dehumanization will determine how effective the new rule is.

For now, the rule is limited to religious dehumanization, but Twitter administrators have said that they will expand it to include other groups. This is important, as many non-religious groups also face dehumanizing rhetoric.

Twitter has also limited the new rule in another way. Judging by the examples Twitter shared in its announcement, the company is defining dehumanization as explicit comparison of a person or group to an animal or non-human object. This definition, and the focus on religion, appear to be an intentional response to public concern that a broadly defined rule against dehumanization would restrict too much speech, such as calling someone a “kitten” or “monster” in an endearing way. Dehumanization should indeed be defined narrowly, so that the term does not come to stand in for any form of harmful speech, but Twitter’s approach limits enforcement of the new rule to the most literal cases.

Yet dehumanization can also take place through implicit comparisons. For example, as Dr. Anna Szilagyi wrote on our blog last year, a speaker can dehumanize a group using coded language: without literally calling the group subhuman, the speaker can ensure that the audience still understands that this is the meaning of the words. She writes:

[Polish politician] Jaroslaw Kaczynski argued in a campaign speech that the refugees from the Middle East bring “very dangerous diseases long absent from Europe” and carry “all sorts of parasites and protozoa, which […] while not dangerous in the organisms of these people, could be dangerous here.” While he could claim that he was merely pointing to potential health risks, his implication that refugees constituted a “disease” was obvious to his listeners.

Even if Twitter administrators expand the new rule to include protections for refugees, it seems unlikely that they would classify Kaczynski’s speech as dehumanization, since it never literally calls refugees a disease.

This false negative is the inverse of Twitter’s feared false positives, in which people calling others “kittens” or “monsters” would be penalized. If Twitter content moderators considered the likely effect of a tweet in addition to its literal meaning, they would see that Kaczynski’s speech is likely to have a harmful, dehumanizing effect, whereas “All [religious group] are kittens” is not. This is similar to how we determine whether an expression is Dangerous Speech: by considering the context of an expression, not just its content, in order to assess its likely effect.

The public was right to point out to Twitter that there are times when it is acceptable to use language that, read literally, might be taken as dehumanizing. Twitter’s response to such concerns should be to incorporate context into its decision making. This is a challenge, especially when moderators must decide quickly whether a tweet is allowed. It is also much more difficult for speech detection algorithms to determine context and likely effect than to identify literal comparisons to subhuman objects or animals, as the sketch below illustrates. But incorporating context is a critical step if Twitter wants policies that are effective against dehumanization while still allowing innocuous tweets that meet the literal criteria of dehumanization.
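
To see why, consider a minimal sketch of a purely literal detector. This is not Twitter’s system; the pattern and word list are invented for illustration. A matcher like this flags explicit “all X are Y” comparisons, so it produces exactly the false positive Twitter fears (the endearing “kittens” tweet) while missing Kaczynski’s coded language entirely:

```python
# A minimal, invented sketch of purely literal dehumanization detection.
# This is NOT Twitter's system; the regex and word list are illustrative only.
import re

# Hypothetical list of subhuman comparison terms (illustration only).
SUBHUMAN_TERMS = {"rats", "vermin", "parasites", "disease", "kittens", "monsters"}

def flags_literal_dehumanization(tweet: str) -> bool:
    """Flag tweets that literally equate a group with a subhuman term."""
    # Crude stand-in for literal-comparison detection: "all <group> are <term>".
    match = re.search(r"\ball\s+(.+?)\s+are\s+(\w+)", tweet.lower())
    return bool(match and match.group(2) in SUBHUMAN_TERMS)

examples = [
    "All refugees are rats",              # flagged: explicit comparison
    "All [religious group] are kittens",  # flagged: false positive (endearing)
    "Refugees carry parasites and protozoa which could be dangerous here",
                                          # missed: coded, implicit comparison
]

for tweet in examples:
    print(flags_literal_dehumanization(tweet), "-", tweet)
```

A context-aware system would instead have to weigh who is speaking, to whom, and how the audience is likely to understand the words: exactly the judgment that a literal matcher cannot make.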