Dangerous Speech: A Practical Guide


Introduction

No one has ever been born hating or fearing other people. That has to be taught – and those harmful lessons seem to be similar, though they’re given in highly disparate cultures, languages, and places. Leaders have used particular kinds of rhetoric to turn groups of people violently against one another throughout human history, by demonizing and denigrating others. Vocabulary varies, but the same themes recur: members of other groups are depicted as threats so serious that violence against them comes to seem acceptable or even necessary. Such language (or images or any other form of communication) is what we have termed “dangerous speech.”

Naming and studying dangerous speech can be useful for violence prevention in several ways. First, a rise in the abundance or severity of dangerous speech can serve as an early warning indicator for violence between groups. Second, violence might be prevented or at least diminished by limiting dangerous speech or its harmful effects on people. We do not believe this can or should be achieved through censorship. Instead, it’s possible to educate people so they become less susceptible to dangerous speech. The ideas described here have been used around the world, both to monitor and to counter dangerous speech.

This guide, a revised version of an earlier text (Benesch, 2013) defines dangerous speech, explains how to determine which messages are indeed dangerous, and illustrates why the concept is useful for preventing violence. We also discuss how digital and social media allow dangerous speech to spread and threaten peace, and describe some promising methods for reducing dangerous speech – or its harmful effects on people.

 

Defining dangerous speech

In the early 2000s, Benesch (2003, p. 503) noticed striking similarities in the rhetoric that political leaders in many countries had used during the months and years before major violence broke out. Since such messages seem to have special power to inspire violence, we have studied them, in search of ways to diminish their effect and preserve peace. We call this category dangerous speech and have defined it as:

Any form of expression (e.g. speech, text, or images) that can increase the risk that its audience will condone or commit violence against members of another group.

Importantly, the definition refers to increasing the risk of violence, not causing it. We generally cannot know that speech caused violence, except when people are forced by others to commit violence under a credible threat of being killed themselves. People commit violence for many reasons, and there is no reliable way to find them all or to measure their relative importance. Often, even the person who commits violence does not fully comprehend the reasons why. To say that speech is dangerous, then, one must make an educated guess about the effect that the speech is likely to have on other people.

In the definition of dangerous speech, violence means direct physical (or bodily) harm inflicted on people, not other damaging behavior such as doxing, incitement to self-harm, or discrimination. These are also important to prevent, of course, and dangerous speech may inspire people to hurt other people in many ways, including these. We focus on physical violence between groups of people for clarity (it is usually difficult to contest evidence of physical violence) and consensus (people of very different backgrounds have endorsed the notion of dangerous speech, agreeing that intergroup violence should be prevented). Our definition also includes speech that increases the risk that an audience will condone violence, not only commit it. This is vital since, in situations of large-scale violence between groups, typically only a very small proportion of the population (usually young men) actually carries out violence (Straus, 2004, p. 95; Kuhl, 2016, ch. 1). People close to them, however – siblings, friends, and teachers, for example – often condone or even encourage it.

 

Dangerous speech is aimed at groups

Dangerous speech increases the risk that its audience (the “in-group” as it is often called by scholars) will commit or condone violence against another group (the “out-group”). The out-group must have a defining characteristic that is both different from and meaningful to the audience (whether or not it accurately describes or is meaningful to members of the out-group themselves). Common dividing lines include race, ethnicity, religion, class, or sexual orientation, but in some cases dangerous speech is aimed at groups defined by other characteristics, such as occupation (journalists, for example). Merely being in the same location or attending the same school, however, would not define a group for the purposes of dangerous speech analysis.

Speech targeting individuals is usually outside the scope of dangerous speech; however, in some cases an individual can symbolize a group so that targeting that person becomes a form of dangerous speech against the group they represent. For example, some Pakistanis called for harming the Pakistani Nobel laureate Malala Yousafzai, attacking her as an individual and also as a leader of women they saw as subversive or traitorous (Kugelman, 2017). Similarly, Hungarian Prime Minister Viktor Orbán and his government denigrate the Hungarian-American philanthropist George Soros as an individual and also as a wealthy, powerful Jew, using familiar antisemitic tropes such as referring to Soros as a puppet master (Walker, 2017).

 

Dangerous speech promotes fear

A defining feature of dangerous speech is that it often promotes fear, at least as much as it expresses or promotes hatred. For example, one can assert that another group is planning to attack one’s own group without expressing any hatred, yet that message might easily convince people to condone or commit violence, ostensibly to fend off the attack: the violence would seem defensive, and therefore justified. Contemporary rhetoric in many countries, for instance, portrays immigrants as a catastrophic threat. Prime Minister Orbán and U.S. President Donald Trump have both referred to migration as a “Trojan horse” that will necessarily increase criminal activity and terrorism (Brunsden, 2017; Kopan, 2015).

Frightening messages may also spread even more widely and quickly than purely hateful ones, since many people share them without malevolent intentions, or even the desire to incite violence. They feel genuine fear.

 

Dangerous speech is often false

Dangerous speech is commonly false – not surprising, since it describes whole groups of human beings in appalling terms. Unfortunately, people are easily persuaded by misinformation (false assertions) and disinformation (false assertions spread knowingly and intentionally). And when falsehoods are frightening, people are even more likely to accept exaggerated or false messages and to spread them, even when they are not sure whether they are true (Leader Maynard and Benesch, 2016, p. 78).

 

Dangerous speech harms indirectly

Though dangerous speech can lead to terrible harm, it does so indirectly, by motivating others to think and act against members of the group in question. Speech can also harm directly of course, by offending, denigrating, humiliating or frightening the people it purports to describe – as when a racist shouts slurs at a person of color. One message may, of course, harm both directly and indirectly.

 

Dangerous speech and hate speech

Dangerous speech is also quite different from “hate speech,” a term that, though widely used, is hard to define clearly and consistently. The simple words “hate speech” raise a variety of questions. For instance, what exactly is hatred? How strong or enduring must an emotion be to count? And does the “hate” in hate speech mean that the speaker hates, or seeks to persuade others to hate, or wants to make people feel hated?

There is one common thread among definitions of hate speech, which is that it vilifies a person or group of people because they belong to a group or share an identity of some kind. Therefore it’s not hate speech to say “I hate you” without referring to a group.

Most definitions specify types of groups. To be considered hate speech, messages must be directed at people who share a religion, race, or ethnicity, for example. Other types of identity that appear in some definitions (but not others) are disability, sexual orientation, gender, sex, age, culture, belief, refugee status, caste, or “life stance.” For example, section 135a of Norway’s penal code defines hate speech as “threatening or insulting anyone, or inciting hatred or persecution of or contempt for anyone because of his or her a) skin color or national or ethnic origin, b) religion or life stance, or c) homosexuality, lifestyle or orientation” (The General Civil Penal Code). The hate speech provisions of South Africa’s equality law specify groups and attributes that are absent from other countries’ laws, such as pregnancy, marital status, conscience, language, color, and “any other group where discrimination based on that other ground (i) causes or perpetuates systemic disadvantage; (ii) undermines human dignity; or (iii) adversely affects the equal enjoyment of a person’s rights and freedoms in a serious manner that is comparable to discrimination […]” (Promotion of Equality, 2000, pp. 3-5). Most countries’ laws don’t prohibit hate speech at all, instead criminalizing other related forms of speech, such as incitement to hatred.

Broad or vague definitions of hate speech and related crimes can jeopardize freedom of speech, since governments often use vague laws to punish their political opponents, or the very minorities against whom hate speech abounds, like the Roma in Hungary (Hungarian Civil Liberties Union, 2013). Indeed, governments in countries as varied as India, Rwanda, and Kazakhstan have used laws against hate speech or hateful speech to punish and silence journalists, dissenters, and minorities (DNA India 2015; Amnesty International, 2010, p. 28; Mchangama, 2019). This may even increase the risk of violence, by preventing people from expressing and resolving their grievances peacefully.

We focus instead on dangerous speech, since it is a narrower, more specific category, defined not by a subjective emotion such as hatred, but by its capacity to inspire a harm that is all too easy to identify – intergroup violence – and that many people can agree on wanting to prevent.

 

The Dangerous Speech Framework

One cannot make a list of words that are dangerous, since the way in which any message will be understood – like its effect on an audience – depends not only on its content but on how it is communicated: by whom, to whom, and under what circumstances. The very same words can be highly inflammatory, or benign.

To understand whether a message is dangerous when spread in a particular context, one must examine both content and context. It’s important, also, to be able to compare the dangerousness of different messages. To this end we have developed a straightforward and systematic way to analyze speech in context – listing and describing all of the elements that can make a particular example of speech more dangerous. The result is a five-part framework (see Figure 1) which includes the message itself, the audience, the historical and social context of the message, the speaker, and the medium with which a speaker delivers a message. Analyzing each of these five elements is not only essential for identifying how dangerous speech operates, it is also useful for designing interventions to diminish the dangerousness of that speech.

To use the framework for a particular example of speech, one asks whether each of the five elements makes it dangerous, and if so, how dangerous. For example, one might ask whether a message came from a compelling or influential source. Because the social, historical, and cultural context in which speech was made or disseminated is essential for understanding its possible impact, this analysis must be carried out with extensive knowledge of the relevant language, culture, and social conditions – or at least with assistance from advisors who have such knowledge.

After considering all five elements in turn, one asks on the basis of that analysis: did/would this message make people more ready to commit or condone violence? This is not a calculation or a formula; it is a qualitative assessment meant to help predict the behavior of other people.

Figure 1. The Dangerous Speech Five-Part Framework
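
For groups that monitor speech, it can help to record each analysis in a consistent structure, so that messages can be compared over time. Below is a minimal sketch in Python of such a record; the field names are our own illustration, not part of the framework itself. Deliberately, it computes nothing: as noted above, judging dangerousness is a qualitative assessment, not a formula.

```python
from dataclasses import dataclass

# A minimal, hypothetical record for one dangerous speech analysis.
# It stores qualitative notes only; the framework calls for an
# educated guess about likely effects, not a numeric score.

@dataclass
class DangerousSpeechAnalysis:
    message: str     # hallmarks and other inflammatory content observed
    audience: str    # susceptibility: fear, grievances, unresolved trauma
    context: str     # social and historical setting, e.g. past violence
    speaker: str     # the source's influence, if the source is known
    medium: str      # reach, repetition, and credibility of the channel
    assessment: str  # did/would this message make violence more likely?

    def summary(self) -> str:
        """Render the five elements and the overall judgment as text."""
        return "\n".join(
            f"{name.capitalize()}: {value}"
            for name, value in vars(self).items()
        )

# Example use, for a message whose speaker is unknown:
analysis = DangerousSpeechAnalysis(
    message="accusation in a mirror: predicts an attack by the out-group",
    audience="primed by years of frightening rumors",
    context="unresolved grievances from earlier intergroup violence",
    speaker="anonymous; influence unknown",
    medium="viral forwarding on a messaging app",
    assessment="dangerous despite the unknown speaker",
)
print(analysis.summary())
```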

 

Indeed, all five elements need not be important, or even relevant, in every case. For example, sometimes the speaker is irrelevant, because unknown (many messages are distributed anonymously, whether in an online post or a printed flyer) or not influential with the relevant audience. Such speech may still be dangerous, if its message is inflammatory and an audience is primed to accept it. Only two elements are always required for speech to be dangerous: inflammatory content and a susceptible audience.

Moreover, it isn’t the case that speech is either dangerous or not dangerous at all. Rather, speech can be imagined along a spectrum of dangerousness, or like a row of dominoes in which each piece affects its neighbor. As people come to accept a moderately dangerous message, they also become a bit more likely to accept an even more dangerous one. In this way, normal social barriers to violence erode as increasingly dangerous speech begins to saturate the social environment.

In general, the dangerous speech that comes just before violence breaks out is easiest to identify since its meaning tends to be clear and it often calls for, or at least endorses, violence. Months or years earlier, speech is often expressed in ambiguous, coded language, so that both its meaning and its impact are less apparent. This doesn’t mean that it can be safely disregarded.

Witnesses and scholars generally agree that speech helped to catalyze the 1994 Rwandan genocide, in which thousands of Hutu men massacred between 500,000 and 800,000 people, mainly of the Tutsi ethnic group, and mainly by hand, using machetes: such a laborious way to kill that it seems they were highly motivated (Des Forges, 1999, pp. 5, 15). Indeed, inflammatory speech against Tutsi had circulated in Rwanda for years before the genocide, and it was believed to have played such an important role that the International Criminal Tribunal for Rwanda (ICTR) made speech crimes a major focus of its cases. One of the best known was Prosecutor v. Ferdinand Nahimana, Jean-Bosco Barayagwiza, Hassan Ngeze, the so-called Media Trial, at which a newspaper editor and two executives of Radio Télévision Libre des Mille Collines (RTLM) – bitterly nicknamed Radio Machete – were all convicted of incitement to genocide, among other crimes (ICTR, 2007, p. 2). The case was complicated, however, by the fact that the evidence presented was mostly ambiguous language, not explicit encouragement to kill.

During the trial, a witness recounted the gradual, poisonous spread of what we call dangerous speech over RTLM’s existence, from July 1993 to July 1994. “I monitored the RTLM virtually from the day of its creation to the end of the genocide, and, as a witness of facts, I observed that the operation of the genocide was not the work done within a day.” The witness went on to describe RTLM’s effect on its audience:

“[W]hat RTLM did was almost to pour petrol – to spread petrol throughout the country little by little, so that one day it would be able to set fire to the whole country.”

This testimony teaches that dangerous speech of all types should be analyzed carefully, to gauge its harmful effects and also to avoid defining it too broadly: some offensive or hateful speech, while odious, will not inspire violence if an audience isn’t susceptible. The framework below is meant for identifying “drops of petrol,” and making a systematic, educated guess as to where speech fits along a spectrum of dangerousness.

 

1. Message

People express themselves in a seemingly infinite variety of ways, and dangerous speech is no exception. Dangerous speakers frequently use coded language, including terms familiar to their in-group but not to the out-group. This can be even more effective than explicit speech: shared jargon serves to bind the in-group together, and also gives the person who uses dangerous speech a basis for denying it.

Regardless of the language or images with which it is expressed, we have found that dangerous speech often contains similar ideas – rhetorical patterns that we call “hallmarks” of dangerous speech. Note that a hallmark does not, by itself, make a message dangerous.

All groups of humans use these techniques, regardless of language, country, race, color, or class – just as virtually all groups also commit violence against other people. Similarly, this kind of rhetoric is found throughout human history.

 

Dangerous speech hallmarks

We have identified five hallmarks of dangerous speech, which we call dehumanization, accusation in a mirror, threat to group integrity or purity, assertion of attack against women and girls or children, and questioning in-group loyalty. All of the examples of dangerous speech that we have found contain at least one of the hallmarks described below.

Our list is not exhaustive, however. We expect it to grow and change as researchers gather more dangerous speech and observe patterns in it. Already, others have shed vital light, especially the political scientist Jonathan Leader Maynard, in his studies of ideology that justifies mass violence. Leader Maynard has identified six “justificatory mechanisms” that enable and encourage mass killing, by characterizing either the victims-to-be (e.g. accusing a group of being guilty of crimes) or the perpetrators (e.g. valorizing those who commit violence) (2015, p. 170). Leader Maynard (2022) also describes four “prohibitory mechanisms” that can discourage such violence.

 

Dehumanization

By describing other groups of people as something other than human, or less than human, speakers can persuade their audiences to deny those people some of the moral consideration they give to those who are “fully” human (Leader Maynard and Benesch, 2016, pp. 80-81). Dehumanization prepares audiences to condone or commit violence by making the targets’ death and suffering seem less significant, or even useful or necessary.

The philosopher David Livingstone Smith, who has written two monographs on dehumanization, argues that when people refer to others as subhuman, they’re not speaking metaphorically or strategically; they literally see certain people as another species trying to pass as human. Dehumanization is a “psychological response to political forces,” rooted in our propensity for hierarchical thinking, that allows us “to disable our inhibitions, often with catastrophic consequences” (Smith, 2020, pp. 107, 100-101). Its effects, therefore, cannot be countered solely through persuasion.

There are several types of dehumanizing messages, each of which elicits certain emotional or practical responses.

Speakers often describe an out-group as biologically subhuman: as animals, insects, or even microorganisms such as bacteria or viruses. Again and again in cases of genocide and mass atrocity, supporters and perpetrators have referred to their victims as vermin (rats, cockroaches, foxes, or snakes), beasts (apes or baboons), or biological hazards (a virus, tumors, or an infection). Not all language comparing people to animals or other non-human creatures is dehumanizing or dangerous, of course – it’s possible to compare a person to an animal in a way that doesn’t lower social barriers to violence.

Generally, speakers choose to compare out-group members with creatures that their audiences regard as repulsive, threatening, or deserving of violence (Leader Maynard, 2015, p. 197). It is almost instinctual knowledge, for example, how to deal with an infestation of vermin: try to eliminate the creatures completely. When Rwandan Hutu extremist media referred to the Tutsi ethnic group as cockroaches in the months preceding the 1994 genocide which left hundreds of thousands of Tutsis dead, they suggested the same action – extermination; one military training operation was even called “Operation Insecticide” (Des Forges, 1999, p. 666).

In the same way, government rhetoric during the Cambodian genocide warned that enemies of the Khmer Rouge regime were “microbes” and a “sickness” to be completely eliminated lest they “rot us from within” (Hinton, 2005, p. 147). One regime slogan declared, “What is infected must be cut; what is rotten must be removed” (Weitz, 2015, p. 156). Like depictions of humans as an infestation of insects, these messages were meant to disgust – but they also suggest that, like cancerous growth or bacterial infections, the Khmer Rouge’s opponents had to be removed completely. Indeed, government soldiers killed more than one million Cambodians between 1975 and 1979, by forced labor, torture, and mass execution.

Speakers also refer to out-groups in supernatural terms. Unlike forms of dehumanization that make targets seem lesser or weak, supernatural dehumanization makes them seem more powerful than humans, and threatening to them. For example, during World War II, Japanese propaganda portrayed American and British leaders as “demons,” “evil spirits,” and “monsters” (Smith, 2011, p. 22). U.S. war propaganda posters similarly demonized Japanese and German people (Brcak and Pavia, 1994, p. 682; Lane, 2014, pp. 49-53). And in the decades following the United States’ Civil War and the emancipation of enslaved people in the country, newspapers covering lynchings of black people by white supremacists described the victims as “inhuman” or “unnatural” monsters who terrorized white communities (Smith, 2018).

The language of environmental threats such as floods and pollution can also be used to dehumanize people. Amid anxiety about climate change, this is now common around the world. Speakers in many countries have compared present-day mass migration to environmental catastrophe, from Israel’s Prime Minister Benjamin Netanyahu, who said that if Israel took down its border fence with Egypt, it would face “attacks by terrorist groups in the Sinai and the worst thing: a flood of illegal infiltrators from Africa” (Zikri, 2018), to the United Kingdom’s Daily Mail newspaper, which ran a headline comparing the supposed threat of a “tidal wave of migrants” to that of the Second World War (Burleigh, 2015). Comparisons like these are not new: in 1920, American eugenicist Lothrop Stoddard referred to arriving immigrants as “a rising tide of color” that would destroy the privileged status of white people in the United States (Smith, 2011, p. 97). In 1914, when 376 people from India (unsuccessfully) attempted to immigrate to Canada on the S.S. Komagata Maru, the Vancouver Sun newspaper ran a cartoon with the title “Will the Dyke Hold?” which depicted a tidal wave shaped like a man in a turban, racing toward the Canadian coastline (Mackie, 2014). While these comparisons of human beings to floods were meant to justify government policies of exclusion, similar rhetoric is also used to justify forcing people out violently. In 1915, clandestine plans to ethnically cleanse Armenians from the Ottoman Empire referred to uprooting “malignant weeds” (Kuper, 1981, p. 91), just as radio broadcasts during Kenya’s post-election violence in 2008 encouraged the Kalenjin ethnic group to “clear the weeds,” referring to a rival Kenyan group, the Kikuyu (McCrummen, 2008). In both cases, these messages preceded widespread killing and mass displacement.

Comparisons like those above are a type of demonizing dehumanization, under which people are seen as both metaphysically and physically threatening, often with greater-than-human powers (Smith, 2020, p. 177). The combination of these two perceived threats amplifies the targets’ apparent dangerousness. In contrast, there is also enfeebling dehumanization, in which the targets are seen as a metaphysical threat but as physically innocuous – for instance in military combat, where soldiers or civilians are seen as prey (Smith, 2020, p. 177). Enfeebling rhetoric is still dangerous, and a speaker needn’t refer explicitly to people as something other than human; a speaker may instead use terms that imply dehumanization. For example, when Brazilian politician – now President – Jair Bolsonaro visited a quilombo (a community inhabited primarily by the descendants of enslaved Africans) in 2017, he mockingly described a black man as weighing 7 arrobas – a weight unit used in the country’s agriculture industry, especially for cattle (Simões, 2018). During U.S. President Barack Obama’s term in office, officials made the case for his drone policy by referring to Al-Qaeda combatants as a “cancerous tumor” that required “surgery” to remove – describing them as a biological hazard (Bachman and Holland, 2019, p. 6).

Lastly, like all other hallmarks, dehumanization is neither necessary nor sufficient for dangerous speech. People can inflict violence on others while perceiving them as human. Paul Bloom (2017) writes that people need not dehumanize others in order to mistreat or even torture them. On the contrary, he argues, one can only take full satisfaction from inflicting cruelty when one’s victims can feel humiliated and debased – which are human capacities. “The sadism of treating human beings like vermin lies precisely in the recognition that they are not.”

 

Accusation in a Mirror

The most powerful way to foment intergroup conflict is to frame violence as the only way to protect an in-group against greater harm, even annihilation. To that end, dangerous speech often includes a special kind of justification of violence that has become known as “accusation in a mirror.”

The term comes from an anonymous manual for propaganda and recruitment found in Butare, Rwanda, after the 1994 genocide. The document advises attributing to one’s enemies the very acts of violence the speaker hopes to commit against them. “In this way,” the author writes, “the party which is using terror will accuse the enemy of using terror” (Des Forges, 1999, p. 66).

Predicting violence by another group is especially powerful (whether the threat is real, false, or exaggerated), since it makes violence against that group seem defensive and necessary. In this sense, accusation in a mirror is a collective analogue of the defense to homicide that is available in virtually all legal systems: self-defense. Believing that you, your family, your group, or even your culture faces an existential threat from another group makes violence to fend off that threat seem not merely acceptable (as dehumanization does), but necessary.

One of the Rwandan propagandists who famously used this technique is Léon Mugesera, whom Canada deported after the Supreme Court of Canada found “reasonable grounds to believe” that he had committed incitement to genocide, based on a speech he gave in Rwanda in November 1992 (17 months before the genocide began), in which he told his Hutu audience that they were in mortal danger. For instance, he said a Hutu man had been summarily shot by armed men – Tutsi, his audience was meant to understand. Then he predicted much worse: “they only want to exterminate us: they have no other aim” (Mugesera v. Canada, 2005; Straus, n.d.). Mugesera was later convicted of genocide crimes in Rwanda, based on his public speech before the genocide, and sentenced to life in prison.

The technique of accusation in a mirror was hardly invented by Hutu extremists: it is one of the most common hallmarks of dangerous speech. In Nazi Germany, for example, anti-Semitic propaganda repeatedly and relentlessly accused Jewish people of hatching a Mordplot (murderous plan) to eliminate all non-Jews (Streicher, 1934, p. 1). This assertion was especially preposterous since the Jews had no military or guerrilla force at all, yet it was apparently convincing.

Some of the most powerful accusations in a mirror come from speakers who suggest that their own group is in danger of being totally annihilated: that it faces genocide.

For example, Nazi SS Reichsführer Heinrich Himmler told senior officers in 1943 that “we had the moral right … to wipe out [the Jewish people] bent on wiping us out” (Leader Maynard, 2015, p. 203). And General Ratko Mladić, who became known as the “Butcher of Bosnia” for directing killings including the massacre of more than 8,000 Bosnian Muslim men and boys at Srebrenica in 1995 (Osborne, 2017), earlier claimed that Muslims, Germans, and Croatians were planning for “the complete annihilation of the Serbian people” (Kiernan, 2009, p. 591).

 

Threat to Group Integrity or Purity

Another rhetorical technique, or hallmark, of dangerous speech, is to assert that members of another group can cause irreparable damage to the integrity or purity of one’s own group. A 1931 German cartoon from Julius Streicher’s Nazi newspaper Der Stürmer shows an apple sliced open with a knife marked with a swastika. Inside the apple is a worm that has a stereotypically Jewish face. The caption reads “Wo etwas faul ist, ist der Jude die Ursache” (“Where something is rotten, the Jew is the cause”) (Bytwerk, n.d.).


Similarly, in the ethnic attacks following the December 2007 presidential election in Kenya, members of the Kalenjin ethnic group referred to Kikuyu people as “madoadoa” (spots) that should be removed (Truth, Justice, and Reconciliation Commission, 2008, pp. 39, 41, 63, 71, 82, 84, 92; Jenkins, 2012, p. 591).

By portraying members of the target group as a threat to an audience group, this type of message reinforces fear. Moreover, these messages indirectly (and sometimes directly) instruct people to rid their group of the supposed contaminant, to preserve the health of their own group. Notably, this hallmark need not include any prediction of physical violence. Dangerous speech may instead assert a threat to a culture, group identity, or political project (Chirot and McCauley, 2010, p. 62).

While such messages may not threaten bodily harm, they appeal to powerful emotional connections between people and their identity groups and belief systems. Norwegian mass murderer Anders Breivik, who killed 77 people in July 2011, claimed to be trying to prevent what he called a European “cultural suicide” brought on by the influences of multiculturalism, Islam, and “cultural Marxism” (Berwick, 2011, p. 12). In his manifesto (written under the pseudonym Anders Berwick), Breivik wrote that “the fate of European civilization” depends on men like him resisting such influences (Berwick, 2011, p. 38).

Similarly, Communists in the Soviet Union encouraged and justified violence against kulaks (landowning peasants), by suggesting that the kulaks posed an existential threat to other Russians. One Bolshevik leader instructed Communist Party organizers: “beat down the kulak agent wherever he raises his head. It’s war – it’s them or us” (Figes, 2008, p. 85).

 

Assertion of Attack Against Women and Girls, or Children

Related to the previous hallmark is the suggestion that women or girls of the in-group have been or will be threatened, harassed, or defiled by members of an out-group. In many cases, the purity of women symbolizes the purity, identity, or way of life of the group itself.

This hallmark is very common in dangerous speech around the world and throughout history, likely because it is difficult to ignore a warning of violence against members of a group who are traditionally viewed as vulnerable and needing protection. For most societies, this includes children (especially girls) and women; almost universally, men are instructed to protect women and children at all costs, up to and including killing an attacker.

In the United States, false claims of attacks against white women often led to lynchings and other violence against black people, especially in parts of the country where Africans had been enslaved. In Tulsa, Oklahoma, for example, after a 1921 report that a black man had assaulted a white woman, mobs of whites destroyed the homes of black residents (Johnson, 1998, pp. 258-259). Narratives and images of black men attacking white women also appeared in popular media such as the 1915 film Birth of a Nation. Like the book The Clansman on which it is based, the film depicts a black man attempting to rape a white woman, who escapes only by jumping to her death.

In one of many present-day examples, rumors that Rohingya Muslim men had raped a Buddhist woman in 2012 in Myanmar sparked riots (Gowen, 2017). In February 2016, the conservative mass-market Polish weekly wSieci published a striking cover image of a beautiful young blonde, blue-eyed woman wearing a dress made from the flag of the European Union. Six dark-skinned male hands grab and tear at her body (and the dress) as she screams in terror.


Though the image makes its meaning obvious, it was accompanied by the headline “Islamski gwałt na Europie” (Islamic rape of Europe). In each of these cases, men from the out-group are portrayed as criminal and/or barbaric, heightening a sense of threat (Sherwood, 2016).

Questioning In-Group Loyalty

Though dangerous speech usually describes members of the out-group or target group, some of it never mentions them, instead characterizing members of the in-group as insufficiently loyal, or even traitorous, for being sympathetic to the out-group. During atrocities, in-group members seen as disloyal are often punished as severely as, if not more severely than, members of the out-group. In the Rwandan genocide, for example, Hutus for the most part killed Tutsis, but so-called “moderate” Hutus were also often killed by fellow Hutus, for helping Tutsis or seeming to want to do so. In the months before the genocide, the Hutu-led radio station RTLM spread the message “kill or be killed,” which supported both the idea that killing Tutsis was an act of self-defense and the notion that Hutus who did not take part in the killing would themselves be killed (Yanagizawa-Drott, 2014, p. 1946). As Mary Kimani (2007, p. 113) notes, “RTLM, as well as political leaders, made it clear that killing ‘the enemy’ was the duty of every Rwandan.”

Such messages were also common earlier, in the years leading up to the genocide. In December 1990, Kangura, a pro-Hutu newspaper whose editor was later convicted of incitement to genocide in the Media Trial described above, published the “Hutu Ten Commandments,” which called Tutsi a “common enemy” and asserted that Hutus who formed romantic or business relationships with Tutsis were traitors. Hutus sympathetic to Tutsis, in other words, were depicted as a threat to the unity and survival of the Hutu people.

 

2. Audience

Even the most inflammatory message cannot inspire violence if its audience isn’t susceptible to such notions. A group may be fearful about past or present threats of violence, or already saturated with frightening messages. Economic hardship, alienation, unresolved collective trauma, or social norms in favor of obedience to authority may also make people more susceptible to dangerous speech.

Dangerous speech is often false, so audiences are more vulnerable to it when they can be duped into believing falsehoods – or are not skilled at distinguishing lies from truth. As false content propagates more and more widely online, it can lead to violence, and it seems to diminish participation in civic life. Researchers are trying to understand why people are more or less easily convinced by lies, in order to learn how to change this for the better. A 2018 study indicated that internet skills, photo-editing experience, and social media use were significant predictors of participants’ ability to evaluate the credibility of images (Shen et al., 2018, p. 20). In sum, people with less experience on digital media are more likely to be duped by false content.

Sometimes, speakers use language that isn’t dangerous in itself, but can render other messages more dangerous, by binding the members of a group more tightly to each other, to the group itself, and/or to its leader, or by strengthening distinctions between the in-group and the out-group. A common form of this binding speech is language that gives a sense of familial belonging to members of a group. In some cases, for instance, this talk tells them that they are bound by blood, not just politics. Such messages can amplify the effects of hallmarks of dangerous speech.

Most messages reach many types of people, and each receives them somewhat differently. Some people are much more willing and able to commit violence, for instance, though almost anyone can do so under certain circumstances, especially when they perceive an imminent threat to themselves or their fellow human beings (Leader Maynard and Benesch, 2016, p. 78). When analyzing speech for dangerousness, we try to predict its effect on the groups or individuals who are most susceptible, or most likely to commit violence.

Even where a group does not seem susceptible to dangerous speech, a few of its members usually are. So-called “lone wolf” attackers can be understood either as the most susceptible members of a group, or as individual “audiences,” moved to commit violence on their own. One lone wolf inspired by dangerous speech is Timothy McVeigh, who killed 168 people by bombing a U.S. government building in the state of Oklahoma in 1995, motivated and guided (in part) by The Turner Diaries, a racist, anti-Semitic novel in which characters commit a similar attack (Thomas, 2001).

 

3. Context

The social and historical context in which speech spreads also affects the extent to which it is dangerous, since any message may be understood in dramatically different ways in one place or time versus another. Any number of aspects of context may be relevant. When conducting a dangerous speech analysis, one should consider as many of those as possible.

For example, is there a history of violence between the groups? Messages encouraging violence, or describing another group as planning violence, are more inflammatory where groups have exchanged violence in the past, or where there are longstanding, unresolved grievances between them. Former attacks tend to weaken or remove psychological barriers to violence. The Israeli-Palestinian conflict is a striking example of this, as is recurring intercommunal violence in many parts of India. Unfortunately, there are dozens of other such cases around the world, in which past fighting and violence form a kind of collective psychological backdrop, making it all too easy to catalyze new violence with words.

Another question to consider is whether there are social norms, laws, and/or policies that put one group at special and persistent risk. Systemic discrimination can create a context in which it seems entirely normal – because it is officially and widely sanctioned – to regard a group of people as inferior, deficient, or wicked. For example, in Pakistan the Ahmadi, a religious minority, are denounced in law and by clerics, political leaders, and even journalists as traitors to the national religion of Islam. Since the Ahmadis’ beliefs are legally considered blasphemous, they face ostracism and even murder on account of their religion (Khan, 2003). They have been persecuted even for their efforts to defend themselves against dangerous speech.

A Pakistani court condemned three Ahmadi men to death in October 2017 for taking down an anti-Ahmadi sign (Hashim, 2017). A fourth man would likely have faced death at the hands of the state as well, but in 2014 a teenager walked into the police station where he was being held and shot him to death (Houreld, 2014).

Within this context, anti-Ahmadi speech is even more dangerous as the state has already proven its unwillingness to protect the Ahmadi or treat them as equal citizens. Discriminatory legal systems normalize persecution and create a context in which members of the in-group (usually the majority) feel protected for their personal acts of discrimination and even violence against members of the out-group.

Other aspects of social or historical context, such as competition between groups for resources like land or water, are also important to consider.

 

4. Speaker

When an inflammatory message comes from a person with influence, it tends to be more dangerous. Influence or authority can come from a variety of sources, including personal charisma, high social status, or official status such as political office. Public office, in turn, often comes with control of resources needed by an audience, and the power to deploy force against uncooperative audience members. In other cases, a speaker’s influence may derive from status as an unelected community leader, popular entertainer, or star athlete; indeed, religious and cultural leaders have more influence over some audiences than politicians do. A close family member or trusted friend might also be highly influential, in person or on a social media platform like Facebook or a messaging service like WhatsApp, where users connect with people who have influence over them.

Especially online, the source of a message may also be unknown, or there may be multiple sources of the same message. In that case the speaker doesn’t, of course, make the message more dangerous. The source of dangerous speech may also be an organization, company, group, or government, or a bot controlled by a person or organization. Governments often have disproportionate influence, and are powerful disseminators of dangerous speech. Moreover, governments speak not only in official statements, but also through law. For example, Russia’s 2013 law banning the distribution of “homosexual propaganda” to minors endangers LGBTQ people by vilifying their existence. The law seems designed to reinforce existing discriminatory attitudes and fears among the Russian population (Human Rights Watch, 2018). This is an all-too-common phenomenon, seen also in the case of the Ahmadi described in the previous section: law emerges from, and also reinforces, discriminatory and even dangerous social norms.

 

The Second Speaker

In many cases, a speaker makes a message dangerous not by creating it, but by distributing, and often distorting, someone else’s content. In mid-2017, a video clip began circulating virally in India on WhatsApp, a platform then used by 200 million people in that country (Elliott, 2018). The clip seemed to show security camera footage of a child being kidnapped. What most of the furious, frightened people who shared it didn’t know was that the clip was part of a longer video showing a mock kidnapping, in which the child is safely returned – made by a Pakistani charity to raise awareness about child abductions (Rebelo, 2017). The distorted version omitted the name of the charity, the campaign, and the safe return of the child. Instead, it appeared to show a real kidnapping, and it inspired gruesome vigilante lynchings. As many such rumors circulated online and offline, mobs killed 33 people in India between January 2017 and July 2018 (Sanghvi, 2018).

“Second” speakers may also play an important role by carrying messages to a new audience, or to a much larger one than the original speaker could reach. In November 2017, U.S. President Donald Trump retweeted a series of shockingly violent videos. One of them was falsely titled “Muslim migrant beats up Dutch boy on crutches!” – the Embassy of the Netherlands in the United States indicated via its own Twitter account that the boy who did the beating was not a Muslim migrant (Netherlands Embassy, 2017).

The videos were originally shared by Jayda Fransen, deputy leader of the far-right extremist group Britain First. Fransen then had 52,776 followers; Trump had over 42 million (Data Team, 2017). By retweeting the messages, the president not only disseminated dangerous speech to a much larger audience, but increased the legitimacy of the extremist message by endorsing it. Instead of creating the content, Trump gave it his highly influential voice.

 

5. Medium

Speech may take any number of forms, and can be disseminated by myriad means. It may be shouted during a rally, played on the radio as a song, captured in a photograph, written in a newspaper or on a poster, or shared through social media. The form of the speech and the manner in which it is disseminated affect how the message is received and therefore, how dangerous it is.

There are several factors to consider when analyzing a medium. The first is whether the speech was transmitted in a way that would allow it to reach a large audience. Private conversation around a dinner table, for example, will not reach as many people as a post on a public Facebook page with many followers.

A second question is whether the speech was transmitted in a way that would reinforce its capacity to persuade. For example, was it repeated frequently? Repetition tends to increase the acceptance of an idea. Or was the speech published in or broadcast on a media source that is particularly influential or respected among the intended audience? In the same way that an influential speaker lends legitimacy to a message, a media source that is trusted by a particular audience gives credibility to the messages it spreads.

The particular language used by the speaker may also play a role. In fieldwork on violence prevention efforts in Kenya following the 2007-2008 post-election violence there, more than one Kenyan told one of us that if they heard a message in English or Kiswahili (Kenya’s national languages), they heard it with their heads; if the same message came in their vernacular language (or “mother tongue”), they said, they heard it with their hearts – suggesting the message was more liable to rouse their emotions (Benesch, 2014, p. 25).

Messages also tend to have a greater capacity to persuade if there are no alternative sources of news available, or if other sources don’t seem credible. In Myanmar, most people relied on government-controlled radio, television, and newspapers for decades, until the country emerged from military rule in 2012. Only 1.1 percent of the population then had access to the internet. Within only four years, half the population had a mobile phone – and most of those phones had access to Facebook (Stecklow, 2018), which for many became synonymous with the internet itself (Beech and Nang, 2018). As a result, Facebook became a highly influential medium, used to spread frightening, false messages intended to turn the majority population against minority Rohingya Muslims, even as the country’s military carried out a vicious campaign to drive the Rohingya out, including rape, killing, and the burning of villages (Specia and Mozur, 2017). A Burmese administrator of a village that banned Muslims from even spending the night there told The New York Times, “I have to thank Facebook because it is giving me the true information in Myanmar” (Beech, 2017).

For generations, the Rohingya have faced discrimination and exclusion, and have been denied legal citizenship. Violence against them increased as government officials, influential Buddhist monks, and anonymous online sources described them as dangerous. Many also spread false rumors of upcoming attacks by Rohingya and dehumanized them, calling them “dogs,” “maggots,” “rapists,” or “pigs,” and calling for violence against them. Some posts even called for genocide – one Facebook page was called “We will genocide all of the Muslims and feed them to the dogs” (Stecklow, 2018). This rhetoric, much of which Facebook’s content moderators failed to detect, intensified as Myanmar escalated its campaign of forced relocation, driving almost one million Rohingya into Bangladesh. A Facebook post from September 2017 reads: “These non-human kalar dogs, the Bengalis, are killing and destroying our land, our water, and our ethnic people…We need to destroy their race” (Stecklow, 2018).

 

Dangerous Speech Online – the role of social media

Digital media and the internet have immeasurably changed the way people spread all kinds of messages, from the innocuous to the incendiary. Those who seek to turn groups of people violently against each other can spread dangerous speech quickly – especially in places where there is already a risk of mass violence. Ideas and narratives once confined to the fringes of popular discourse – including extremist ideas – are now widely available. Speakers who could hardly find an audience offline, even those who espouse the most widely-derided ideologies, can find at least a few fellow-thinkers across the world, and can form so-called “echo chambers” in which they bolster and further radicalize each other. By forging such bonds, people can collectively disseminate harmful content further than they could have alone and with the fervor of solidarity. Others are motivated neither by hatred nor conviction, but by simply wanting more followers and/or more money (from subscribers or advertisers) (Byrne, 2016).

Online, people can also communicate anonymously. On social media platforms like Twitter or Reddit, or messaging platforms like WhatsApp or Discord, they can spread ideas that they might not dare to express offline, where their identities would be known.

As it has become increasingly obvious that online content leads to serious offline harm, governments, researchers, activists, and internet companies have sought ways to diminish the problem. The first, most obvious response is simply to remove bad content or censor it. Each country has laws prohibiting certain forms of speech (they vary) and social media companies like Facebook and Twitter also have their own rules forbidding certain kinds of content, such as hate speech, nudity, or incitement to violence (Facebook, Inc., 2020; Twitter, Inc., 2020).

Censorship, whether by governments or private companies, poses significant risks to democracy and freedom of expression since it’s almost impossible to do it without making serious mistakes. First, although some content is obviously harmful or even illegal, most is quite context-dependent or ambiguous, and it’s often difficult to agree on where to draw the lines.

Second, policing the internet for harmful content is a job so huge that its scale is hard even to imagine: every day, over 1 billion people log on to Facebook alone and post billions of pieces of information (Zephoria Digital Marketing, 2018). Although internet companies train tens of thousands of people – who are often ill-paid to look at violent and deeply disturbing content all day – to decide which posts to take down, at such a scale, mistakes are inevitable and numerous (Roberts, 2014, pp. 15-16; Ohlheiser, 2017; Shahani, 2016).

Social media companies are increasingly turning to automated methods (software) to detect types of content they want to take down, such as terrorist recruiting and hate speech. This might seem like an efficient solution, but it doesn’t work well, and it threatens freedom of expression. Software makes many mistakes: people express hatred, denigrate others, and promote fear in a wide and creative variety of ways. Moreover, computers can’t make some distinctions that humans can, such as distinguishing hate speech from a post denouncing it (Saleem et al., 2016), and social media companies have repeatedly taken down content posted by human rights defenders to capture evidence of abuse and war crimes (Kayyali, 2020).
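
To see why such software struggles, consider a toy keyword filter: a deliberately naive sketch, not any platform's real system, with stand-in terms rather than actual slurs. It flags a post that quotes an inflammatory term in order to denounce it just as readily as the inflammatory post itself.

```python
# A deliberately naive keyword filter, sketched to show one way automated
# detection fails. The flagged terms are illustrative stand-ins.

FLAGGED_TERMS = {"cockroaches", "vermin"}

def is_flagged(post: str) -> bool:
    """Flag any post containing a listed term, regardless of intent."""
    words = {word.strip('.,!?"\'').lower() for word in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

hateful = "They are cockroaches and must be driven out."
counterspeech = 'Calling our neighbors "cockroaches" is vile. Stop it.'

print(is_flagged(hateful))        # True: correctly flagged
print(is_flagged(counterspeech))  # True: a false positive -- the post
                                  # denouncing the term is flagged too
```

Real systems use machine learning rather than simple keyword lists, but they fail in analogous ways, because intent and context are not fully visible in the words alone.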

Another reason not to rely entirely on deleting harmful content is that it can foreclose other kinds of constructive responses. The simplest response – to express disagreement – can usefully demonstrate that the majority disagrees with hateful views. In fact, the presumed power of “counterspeech,” which we define as “direct responses to hateful or harmful speech” (Wright et al., 2017, p. 57) is one of the main reasons why United States law protects freedom of speech so vigorously, refusing even to prohibit hate speech. If the “marketplace of ideas” is left as open as possible, the theory suggests, the best and safest ideas will eventually prevail (Brandenburg v. Ohio, 1969).

Evidence to prove or disprove this theory is scarce, but there are many intriguing uses of counterspeech, offline and online. For example, when a hate group sought to post anti-Muslim signs on public buses and trains in several U.S. cities in 2010, some cities tried to refuse. The group sued, and some courts allowed cities to reject the signs while others ruled that they must be displayed. In Detroit, where the ads were suppressed, public attention focused on the signs’ author, as a victim whose free speech rights were violated. In New York and other cities where the ads appeared, members of the public spoke against them and produced Muslim-defending ads in response to the inflammatory ones (Abdelkader, 2014, pp. 81-82).

A white woman wearing a t-shirt and cardigan stands in front of a brown-skinned woman wearing a hijab, whose hands are on the white woman's shoulders. Both women are smiling broadly. Above and to the side of the women, multicolored text reads "#MyJihad is to build bridges through friendship. What's yours?"

An ad produced by the Council for American-Islamic Relations during its #MyJihad campaign. (Kuruvilla, 2013)

A striking example of successful online counterspeech is the case of Megan Phelps-Roper. Although she grew up as a fervently loyal member of the extremist homophobic Westboro Baptist Church (founded by her grandfather), Phelps-Roper changed her beliefs, mainly thanks to a few long-running individual conversations with counterspeakers on Twitter (Chen, 2015; Phelps-Roper, 2019, ch. 9).

At this writing, some internet companies are also experimenting with other alternatives to deletion, intended to limit the circulation of dangerous speech and other forms of harmful content. For example, after inflammatory rumors spread in India as described above, WhatsApp took steps to slow the spread of dangerous messages. The company limited to five the number of groups or individual accounts to which a user can forward a particular message; previously there was no limit. To help users identify possible suspect content, WhatsApp also began labeling messages that had been forwarded (WhatsApp, 2019).
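
The mechanics of such measures are simple to sketch. The following is a hypothetical illustration of the two steps just described, a cap of five forwards per message and a visible "forwarded" label; it is our own example, not WhatsApp's actual implementation.

```python
# A hypothetical sketch of a forwarding cap and a "forwarded" label.
# The names and the dict-based delivery are illustrative assumptions.

FORWARD_LIMIT = 5

class Message:
    def __init__(self, text: str, forwarded: bool = False):
        self.text = text
        self.forwarded = forwarded  # rendered to recipients as a label

def forward(message: Message, chat_ids: list) -> dict:
    """Forward a message to at most FORWARD_LIMIT chats, labeling each copy."""
    if len(chat_ids) > FORWARD_LIMIT:
        raise ValueError(
            f"cannot forward to {len(chat_ids)} chats; the limit is {FORWARD_LIMIT}"
        )
    # Each copy carries the 'forwarded' flag, so recipients can see
    # that the sender did not write the message themselves.
    return {chat: Message(message.text, forwarded=True) for chat in chat_ids}

rumor = Message("Beware: kidnappers are in the area!")
forward(rumor, ["family", "neighbors", "school"])    # allowed: 3 <= 5
forward(rumor, [f"group{i}" for i in range(6)])      # raises ValueError
```

Neither measure judges the content of a message; the cap simply adds friction to mass distribution, and the label gives recipients one more cue that a message may not be firsthand.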

 

Responding to Hateful and Dangerous Speech Online

There are also many other ways to diminish harmful content or its damaging effects. One might try to persuade people to stop posting such content in the first place (a preventive approach, rather than a reactive one like deletion), or support those who are attacked by it.

Internet users themselves (not governments or companies) are conducting many ingenious experiments in responding to harmful content online (Benesch, 2017). Some organizations also offer ideas and resources for response. One is the nonprofit Over Zero, which teaches groups how to apply the dangerous speech framework and other tools for designing interventions to decrease polarization (Brown, 2016). Another example is “Seriously,” an online program created by the French organization Renaissance Numérique, which educates people about the tone and content that make the most effective counterspeech. In 2017, our Dangerous Speech Project, along with #ICANHELP, iCanHelpline.org, HeartMob, and Project HEAR, created a comic for youth, illustrating several “dos” and “don’ts” of effective counterspeech.

 

Conclusion

The dangerous speech ideas offered in this chapter have been used in countries as varied as Nigeria, Sri Lanka, Denmark, Hungary, Kenya, Pakistan, and the United States, in two basic ways that seem promising. First, it’s useful to collect and study dangerous speech systematically, looking for changes in its nature and volume over time, since this can serve as an early warning for violence. Second, it’s valuable to find the most effective ways to diminish dangerous speech or its harmful effects – without impinging on freedom of speech. We have made efforts of both kinds and look forward to continuing, with colleagues in many countries where, unfortunately, the topic is all too relevant.


This Guide was originally published December 31, 2018; minor revisions and additions were made on January 9, 2020, August 4, 2020, and April 19, 2021.

Contributors to this Guide

Susan Benesch, Founder and Executive Director
Cathy Buerger, Director of Research
Tonei Glavinic, Director of Operations
Sean Manion, Communications Fellow
Dan Bateyko, Research Associate

Acknowledgments

We are very grateful to many people who have made invaluable contributions to our thinking, and therefore to this Guide. They are too many to list and some must not be named for their security. We are especially grateful to those who are working in interesting and innovative ways to undermine dangerous speech around the world. Any errors are our own. Please send us critiques and feedback: dangerousspeech.org/contact

We also wish to thank the John D. and Catherine T. MacArthur Foundation, whose support made this Guide possible.

 

References

Abdelkader, E. (2014). Savagery in the Subways: Anti-Muslim Ads, the First Amendment, and the Efficacy of Counterspeech. Asian American Law Journal, 21, pp. 43-87. Available at: http://dx.doi.org/10.15779/Z38MG46

Amnesty International (2010). “Safer to Stay Silent: The Chilling Effect of Rwanda’s Laws on ‘Genocide Ideology’ and ‘Sectarianism’.” Amnesty International. pp. 5-38. Available at: www.amnestyusa.org/sites/default/files/afr470052010en.pdf Archived at: https://perma.cc/XU6V-R8AP

Austin, J. L. (1962). How to do Things with Words. Cambridge: Harvard University Press.

Bachman, J. and Holland, J. (2019). “Lethal Sterility: Innovative Dehumanisation in Legal Justifications of Obama’s Drone Policy.” The International Journal of Human Rights, 23(6), pp. 1028-47. Available at: https://doi.org/10.1080/13642987.2019.1592159.

Beech, H. (2017). Across Myanmar, Denial of Ethnic Cleansing and Loathing of Rohingya. The New York Times. Available at: https://www.nytimes.com/2017/10/24/world/asia/myanmar-rohingya-ethnic-cleansing.html. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/8752-CFGU.

Benesch, S. (2003). Vile Crime or Inalienable Right: Defining Incitement to Genocide. Virginia Journal of International Law, 48(3), pp. 485-528. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1121926. Archived at: https://perma.cc/KS59-G2B5

Benesch, S. (2013). Dangerous Speech: A Proposal to Prevent Group Violence. Dangerous Speech Project. Available at: https://dangerousspeech.org/wp-content/uploads/2018/01/Dangerous-Speech-Guidelines-2013.pdf. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/X8DP-M7Q2.

Benesch, S. (2014). Countering dangerous speech to prevent mass violence during Kenya’s 2013 election. Available at: https://dangerousspeech.org/countering-dangerous-speech-kenya-2013/. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/9USF-5EPE.

Benesch, S. (2015). Charlie the Freethinker: Religion, Blasphemy, and Decent Controversy. Religion & Human Rights, 10, pp. 244-254. Available at: https://dangerousspeech.org/charlie-the-freethinker/

Benesch, S. (2015). Civil Society Puts a Hand on the Wheel: Diverse Responses to Harmful Speech. Available at: https://dangerousspeech.org/civil-society-puts-a-hand-on-the-wheel-diverse-responses-to-harmful-speech/. Archived at: https://perma.cc/H9Q5-TD9G

Berry, J.A. and Berry, C.P. eds. (1999). Genocide in Rwanda: A collective memory. Howard University Press.

Berwick, A. (2011). 2083: A European Declaration of Independence. Available at: https://publicintelligence.net/anders-behring-breiviks-complete-manifesto-2083-a-european-declaration-of-independence/. [Accessed 10 Oct. 2018].  Archived at: https://perma.cc/AL3P-QZ95.

Bloom, P. (2017). The Root of All Cruelty. The New Yorker. Available at: https://www.newyorker.com/magazine/2017/11/27/the-root-of-all-cruelty [Accessed 14 Dec. 2018]. Archived at: https://perma.cc/2Q8G-Z9RW.

Brandenburg v. Ohio (1969), 395 U.S. 444. Available at: https://cdn.loc.gov/service/ll/usrep/usrep395/usrep395444/usrep395444.pdf [Accessed 9 Oct. 2018].  Archived at: https://perma.cc/XP8Q-W8KC.

Brcak, N. and Pavia, J.R. (1994). Racism in Japanese and US Wartime Propaganda. Historian, 56(4), pp. 671-684. Available at: https://doi.org/10.1111/j.1540-6563.1994.tb00926.x

Brown, R. (2016). Defusing Hate: A Strategic Communication Guide to Counteract Dangerous Speech. Available at: https://www.ushmm.org/m/pdfs/20160229-Defusing-Hate-Guide.pdf [Accessed 25 Sept. 2018]. Archived at: https://perma.cc/S3BU-MJU8.

Brunsden, J. (2017). Europe refugee policy is ‘Trojan horse of terrorism’, says Orban. Financial Times. Available at: https://www.ft.com/content/538b2a0a-154e-11e7-80f4-13e067d5072c. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/Z5WD-L6SP.

Burleigh, M. (2015). Migrants could be biggest threat to Europe since the war. Daily Mail Online. Available at: https://www.dailymail.co.uk/news/article-3141005/Tidal-wave-migrants-biggest-threat-Europe-war.html. Archived at: https://perma.cc/M3QN-BVXL.

Byrne, A. (2016). Macedonia’s fake news industry sets sights on Europe. Financial Times. Available at: https://www.ft.com/content/333fe6bc-c1ea-11e6-81c2-f57d90f6741a. [Accessed 27 Jan. 2020]. Archived at: https://perma.cc/4F9P-H398.

Bytwerk, R. (n.d.). Caricatures from Der Stürmer: 1927-1932. German Propaganda Archive. Available at: http://research.calvin.edu/german-propaganda-archive/sturm28.htm. [Accessed 9 Oct. 2018]. Archived at: https://perma.cc/Y7PC-W5HU.

Chen, A. (2015). Unfollow: How a prized daughter of the Westboro Baptist Church came to question its beliefs. New Yorker. Available at: http://www.newyorker.com/magazine/2015/11/23/conversion-via-twitter-westboro-baptist-church-megan-phelps-roper [Accessed 9 Oct. 2018]. Archived at: https://perma.cc/3EJ2-AVZE.

Chirot, D. and McCauley, C. (2010). Why not kill them all?: The logic and prevention of mass political murder. Princeton, NJ: Princeton University Press.

Data Team, The (2017). “Donald Trump is crushing it on Twitter.” The Economist. Available at: https://www.economist.com/graphic-detail/2017/11/10/donald-trump-is-crushing-it-on-twitter [Accessed 9 Oct. 2018]. Archived at: https://perma.cc/3HYA-VTBS.

“Declaration on the Elimination of Violence against Women.” 1993. United Nations General Assembly. Available at: http://www.un.org/documents/ga/res/48/a48r104.htm [Accessed 13 Dec. 2018]. Archived at: https://perma.cc/YXS8-XRWD.

Des Forges, A. (1999). “Leave none to tell the story:” Genocide in Rwanda, New York, New York: Human Rights Watch. Available at: https://www.hrw.org/reports/pdfs/r/rwanda/rwanda993.pdf or https://www.hrw.org/reports/1999/rwanda/ [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/ZNB8-WRND.

DNA India (2015). “Akbaruddin Owaisi denied permission to hold rally in Pune ahead of civic polls.” DNA India, October 24, 2015. Available at: http://www.dnaindia.com/india/report-akbaruddin-owaisi-denied-permission-to-hold-rally-in-pune-ahead-of-civic-polls-2138244 Archived at: https://perma.cc/9RTX-XU4W

Elliott, J. (2018). “India WhatsApp killings: Why mobs are lynching outsiders over fake videos.” Global News. Available at: https://globalnews.ca/news/4333499/india-whatsapp-lynchings-child-kidnappers-fake-news/ [Accessed 25 Sept. 2018]. Archived at: https://perma.cc/N3LV-4G89.

Facebook, Inc. (2020). Community Standards. Available at: https://www.facebook.com/communitystandards/ [Accessed 27 Feb. 2020]. Archived at: https://perma.cc/MBH6-FQGY.

Figes, O. (2008). The Whisperers: Private life in Stalin’s Russia. 2nd ed., New York: Metropolitan Books.

Galtung, J. (1969). Violence, peace, and peace research. Journal of Peace Research, 6(3), pp. 167-191. Available at: https://doi.org/10.1177/002234336900600301.

Gowen, A. (2017). “We are going to kill you”: Villagers in Burma recount violence by Rohingya Muslim militants. Washington Post. November 15, 2017. Available at: https://www.washingtonpost.com/world/asia_pacific/we-are-going-to-kill-you-villagers-in-burma-recount-violence-by-rohingya-muslim-militants/2017/11/14/409ff59b-849d-4459-bdc7-d1ea2b5ff9a6_story.html [Accessed 20 Sept. 2018]. Archived at: https://perma.cc/U5MN-8NEJ.

Hashim, A. (2017). Three Ahmadis sentenced to death for blasphemy. Al Jazeera. Available at: https://www.aljazeera.com/news/2017/10/ahmadis-sentenced-death-blasphemy-171012081709423.html [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/VSZ4-4MBX

Haslam, N. (2006). Dehumanization: An integrative review. Personality and social psychology review, 10(3), 252-264. Available at: https://doi.org/10.1207/s15327957pspr1003_4

Hinton, A.L. (2005). Why did they kill? Cambodia in the shadow of genocide, Berkeley, California: University of California Press.

Houreld, K. (2014). Teenager kills man accused of blasphemy in Pakistan police station. Reuters. Available at: https://www.reuters.com/article/us-pakistan-blasphemy-killing/teenager-kills-man-accused-of-blasphemy-in-pakistan-police-station-idUSBREA4F0HI20140516 [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/X6QE-86FG.

Human Rights Watch (2018). “Russia: ‘Gay Propaganda’ Law Endangers Children.” HRW. Available at: https://www.hrw.org/news/2018/12/12/russia-gay-propaganda-law-endangers-children Archived at: https://perma.cc/DW3V-LQ7Z.

Hungarian Civil Liberties Union (2013). Those Racist Roma Again. Available at: https://web.archive.org/web/20150514122019/http://tasz.hu/en/romaprogram/those-racist-roma-again.

ICTR (2007). Nahimana Et Al. v. The Prosecutor.  Available at: https://www.refworld.org/cgi-bin/texis/vtx/rwmain/opendocpdf.pdf?reldoc=y&docid=48b527412 [Accessed 24 Feb. 2020]. Archived at: https://perma.cc/T32X-4PKK.

Jenkins, S. (2012). Ethnicity, violence, and the immigrant-guest metaphor in Kenya. African Affairs, 111(445), pp. 576-596. Available at: https://doi.org/10.1093/afraf/ads051

Johnson, M.S. (1998). Gender, Race, and Rumours: Re‐examining the 1943 Race Riots. Gender & History, 10(2), pp. 252-277. Available at: https://doi.org/10.1111/1468-0424.00099

Kayyali, D. (2020). Human rights defenders are not terrorists, and their content is not propaganda. WITNESS. Available at https://blog.witness.org/2020/01/human-rights-defenders-not-terrorists-content-not-propaganda/ [Accessed 27 Jan. 2020]. Archived at: https://perma.cc/WT4C-JU4Y.

Khan, A.M. (2003). Persecution of the Ahmadiyya community in Pakistan: An analysis under international law and international relations. Harvard Human Rights Journal 16, pp. 217-244. Available at: https://muslimwriters.org/wp-content/uploads/2012/06/ahmadiyya_persecution.pdf. Archived at: https://perma.cc/T7FN-Y2VP.

Kiernan, B. (2009). Blood and Soil: A World History of Genocide and Extermination from Sparta to Darfur. 2nd ed., New Haven, Connecticut: Yale University Press.

Kimani, M. (2007). RTLM: the Medium that Became a Tool for Mass Murder. In L. Waldorf and A. Thompson, eds. The Media and the Rwandan Genocide. 1st ed. London: Pluto Press.

Kopan, T. (2015). Donald Trump: Syrian refugees a ‘Trojan horse’. CNN. Available at: https://www.cnn.com/2015/11/16/politics/donald-trump-syrian-refugees/index.html [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/E88C-SFLZ.

Kugelman, M. (2017) Why Pakistan hates Malala. Foreign Policy. Available at: https://foreignpolicy.com/2017/08/15/why-pakistan-hates-malala/ [Accessed 17 Dec. 2018]. Archived at: https://perma.cc/L9ZC-DKE5.

Kuruvilla, C. (2013). War of words: Muslim activists trying to wrestle ‘jihad’ away from extremists and critics. New York Daily News. Available at: https://web.archive.org/web/20151018092631/https://www.nydailynews.com/news/national/muslim-activists-wrestle-jihad-extremists-critics-article-1.1256121 [Accessed 3 Aug. 2020]

Kuper, L. (1981). Genocide: Its political use in the twentieth century. 1st ed., New Haven, Connecticut: Yale University Press.

Kühl, S. (2016). Ordinary Organisations: Why Normal Men Carried Out the Holocaust. Cambridge, UK; Malden, MA: Polity.

Lane, J. (2014). Be afraid. Be very afraid: Exploring the rhetoric of the monster in political and horror posters of the 20th century, Edith Cowan University, Perth, Australia. Available at: https://ro.ecu.edu.au/theses_hons/198/. Archived at: https://perma.cc/29ZC-3H9C.

Lehman, J. (2010). A brief explanation of the Overton window. Mackinac Center for Public Policy. Available at: https://www.mackinac.org/overtonwindow#top. Archived at: https://perma.cc/KD2X-KVSG.

Leader Maynard, J. (2014). Rethinking the role of ideology in mass atrocities. Terrorism and Political Violence, 26(5), pp. 821-841. Available at: https://doi.org/10.1080/09546553.2013.796934

Leader Maynard, J. (2015). “Combating Atrocity-Justifying Ideologies,” in Sharma, S.K. and Welsh, J.M. eds., The Responsibility to Prevent: Overcoming the Challenges of Atrocity Prevention. Oxford: Oxford University Press.

Leader Maynard, J. and Benesch, S. (2016). Dangerous speech and dangerous ideology: An integrated model for monitoring and prevention. Genocide Studies and Prevention: An International Journal, 9(3). Available at: http://dx.doi.org/10.5038/1911-9933.9.3.1317

Leader Maynard, J. (2022). Ideology and Mass Killing: How Groups Justify Genocides and Other Atrocities Against Civilians. Oxford University Press.

Livingstone Smith, D. (2020). On Inhumanity: Dehumanization and How to Resist It. Oxford University Press.

Mackie, J. (2014). A century ago, the Komagata Maru arrived in Vancouver and challenged racist policies. Vancouver Sun. Available at: http://www.vancouversun.com/life/century+Komagata+Maru+arrived+Vancouver+challenged+racist+policies/9868708/story.html. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/LMG5-HBAN.

McCrummen, S. (2008). No Quick Fix for What Still Ails Kenya. The Washington Post Foreign Service. Available at: http://www.washingtonpost.com/wp-dyn/content/article/2008/03/06/AR2008030603766.html?sid=ST2008030603799. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/59ED-G82H.

Mchangama, J. (2019). “The U.N. Hates Hate Speech More than it Loves Free Speech.” Foreign Policy, February 28, 2019. Archived at: https://perma.cc/7HMS-YQRL.

Monbiot, G. (2014). “‘Cleansing the Stock’ and Other Ways Governments Talk about Human Beings.” The Guardian, October 21, 2014, sec. Opinion. Available at: https://www.theguardian.com/commentisfree/2014/oct/21/cleansing-stock-doublespeak-people-killing. Archived at: https://perma.cc/GG8R-ZYEY

Mugesera v. Canada (Minister of Citizenship and Immigration), [2005] 2 S.C.R. 100, 2005 SCC 40. Available at: https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/2273/index.do. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/GTU8-Z8PX.

Netherlands Embassy. (2017) 29 November. Available at: https://twitter.com/NLintheUSA/status/935953115249086464?ref_src=twsrc%5Etfw. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/CME6-3GYX.

The General Civil Penal Code (Act No. 10 of May 22, 1902, as last amended by Act No. 131, Dec. 21, 2005), University of Oslo Law Library Translated Norwegian Legislation online database, https://app.uio.no/ub/ujur/oversatte-lover/data/lov-19020522-010-eng.pdf. Archived at: https://perma.cc/5E2Y-EAVF

OHCHR (2018). “Report of the detailed findings of the Independent International Fact-Finding Mission on Myanmar.” United Nations Human Rights Council. Available at: https://www.ohchr.org/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_CRP.2.pdf [Accessed 9 Oct. 2018]. Archived at: https://perma.cc/4XFG-BJAD

Ohlheiser, A. (2017). The work of monitoring violence online can cause real trauma. And Facebook is hiring., The Washington Post, May 4, 2017. Available at: https://www.washingtonpost.com/news/the-intersect/wp/2017/05/04/the-work-of-monitoring-violence-online-can-cause-real-trauma-and-facebook-is-hiring/?utm_term=.de4b78129afd. [Accessed 10 Oct. 2018]. Archived at: https://perma.cc/Q7GY-5LJV.

Osborne, S. (2017). Ratko Mladic guilty: ‘Butcher of Bosnia’ convicted of genocide, crimes against humanity and war crimes. Independent, November 22. Available at: https://www.independent.co.uk/news/world/europe/ratko-mladic-guilty-genocide-verdict-latest-bosinian-war-crimes-humanity-serbian-general-a8068986.html Archived at: https://perma.cc/T9DJ-K37W [Accessed 9 Oct. 2018]

Phelps-Roper, M (2019). Unfollow: A Memoir of Loving and Leaving the Westboro Baptist Church. Farrar, Straus and Giroux.

Promotion of Equality and Prevention of Unfair Discrimination Act 4 of 2000, c. 1. South Africa. Available at: http://www.justice.gov.za/legislation/acts/2000-004.pdf Archived at: https://perma.cc/2XV5-KVR5 [Accessed 10 Oct. 2018]

The Prosecutor v. Ferdinand Nahimana, Jean-Bosco Barayagwiza, Hassan Ngeze (Trial Judgment). (2003) ICTR-99-52-T, International Criminal Tribunal for Rwanda (ICTR) available at: http://unictr.irmct.org/sites/unictr.org/files/case-documents/ictr-99-52/trial-judgements/en/031203.pdf. Archived at: https://perma.cc/7HCW-ERUH [Accessed 10 Oct. 2018].

Rebelo, K., (2017). Child Kidnapping Rumours In India Being Spread With Syria Image, Pak Video. BOOM Live. Available at: https://www.boomlive.in/child-kidnapping-rumours-in-india-being-spread-with-syria-image-pak-video/. Archived at: https://perma.cc/ZKD6-ZU68 [Accessed 10 Oct. 2018].

Roberts, S. T. (2014). Behind the screen: the hidden labor of commercial content moderators, PhD dissertation, University of Illinois at Urbana-Champaign, Available at: http://hdl.handle.net/2142/50401. Archived at: https://perma.cc/3MS4-SY75 [Accessed 10 Oct. 2018].

Russian Public Opinion Research Center (2018). Conspiracy Theory Against Russia. Available at: https://wciom.ru/index.php?id=236&uid=9259 Archived at: https://perma.cc/H4UB-URR7 [Accessed 14 Dec. 2018].

Saleem, H.M., Dillon, K.P., Benesch, S., and Ruths, D. (2016). A Web of Hate: Tackling Hateful Speech in Online Social Spaces. Proceedings of the First Workshop on Text Analytics for Cybersecurity and Online Safety. Available at: http://www.ta-cos.org/sites/ta-cos.org/files/tacos2016_SaleemDillionBeneschRuths.pdf. Archived at: https://perma.cc/N3BK-RWM2 [Accessed 10 Oct. 2018].

Sanghvi, V. (2018). India’s Lynching App: Who is Using WhatsApp as a Murder Weapon? SCMP: This Week in Asia. July 9. Available at: https://www.scmp.com/week-asia/society/article/2154436/indias-lynching-app-who-using-whatsapp-murder-weapon Archived at: https://perma.cc/22ZX-6VQL [Accessed 28 Sept. 2018]

Shahani, A. (2016). With ‘Napalm Girl,’ Facebook Humans (Not Algorithms) Struggle To Be Editor. National Public Radio. September 10. Available at: https://www.npr.org/sections/alltechconsidered/2016/09/10/493454256/with-napalm-girl-facebook-humans-not-algorithms-struggle-to-be-editor Archived at: https://perma.cc/Q2T5-EZWQ [Accessed 9 Oct. 2018]

Shen, C., Kasra, M., Pan, W., Bassett, G.A., Malloch, Y., and O’Brien, J.F. (2018). Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media and Society. Available at: https://doi.org/10.1177/1461444818799526.

Sherwood, H. (2016). “Polish magazine’s ‘Islamic Rape of Europe’ Cover Sparks Outrage.” The Guardian. Available at: www.theguardian.com/world/2016/feb/18/polish-magazines-islamic-of-europe-cover-sparks-outrage. Archived at: https://perma.cc/FJ3W-8CT3

Simões, M. (2018). Brazil’s polarizing new president, Jair Bolsonaro, in his own words. The New York Times. Available at: https://www.nytimes.com/2018/10/28/world/americas/brazil-president-jair-bolsonaro-quotes.html Archived at: https://perma.cc/P4BH-TGGA

Smith, D.L. (2011). Less than human: Why we demean, enslave, and exterminate others. New York City: St. Martin’s Press.

Smith, D.L. (2018). Donald Trump, Dangerous Speech, and the Legacy of White Supremacist Terrorism. Dangerous Speech Project. Available at: https://dangerousspeech.org/donald-trump-dangerous-speech-and-the-legacy-of-white-supremacist-terrorism/. Archived at: https://perma.cc/7LQ5-XB43 [Accessed 10 Oct. 2018].

Specia, M. and Mozur, P. (2017). A War of Words Puts Facebook at the Center of Myanmar’s Rohingya Crisis. The New York Times, October, 27. Available at: https://www.nytimes.com/2017/10/27/world/asia/myanmar-government-facebook-rohingya.html. Archived at: https://perma.cc/4JXP-4JQ4 [Accessed 10 Oct. 2018].

Stecklow, S. (2018). Why Facebook is losing the war on hate speech in Myanmar. Reuters. August 15. Available at: https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/. Archived at: https://perma.cc/U9VF-GKRW [Accessed 10 Oct. 2018].

Straus, S. (2004). How Many Perpetrators Were There in the Rwandan Genocide? An Estimate. Journal of Genocide Research, 6(1), pp. 85-98. Available at: https://doi.org/10.1080/1462352042000194728.

Straus, S. (2007). What Is the Relationship between Hate Radio and Violence? Rethinking Rwanda’s “Radio Machete”. Politics & Society, 35(4), pp. 609-637. Available at: https://doi.org/10.1177/0032329207308181.

Straus, S. (n.d.) African Presidential Speeches Database. University of Wisconsin. Available at: https://faculty.polisci.wisc.edu/sstraus/african-presidential-speeches-database/ Archived at: https://perma.cc/QC4M-2ZWS [Accessed 10 Oct. 2018]

Streicher, J. (1934). Jüdischer Mordplan [Jewish Murder Plan]. Der Stürmer.

Thomas, J. (2001). Behind a Book That Inspired McVeigh. The New York Times. Available at: https://www.nytimes.com/2001/06/09/us/behind-a-book-that-inspired-mcveigh.html. Archived at: https://perma.cc/4M3Y-YRAL [Accessed 25 Sept. 2018]

Truth, Justice, and Reconciliation Commission. (2008). Commissions of Inquiry – CIPEV Report (Waki Report) Available at: https://digitalcommons.law.seattleu.edu/cgi/viewcontent.cgi?article=1004&context=tjrc-gov. Archived at: https://perma.cc/7SUE-H4H8.

Twitter, Inc. (2020). The Twitter Rules. Available at: https://help.twitter.com/en/rules-and-policies/twitter-rules. Archived at: https://perma.cc/8ZKH-4SBU [Accessed 27 Feb. 2020]

Walker, S. (2017). “‘A Useful Punching Bag’: Why Hungary’s Viktor Orbán Has Turned on George Soros.” The Guardian. Available at: https://www.theguardian.com/world/2017/jun/22/hungary-viktor-orban-george-soros. Archived at: https://perma.cc/T9VC-ANK6.

Weitz, E. D. (2015) A Century of Genocide: Utopias of Race and Nation – Updated Edition. 2nd edn. Princeton: Princeton University Press.

WhatsApp. (2019). More Changes to Forwarding. WhatsApp Blog. Available at: https://blog.whatsapp.com/10000647/More-changes-to-forwarding. Archived at: https://perma.cc/8FA2-XZ2Y [Accessed 27 Jan. 2020].

Wright, L., Ruths, D., Dillon, K.P., Saleem, H.M., and Benesch, S. (2017). Vectors for Counterspeech on Twitter. In Proceedings of the First Workshop on Abusive Language Online, pp. 57-62. Available at: https://www.aclweb.org/anthology/W17-3009.pdf. Archived at: https://perma.cc/5XK8-35J9.

Yanagizawa-Drott, D. (2014). Propaganda and conflict: Evidence from the Rwandan genocide. The Quarterly Journal of Economics, 129(4), pp.1947-1994.

Zephoria Digital Marketing. (2018) The Top 20 Valuable Facebook Statistics – Updated September 2018. Available at: https://zephoria.com/top-15-valuable-facebook-statistics/ Archived at: https://perma.cc/54PA-FYVV [Accessed 9 Oct. 2018]

Zikri, A.B. (2018). Netanyahu defends Egypt border fence: Influx of African migrants more dangerous than terrorism. Haaretz. Available at: https://www.haaretz.com/israel-news/.premium-netanyahu-danger-posed-by-african-migrants-is-greater-than-terrorism-1.5930984. Archived at: https://perma.cc/33K3-X6SV [Accessed 10 Oct. 2018].

Footnotes

  1. We use the term ‘speech’ to refer to any form of human communication – in keeping with the definition of Dangerous Speech.
  2. To dox is to harass or endanger someone by searching for, and then posting online, private or identifying information about that person.
  3. Other definitions of violence do include non-physical harm. Peace and conflict studies scholar Johan Galtung, for example, includes discrimination, exclusion, and exploitation in what he calls “structural violence” (1969, p.171). The United Nations Declaration on the Elimination of Violence against Women (1993) defines violence against women as “gender-based violence that results in, or is likely to result in, physical, sexual or psychological harm or suffering to women.”
  4. In linguistics a “speech act” is communication that brings about some sort of response or change in the world. The 20th-century British philosopher of language J.L. Austin (1962) pioneered speech act theory, in which he tried to capture and distinguish all the types of effects that language can have. “Perlocutionary force,” Austin proposed, is the capacity of a speech act to bring about a response in its audience. We draw on this body of thought since dangerous speech is defined by its perlocutionary force.
  5. This process can also be described with reference to the Overton Window, a theory of how the acceptable range of political discourse, or policies, changes over time. The theory’s originator, Joseph Overton, imagined a window containing the views or policies that are acceptable to opinion leaders or the majority of a group of people at a particular time. As once-radical positions or ideas become more acceptable, the imaginary window gradually moves, so that even ideas that were once unthinkable can eventually be found inside it (Lehman, 2010).
  6. The three defendants were convicted of incitement to genocide, among other grave crimes. Dangerous speech is not a crime in any country’s penal code, nor do we suggest that it should be criminalized. There are already related speech crimes in most bodies of law, and we believe that criminal law is generally not a very effective way of limiting speech or its harmful effects.
  7. Prosecutor v. Nahimana et al. (Trial Judgment), para. 436. It’s important to recognize that the witness was a firsthand observer who made a subjective attempt to gauge the effect of RTLM’s broadcasts on a large number of people. Scholars who have since studied the impact of RTLM include David Yanagizawa-Drott (2014) and Scott Straus (2007).
  8. Scholars have described dehumanization in some detail, observing distinct forms of it and seeking to explain it. For example, psychologist Nick Haslam proposed two categories: animalistic dehumanization (viewing other people as animals) and mechanistic dehumanization (asserting that other people lack typical human qualities) (2006, p. 258). In a monograph on dehumanization, philosopher David Livingstone Smith suggests that humans are prone to dehumanizing others because of what he describes as our “cognitive architecture” (2011).
  9. Myanmar and Burma are the same country. The British who colonized the country called it “Burma,” and the ruling military junta changed that name to “Myanmar” in 1989. Both names are still used.
  10. The Hutu 10 Commandments (or “Ten Commandments of the Bahutu”) were originally published in Kinyarwanda. This translation was taken from Berry, J.A. and Berry, C.P. eds. (1999, pp. 113-115).
  11. Those who carry information across social or cultural boundaries between groups are sometimes called “bridge figures,” and they can be helpful or malevolent. For further description of this, see Benesch, 2015.
  12. The term “kalar” is a slur commonly used in Myanmar to denigrate Rohingya. It implies dark skin, and foreignness (OHCHR, 2018, p. 168). Rohingya are also often called “Bengalis” to refer to their Bangladeshi ancestry and imply that they do not belong – and have no right to stay – in Myanmar.