IntLawGrrls Interviews Susan Benesch

Originally published in IntLawGrrls.

Susan Benesch is an American journalist and scholar of speech who is known for founding the Dangerous Speech Project. She is a free speech advocate, recommending the use of counterspeech rather than censorship to delegitimize harmful speech. She earned a JD from Yale in 2001 and an LLM from Georgetown University Law Center in 2008. She has also worked for Amnesty International and Human Rights First, and is currently a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University. I thank her for providing her valuable insight into my questions.

Question: What drew you to study speech that inspires violence among people? Is there any personal experience that you would like to share?

Answer: I started out as a journalist, and worked in Latin America and Haiti when many thousands of people were murdered there for their beliefs or identities. Hoping to prevent other killings, I wrote about them, but came to believe that as a journalist I was mainly a spectator. So, I became a human rights lawyer. Working at the International Criminal Tribunal for the Former Yugoslavia (ICTY) while still a law student, I read witness statements describing brutality so awful that it stunned me in spite of what I had seen as a journalist, and although many of my relatives, including my father’s parents and his brother and nearly all of my mother’s family, were murdered in an attempt at genocide. I have never stopped wondering how and why humans can do such things to each other, nor wanting to prevent it.

It isn’t normal; something must change people’s minds and behavior so that they’re able or even eager to embrace terrible violence. I became interested in speech as a catalyst for intergroup violence while in law school, and wrote a paper on incitement to genocide. Later during a fellowship at Georgetown University’s law school, I did more research and hatched the idea that certain kinds of rhetoric have special power to convince people to condone or commit violence against members of another group. If that kind of speech is a precursor to intergroup violence, or even a prerequisite, this offers new possibilities for preventing violence. I have been working on that idea for nearly a decade.

Question: What are the key parameters for judging that a certain speech act is hate speech or dangerous speech?

Answer: Those are overlapping categories.

Judging that a particular speech act is hate speech is the more difficult task, since there’s no consensus definition of hate speech, either in law or in colloquial use. (Most bodies of law, including international law, do not codify hate speech at all.)

There is one common thread among definitions of hate speech: that it demeans or attacks people based on a shared identity of some kind, so “I hate you” isn’t hate speech. Definitions vary in which types of groups they include. Most list familiar protected categories from international human rights law like religion, ethnicity, and nationality, while categories such as age, caste, immigration status, sexual orientation, disability, and gender appear in some definitions but not others. The United Nations has recently produced a new definition of hate speech that wisely uses a non-exhaustive list of group identities, but it sets a very low and perhaps subjective bar for what counts as hate speech by using the word “pejorative.” Here’s the definition: “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”

In sum, in all too many cases hate speech is explicit, vicious, and obvious, but the category’s boundaries are hard to define and are often subjective.

Dangerous speech (as I have defined it) is a smaller, more clearly bounded category. It is any kind of human expression that can make intergroup violence more likely by lowering normal social barriers against it. The most useful emotion for making people condone or commit violence against other people isn’t hate, it’s fear. (This has been amply demonstrated by neurobiology. For more on this, see Robert Sapolsky’s invaluable and very readable book Behave: The Biology of Humans at Our Best and Worst.) Dangerous speech convinces people to fear another group of people intensely.

There are a few specific ways to do this, as it turns out, so the same rhetorical moves recur in the speech of malevolent leaders in the months and years before genocide or other mass violence. Because of these “hallmarks” of dangerous speech, it’s uncanny how similar it is from case to case, across countries, cultures, and historical periods.

The most familiar hallmark is dehumanization. The most destructive one may well be “accusation in a mirror” – telling your own group that another group is planning to attack or even annihilate them, when in fact you as a leader want your group to attack the others. Dehumanization can make killing seem acceptable; accusation in a mirror makes it seem necessary – and even virtuous, as the scholar Jonathan Leader Maynard has described.

A hallmark isn’t enough to identify dangerous speech, though, since I could say something dehumanizing about a group of people to you and it wouldn’t raise the likelihood that you will condone or commit violence against that group – because you’re not at all receptive to the message. Whether speech is dangerous depends on the context in which it is spread: whether the audience is receptive, whether the speaker is influential with that audience, and so on. I’ve identified five elements of context: the speaker, the audience, the message, the means of dissemination, and the social and historical context. To judge whether a particular act of speech is dangerous, you ask questions regarding those elements. Then, based on your answers, you predict whether the speech will make other people more likely to condone or commit violence. It’s a guess, but a systematic, educated one.

Question: Are there any legal statutes or guidelines regarding hate speech in international law? If not, would you suggest that such provisions should be introduced?

Answer: No, international law doesn’t codify hate speech. Two human rights treaties – the ICCPR and the ICERD – have provisions related to hate speech. The ICCPR requires that states prohibit only two kinds of speech. One is propaganda for war and the other is “Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence” (ICCPR, Art. 20(2)). The latter seems to be both broader and narrower than the category “hate speech,” in different respects. In any case it is confusing and has not been adequately interpreted. Not surprisingly, it hasn’t been incorporated into municipal laws.

I would strongly oppose introducing a provision against hate speech in international law, at least until Art 20(2) is clarified, and until the treaty bodies explain how to reconcile the speech provisions of the ICCPR and ICERD in some detail. Until now they have simply declared that the two must not conflict; that’s insufficient for policymaking.

Question: Do you think a permanent suspension of any influential person’s social media account is the ultimate solution for tackling online hate speech?

Answer: I’ll reply as if you’d asked about dangerous speech, since hate speech is a vague and contested term, and some hate speech isn’t dangerous. The only ultimate solution for tackling online dangerous speech, or offline dangerous speech for that matter, is to convince people not to be interested in it. If people lose interest, the speech loses its power.

Here it’s important to realize that when we refer to “freedom of speech” or expression, what we really mean isn’t the freedom merely to speak, say, in the shower or in the woods. It’s the opportunity to get someone else to hear or read you. I use the term “freedom of reach.”

It’s hard to persuade people to lose interest in what an influential person has to say, even when that person is spreading harmful lies, but it’s not impossible, especially if other influential people work at it. We need much more of that, as I argued in a recent op-ed. Meanwhile, an interim solution is to limit the freedom of reach of influential people by suspending their accounts. There are many other possibilities, such as downranking their content, which would limit their reach without taking down their accounts.

Question: What kind of measures can social media platforms take in order to tackle the menace of hate speech in today’s volatile world?

Answer: The most obvious and most-discussed response is to attempt to detect hate speech and take it down. To do this at scale, you’ve got to detect hate speech automatically, with software, and that’s very difficult since hate speech is hard to define consistently, as I mentioned above. Even content that is clearly hateful is often expressed in idiosyncratic, subtle ways (like mocking the way another group of people talk), and it’s highly context-dependent. For example, it can be difficult to distinguish someone expressing racism from someone calling out someone else’s racism. Also, platforms operate in dozens of languages. All this makes me worry that detecting and taking down hate speech automatically would lead to overbroad censorship, so takedown decisions should be made, or at least reviewed, by people, and there should be some form of oversight of the platforms’ enforcement of their own rules, at scale. (This is not at all what Facebook’s new Oversight Board is doing. It is reviewing only a few dozen cases a year; Facebook implements millions of decisions every week.)

Platforms can take many other measures: detecting and removing bots that produce hate speech; banning accounts that persistently spread it; requiring users to verify their identities; attempting to reform users who post hate speech (with a variety of behavioral interventions); providing users with blocking and filtering tools so they don’t see hate speech or other objectionable content; limiting the reach of hate speech that the platform chooses not to take down entirely; prioritizing hate speech that seems to bring about specific kinds of harm that the platforms (and especially relevant groups of their users!) decide to prevent; making it easier for users to understand platform rules and to report hate speech; and many more.

Question: The Internet has also turned into an unsafe place for women. What kind of steps should social media companies take to address the issue?

Answer: Yes, the Internet is unsafe for many women in different ways, and responses must be tailored for each of those. For this large topic I’ll first point you to a brilliant book filled with important ideas: Danielle Citron’s Hate Crimes in Cyberspace. Danielle describes a variety of attacks on women and argues for better laws to protect women, including civil rights laws, since as she argues persuasively, online attacks often violate civil rights.

It’s useful to distinguish between attacks on women as individuals (like nonconsensual publication of intimate images by their former partners) and attacks on women as members of groups (such as women journalists, who often face relentless harassment and threats because they dare, as women, to do certain kinds of work).

Regarding attacks on women as individuals, almost every U.S. state now has laws against cyberbullying and cyberstalking. Platforms should work with government to enforce such laws, and to crack down on perpetrators where laws or law enforcement are absent. This is bound to be inadequate, but it’s better than nothing. Regarding attacks on women as a group, platforms should count gender as a protected category when it is the basis for attacks, and pay special attention to groups of women who are frequently attacked, like political candidates, journalists, women of color, and women in overlapping categories of those.

There are also some civil society efforts that can’t take the place of law enforcement, but that can help to make some progress toward preventing harassment by raising awareness. An interesting one is More Than Mean, a video about the harassment of women journalists who write about sports. In the video, women sportswriters sit and listen while men who volunteered for the project read aloud harassing messages that the women have already received online. The male volunteers have not seen the messages before, and they become increasingly uncomfortable at their profanity and viciousness, while the women nod knowingly.