What we’ve been doing at the Dangerous Speech Project

We’ve just launched the website you’re reading, and we’ve expanded the Dangerous Speech ideas for use in countries where there is no serious risk of mass atrocity – but where violent hatred abounds. In another effort to make the DS ideas more useful, we are analyzing a variety of Dangerous Speech examples to illustrate, clearly and in detail, how to determine when and why speech is dangerous. We’ve also been studying online Dangerous Speech and other kinds of abusive content, and the most effective ways to diminish the harms they cause.

We are continuing to expand the scope of the Dangerous Speech ideas to include violence other than genocide and mass atrocities, such as lone wolf attacks or smaller-scale intergroup violence. We have also realized that Dangerous Speech is not simply a subset of hate speech, since some Dangerous Speech is not hateful. Sometimes, instead of expressing or inciting hatred, it instills fear – often a more powerful emotion for driving people toward violence. These new ideas and others are explained further in our new FAQ.

In 2016, we launched our Global Researchers Initiative, an effort to identify and study a diverse set of Dangerous Speech cases in collaboration with researchers from around the world. Many of the examples are neither obvious nor explicit, since we want to demonstrate very concretely how to use the DS analytical guidelines, and how to draw the line between speech that is dangerous and speech that isn’t – even though it may be offensive. This is critical as the Dangerous Speech ideas are being used more widely. For this reason, a particular focus of this project is collecting ‘edge cases’ – non-obvious examples that fall near the line between Dangerous Speech and speech that is merely offensive. We are now working on case studies with researchers in the Philippines, Germany, Denmark, Turkey, and Israel. These case studies will be made available on our website, along with the case study template.

Until now, the Dangerous Speech framework has considered only factors that make speech more dangerous in context, so we are expanding the Dangerous Speech ideas to include restraining factors: contextual factors that make speech less dangerous. An example of a restraining factor might be a speaker with a known history of lying. Including restraining factors will make our ideas more useful, especially for intervention projects seeking to strategically monitor and reinforce them.
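As a purely illustrative sketch (not part of the Dangerous Speech framework itself, which is qualitative), a monitoring project could record both kinds of contextual factors alongside a flagged message and keep a simple running tally; all names below are hypothetical.

```python
# Hypothetical sketch: recording contextual factors for a monitored message.
# The factor lists and the crude tally are illustrative only; the actual
# Dangerous Speech analysis is qualitative, not a numeric score.
from dataclasses import dataclass, field

@dataclass
class SpeechContext:
    aggravating: list[str] = field(default_factory=list)  # factors making speech more dangerous
    restraining: list[str] = field(default_factory=list)  # factors making speech less dangerous

    def summary(self) -> str:
        return (f"{len(self.aggravating)} aggravating vs. "
                f"{len(self.restraining)} restraining factor(s)")

ctx = SpeechContext(
    aggravating=["influential speaker", "fearful audience"],
    restraining=["speaker has a known history of lying"],
)
print(ctx.summary())  # 2 aggravating vs. 1 restraining factor(s)
```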

A major focus of our research is online speech. A large quantity of Dangerous Speech and other harmful speech takes place online, where it is easier to collect, study, and counter. Our research on responses to harmful speech online is divided into two projects. The first was our study of hateful speech and counterspeech on Twitter, funded by the Canadian government’s Kanishka Project. In collaboration with Derek Ruths at McGill University, we produced three papers in 2016. “A Web of Hate: Tackling Hate Speech in Online Social Spaces” explains the results of our automated hateful speech classifier. “Counterspeech on Twitter: A Field Study” reviews the findings of our study of counterspeech as it occurs naturally ‘in the wild’ rather than as part of an organized campaign. In this paper, we develop a taxonomy of counterspeech strategies observed in interactions on Twitter – including supporting counterspeech with facts and using humor. We also make the original observation that online counterspeech is delivered along one of four ‘vectors’ – one (counterspeaker) to one (harmful speaker), one to many, many to one, and many to many. The third report, “Considerations for Successful Counterspeech,” provides recommendations for individuals who wish to engage in counterspeech on Twitter.
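As a rough illustration of the vector idea (a sketch of our own, not code from the study), the example below labels a Twitter-style exchange by counting how many counterspeakers address how many harmful speakers; the type and function names are hypothetical.

```python
# Illustrative sketch only: assigns one of the four counterspeech 'vectors'
# based on how many counterspeakers address how many harmful speakers.
# The Interaction type and its fields are hypothetical, not from the papers.
from dataclasses import dataclass

@dataclass
class Interaction:
    counterspeakers: set[str]   # accounts replying with counterspeech
    harmful_speakers: set[str]  # accounts that posted the harmful speech

def counterspeech_vector(interaction: Interaction) -> str:
    """Return 'one-to-one', 'one-to-many', 'many-to-one', or 'many-to-many'."""
    src = "one" if len(interaction.counterspeakers) == 1 else "many"
    dst = "one" if len(interaction.harmful_speakers) == 1 else "many"
    return f"{src}-to-{dst}"

# Example: several users respond to a single account's hateful tweet.
example = Interaction(counterspeakers={"@a", "@b", "@c"}, harmful_speakers={"@x"})
print(counterspeech_vector(example))  # many-to-one
```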

The second, ongoing project is an effort to collect examples of responses to online harmful speech other than platform takedowns and criminal prosecution. Already we have found a wider variety of responses than we expected, such as: enabling targets of hate speech to recruit online bystanders to help them, collecting donations for each incident of hate speech, correcting the grammar and spelling in hateful posts, deploying bots to re-educate trolls, and shaming trolls by emblazoning their posts on billboards. As part of this project, we hosted a roundtable discussion at RightsCon in Brussels on March 30 on researching the effectiveness of pro-speech responses.