This month, we delve into a special type of dangerous speech – one that doesn’t attack anyone. Instead, this speech valorizes violence, characterizing it as something honorable and connected to the identity of the in-group.
Transgender, nonbinary, and gender nonconforming people have become a primary target of such speech. Influential politicians, media figures, and religious leaders are falsely portraying transgender people as threats to children, cisgender women, families, traditional values, and public safety.
Israel’s sluggish delivery of food aid to Gaza is unconscionable: It is tantamount to torturing civilians, and it is turning even longtime friends into appalled critics. But Israel can relieve this needless suffering without helping Hamas.
Online debate shouldn’t be outsourced to AI, even though there’s excited buzz about this prospect, and several university teams are building AI tools to respond to digital hatred.
Stefanie Ullmann and Marcus Tomalin of CRASSH are releasing their book, “Counterspeech: Multidisciplinary Perspectives on Countering Dangerous Speech”, which features collaborative work by Joshua Garland and our Director of Research, Cathy Buerger.
Susan Benesch and Cathy Buerger for the L.A. Times: Two unrelated facts combined with a lie form a powerful and dangerous piece of misinformation that is spreading virally.
Director of Research Cathy Buerger and Professor Joshua Garland discuss the importance of research collaborations like their project on AI and counterspeech.
In collaboration with the Future of Free Speech project, the Dangerous Speech Project has written a Toolkit on Using Counterspeech to Tackle Online Hate Speech as a resource for learning effective counterspeech strategies.
This amicus brief discusses the proper legal framework for assessing challenges to the California Age-Appropriate Design Code.
To better understand rhetoric that justifies and even inspires violence, a team of Bangladeshi professors and students collected and analyzed 124 examples of dangerous speech that circulated in their country in 2021.
Faith and Leadership
The Israel-Hamas war has spawned hateful, violent, and even genocidal comments. Executive Director Susan Benesch offers suggestions to help Christian leaders recognize and undermine such speech.
Even as it has wiped out people with terrible speed and cruelty, the Hamas-Israel war has also dried up moderate public discourse about Jews, Palestinians, and the war itself, more quickly and widely than any other conflict in our lifetimes.
Austin American-Statesman
The best way to protect people from harms engendered by TikTok (and other social media platforms) is to unleash researchers so they can build up knowledge collectively. That’s why the Coalition for Independent Tech Research sued Texas Gov. Greg Abbott to stop his TikTok ban.
That’s no reason to leave online discourse unscrutinized. On the contrary: it must be systematically, expertly protected. Israel and Palestine would be ideal for such a test.
Yale Journal of Law & Technology
Digital networked society needs friction-in-design regulation that targets the digital architectures, supposedly smart (data-driven, algorithmic) systems, and interfaces that shape human interactions, behavior, and will (beliefs, preferences, values, intentions). The relentless push to eliminate friction for the sake of efficiency has hidden social costs that affect basic human capabilities and society. A general course correction is needed.
Dangerous Speech Project Director of Operations Tonei Glavinic responds to the Oversight Board’s March 16th call for public comments regarding an appeal to remove a video of a speech by Cambodia’s prime minister, Hun Sen, from his official Facebook page.
Susan Benesch responds on behalf of the Dangerous Speech Project to the Oversight Board’s call for public comments regarding its case “Brazilian general’s speech” (2023-001-FB-UA).
“Prevent Violence, Strengthen Democracy: Electoral Codes of Conduct for the United States” by Buerger and Glavinic has been featured in the Peace and Conflict Studies Journal.
Former members of the Twitter Trust and Safety Council condemn the abrupt disbanding of the Council on Monday night and call on Twitter to commit to its users’ safety.
Just Security
In her latest op-ed for Just Security, Susan Benesch details the pattern of incendiary language in the United States and presents tools for countering it.
People who do counterspeech almost universally want to reach audiences, not the people spreading harmful speech. Drawing on over 50 interviews with counterspeakers, this paper reports on four primary theories of change as counterspeakers describe them, and discusses the implications for researchers.
Crises like COVID-19 and the invasion of Ukraine clearly illustrate how mis- and disinformation narratives proliferate and overlap with dangerous speech.
India has been steadily transforming from an aspiring liberal democracy into an ethnic democracy in which a narrow and supremacist interpretation of Hinduism, the religion of the majority, holds near-hegemonic status.
The Atlantic
As long as Facebook employees are the only ones who can monitor Facebook, we will not understand how toxic content thrives on social-media platforms and how best to diminish the human suffering to which it contributes.
People have been trying to understand the catalysts of human behavior, especially violent behavior, for thousands of years. In this review, we focus on how speech, especially public, often online speech, can inspire civilians of one group to attack civilians of another or create an atmosphere in which such violence is encouraged.
Every day, internet users encounter hateful and dangerous speech online, and some of them choose to respond directly in order to refute or undermine it. We call this counterspeech. Only a few studies have attempted to measure the effectiveness of counterspeech directly, and as far as we know, this is the first review of relevant literature.
Judging posts exclusively by their content is like studying cigarettes to understand their toxicity. It’s one useful form of data, but to understand what smoking can do to people’s lungs, study the lungs, not just the smoke.
Facebook was correct to prohibit Donald Trump from posting on Facebook and Instagram, and should make that ban permanent. In the future, when considering whether to take action on content posted by political candidates, office holders, and former office holders, Facebook should test the content’s capacity to lead to real-world violence by evaluating whether the content has been understood by an account’s followers as incitement, rather than trying to divine the intent of the account holder.
Social Science Research Council
It was a watershed when Donald Trump lost his social media megaphone on Facebook, Twitter, and other platforms right after the January 6 attack on the US Capitol, since none of these companies had ever dumped a head of state. But this moment also represents a failure: The companies barred him for inciting violence only after he had done it successfully. They identified incitement to violence in the rearview mirror, by waiting for violence to happen.
This guide – updated for 2019 – provides an in-depth exploration of dangerous speech and how to identify it, dangerous speech on the internet, and some promising efforts to reduce its harmful effects.
Noēma Magazine
Social media companies have spent too much time trying to divine the intent behind messages posted by influential people, when what really matters for preventing violence is how content is understood by its audience – especially people who might commit or condone violence. Content moderation staff should focus on potential real-world impacts and consequences, not unknowable states of mind or hypothetical meanings.
Philadelphia Inquirer
Awful though it was that thugs smashed their way into the Capitol last week, it might do the United States good in the end.
#jagärhär – “I am here” – is a Sweden-based group of thousands of volunteers working collectively to counter online hatred. This is the first qualitative study of the group – how it works, why people join and stay engaged, and how its members try to shift thinking and discourse norms among the general public.
The 2020 U.S. election was like nothing we’d seen before for many reasons, including disinformation, dangerous speech, and unprecedented fears of election-related violence. Now it’s time to start thinking about the future.
Yale Journal on Regulation
Private social media companies regulate much more speech than any government does, and their platforms are being used to bring about serious harm. Yet companies govern largely on their own, and in secret.
To correct this, advocates have proposed that companies follow international human rights law. But for this to work, the law must first be interpreted to clarify how (and whether) each of its provisions is suited to this new purpose.
This report presents seven proposals for how internet companies can more effectively address harmful content on their platforms, protect freedom of expression, and provide a better experience for their users.
Automated content moderation is a temporary, imperfect emergency response. Companies and governments must ensure that it does not become the new status quo.
What you can and can’t post online is being decided by algorithms, now more than ever. Tech companies are relying…
Medium
DSP Executive Director Susan Benesch, J. Nathan Matias of Princeton University, and several other independent researchers launched a collaborative study with Twitter to test whether awareness of the rules affects user behavior.
European Council on Tolerance and Reconciliation
Online hateful and harmful messages are so widespread that the problem cannot be laid at the feet of any particular culture or country, nor can such content be easily classified with terms like ‘hate speech’ or ‘extremism’ – it is too varied. Similarly, the people who produce harmful content (and their motivations) are too diverse to fit a stereotype. Daunting though this problem is, there are opportunities to diminish it and to build norms of tolerance that have been largely overlooked so far. This paper offers a set of specific and contrarian ideas for better understanding hate speech and other harmful speech that proliferates online, and for reducing the damage such content causes, while limiting the risk of other harms.
This six-minute introductory video explains Dangerous Speech and its five elements, with historical and contemporary examples from around the world.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics
A study of conversations on Twitter found that some arguments between strangers led to favorable change in discourse and even in attitudes. The authors propose that such exchanges can be usefully distinguished according to whether individuals or groups take part on each side, since the opportunity for a constructive exchange of views seems to vary accordingly.
This guide offers recommendations for those who wish to engage in counterspeech online, based on the findings of our two-year study of hateful speech and counterspeech on Twitter.
This report from our two-year study of hateful speech and counterspeech on Twitter reviews existing literature on counterspeech, examines cases of counterspeech by the vector through which it was delivered, and develops a taxonomy of counterspeech strategies.
First Workshop on Text Analytics for Cybersecurity and Online Safety
Online social platforms are beset with hateful speech – content that expresses hatred for a person or group of people. Such content can frighten, intimidate, or silence platform users, and some of it can inspire other users to commit violence. Despite widespread recognition of the problems posed by such content, reliable solutions even for detecting hateful speech are lacking. In the present work, we establish why keyword-based methods are insufficient for detection. We then propose an approach to detecting hateful speech that uses content produced by self-identifying hateful communities as training data. Our approach bypasses the expensive annotation process often required to train keyword systems and performs well across several established platforms, making substantial improvements over current state-of-the-art approaches.
Genocide Studies and Prevention: An International Journal
This paper draws together the authors’ work on dangerous speech and the ideological dynamics of mass atrocities by offering a new integrated model to help identify the sorts of speech and ideology that raise the risk of atrocities and genocides.
Religion and Human Rights, An International Journal
This comment examines the tension between freedom of expression and freedom of religion by embedding the Charlie Hebdo cartoons in a wider, century-old European tradition of publications mocking religion, including Christianity. It describes, and draws lessons from, the 19th century blasphemy case against the British Freethinker newspaper, whose “technique of offense” was similar to that of Charlie Hebdo. Finally, the comment tackles the problem of violent response to text or images that mock religion, pointing out that malicious intermediaries often carry such messages between social groups or across national borders—greatly escalating the risk of violence.
Susan Benesch reviews efforts to counter hateful speech online in the Berkman Center for Internet and Society’s report Internet Monitor…
State of the World's Minorities 2014
State of the World’s Minorities and Indigenous Peoples 2014, Minority Rights Group International, 2014.
In this video, Dangerous Speech Project director Susan Benesch discusses methods of countering Dangerous Speech in online social spaces and research that has been conducted to test those methods.
US Holocaust Memorial Museum, February 11, 2014
With Michael Abramowitz. Wall Street Journal, December 18, 2013.
Lauren Kogen of the Center for Global Communication Studies (CGCS) at the University of Pennsylvania’s Annenberg School for Communication conducted an evaluation of the Vioja Mahakamani media intervention, a partnership between the Dangerous Speech Project and Media Focus on Africa.
This document has been replaced by “Dangerous Speech: A Practical Guide,” released in December 2018. It is preserved here for…
“It’s time for Internet giants to explain when censorship is and isn’t OK.” With Rebecca MacKinnon.
Dangerous Speech Project director Susan Benesch provides an overview of the distinction between hate speech and dangerous speech. April 20, 2012.
People are increasingly privy to communication that they would not have heard (or read or seen) in the past. This has significant implications for speech law and policy.
This book chapter describes existing case law on incitement to genocide and the challenges of identifying the causes of genocide, and offers the Dangerous Speech framework as an alternative approach.
Inflammatory speech – a common feature of elections – provides opportunities for preventing election-related ethnic violence. However, this must be done carefully in order to preserve freedom of expression.
This paper analyzes incitement in international criminal law as well as in international human rights law, building an interpretive bridge between the two bodies of law; proposes a method for distinguishing incitement from other forms of hate speech; summarizes the jurisprudence on incitement to genocide; and describes a methodology for identifying speech that has a reasonable possibility of successfully inciting genocide, suggesting ways in which this framework may be adapted for distinguishing other forms of incitement.
This 2009 talk delivered by Susan Benesch at the US Holocaust Memorial Museum’s Sudikoff Seminar examines the history of genocide in international law, and offers a proposed definition for the crime of incitement to genocide.