November 6, 2023
Criticism of Facebook’s role in the Rohingya genocide largely focuses on anti-Rohingya speech. But content targeting and shaming members of the in-group – like calling people traitors for showing empathy for the Rohingya – also played a significant role.
April 6, 2022
This talk explores what we do (and don’t yet) know about how “real-world” violence is inspired online, and important considerations for researchers in identifying, scoping, and studying aspects of this relationship.
November 3, 2021
As long as Facebook employees are the only ones who can monitor Facebook, we will not understand how toxic content thrives on social-media platforms and how best to diminish the human suffering to which it contributes.
May 31, 2021
Judging posts exclusively by their content is like studying cigarettes to understand their toxicity. It’s one useful form of data, but to understand what smoking can do to people’s lungs, study the lungs, not just the smoke.
January 16, 2021
Twitter and Facebook have set a precedent in removing Trump from their platforms—but can they apply the same principle globally? Salil Tripathi explores this question, using the dangerous speech framework.
September 17, 2020
Private social media companies regulate much more speech than any government does, and their platforms are being used to bring about serious harm. Yet companies govern largely on their own, and in secret.
To correct this, advocates have proposed that companies follow international human rights law. But for this to work, the law must first be interpreted to clarify how (and whether) each of its provisions is suited to this new purpose.
July 10, 2020
This piece by Adriana Stephan of the Stanford Internet Observatory offers a clear side-by-side comparison of online platforms’ hate speech policies and enforcement mechanisms.
June 22, 2020
This paper presents seven proposals for how internet companies can more effectively address harmful content on their platforms, protect freedom of expression, and provide a better experience for their users.
April 24, 2020
Automated content moderation is a temporary, imperfect emergency response. Companies and governments must ensure that it does not become the new status quo.
April 22, 2020
The Dangerous Speech Project is one of 75 organizations and individuals voicing concerns about how social media and content-sharing platforms…
March 16, 2020
Researchers discuss Twitter’s repeated failure to deal with harmful content. Participants include DSP Executive Director Susan Benesch, Dr. J. Nathan Matias of Cornell’s Citizens & Technology Lab, and Rebekah Tromble of George Washington University.
July 12, 2019
Earlier this week, Twitter announced a new rule against language that dehumanizes others on the basis of religion. This change is a step in the right direction, but to truly mitigate offline harms, the company must define dehumanizing speech by its likely effect on others, not only by the literal content of the speech.
June 13, 2019
In his new book, Speech Police, David Kaye argues that online content needs to be regulated in a fair, consistent way – not in reactive, confusing fits and starts. In the same week as the book’s release, YouTube showed us why he’s right.
May 15, 2019
Following Facebook’s announcement of plans to create a content moderation oversight board, the internet and human rights organization Access Now lays out a set of principles designed to help platforms build content moderation processes and practices that protect free expression.
January 6, 2019
The DSP is proud to have hosted the First International Counterspeakers’ Workshop, a meeting of people who respond to hateful or harmful speech online – to trade ideas, war stories, and best practices. The event, held in late November in Berlin, drew 15 people from around the world who ‘counterspeak’ online in a wide variety of ways.
July 27, 2018
Facebook’s Tessa Lyons announced a new policy for content moderation: the company will now remove misinformation that constitutes Dangerous Speech.
June 15, 2018
Our Data-Driven Decency panel at RightsCon 2018 explored experiments to diminish online hate and harassment. Here are our notes on these efforts, and considerations for future interventions.