DSP Comments for OSB “River to the Sea” Case
The Dangerous Speech Project offers recommendations to the Oversight Board on their “River to the Sea” case.
“Companies are regulating human expression more than any government does. And more than any government ever has,” Susan Benesch explained.
Director of Research Cathy Buerger and Professor Joshua Garland discuss the importance of research collaborations like their project on AI and counterspeech.
The Dangerous Speech Project has written a Toolkit on Using Counterspeech to Tackle Online Hate Speech in collaboration with the Future of Free Speech project, as a resource for learning effective counterspeech strategies.
Dangerous speech flourished in 2023. Director of Research Cathy Buerger offers steps for combating its spread in the new year.
Criticism of Facebook’s role in the Rohingya genocide largely focuses on anti-Rohingya speech. But content targeting and shaming members of the in-group – like calling people traitors for showing empathy for the Rohingya – also played a significant role.
This talk explores what we do (and don’t yet) know about how “real-world” violence is inspired online, and important considerations for researchers in identifying, scoping, and studying aspects of this relationship.
As long as Facebook employees are the only ones who can monitor Facebook, we will not understand how toxic content thrives on social-media platforms and how best to diminish the human suffering to which it contributes.
Judging posts exclusively by their content is like studying cigarettes to understand their toxicity. It’s one useful form of data, but to understand what smoking can do to people’s lungs, study the lungs, not just the smoke.
Twitter and Facebook have set a precedent in removing Trump from their platforms—but can they apply the same principle globally? Salil Tripathi explores this question, using the dangerous speech framework.
Private social media companies regulate much more speech than any government does, and their platforms are being used to bring about serious harm. Yet companies govern largely on their own, and in secret.
To correct this, advocates have proposed that companies follow international human rights law. But for this to work, the law must first be interpreted to clarify how (and whether) each of its provisions is suited to this new purpose.
This piece by Adriana Stephan of the Stanford Internet Observatory offers a clear side-by-side comparison of online platforms’ hate speech policies and enforcement mechanisms.
Automated content moderation is a temporary, imperfect emergency response. Companies and governments must ensure that it does not become the new status quo.
The Dangerous Speech Project is one of 75 organizations and individuals voicing concerns about how social media and content-sharing platforms…
Researchers discuss Twitter’s repeated failure to deal with harmful content, including DSP Executive Director Susan Benesch, Dr. J. Nathan Matias of Cornell’s Citizens & Technology Lab, and Rebekah Tromble of George Washington University.
Earlier this week, Twitter announced a new rule against language that dehumanizes others on the basis of religion. This change is a step in the right direction, but in order to truly mitigate offline harms, the company must define dehumanizing speech by its likely effect on others in addition to the literal content of the speech.
In his new book, Speech Police, David Kaye argues that online content needs to be regulated in a fair, consistent way – not in reactive, confusing fits and starts. In the same week as the book’s release, YouTube showed us why he’s right.
Following Facebook’s announcement of plans to create a content moderation oversight board, internet and human rights organization Access Now lays out a set of principles designed to help platforms create content moderation processes and practices that will protect free expression.
The DSP is proud to have hosted the First International Counterspeakers’ Workshop, a meeting of people who respond to hateful or harmful speech online – to trade ideas, war stories, and best practices. The event, held in late November in Berlin, drew 15 people from around the world who ‘counterspeak’ online in a wide variety of ways.
Facebook’s Tessa Lyons announced a new policy for content moderation – the company will now remove misinformation which constitutes Dangerous Speech.
Our Data-Driven Decency panel at RightsCon 2018 explored experiments to diminish online hate and harassment. Here are our notes on these efforts, and considerations for future interventions.