Big Tech’s Big Challenge
Twitter and Facebook have set a precedent in removing Trump from their platforms—but can they apply the same principle globally? Salil Tripathi explores this question, using the dangerous speech framework.
Earlier this week, Twitter announced a new rule against language that dehumanizes others on the basis of religion. This change is a step in the right direction, but in order to truly mitigate offline harms, the company must define dehumanizing speech by its likely effect on others in addition to the literal content of the speech.
Australia’s Parliament rushed to pass a law that would punish social media companies that do not “expeditiously” remove “violent material” from their platforms – a move that could encourage increased censorship.
The DSP is proud to have hosted the First International Counterspeakers’ Workshop, a meeting of people who respond to hateful or harmful speech online – to trade ideas, war stories, and best practices. The event, held in late November in Berlin, drew 15 people from around the world who ‘counterspeak’ online in a wide variety of ways.
A study of conversations on Twitter found that some arguments between strangers led to favorable change in discourse and even in attitudes. The authors propose that such exchanges can be usefully distinguished according to whether individuals or groups take part on each side, since the opportunity for a constructive exchange of views seems to vary accordingly.
This article from the Wall Street Journal quotes Susan Benesch, who criticizes a new German law that places strict content regulations on social media companies as an infringement on freedom of expression.
This New York Times Magazine profile of Jonny Sun – a Twitter humorist and fellow at the Berkman Klein Center for Internet and Society – covers his collaboration with our director Susan Benesch on a series of online humor workshops at MIT.
In this interview featured in Wired Germany, Susan Benesch explains Dangerous Speech, how it is different from hate speech, and…
Dangerous Speech, like hate speech or pornography, is difficult to define in a precise or objective way – which makes it difficult to train a machine to classify it reliably. A new Google-related research effort has made interesting progress by asking people to classify millions of online comments.
This paper offers reflections and observations on the state of research related to harmful speech online. The perspectives outlined here…
Online social platforms are beset with hateful speech – content that expresses hatred for a person or group of people. Such content can frighten, intimidate, or silence platform users, and some of it can inspire other users to commit violence. Despite widespread recognition of the problems posed by such content, reliable solutions even for detecting hateful speech are lacking. In the present work, we establish why keyword-based methods are insufficient for detection. We then propose an approach to detecting hateful speech that uses content produced by self-identifying hateful communities as training data. Our approach bypasses the expensive annotation process often required to train keyword systems and performs well across several established platforms, making substantial improvements over current state-of-the-art approaches.
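The core idea in this abstract – using community membership, rather than hand annotation, as a training label – can be illustrated with a toy sketch. This is not the authors' model or data: the example posts, function names, and naive Bayes scoring below are all invented for illustration only.

```python
from collections import Counter
import math

# Hypothetical toy corpora: posts collected from a self-identifying hateful
# community serve as positive examples, so community membership itself is
# the label and no per-post hand annotation is needed.
hateful_community_posts = [
    "they are vermin and should be driven out",
    "those people are subhuman filth",
]
neutral_posts = [
    "great game last night what a finish",
    "the weather has been lovely this week",
]

def train(pos_docs, neg_docs):
    """Count word frequencies in each corpus (a minimal naive Bayes fit)."""
    def counts(docs):
        c = Counter()
        for doc in docs:
            c.update(doc.lower().split())
        return c
    return counts(pos_docs), counts(neg_docs)

def score(text, pos_counts, neg_counts):
    """Log-odds that `text` resembles the hateful-community corpus,
    with add-one smoothing over the combined vocabulary."""
    vocab = set(pos_counts) | set(neg_counts)
    pos_total = sum(pos_counts.values()) + len(vocab)
    neg_total = sum(neg_counts.values()) + len(vocab)
    s = 0.0
    for word in text.lower().split():
        s += math.log((pos_counts[word] + 1) / pos_total)
        s -= math.log((neg_counts[word] + 1) / neg_total)
    return s

pos_c, neg_c = train(hateful_community_posts, neutral_posts)
# Positive score: closer to the hateful-community corpus than the neutral one.
print(score("those vermin are subhuman", pos_c, neg_c) > 0)
print(score("lovely weather this week", pos_c, neg_c) > 0)
```

The point of the sketch is the labeling strategy, not the classifier: because the positive examples come ready-labeled from the community itself, the expensive annotation step that keyword-based systems require is bypassed, which is what the paper's approach exploits at scale.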
This report by iHub Research presents key findings from the Umati Project, which sought to identify and understand the use of dangerous speech in the Kenyan online space.