OP-ED: Why Elon Musk’s Twitter might be (more) lethal
We seek the best ways to blunt the power of dangerous speech, online and offline. Musk taking over Twitter doesn’t look like one of them, to say the least.
Judging posts exclusively by their content is like studying cigarettes to understand their toxicity. It’s one useful form of data, but to understand what smoking can do to people’s lungs, study the lungs, not just the smoke.
Twitter and Facebook have set a precedent in removing Trump from their platforms—but can they apply the same principle globally? Salil Tripathi explores this question, using the dangerous speech framework.
While online speech is rarely implicated in initial violent incidents, social media platforms increasingly feature dangerous speech afterward – which increases the risk of additional violence.
This piece by Adriana Stephan of the Stanford Internet Observatory offers a clear side-by-side comparison of online platforms’ hate speech policies and enforcement mechanisms.
This paper presents seven proposals for how internet companies can more effectively address harmful content on their platforms, protect freedom of expression, and provide a better experience for their users.
Researchers, including DSP Executive Director Susan Benesch, Dr. J. Nathan Matias of Cornell’s Citizens & Technology Lab, and Rebekah Tromble of George Washington University, discuss Twitter’s repeated failure to deal with harmful content.
Earlier this week, Twitter announced a new rule against language that dehumanizes others on the basis of religion. This change is a step in the right direction, but in order to truly mitigate offline harms, the company must define dehumanizing speech by its likely effect on others in addition to the literal content of the speech.
The new Christchurch Call to Action is largely vague and symbolic, but it may be worthwhile since it prompts internet companies to expand their collaborations against harmful and especially terrorism-promoting content online.
By referring to immigrants who “pour into and infest our Country,” President Trump has invoked a malevolent vocabulary – one which has striking similarities to rhetoric which has preceded episodes of intergroup violence.
DSP Executive Director Susan Benesch, J. Nathan Matias of Princeton University, and several other independent researchers launched a collaborative study with Twitter to test whether awareness of the rules affects user behavior.
DSP Executive Director Susan Benesch talks with Mashable about Twitter’s new policy of evaluating users’ off-platform behavior when deciding to suspend accounts.
Sadly, it’s not new that Donald Trump vilified a group of people with spurious claims from a terrible source: today,…
In this essay, Susan Benesch points out that content ‘takedown’ by Internet companies is not the only solution to harmful speech online. She highlights projects organized by civil society – not governments or platforms – to diminish harmful speech and support its targets. It was published by the Berkman Klein Center for Internet & Society in a collection of essays on harmful speech online.
This guide offers recommendations for those who wish to engage in counterspeech online, based on the findings of our two-year study of hateful speech and counterspeech on Twitter.
This report from our two-year study of hateful speech and counterspeech on Twitter reviews existing literature on counterspeech, examines cases of counterspeech through the vector in which it was delivered, and develops a taxonomy of counterspeech strategies.