Platforms Should Preserve Data about Content Censored During COVID-19
The Dangerous Speech Project is one of 75 organizations and individuals voicing concerns about how social media and content-sharing platforms…
Intentionally associating COVID-19 with Asian communities encourages cruel and ignorant stigmatizing, and distracts people from reliable information about the disease.
Author Salil Tripathi examines the distinction between hate speech and dangerous speech in the context of elections in Delhi, India.
These inaugural Fellows will be the first in a worldwide network of researchers in countries where Dangerous Speech abounds.
When people describe government policy as evil or even criminal, and then others attack facilities where the policy is carried out, is that Dangerous Speech?
Individuals who carry out attacks such as the one in El Paso are not only audience members who have heard Dangerous Speech and have been convinced to commit violence. They are also speakers themselves. Their words (through manifestos and social media posts) and their actions (the shootings) are performances designed, at least in part, to move others to commit similar atrocities in the future.
As with several other massacres in the past year, the attacker in the El Paso shooting appears to have online ties to white supremacist ideology and published a racist screed prior to his attack. We’ve written on the ideas espoused by this killer many times in the past.
When people cannot verify or refute rumors, and cannot access alternative narratives or trustworthy counterspeech, there is an increased likelihood that these rumors will become Dangerous Speech. Therefore, if governments are serious about addressing rumors that could inspire violence, they must resist the urge to shut down the internet.
Earlier this week, Twitter announced a new rule against language that dehumanizes others on the basis of religion. This change is a step in the right direction, but in order to truly mitigate offline harms, the company must define dehumanizing speech by its likely effect on others in addition to the literal content of the speech.
A report shows that 78% of Americans believe aggressive language can make violence more likely. The Guardian asked Susan Benesch how we can respond to Dangerous Speech.