Viral fake news can be powerful: as many have suggested, it may have influenced the U.S. presidential election. It can also be dangerous, inspiring violence by targeting another group of people.
Indeed, Dangerous Speech is often false. Typically it takes the form of rumors about the activities and alleged plans of another group of people. Such rumors are often a catalyst for riots: they justify violence and compel ordinary people to commit atrocities, as Donald Horowitz describes in his book The Deadly Ethnic Riot. Horowitz refers to rumors spread primarily by word of mouth, but digital communication spreads rumors – and can inspire violence – more efficiently than ever before.
Internet rumors are used to encourage violence in many countries and contexts. For instance, supporters of Filipino President Rodrigo Duterte spread misleading stories using fake social media accounts to defend and promote his deadly crusade against drug users. In 2014, Buddhist nationalist organizations in Myanmar and their supporters used Facebook to disseminate a false rumor that Muslim men raped a Buddhist woman in the city of Mandalay. The rumor helped spark a riot against Muslims that left two dead and fourteen injured.
Prior to Kenya’s 2007 post-election violence, rumors were rampant and spread notably via SMS text messages. According to one, the government was planning to poison the beer at bars where opposition voters tended to drink, and some bars indeed closed a few days before the election to counter this fictitious plan. The rumors contributed to the violence by creating fear and increasing tensions.
Fake news (and its possible influence on election results) has captured public attention for now, but platform administrators should not overlook the role of false rumors in Dangerous Speech around the world. Facebook and Google recently announced they will attempt to stop fake news websites from profiting from advertisements, by blocking their ads from appearing on notorious sources of fake news. While this could be useful, it does little to prevent the spreading of dangerous rumors by those who intend to cause violence.
Censorship of false content – whether conducted by an editor or an algorithm – is not the solution either. That would vest too much power in the platforms and give them the difficult task of determining what is true and what is false in myriad situations where that isn’t clear. It could also backfire by giving producers of rumors legitimacy in the eyes of individuals who believe platforms are silencing ‘outsiders.’
Instead of using truth as a standard for content policy, platforms should focus on the dangerousness of rumors. Because violence all too often takes place soon after a rumor is disseminated, platforms and civil society developers should design features to allow users to quickly identify, delegitimize, and slow the dissemination of dangerous rumors. For example, when a post is reviewed and determined to be dangerous, the share function could be disabled for some hours. Meanwhile, other users could point out that the post is false and dangerous. While platforms are unlikely to eliminate dangerous rumors entirely, it is important that they ensure rumors can do as little harm as possible – without infringing on freedom of expression.