“Dear Mark…”: Global Civil Society Demands that Facebook Act Against Dangerous Speech

Even as Facebook CEO Mark Zuckerberg was being grilled by U.S. Senators this week, others sought his attention from countries where content posted on Facebook has been linked to atrocities. One notable example is Myanmar, where since last August the Burmese military has been killing and raping Muslims known as Rohingya, and burning their villages, so relentlessly that more than 700,000 Rohingya have fled their homes. Very few of Myanmar’s majority Buddhist population have protested, in part because of the violently anti-Rohingya messages they see on Facebook. In March, the United Nations Special Rapporteur on the situation of human rights in Myanmar issued a report stating that social media, and Facebook in particular, has had “a determining role” in the anti-Rohingya violence in the country. When asked about Myanmar during a recent Vox interview, Zuckerberg said Facebook staff had paid close attention to the country and had “systems” to detect incitement to violence.

In an open letter to Zuckerberg, however, six Burmese NGOs that have resisted the military’s ethnic cleansing of Rohingya called out three major flaws in Facebook’s “systems” in Myanmar:

  1. Lack of a proper mechanism for emergency escalation, which leaves civil society actors largely responsible for personally channeling content that may incite violence to individuals at Facebook who have the power to remove it. “We believe your system, in this case, was us,” the letter stated.
  2. Reluctance to engage local stakeholders on systemic solutions. Burmese civil society members have offered to help develop solutions informed by local knowledge of the crisis, but Facebook has so far been unresponsive.
  3. Lack of transparency, which continues to deny both users and civil society members access to information about why content was removed and what steps Facebook has taken to improve its response to the ethnic cleansing.

Activists in Sri Lanka, Vietnam, and Indonesia quickly chimed in with their own letters to Zuckerberg. The Center for Policy Alternatives in Sri Lanka noted that Facebook responded slowly or not at all when Sri Lankans flagged Dangerous Speech, and that it often chose not to remove misogynist content. The Center also observed that Facebook seems unable to deal with harmful content written in Sinhala, the native language of the largest ethnic group in Sri Lanka.

In their own letter to Zuckerberg, a coalition of Vietnamese civil society actors likewise asked that Facebook explain its decisions to take down content, or to leave it up. Like the Sri Lankans, the Vietnamese coalition expressed concern over what has appeared to be a strengthening of ties between Facebook and the government, while relationships between the company and civil society remain weak. For civil society actors in Indonesia, issues related to privacy and data protection topped the list of concerns.

Although the four situations vary, the letters from Myanmar, Sri Lanka, and Vietnam each make this important point: for Facebook to be more effective at regulating harmful content that appears on its pages, it must be more culturally sensitive and nuanced in its approach to policy. It must employ enough people who speak local languages and who understand the social and cultural contexts in which content is shared and received (or, at the very least, who will work collaboratively and openly with others who do have such knowledge). Because contexts differ so much around the world, applying the same tools and policies will not necessarily produce the same results. For example, the Vietnamese coalition notes that policies designed to fight disinformation in open societies can end up being used as tools of oppression in closed (or closing) societies like Vietnam.

Unfortunately, Zuckerberg indicated in his Senate testimony that in at least one major way, Facebook is going in the opposite direction: he said the company plans to rely increasingly on automated detection of hate speech. Though he conceded that artificial intelligence tools cannot yet reliably detect hate speech, he suggested that they would improve sufficiently within five to ten years. This is difficult to believe, given how heavily cultural nuance and context determine which speech is hateful, or indeed which speech is dangerous, even within a single country or cultural context. Automated detection tools, like human moderation, won't succeed across so many countries unless Facebook makes more vigorous efforts to learn about, and to respond more quickly in, the countries where the platform has become a vital tool for public discourse, often including dangerous discourse. Zuckerberg is likely to hear more from activists in many countries. He and his colleagues would do well to listen: they have much to learn.