New Reports Highlight Extensive Problems in Content Moderation

While internet companies have spent the last few weeks instituting new policies, promising small changes, and taking new enforcement actions, two recent reports make clear that there is still a long way to go.

On June 30, Reddit, under pressure to remove more dangerous and hateful content, announced an update to its hate speech content policy. The new policy states: “Everyone has a right to use Reddit free of harassment, bullying, and threats of violence. Communities and users that incite violence or that promote hate based on identity or vulnerability will be banned.” Reddit staff used the new rules to immediately remove some 2,000 subreddits – most of them tiny or dormant, with only 200 having more than 10 daily active users.[1]

While the new policy is a positive step, it has already been revised once in the past week and needs additional work. Writing for the Stanford Cyber Policy Center, Adriana Stephan noted the lack of clarity in Reddit’s first draft:

Reddit’s initial updated language followed the lead of its peers by listing marginalized or vulnerable groups with one notable caveat: “While the rule on hate protects such groups, it does not protect all groups or all forms of identity. For example, the rule does not protect groups of people who are in the majority or who promote such attacks of hate.”

While the rule implied that hate speech is permissible if directed towards groups that constitute “the majority” or who promote hate speech, the company offered little clarification as to which groups constitute said majorities.

Reddit quickly removed the clause excluding “people who are in the majority” after advocates pointed out that the language was unhelpful: women, for example, are the majority of the population in many countries where they nevertheless face significant discrimination and harassment.

Stephan’s article also compares hate speech policies across platforms, breaking down each company’s standards and describing how the platforms diverge over which groups or individual attributes are explicitly protected.[2] Because the comparison is so useful, we have added the piece to the Resources section of our website.

Also this week, Facebook released a civil rights audit in which the auditors write, damningly, that “Facebook’s approach to civil rights remains too reactive and piecemeal” (p. 8). The auditors also call for “more resources invested to study and address organized hate against…targeted groups on the platform.” The report’s scathing findings are all the more remarkable given that the auditors relied largely on information Facebook chose to share about its practices, rather than observing them firsthand.

These two reports highlight the need for additional clarity, transparency, and effectiveness in both the policies that companies choose to establish and the manner in which they enforce them, in line with DSP Executive Director Susan Benesch’s seven “Proposals for Improved Regulation of Harmful Online Content,” published last month.


[1] “Update to Our Content Policy,” r/announcements, Reddit, https://www.reddit.com/r/announcements/comments/hi3oht/update_to_our_content_policy/ (last accessed July 10, 2020).

[2] Adriana Stephan, “Comparing Platform Hate Speech Policies: Reddit’s Inevitable Evolution,” Stanford Cyber Policy Center, https://cyber.fsi.stanford.edu/io/news/reddit-hate-speech (last accessed July 10, 2020).