What Speech Police by David Kaye can teach us about YouTube’s latest scandal
Internet ‘policing’ scandals of the kind David Kaye decries in his useful new book Speech Police: The Global Struggle to Govern the Internet erupt constantly. One of the latest started just a few days before the book’s release, on May 30, when Vox journalist Carlos Maza tweeted evidence that he was being steadily harassed by the conservative YouTuber Steven Crowder and his followers. YouTube made things worse by responding with several confusing explanations for why it wouldn’t ban Crowder but would strip him of his ad revenue. At almost the same time, YouTube also announced new rules against content that declares one group of people superior to a disadvantaged one, and in implementing the new rules, mistakenly removed ad revenue from innocuous channels. This chaotic series of missteps supports Kaye’s argument that online content needs to be regulated in a fair, consistent way – not in reactive, confusing fits and starts.
Kaye – the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression – advocates for a serious rethinking of the current model of content moderation in order to include the public in “making, interpreting, and enforcing” platform rules.
The internet has become dangerously centralized, Kaye points out: what was once a distributed space where speech was nearly ungovernable is now a consolidated one where a handful of companies have unprecedented control over global freedom of expression. In response, some governments have tried to bring the companies into line through legislation. However, Kaye argues that when governments require platforms to remove content that’s illegal under their national laws, they can paradoxically increase the platforms’ power by making them into private interpreters of the law.
Kaye is quite sympathetic to company employees who are trying to moderate content fairly. His firsthand accounts of the internal processes of platform rulemaking suggest that those involved are genuinely interested in protecting users on their platforms. But he does not endorse self-regulation by companies, since the current internal processes, while well-intentioned, are too secretive and bureaucratic. The role of governments, Kaye suggests, should be to make the governance of platforms more democratic: regulation should focus on the process of rulemaking and enforcement rather than demanding that specific types of content be removed.
Human rights law, Kaye suggests, should be the explicit basis for every company’s content moderation policies. Because internet companies are attempting to regulate speech at a global scale, he argues, they should adopt the only global standard for freedom of expression and its limits. Doing so would protect users from harassment, hate speech, and disinformation while preserving their right to free speech. It would also give platforms a widely recognized framework to point to when governments and users disagree with their decisions.
He also advocates for greater corporate transparency. This includes opening up the process of making new rules – allowing for public comment and clearly explaining how new rules were reached – and creating ‘platform case law’, in which companies publicly explain why they took a ‘content action’ and the process for appealing it.
Kaye’s focus on regulating the process rather than the content itself won’t eliminate content moderation scandals. People will still disagree over what speech belongs online. But his recommendations could go a long way toward improving the process of governing online content. YouTube’s latest moderation scandal is a great example of why.
Five days after Maza tweeted a video compilation of Crowder repeatedly attacking his ethnicity and sexual orientation, YouTube responded on Twitter, informing Maza that while they recognized Crowder’s words were hurtful, the videos didn’t violate the platform’s rules against harassment because Crowder was debating Maza’s opinions. After a day of criticism for this decision, YouTube’s Twitter account followed up to announce that they would demonetize Crowder’s channel after all: his videos would still appear on the platform, but he would no longer be able to make money from ads.
When Maza pointed out that much of Crowder’s income comes from merchandise – including shirts that say “Socialism is for F*gs” – YouTube tweeted again, this time to say that “in order to reinstate monetization on this channel, [Crowder] will need to remove the link to his T-shirts.” This drew another wave of criticism, to which YouTube responded once again, apologizing for the confusion and explaining that Crowder would have to do more than remove the shirts to have his ad money restored.
To people on both sides of this scandal, YouTube’s response is inadequate and provides evidence to support their pre-existing beliefs about the platform – either that it refuses to take harassment of the LGBT community seriously or that it makes policy on the fly, caving to complaints from the left by demonetizing conservatives.
To make matters worse, on the same day that YouTube announced Crowder’s demonetization, they also announced the new rules against supremacy and almost immediately demonetized the channels of an independent journalist who documents far-right rallies and a history teacher who uploads videos on topics including Nazi Germany (the history teacher’s channel has since been re-monetized). Crowder and his supporters seized on these glaring mistakes, blaming Maza for the new rules and escalating their harassment of him. As Becca Lewis, who studies the far right and YouTube, pointed out, YouTube’s handling of the situation actually incited new harassment against Maza.
How could Kaye’s recommendations in Speech Police be applied to this mishap?
First, if YouTube had been more transparent about how their rules are developed, Crowder wouldn’t have been able to weaponize the policy change against Maza, whose accusations have little to do with the new rules. His complaint is about harassment, not supremacy, so it’s highly unlikely that YouTube developed the new rules in response to his tweets. Yet we can’t actually prove this to Crowder and his followers because, outside of vague statements, the general public doesn’t know when YouTube began working on this policy change or what their planned schedule for enacting it was.
In another world, where YouTube had announced months ago that they were deliberating a change to their rules regarding supremacy, invited public comment, and provided regular updates on their progress, the public could have monitored the process. We might even have been expecting a change to the rules on June 5. Without this degree of transparency, it is understandable that casual observers would assume all of the controversial demonetizations announced that day were related.
Second, while YouTube did explain their decision to Maza, their method fell far short of Kaye’s call for platform case law. Their handling of his complaint was haphazard, and their explanations often seemed contradictory. It is absurd that this entire process – from Maza alerting YouTube to a years-long campaign of organized harassment to their multi-day explanation of their decision – took place on Twitter.
Maza likely turned to Twitter because of the inadequacy of YouTube’s reporting system. His tweets capture the scale of Crowder’s harassment in a way that reporting individual videos wouldn’t have. Tweeting also made his complaint far more public – his thousands of followers could see it, and YouTube would be compelled to issue a public response. In other words, Twitter afforded transparency, accountability, and detail that YouTube’s internal reporting system simply doesn’t. But not everyone has thousands of Twitter followers to amplify their complaints. An internal reporting system that escalates controversial or difficult cases and publicly issues a single, detailed explanation once all relevant policies have been reviewed could provide the transparency, accountability, and detail of Maza’s Twitter complaint without requiring the complainant to already be influential.
Third and finally, this scandal highlights the lack of a central framework behind YouTube’s content policies. YouTube’s explanation that Crowder’s harassment is allowed because it is really a debate doesn’t square with their Community Guidelines, which prohibit harassment but say nothing about protecting debate. Much of the backlash stems from the fact that the public’s perception of the rules, the rules as written, and the rules as enforced all differ. Whether it is Kaye’s proposed human rights framework or something else, YouTube needs a clearer set of principles that it can point to when explaining these decisions.
As with so many prior content moderation scandals, this case progressed from criticizing YouTube for not removing enough content (Crowder’s videos) to criticizing the platform for removing too much (the independent journalist’s and history teacher’s channels) in less than a week. These positions are not mutually exclusive – we should demand that platforms do more to protect users while also holding them accountable when they get it wrong. But the public’s ability to have a say in this process is severely limited by YouTube’s lack of transparency regarding both rulemaking and content decisions.
David Kaye’s Speech Police comes at a time when users are increasingly frustrated by how platforms moderate content and governments are feeling empowered to take a heavy-handed approach to regulating platforms. Kaye shows us that neither self-regulation nor government content regulation will sufficiently protect both the safety of users and their right to freedom of expression. In the same week as the book’s release, YouTube showed us why he’s right.