Public Comment to the Facebook Oversight Board regarding Donald Trump’s account

This comment can also be found on pages 7951-7953 of the Oversight Board’s public comment archive [PDF].

Summary

Facebook was correct to prohibit Donald Trump from posting on Facebook and Instagram, and should make that ban permanent. In the future, when considering whether to take action on content posted by political candidates, office holders, and former office holders, Facebook should test the content’s capacity to lead to real-world violence, by evaluating whether the content has been understood by an account’s followers as incitement, rather than trying to divine the intent of the account holder.

Full text

Facebook was correct to prohibit Donald Trump from posting on Facebook and Instagram, and should make that ban permanent.

The ban is fully consistent with Facebook’s rules and values, and with international human rights law. Trump’s posts imperiled the safety of Facebook users, members of the U.S. Congress, and even the U.S. political system, by helping to convince millions of people that the election was fraudulent, and that they had a right or even a duty to block the peaceful transition of power. Facebook failed to protect safety by waiting to intervene until people had been killed. If reinstated, Trump would likely continue to post as before. During the Jan 6 attack, he repeated his election lie and expressed love for the rioting mob, and has not changed his tune since.

Also, Facebook deprived Trump of neither “voice” nor freedom of expression. As U.S. President, he had perhaps the world’s largest set of megaphones, on and offline. Though he is now ex-President and has been banned by other platforms, he has innumerable opportunities to speak. He sends emails, gives speeches, buys advertising, and can even circulate content on Facebook and other platforms through proxies and supporters. Finally, the ban is in keeping with the legality, necessity, and legitimacy prongs of Article 19(3) of the International Covenant on Civil and Political Rights: under Facebook’s Community Standards, egregious violators of its rules will be banned; the ban is necessary to prevent further violence; and it will further the legitimate public interest of preserving public order.

Facebook should evaluate how audiences understand content posted by political leaders.

Trump is only one case, and the ban on his account is an opportunity for Facebook to plan better responses when other political leaders spread dangerous lies and/or incite violence. Below I offer ideas to overcome two longstanding obstacles: Facebook’s deference to politicians, and the difficulty of determining which ambiguous content is dangerous.

Under its “newsworthiness” exemption, Facebook hesitates to moderate politicians even when their content violates its rules. Private companies should not, indeed, stifle public discourse. But risks like inspiring group violence outweigh the public interest in viewing content. Further, Facebook cannot truly silence people who hold office, since they can easily speak to the public without social media. Finally, the exemption’s effect is circular, since newsworthiness is not a fixed value; it rises and falls with access. Giving political figures direct access to a bigger public gives them more influence, which in turn makes their content more newsworthy.

Below I propose a new method for deciding when to intervene in the accounts of political candidates, officeholders, and former officeholders. This is sorely needed: Facebook cannot continue to wait for violence and then suspend an account for inciting it.

Facebook hesitated to intervene in Trump’s account for lack of a post explicitly calling for violence. But that’s not how incitement works. A witness at the UN Tribunal for Rwanda described it brilliantly, recounting how a notorious radio station groomed its listeners to commit and condone unthinkable violence. The witness said, “In fact, what RTLM [Radio Télévision Libre des Mille Collines] did was almost to pour petrol, to spread petrol throughout the country little by little, so that one day it would be able to set fire to the whole country.”

In any such process, it’s difficult to decide which drop of petrol is the first “actionable” one. Facebook staff debate the meaning(s) of language in a post, and wonder about the intent of its author. But what really matters for preventing violence is how content is understood by people who might commit or condone violence. Staff should focus on real-world impact and consequences, not unknowable states of mind or hypothetical meanings.

To do that, Facebook would identify political figures who spread disinformation and use language that tends to increase fear, threat, and a sense of grievance. It would then build classifiers to monitor those accounts and their followers’ accounts, looking for significant shifts in the sentiment of the followers’ posts, and signs that a critical mass of followers understand the political figure to be endorsing or calling for violence.
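To make the monitoring step concrete, here is a minimal, purely illustrative sketch of how the two signals described above, a shift in follower sentiment and a critical mass of followers endorsing violence, might be combined to trigger human review. All function names, score scales, and thresholds here are my own assumptions for illustration; they do not describe Facebook’s actual systems, and a real classifier pipeline would be far more sophisticated.

```python
from statistics import mean

def detect_sentiment_shift(baseline_scores, recent_scores, shift_threshold=0.3):
    """Flag a significant negative shift in average follower sentiment.

    Scores run from -1.0 (hostile/threatening) to 1.0 (positive); the
    scale and the 0.3 threshold are illustrative assumptions only.
    """
    if not baseline_scores or not recent_scores:
        return False
    return mean(baseline_scores) - mean(recent_scores) >= shift_threshold

def critical_mass_reached(follower_flags, mass_threshold=0.10):
    """True when the fraction of followers whose posts a classifier reads
    as endorsing or planning violence exceeds a (hypothetical) critical mass.
    """
    if not follower_flags:
        return False
    return sum(follower_flags) / len(follower_flags) >= mass_threshold

def should_escalate_for_review(baseline_scores, recent_scores, follower_flags):
    """Escalate to human review only when both signals hold, so that a
    sentiment dip alone, or a few violent accounts alone, is not enough."""
    return (detect_sentiment_shift(baseline_scores, recent_scores)
            and critical_mass_reached(follower_flags))
```

The design choice worth noting is the conjunction: requiring both signals before escalation keeps automated monitoring in an advisory role, with the consequential judgment left to human reviewers, as the proposal above envisions.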

To use this method, Facebook should draw on context from non-Facebook sources, where it may learn more about how followers understand a politician’s messages. For example, even if it was not apparent on Facebook that Trump’s followers believed he wanted them to commit violence, it was painfully clear, and publicly available, long before Jan 6 on TheDonald.win, where they described their plans in gruesome detail.

When such a shift is detected, human reviewers would examine the relevant accounts to determine whether the followers’ own posts indicate that they are being incited to violence. If that is the case for a significant number of them, the next step would be to inform the account holder, along these lines: “This is how you are being understood, whether you intend it or not. If you don’t intend to incite violence, please make that very clear in a post.” If the account holder declines to do so, there would be no further need to weigh intent or knowledge. Notice was given.

After that, Facebook would no longer be barred from taking action against the relevant content. “Action” could mean any of a variety of interventions; the choice among them is not our topic here.

It’s no coincidence that Trump was repeatedly asked to repudiate violence, and never did until the last minute. Then he repeated his lie, expressed love for the rioters, and finally suggested they go home. Making that statement after months of continuous incitement was like steadily screwing open a fire hydrant until water was blasting out, and then holding up a cocktail umbrella to stop it.