The Insidious Creep Of Violent Rhetoric
(This article was originally published in Noēma Magazine.)
Social media companies have missed an important way to stop users from inciting violence: paying attention to how language is interpreted.
After the 1994 genocide in Rwanda, at the trial of a newspaper editor and two radio station executives, a witness tried to describe how the defendants had helped put nearly half a million people to death without ever touching a machete. What the radio station did, the witness said, “was almost to pour petrol, to spread petrol throughout the country little by little, so that one day it would be able to set fire to the whole country.”
Like genocidaires, rioting crowds must be primed for violence. No one would smash their way into a building on the basis of only one rant, no matter how convincing.
In the weeks before the January 6 attack on the U.S. Capitol, Donald Trump incited violence, not genocide, but the process was similar: a gradual, insidious, relentless drip of inflammatory material that slowly made people more likely to condone or commit violence, though he typically used ambiguous language, just as most inciters do. Facebook, Twitter and other platforms failed to prevent him from successfully inciting violence; they shut down his accounts only after serious, predictable violence happened.
There’s not much point in prohibiting incitement to violence if you define it only in a rear-view mirror, after it leads to violence. So the essential question for moderating online accounts like Trump’s and those of other violence-endorsing strongmen around the world, such as President Rodrigo Duterte of the Philippines or President Jair Bolsonaro of Brazil, as well as their acolytes and mouthpieces, is this: How should companies decide whether a particular drop of petrol is “actionable,” as they put it? At what point does the risk of harm outweigh the right of a political figure to speak and the right of an audience to read or hear what they want to say, wherever and however they choose to say it?
The words are typically equivocal, as the politicians’ readers and followers know just as well as the moderators. A Trump supporter implied as much with a comment late last year on the online forum TheDonald: “He can’t exactly openly tell you to revolt. This is the closest he’ll ever get.” For now, when trying to decide whether to intervene on an ambiguous tweet or post by a major political figure, content moderation staffers carefully parse the words and wonder about their author’s intent.
For example, when Trump posted in May 2020, “When the looting starts, the shooting starts,” evidently referring to Black Lives Matter protests, social media company staff wondered whether he was actually threatening violence. Were his words a deliberate reference to the Miami police chief who, in 1967, threatened to use dogs and guns to curb violence in the city’s Black neighborhoods, and said “I’ve let the word filter down that when the looting starts, the shooting starts”?
Or was that just a coincidence? And was Trump merely “glorifying” violence rather than threatening it, as Twitter claimed, confusingly, in its comment on the tweet? Such debates are time-consuming and inconclusive, just like efforts to discover what account-holders like Trump intended to convey or inspire with their woolly language. Intent is impossible to discover — we have all wondered fruitlessly what Trump was thinking at one time or another — but, fortunately, it’s not relevant.
What really matters for preventing violence is how content is understood by its audience, especially people who might commit or condone violence, as I’ve learned studying rhetoric that increases the risk of violence at the Dangerous Speech Project. Content moderation staff should focus on real-world potential impacts and consequences, not unknowable states of mind or hypothetical meanings.
To better determine the risk of violence, and also to demand more of those who have an extra measure of power and influence, social media platforms should hold such people accountable when the content they post is understood by their followers to call for crime. Until they took Trump down, the platforms rarely punished powerful people at all for misusing their online megaphones, even as they banned millions of less influential accounts. But public figures — including politicians, religious leaders and even sports and music stars — can do much more harm online than others, since they influence large numbers of people.
Facebook, Twitter, YouTube and other platforms give powerful people unprecedented direct access to millions of listeners, but haven’t even required them to follow the rules under which the companies regulate the rest of us. Instead of enforcing rules equally for all users, the companies have used a lower standard for the powerful and influential, or have gone so far as to give them carte blanche to post just about anything.
In September 2019, for example, Facebook decided to exempt all politicians and political candidates worldwide from Facebook’s rules against hate speech. “It is not our role to intervene when politicians speak,” said Nick Clegg, the former British politician who became a senior Facebook executive. Clegg did allow that Facebook would make exceptions “where speech endangers people.” What he didn’t explain is how Facebook would identify such speech.
Facebook and other companies have skipped an important way to do that. A major advantage of online public discourse is that it’s quite easy and quick to discover how large numbers of people understand particular content from the way they discuss it — especially if you’re at a company in possession of the data.
Before billions of people became accustomed to expressing themselves in writing online, the only practical way to find out what large numbers of them thought was polling, which is slow and sometimes unreliable. But now, a company could easily identify which of its users are spreading disinformation and using language that is threatening or tends to increase fear and a sense of grievance.
The company could build software to monitor those accounts and their followers’ reactions to them, looking for significant shifts in the sentiment of the followers’ comments or posts, and signs that a critical mass of followers understand the political figure to be endorsing or ordering violence.
For example, on Dec. 19 (at 1:42 a.m.), Trump tweeted: “Big protest in D.C. on January 6th. Be there, will be wild!” There was no evident call to violence in the tweet, but many members of the notorious, open online forum TheDonald immediately reacted with posts like: “Well, shit. We’ve got marching orders, bois” and “We have been waiting for Trump to say the word.” To that, another replied, “Then bring the guns we shall,” and they went on to describe plans for attacking the Capitol and arresting or even killing legislators. It was abundantly clear, more than two weeks before they went to Washington, D.C., that they had been incited to violence.
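To make the idea concrete, here is a minimal sketch, in Python, of how a platform might flag the kind of shift described above: a sudden rise in violence-endorsing comments around an influential account. Everything in it is an illustrative assumption rather than any company’s actual system — the `Comment` data model, the `violence_score` field (which would come from whatever text classifier a company already uses), and the windows and thresholds.

```python
# Minimal sketch (not a production system): flag a sudden shift toward
# violence-endorsing language in comments on an influential account.
# Data model, classifier scores and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable

@dataclass
class Comment:
    author_id: str
    text: str
    posted_at: datetime
    violence_score: float  # 0..1, from some text classifier; stubbed out here

def share_endorsing(comments: Iterable[Comment], threshold: float = 0.7) -> float:
    """Fraction of comments whose classifier score exceeds the threshold."""
    comments = list(comments)
    if not comments:
        return 0.0
    return sum(c.violence_score >= threshold for c in comments) / len(comments)

def spike_detected(history: list[Comment], now: datetime,
                   recent_window: timedelta = timedelta(hours=6),
                   baseline_window: timedelta = timedelta(days=14),
                   ratio: float = 3.0, floor: float = 0.05) -> bool:
    """True if the recent share of violence-endorsing comments is both
    non-trivial (above `floor`) and several times the longer-run baseline."""
    recent = [c for c in history if now - c.posted_at <= recent_window]
    baseline = [c for c in history
                if recent_window < (now - c.posted_at) <= baseline_window]
    recent_share = share_endorsing(recent)
    baseline_share = share_endorsing(baseline)
    return recent_share >= floor and recent_share >= ratio * max(baseline_share, 0.01)
```

A real system would, of course, route flagged accounts to human reviewers rather than act automatically — which is exactly what the proposal here envisions.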
Inciters commonly use language that their followers understand as a call to attack, but outsiders don’t. It’s no coincidence: Coded language is a social glue that holds groups of people together and binds them more tightly to a leader.
Take Shah Ahmad Shafi, an ultra-conservative Muslim cleric in Bangladesh with millions of followers. He said in a 2013 speech that “Women are like tamarind, tamarind, tamarind! If you go to the market where tamarind is sold, your mouth will water. Women are worse than that!” (Tamarind is a popular candy in Bangladesh.) He went on to say that women should be kept “within four walls” at home — outside, they are dangerous temptations to men, and by implication, they should be punished.
The speech was posted on YouTube and Facebook and quickly inspired severe beatings of women, especially women journalists who attempted to cover a rally of Shafi’s group. This wasn’t the first time that the cleric had made such remarks about women, nor the first time his followers had taken them as encouragement to beat women, at home and in public.
Influential people like Shafi, who have already spread false and/or inflammatory content that can incite violence, make up only a tiny proportion of billions of accounts — not too difficult for social media companies to monitor. If the prospect of such surveillance is still alarming, remember that companies already do it constantly for their own internal purposes: They already monitor some of the same data that would be useful for detecting incitement, but they do it to maximize “user engagement.” In this case, they would be trying to protect a significant public interest instead. Also, they need not and should not rely only on their own data. Much useful and relevant information, like comments on public Facebook pages or posts on TheDonald, is not private.
When a company becomes aware of a sudden surge in reactions to an influential account or a change in the emotional tone of many of the reactions, its staff could review relevant accounts to determine whether followers’ posts and comments indicate that a significant number of them are being incited to violence. What number is “significant” would depend on the circumstances, such as whether the followers have access to weapons and what their desired targets are. Company staff would make educated guesses, based on the followers’ own comments and posts, as to whether they are being incited.
When that’s the case, the next step would be to inform the account holder with a message like: “This is how you are being understood, whether you intend it or not. If you don’t intend to incite violence, please make that very clear in a post.” If the account holder declines to do that within a reasonable period of time, there would be no more need to dwell on intent or knowledge. Notice was given, and the company would assume that the poster was satisfied with the way their followers interpreted their words. Company staff could then take action on the relevant content and/or the account that posted it.
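The notice-and-response step could be framed as a simple case workflow. The sketch below, again in Python, is purely illustrative; the state names, the 24-hour deadline and the escalation rule are assumptions standing in for whatever policy a company would actually adopt.

```python
# Illustrative sketch of the notice-and-response workflow described above.
# States, deadline and escalation rule are assumptions, not any platform's policy.
from datetime import datetime, timedelta
from enum import Enum, auto
from typing import Optional

class CaseState(Enum):
    FLAGGED = auto()      # reviewers judge that followers read the post as a call to violence
    NOTICE_SENT = auto()  # account holder asked to disavow violence clearly and publicly
    DISAVOWED = auto()    # a clear public disavowal was posted within the deadline
    ACTIONABLE = auto()   # no disavowal in time; the content or account may be actioned

def advance(state: CaseState,
            notified_at: Optional[datetime],
            now: datetime,
            disavowal_posted: bool,
            deadline: timedelta = timedelta(hours=24)) -> CaseState:
    """Move a flagged case forward one step, following the process sketched above."""
    if state is CaseState.FLAGGED:
        return CaseState.NOTICE_SENT
    if state is CaseState.NOTICE_SENT:
        if disavowal_posted:
            return CaseState.DISAVOWED
        if notified_at is not None and now - notified_at > deadline:
            return CaseState.ACTIONABLE
    return state
```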
Not only would this method allow companies to make evidence-based judgments about what content might inspire violence, it would also provide a warning — a bit of due process — to the user who posted the content, and give them an opportunity to speak out against possible harmful consequences. Such a statement would likely be far more influential with the followers than a rebuke from the platform.
There are, of course, many questions that would need to be answered before implementing this method. For example, who should be considered an influential political figure? Heads of state are an obvious place to start, and, like other new content moderation policies, this one should be tried first as a small experiment, with a short list of possible inciters.
If the test proves effective, the list of politicians could be expanded to include candidates for office and former politicians, as well as non-politicians with significant power over specific audiences, like religious leaders and other people with online audiences over a certain size.
Also, companies may find themselves with one more content moderation challenge: determining whether a politician’s claim that they didn’t mean to incite violence is sufficiently sincere or convincing. A strong repudiation can make all the difference, but many politicians are far from disappointed when their followers understand them to be calling for violence, though they don’t say so directly. To determine whether a disavowal is real, companies should again pay attention to how it has been understood by its author’s followers. Political leaders and their followers tend to understand each other quite well, even when the leader’s language is coded.
It’s no coincidence that Trump was repeatedly asked to repudiate violence and the violent — for example, after the August 2017 Unite the Right march in Charlottesville, Virginia; during a presidential debate in September 2020, regarding the Proud Boys white supremacist group; and on January 6. But it was not until the attack on the Capitol had been underway for hours that he did so, and it wasn’t much of a repudiation: He repeated his lie about the election, expressed love for the rioters and finally, tepidly, suggested they go home.
Making that statement after years of dangerous speech was like steadily screwing open a fire hydrant until water was blasting out, and then holding up a cocktail umbrella to stop it. It is time for social media companies to hold powerful people accountable for opening up those hydrants.