
Flower speech: new responses to hatred online

This essay first appeared in the Internet Monitor project’s second annual report, Internet Monitor 2014: Reflections on the Digital World. The report, published by the Berkman Center for Internet & Society, is a collection of roughly three dozen short contributions that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.

From threats to rape the children of an online game developer, to a dustpan full of ashes labeled “one good Jew” or a hashtag meaning “let’s burn gays,” hatred online often seems uncontrollable — except with censorship that would also curb freedom of expression. Possibly not, though: tech activists are testing new methods to counter digital vitriol and to protect its targets, employing innovative code, memes, and group efforts.

If successful, this might present an alternative to leaving speech regulation to governments and companies, whose relevant laws and rules are often inconsistent, misapplied, and opaque.

Illustration by Willow Brugh (@willowbl00)

In Myanmar, a new anti-hatred movement called Panzagar or ‘flower speech’ began in response to hateful and dangerous speech in public and private discourse. Much of this speech describes Muslims as an existential threat to Myanmar as a Buddhist nation. Gruesome images of violence with the tagline “this is what Muslims do” appeared on well-known Facebook pages, implying that Buddhists face a mortal threat. Other cases could hardly be more brazen or explicit: one Facebook account was named “We Will Genocide All the Muslims and Feed Them to the Dogs,” in the Myanmar language. (After repeated complaints to Facebook that this violated the company’s real-name policy, that account was taken down.)

Panzagar was created in April 2014 by a group of tech activists led by Nay Phone Latt, an Internet café owner, activist, and poet who was sentenced to 20 years in prison in 2008 for violating Myanmar’s Electronics Act by reporting news that might tarnish the government’s reputation: he had blogged about anti-government protests. He was released in 2012 and is now campaigning against hateful speech and “dangerous speech,” a term that I coined for speech that tends to catalyze violence by pitting one group of people against another.

Panzagar began by creating a meme. A person (usually a cute young woman, as drawn by the team’s anime-loving volunteer illustrators) holds a flower in her mouth. Taking a cue from this symbolic commitment not to use or tolerate speech that can “spread hate among people,” as Nay Phone Latt puts it, thousands of people ‘liked’ Panzagar’s Facebook page within days of its creation, and many have posted photographs of themselves holding flowers in their mouths. This is a courageous act in a country where anti-Muslim feeling is growing and there have been fatal clashes — most recently in July 2014 in Mandalay, after a false rumor that a Buddhist woman had been raped by Muslim men surfaced online and went viral on Facebook.

Facebook so dominates online life in Myanmar that some of its users believe Facebook is the Internet, and have not heard of Google. Many users, however, cannot easily read English, the language of the platform, and are unaware of the ‘community standards’ or the opportunity to report content. Facebook, for its part, couldn’t make much sense of the reports that it received until it hired a Myanmar-speaking subcontractor last year.[1]

Panzagar isn’t waiting for Facebook, though. “We need to moderate ourselves without control by others,” Nay Phone Latt says. “Both restrictive laws and [hateful] speech are dangers to freedom.”

Across the world a few months later, a group of women also argued that violent, hateful content can impinge on freedom of expression. In August 2014, the editors of Jezebel, an American blog directed at women, publicly demanded freedom to write without seeing gifs of rape and other violence against women.

“It’s like playing whack-a-mole with a sociopathic Hydra,” they wrote of their efforts to delete gifs that appeared relentlessly on their site, posted as anonymous comments. For months Jezebel readers saw the images, and staffers deleted them manually since their platform, Kinja, provided no alternative. In August, Jezebel editors took the unusual step of posting a public letter to the management of their parent company, Gawker Media. In that brief but impassioned text, the editors demanded a solution, concluding with an argument that is gaining currency even in the intensely speech-protective land of the First Amendment: some expression constrains the freedom of speech of those who hear or see it, by frightening or even silencing them.[2]

“Gawker has always been a place that would really go to the mat for its writers, a place that offered unmatched freedom to smart people with something to say. It’s time that Gawker Media applied that principle to promoting our freedom to write without being bombarded by porn and gore,” the editors wrote. The next day, Gawker editorial director Joel Johnson apologized. He later announced that Gawker was disabling image uploads in comments as a temporary fix, and would re-introduce moderation. (Commenters can still remain anonymous.)

Also in response to violent misogyny, this time on Facebook, three activists tried another new tactic. Instead of encouraging supporters to complain to Facebook, Soraya Chemaly, Laura Bates, and Jaclyn Friedman asked supporters to tweet their outrage directly to Facebook advertisers whose ads were sometimes popping up alongside photographs of battered women on pages celebrating domestic violence. Within days, Nissan UK announced that it would pull ads from Facebook, followed by several other companies.

Almost immediately, Facebook acknowledged that it had failed to remove hate speech, especially gender-based hate, and promised to “do better.” Since then, according to Chemaly, Facebook has stopped identifying misogynist jokes as “controversial humor” and casts a more critical eye on such content.

Taking hateful posts or images off a single platform cannot protect anyone from being personally targeted with threats, doxxing, and fearsome language and images, however, and such personal targeting has been rampant in the past year. Well-known cases include Zelda Williams, who received tweets with images made to look like the body of her father Robin Williams, soon after he committed suicide; female game developers and critics such as Brianna Wu, Zoe Quinn, and Anita Sarkeesian; and British feminist Caroline Criado-Perez, who was threatened with rape after she campaigned to get women (other than the Queen) featured on UK banknotes. These are, unfortunately, the tip of a grim iceberg.

In response, Hollaback, a nonprofit that works against harassment of women on the street, also wants to “reclaim the streets of the Internet,” as its director Emily May puts it. Hollaback is developing a platform that would encourage online ‘bystanders’ to support people who are being targeted, since this technique has been quite effective offline. It could work even better online, if for no other reason than that bystanders are always available. Millions of them. “With street harassment,” May said, “maybe half the time there’s somebody else around. The difference is that online, 100% of the time there’s somebody else around. It’s a public space.”

Early data from other contexts suggests that online bystanders can indeed shift discourse norms away from hateful speech, especially in virtual public spaces. In Kenya in 2013, for months before a fraught presidential election, iHub Research (a tech company associated with crowdsourcing pioneer Ushahidi) monitored Kenyan online speech for hateful and dangerous remarks. Such speech abounded, especially in newspaper comment sections and on Facebook.

Among KOT (Kenyans on Twitter), however, there was dramatically less hateful and violent speech. Trying to understand the discrepancy, iHub’s monitors noticed spontaneous speech regulation on Twitter: when hateful tweets were directed at members of particular ethnic groups, other users replied with tweets such as “please remember that we are all Kenyan” or “Is this the Kenya that we want?” At least one of the original accounts posted an apology, others stopped tweeting hatred, and some disappeared entirely.

Twitter data from other environments also suggests that “counterspeech” may sometimes convince the authors of hateful speech to change their tune. Nina Davuluri’s selection as Miss America 2014 was met with a storm of furious tweets from Americans who confused the daughter of Indian immigrants with an Arab, which made her a presumptive terrorist to some (others were enraged by her skin tone and refused to recognize her as a “real” American).

In one (typically ungrammatical) tweet, a teenaged boy said, “I am literarily soo mad right now a ARAB won #Miss America.” He received replies telling him that he was wrong and that his tweet was racist. At first he refused to engage with his new interlocutors, but he eventually tweeted at Davuluri directly, apologizing.

University of Illinois chancellor Phyllis Wise became another target as soon as she decided not to declare a snow day in spite of a very cold weather forecast for Monday, January 27, 2014. Students used the hashtag #fuckphyllis to pelt Wise with racist, sexist, and violent threats. Those messages were quickly met with another surge of tweets rebuking the students and defending Wise. The latter messages were more numerous, and may have been more influential.

Influence isn’t the same as changing minds or behavior, and counterspeech cannot be expected to sway hardcore racists, misogynists, and other haters. Producers of online hatred are not all the same, however, just as members of the groups they target differ from one another. It may be possible to shift the attitudes of young people, in particular.

Humor is another tool in frequent, and perhaps sometimes effective, use against online hatred. After a UCLA student named Alexandra Wallace recorded herself ranting against Asian students, others posted parodies and responses that were viewed far more often than Wallace’s clip — in the case of actor Jimmy Wong’s “Ching Chong! Asians in the Library Song,” more than five million times.

In another case, equally crude racism was met with gentle humor. In response to racist soccer fans’ practice of throwing bananas at black players, FC Barcelona’s Dani Alves picked up a banana and took a bite during a match in April 2014. Many of his fellow players and fans posted photos of themselves eating bananas, using the hashtag #SomosTodosMacacos (“we are all monkeys”). It bears studying whether this changed the minds of any racist monkeys.

[1] Facebook employees, in discussion with the author.

[2] Jeremy Waldron, University Professor at NYU, makes a similar argument in his book The Harm in Hate Speech (Cambridge: Harvard University Press, 2012), for example.

A local campaign combating hate speech created Facebook stickers for Myanmar users, encouraging them not to “start fires” and to think before sharing.
