Public Comment on the Oversight Board’s “Brazilian general’s speech” Case

On March 9, the Oversight Board announced the following call for public comments regarding an appeal to remove a video featuring a speech by a Brazilian general:

Call for Public Comment

Brazilian general’s speech

(2023-001-FB-UA)

User appeal to remove content from Facebook

On January 3, 2023, two days after Luiz Inácio Lula da Silva had been sworn in as Brazil’s president, a Facebook user posted a video with a caption in Portuguese. The caption includes a call to “besiege” Brazil’s congress as “the last alternative.” The video shows part of a speech given by a prominent Brazilian general and supporter of Lula’s electoral opponent, in which he calls for people to “hit the streets” and “go to the National Congress… [and the] Supreme Court.” A sequence of images follows the general’s speech, including one of a fire raging in the Three Powers Plaza in Brasília, which houses Brazil’s presidential offices, Congress, and Supreme Court. Text overlaying the image reads, “Come to Brasília! Let’s Storm it! Let’s besiege the three powers.” Text overlaying another image reads “we demand the source code,” a slogan that protestors have used to question the reliability of Brazil’s electronic voting machines. The video was played over 18,000 times, was not shared, and was reported seven times.

Mr. Lula da Silva’s swearing-in had been accompanied by civil unrest, including protests and roadblocks. On January 8, more than a thousand supporters of former president Jair Bolsonaro broke into the National Congress, Supreme Court, and presidential offices, intimidating the police and destroying property. Meta had designated Brazil a temporary high-risk location ahead of the country’s October 2022 general election and, as a consequence, had been removing content “calling for people to take up arms or forcibly invade … federal buildings.” Meta did not announce this designation until January 9.

On the same day the content was posted, a user reported it for violating Meta’s Violence and Incitement Community Standard, which prohibits calls to “forcibly enter locations … where there are temporary signals of a heightened risk of violence or offline harm.” In total, four users reported the content seven times between January 3 and January 4. Following the first report, the content was reviewed by a human reviewer and found not to violate Meta’s policies. The user appealed the decision, but it was upheld by a second human reviewer. The next day, the other six reports were reviewed by five different moderators, all of whom found that it did not violate Meta’s policies. The content was not escalated to policy or subject matter experts for additional review.

One of the users who had reported the content appealed Meta’s decision to the Oversight Board. In their appeal to the Board, they link the content’s potential to incite violence to the movement of people in Brazil “who do not accept the results of elections.”

The Board selected this case to examine how Meta moderates election-related content, and how it is applying its Crisis Policy Protocol in a designated “temporary high-risk location.” Meta developed the Protocol in response to the Board’s recommendation in the “Former President Trump’s suspension” case. This case falls within the Board’s “Elections and civic space” priority.

As a result of the Board selecting this case, Meta determined that its repeated decisions to leave the content on Facebook were in error. Because at-scale reviewers do not record their reasons for making decisions, the company does not have further information about why they found the content did not violate its policies in this case. On January 20, 2023, Meta removed the content, issued a strike against the content creator’s account, and applied a feature limit preventing them from creating new content.

The Board would appreciate public comments that address:

  • The political situation in Brazil in advance of the October 2022 election, and how it shifted between then and January 8, 2023.
  • The relationship between political violence, election denialism, and calls for offline mobilization on social media.
  • When Meta’s election integrity efforts should begin and end, and what criteria should guide decisions about those timeframes, particularly as they relate to transitions of power.
  • How Meta should distinguish between legitimate political organizing and harmful coordinated action.
  • How Meta should treat content attacking or delegitimizing democratic institutions and processes.


Dangerous Speech Project Responds

Susan Benesch, Executive Director of the Dangerous Speech Project, responded on behalf of the DSP in a public comment, reproduced below:

Dangerous Speech Project

Susan Benesch, Executive Director

2023-001-FB-UA

Electoral periods require the most vigorous protection of public peace and freedom of expression, since
without both, elections cannot be properly conducted and democracy may be undermined. Meta must
act accordingly, under the Board’s guidance.

Election denialism, defined by the ACLU as “baselessly casting doubt on or refusing to accept the
outcome of free, fair, and secure elections,” undermines public confidence not only in a particular
election or electoral process, but in public institutions. Election denialism is also routinely and
directly linked to political violence since, as in this case, it serves as a powerful justification for it.¹ It turns the
tables, making political violence seem virtuous: if elections have been stolen, those who protest, even
violently, can see themselves as brave, honorable guardians of democracy, a nation, and/or an
identity group.

Meta must be alert to the fact that election denialism usually begins long before votes are cast, with
assertions that the relevant institutions and technology aren’t adequate to conduct fair elections, and that
the people involved are corrupt. This prepares followers to believe a subsequent assertion that an
election was rigged, and may discourage voting. Denialism is a way of obstructing a fair election, for
example when candidates baselessly predict fraud to try to secure victory whether they win
legitimately or not, telling their followers something like: “if they tell you I lost, you will know the
election was fraudulent.”²

Many elections are not free, fair, and secure, however, and it is vital to allow criticism and inquiry into
possible flaws and irregularities.

To make many key judgments and decisions regarding elections, Meta needs advice from external
experts with specialized knowledge about elections and about electoral processes in specific countries. Well
in advance of an election like the Brazilian one, when there are warning signs for election denialism and
coordinated violence, Meta should convene a body of external experts to advise on country conditions
and answer questions relevant to content moderation, review key moderation decisions, and flag content
they consider harmful. Such bodies would work in conjunction with internal Elections Operations
Centers like the one Meta assembled for the 2022 Brazilian election. They would also build on the sort
of collaboration that Meta enjoyed with Brazil’s Superior Electoral Court (TSE), since the new
collaboration would be more extensive and would include a wider variety of participants.

Experts might be drawn from local or national NGOs and civil society organizations, human rights
defenders, scholars, journalists, an electoral administrative council and/or court like the TSE (where
such an institution is highly regarded and independent), and international monitors from organizations
like the Carter Center, the International Foundation for Electoral Systems, or the United Nations. The identity of experts may be kept confidential to protect them from possible repercussions, but the expert
body’s membership should be disclosed to a trusted external organization (possibly even the Oversight
Board) to verify that Meta has constituted a diverse, nonpartisan body.

Meta should consult such experts on the sort of vital questions posed here, e.g. when it is reasonable to
cast doubt on an electoral process and when it is denialism, and which moderation tools are likely to be
most beneficial in reducing the risk of violence. Similarly, the experts will be invaluable for
distinguishing legitimate political organizing from harmful coordinated action, since often the same
activities and even similar language can be used for both. For example, poll monitoring can be harmless
and constructive, or intimidating to voters and poll workers.

As for the timing of Meta’s election integrity efforts, including convening the external body of
independent experts and civil society representatives described above, such efforts should be launched
as soon as campaigning begins in a country or region, and should continue until a peaceful transition of
power has been completed and (if applicable) denialism subsides, so that the transfer of power is not in
doubt and the relevant institutions and office holders are not in evident danger. Meta’s external expert
bodies should advise on when these concerns have passed and election integrity efforts can be stopped.

If the Board plans to issue a policy advisory opinion on Meta’s approach to addressing election-related
disinformation and violence generally, we would also encourage review of Meta’s handling of political
advertising³ and disparate handling of election-related disinformation and misinformation in different
languages (e.g. fact-checking identical claims in English but not in Spanish)⁴.

Finally, we offer guidance regarding the Board’s last two questions. For distinguishing between
legitimate political organizing and harmful coordinated action, it will be invaluable to consider, with the
help of external experts, what the goals and especially the likely outcomes are. Political organizing may
lead to violence in fraught conditions, but it should neither be meant, nor be likely, to galvanize such
violence.

The Board’s final question is arguably the most difficult, since the right to criticize government and its
institutions and policies is at the core of freedom of speech. Such speech must therefore be protected by
default. However, as the German notion (and constitutional regime) of “militant democracy” holds,
extremists must not be permitted to use democratic tools to destroy democratic institutions and
processes. Meta should make use of its own existing tools to respond to content that is manifestly false
and/or that has a significant chance of inciting violence. To gauge these risks regarding specific content,
it should study responses to it⁵, rely on high-quality research on the links between speech and violence,⁶
and consult experts as described above.


¹ For details on identifying speech that increases the risk that its audience will commit or condone violence, see Dangerous Speech: A Practical Guide.

² This is unfortunately increasingly common. See e.g. Kenyan presidential candidate Raila Odinga in 2017 (“The only way Jubilee [his opponent’s party] can win this election is by rigging…They have no other way.”), and U.S. presidential candidate Donald Trump in August 2020 (“The only way we’re going to lose this election is if the election is rigged”).

³ See e.g. the KU Leuven–NYU audit, which found that Meta misclassified 83% of the ads in their sample that were not declared as political by the advertiser (including both false positives and false negatives), and that Meta allowed more than 70,000 political ads to run in the U.S. during its advertising moratorium around the 2020 election.

⁴ See e.g. Aliya Bhatia’s summary of this issue for the Center for Democracy & Technology and Equis Labs’ research on Spanish-language misinformation.

⁵ For a more detailed description of this idea, see “The Insidious Creep of Violent Rhetoric” in Noēma.

⁶ For an extensive review of such research, see Cathy Buerger’s Speech as a Driver of Intergroup Violence: A Literature Review.
