Automating content moderation has its virtues: algorithms are quick and cheap. But they are also notoriously error-prone and can enforce bias at massive scale. The avalanche of recent failures shows that algorithms must not be left at the wheel once this crisis ends.

Automatic censorship stopped people from sharing key information just as the virus was gaining footholds around the world. Paige Williams, a staff writer for The New Yorker, tried to post reliable guidance on how to stop the virus from spreading in mid-March, only to have it censored by Facebook as spam. Responding to the critical shortage of face masks for medical workers, volunteers sewed thousands of homemade masks and tried to donate them on Facebook, only to find themselves in “Facebook jail”. The software couldn’t tell the difference between their altruism and an attempt to profit from the crisis, so those posts and reams of others were mistakenly censored, and many accounts were suspended altogether.

Nor has automation proven a panacea for stopping the spread of dangerous misinformation. It has failed to halt the sharing of conspiracy theories alleging a (completely spurious) link between the coronavirus and 5G towers, as well as advertisements recommending that people drink bleach.

These failures come as no surprise to anyone who has studied or tried to build automated tools for detecting hateful or abusive content, including the companies’ own staff. That’s why Facebook, Twitter, and YouTube warned users that they could expect lots of mistaken takedowns – and long delays in getting those mistakes corrected.

Content moderation can be automated with a number of techniques, each of which has its distinct limitations. Tools like keyword filters and hash-matching algorithms can only find previously identified material to block, and don’t take context into account. Machine-learning tools, designed to detect new examples of offending content, need to be trained on clear examples of what is and isn’t a violation – difficult to do when forms of communication evolve constantly, as they do with or without a pandemic. Mike Schroepfer, Facebook’s chief technology officer, told The New York Times last year that the company’s AI was catching only 51 percent of hate speech on the platform.
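To make those limitations concrete, here is a minimal sketch, in Python, of the first two techniques: a keyword filter and a hash-matching check. The blocklists, phrases, and example posts are invented for illustration and are not any platform’s actual rules; the point is only that both methods flag whatever they have been told to look for, with no sense of context or intent.

```python
import hashlib

# Hypothetical blocklists -- illustrative only, not any platform's real rules.
BANNED_PHRASES = {"masks for sale", "miracle cure"}
BANNED_HASHES = {
    # SHA-256 digest of a post a human reviewer previously removed.
    hashlib.sha256("Buy N95 masks here, limited stock!!!".encode()).hexdigest(),
}

def keyword_filter(post: str) -> bool:
    """Flag a post if it contains any banned phrase, regardless of context."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

def hash_match(post: str) -> bool:
    """Flag a post only if it exactly matches previously identified content."""
    return hashlib.sha256(post.encode()).hexdigest() in BANNED_HASHES

posts = [
    "Sewing homemade masks for sale at cost, all proceeds go to the local clinic.",
    "Buy N95 masks here, limited stock!!!",
    "Buy N95 masks here, limited stock!",   # one character changed
]

for post in posts:
    print(f"keyword={keyword_filter(post)!s:5} hash={hash_match(post)!s:5} | {post}")
```

The donation post trips the keyword filter even though it is the opposite of price gouging, and changing a single character is enough to slip past the hash check: the same false positives and easy evasions described above, produced by a dozen lines of code.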

Hate speech and abuse are notoriously hard to detect with software, since the slickest algorithms in the world can’t parse the nuances of human communication. Algorithms often fail to distinguish between an insult and a genuine joke, or miss the many creative ways in which people attack and denigrate one another, such as by imitating the speech patterns of another group. Humans don’t always understand what other people mean, but they can bring cultural, linguistic and social context into consideration.

However, moderating content can be deeply traumatic for human workers, who face a never-ending stream of the worst that society has to offer, often for low pay and in poor working conditions. Automation should not serve as a substitute for providing proper mental health support and fair compensation to the people who enforce platform rules.

To compensate for the well-known weaknesses of AI, several companies have said that they will not punish users permanently for content that is removed automatically during the crisis. Automated removals won’t count against users’ overall record of misbehavior on a platform.

In other words, automation is a temporary, imperfect response to an emergency situation. Companies must remember that, and governments must understand it too. Both must ensure that automated monitoring or “filtering” does not become the new status quo. The European Union has already taken disturbing steps towards mandated filtering. Last year, it passed a Copyright Directive that essentially requires user-generated content sites to implement “upload filters” that attempt to block unlicensed use of copyrighted works. But aggressive copyright filters often make errors of attribution and routinely fail to take fair use into account. Just ask the violinist who found his live-streamed living room recital of a piece by J.S. Bach, whose work has been in the public domain for centuries, muted by Facebook as a copyright violation.

In the meantime, companies must preserve the content they are blocking or removing so that people who believe their posts or accounts were mistakenly censored can appeal once humans are available to consider those appeals. Doing so will also allow researchers to study, after the fact, the specific and systemic impact of this emergency use of automated removal. For now, we do not know which cases of automated censorship were in keeping with the companies’ policies and which were malfunctions of their algorithms. Without that understanding, it is impossible to usefully critique companies on where they draw the line.

Tech companies control more human communication than any government does, or ever has. In emergencies and in ordinary times, they should exercise that enormous power as fairly and transparently as possible.