Since Twitter and Facebook banned Donald Trump and began “purging” QAnon conspiracists, a segment of the chattering class has been making all sorts of wild proclamations about this “precedent-setting” event. So I thought I’d set the record straight.
Everything in the social media ecosystem was once tilted in favor of toxic forces, from the algorithms that push our content feeds toward extremism to the companies’ longstanding reluctance to admit it. Imagine a foosball game on a slanted table. Yes, the little soccer players could try to stop each rush of the rolling ball, but all their spinning wouldn’t matter in the end. Over the past few years, however, that table has started to be righted. Driven by outside pressure over election disinformation, mass killings, and COVID-19 striking close to home — and perhaps most significantly, internal employee revolts — the companies’ leaders have put into place a series of measures that make it harder for toxic forces to thrive. From banning certain types of ads to de-ranking certain lies, these safeguards built up, piece by piece, culminating in the deplatforming of the Internet’s loudest voice.
While it is impossible for anyone to have had complete foresight about this moment, one need only look back at the abuse hurled most often at already marginalized individuals and groups on these platforms to know that the warning signs were there all along. The trouble is that these platforms did not meaningfully change to address the reasons they were being used in bad faith. I am under no illusion that horrible people would be absent from these or any other online platforms. But I do think it is possible that, had those who make decisions at these companies taken more seriously the concerns of those on the receiving end of viral hate, they would have been better equipped to scale their moderation strategies.
My own opinion is that this collision of politics, society, and technology has been a long time coming. As far back as 2010, I have argued that the legislative challenges facing technology will be more acute than technological changes themselves. My argument has been that these social platforms are essentially nation-states and require a higher level of social and civic etiquette, established and enforced through official policies. When evaluating the performance of Twitter, Facebook, and others on this particular score, the phrase I have often used is “dereliction of duty.”
Malik doesn’t directly say this, and I do not want to put words in his mouth, so this is my own extension of his piece: I think part of that duty must lie in careful moderation. That means creating limitations around problematic posts and users very quickly; it also means applying the lightest possible touch. For over a decade now, the largest social platforms have been far too cautious about setting expectations of behavior.
Concern about firefighting efforts doesn’t get us far enough when there are prolific arsonists.