January 15, 2025 · Meta · Mark Zuckerberg · Freedom of Expression · Post-Truth
When Mark Zuckerberg changed Meta’s rules for content moderation on its platforms, he framed the decision as a boost for freedom of expression.
“It’s time to get back to our roots around free expression,” Mark Zuckerberg, Meta’s chief executive, said in a video announcing the changes. The company’s fact-checking system, he added, had “reached a point where it’s just too many mistakes and too much censorship.”
Analysts across the web have dissected the new policy and widely interpreted the move as an attempt to cozy up to the incoming Trump administration. And it certainly seems that way:[1] After all, in the twisted logic of the “free-speech absolutists,” any restrictions on speech amount to censorship.
“This is a standard complaint of the right,” Rebecca Solnit has written, “the real victim is the racist who has been called a racist, not the victim of his racism, the real oppression is to be impeded in your freedom to oppress.”
Content moderation, of course, is incredibly hard (if not impossible) to get right at scale. Where do you draw the line between what’s allowed and what isn’t? Do you try to maintain a semblance of civility, or do you, as Meta has now, just give up under the guise of “free speech”?[2]
Between 2011 and 2015 I worked at a company that had comments on its site and dabbled in content moderation.[3] Even back then, in what I’ve called the pre-post-truth era, we were repeatedly accused of “censorship” when deleting comments.
People felt very strongly that they should be able to say whatever they wanted online, even if their comments were insulting, degrading, or openly racist. When we deleted these comments, we predictably got accused of censorship—so you could argue that Zuckerberg is belatedly catching up with the tendencies of our grievance-based political discourse.
What people got wrong then, and what (I think) they’re getting wrong now, is that speech on a company’s platform isn’t the same as speech in public. Writing on Facebook is fundamentally different from speaking on the street; posting an Instagram comment is not the same as mailing a letter. Platforms, no matter how big or important, are not the public town square; they can make their own rules. And a company running a platform clearly has the right to do so, especially if it depends on advertiser funding and doesn’t want its clients’ ads appearing next to neo-Nazi content.[4]
In fact, I believe that a degree of moderation is the right thing to do for a private platform that is, at the end of the day, run for money. It’s not only about respect: even in purely business terms it makes sense to treat the public decently and not let your platform descend into a shouting match where people promote hatred of “certain races” or call queer people mentally ill. Pretending that’s fine only normalizes it, and it keeps moving the boundaries of acceptable speech inexorably to the right.
The question, instead, is whether platforms should be allowed not to moderate. When does a platform become too big for its own good? So much speech has moved from public spaces to platforms, and platforms have become so dominant in shaping cultural discourse and the way people see the world, that it’s dangerous to let misinformation run wild. Because that’s what’s at stake here: if people are free to claim that women are household objects, if the fundamental freedom of 51% of the world’s population becomes a matter of opinion, then spreading any kind of falsehood becomes not just possible but likely.[5]
I’m nevertheless sympathetic to the slippery slope argument that moderation, especially if state-sanctioned, can turn authoritarian. That’s why it’s important for moderation to have rules, for there to be a reason why some content gets removed—not because political opinions are censored but because there are reasons some speech is deemed undesirable by society at large.
1. Aside from allowing all sorts of vile rhetoric (calling queer people sick, equating women with household objects, and comparing immigrants to vomit, among others), Meta also shut down its DEI initiatives and deleted Messenger themes for trans and nonbinary people. Taken together, these decisions send a clear message that Meta is decidedly falling in line with Trump’s “anti-woke” obsession.
2. I’m aware that in Europe we have a very different culture surrounding speech; there is no First Amendment, and we’re much more used to a certain policing of speech than Americans are.
3. The approach was hardly strategic, and there was no “policy” in place; the site was also too insignificant at the time to warrant one. Effectively, we deleted insults, blatant racism, and threats of violence. In Germany, you also need to watch out for Nazi rhetoric or terms, since those may be unconstitutional.
4. None of that makes Zuckerberg’s changes right! I find them utterly shameful and spineless.
5. A few weeks ago, Elisabeth Lopatto wrote: “I know people are sick of talking about glue on pizza, but I find the large-scale degradation of our information environment that has already taken place shocking.”