Certainly the company doesn't want to lose its CDA 230 protections. But this isn't only about such legal details. Numerous academics and policy experts believe there are ways to tweak liability rules that would combat misinformation and protect both platforms and users. These measures range from standards for policing misuse to offering companies a trade-off: safe-harbour status in exchange for making their algorithms transparent. In France, for example, there are laws requiring algorithmic transparency in the public sector. And in Germany, social networks must remove unlawful content within 24 hours.
Even if CDA 230 were revoked wholesale, that would only make it possible for particular individuals to sue platforms for particular cases of defamation. Would Hillary Clinton bother suing for election manipulation? I doubt it. Either way, modernising and amending CDA 230 — which should happen — is just one step towards fixing the cesspool of toxic content on so many social media platforms.
Rather, Zuckerberg — presumably with the counsel of his chief operating officer Sheryl Sandberg — insists that Facebook's refusal to fact check Trump is all about protecting free speech. But as Twitter CEO Jack Dorsey has pointed out, the right to free speech is not the same as the right to virality. "Societies have always made judgments about which speech should be protected," says Anya Schiffrin, a senior lecturer at Columbia University specialising in policy solutions around disinformation. She points out that robust democracies throughout Europe have found ways to strike a balance between free speech and disinformation. "In the US, hiding behind the First Amendment has become a way for Big Tech to shut down critics, hobble political opponents and protect profits."
That brings us to what Facebook's stance is really about — power. Like most large, ubiquitous and systemically important companies that operate globally, Facebook aligns itself with the powers that be. If it wants to stay this big and unregulated, Facebook cannot afford to upset the rulers of countries where it operates, no matter how abhorrent their actions. We saw that in Myanmar, where military personnel used Facebook to help incite the Rohingya massacres. Now we see it in the US, where Facebook refuses to run afoul of a president who just called in troops to tear gas citizens.
It is a kind of oligarchic symbiosis that we haven't really seen in the US since 1877. That was when then-president Rutherford B. Hayes, who had been helped into office by the railway barons, ordered 1,200 federal troops to Baltimore to put down what he called a labour "insurrection". It was the first time that federal troops had been turned against American workers, and it transformed what might have remained a local conflict into the Great Railway Strike of 1877.
Zuckerberg says he doesn't want to be an "arbiter of truth." But he already is — as nearly three dozen early Facebook employees put it in a recent open letter that called for the company to fact check the president as Twitter does. "Facebook's behaviour doesn't match the stated goal of avoiding any political censorship," they wrote. "It monitors speech all the time when it adds warnings to links, downranks content to reduce its spread, and fact checks political speech from non-politicians."
So why isn't Facebook warning its users about the untruths of a president who often seeks to embolden the hatemongers and racists that form a part of his base? Because its goals, to make Croesus-style profits and stay as big as possible, are aligned with Mr Trump's goal of winning a second term.
Facebook, perhaps more than any other company in the developed world today, defines dangerous oligarchy.
Written by: Rana Foroohar
© Financial Times