Fake news doesn't spread because of Facebook's algorithmic attempts to deliver what the company thinks each user wants to see.
It spreads because of a fundamental disconnect between what "engagement" means to the advertising industry and what it means in newsrooms. To Facebook and Twitter, engagement is likes and shares.
They sell such interactions to advertisers, and they prioritise posts with high engagement in users' newsfeeds. To a journalist, engagement is how many people have read or watched the full story and intellectually engaged with it.
An engaged reader is someone who, after reading this column, will respond intelligently in the comment section or send me an email with her thoughts.
The trouble is that people who "like" and share content often don't read it -- beyond the headline, that is. According to a recent study by Maksym Gabielkov and collaborators, 59 per cent of links circulated on Twitter are never clicked.
NPR ran a brilliant experiment on Facebook in 2014 showing that people will often comment after reading the headline and nothing else.
A recent survey of millennials revealed that one in five of them only ever read headlines (and I suspect the other four weren't quite frank with the researchers).
Facebook, Google, Twitter and the Macedonian hustlers who mass-produced fake pro-Trump stories (headlines, really; it doesn't matter what's in the body of the article) to drive traffic and make a few dollars through Google AdSense all want to keep things as they are. They don't care whether people read what they share and repost, because that's not how their incentives work.
Editors, by contrast, hate this setup. Under it, instead of employing thorough, accurate reporters and well-informed columnists, they might as well outsource most of the work to robots and concentrate on writing catchy headlines. That would kill off the journalistic profession and leave the public woefully uninformed.
Because of the commercial symbiosis between editorial operations and tech platforms, there are all sorts of uneasy compromises.
Editors write sensationalist headlines that don't always match the stories beneath them, and they develop social media strategies to spread these headlines as widely as they can, knowing full well that a majority of those who interact with the posts won't read the linked stories.
Tech companies pretend they want to police the fakes, and in the process they perfect their capability to block content based on certain words. Twitter's recent move to let users block "abuse" by filtering their feeds for certain words falls in the same category.
Getting serious about detecting fakes requires human input: Essentially, as Victoria Rubin and collaborators argued in a 2015 paper, it would require building a data set of various types of fake news to train natural language processing systems. Even if an "automatic crap detector" is ever built, I wouldn't trust it.
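To see what such a detector involves at its simplest, here is a minimal sketch in Python; the headlines, labels and choice of model are invented for illustration and are not Rubin's method. It trains a basic text classifier on a tiny hand-labelled corpus, which is exactly the kind of data set that would have to be built by humans, at scale:

```python
# A minimal sketch, not any real fake-news detector: a bag-of-words
# classifier trained on a tiny, hand-labelled corpus. Every headline
# and label below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Pope endorses candidate in shock announcement",          # labelled fake
    "Doctors hate this one weird trick for instant wealth",   # labelled fake
    "Central bank raises interest rates by a quarter point",  # labelled genuine
    "City council approves budget for road repairs",          # labelled genuine
]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = genuine

# TF-IDF turns each headline into word weights; logistic regression
# learns which words correlate with the "fake" label. The learned
# weights are opaque to a lay reader: the black-box problem.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Shocking: celebrity reveals secret cure"]))
```

Such a system stands or falls on the labelled corpus it was trained on, and its verdicts come with no human-readable reasons attached.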
Journalists, who are professional fact-gatherers and fact-checkers, may disagree about a set of facts. But at least they can argue it out; artificial intelligence is a black box, and if it is allowed to make decisions about which news is fake and which is "real," there will be no way to verify these decisions without some complex reverse-engineering.
In any case, fact-checking has been weaponised and discredited during the UK referendum campaign and the US election: The efforts to analyse the arguments have been defiantly partisan. Besides, what was supposed to be fact-based reporting left most people unprepared for election-day shocks.
This essentially economic conflict over the meaning of engagement is destroying the news industry's value proposition: It is no longer a trusted source of information. This year, only 32 per cent of Americans, and 14 per cent of Republicans, have a "great deal" or even a "fair amount" of trust in the media, compared with 54 per cent and 52 per cent in 1998.
A small minority of people are willing to pay serious money for truthful, painstakingly collected information. Those who don't pay for it should expect their news to come through a far coarser filter. By nature, only propaganda is free, because it's the consumers, not the content, who are being trafficked.
If publications certain of the quality of their information were more resolute in placing all their content behind paywalls, without loopholes or exceptions meant to increase "reach," "engagement" and ad revenues, they would end up with less money and smaller audiences. They would also be forced to prioritise coverage -- something many readers would welcome, I suspect. The social networks would cease to be a major channel for quality content: The links would only be shared among subscribers. Editors would have far more responsive and engaged audiences to deal with. I don't see it happening.
Perhaps the increasingly profitable tech giants will want to show some civic responsibility by rethinking their business model in relation to news.
Advertisers shouldn't be sold deceptive "engagement metrics": Only a story that has been read in full should generate income. That would kill off most of the fakes and sensationalist headlines.
Perhaps some combination of these two approaches could be worked out in a dialogue between the news and tech industries. I hesitate to suggest regulatory interference in freedom of speech matters, but governments could help regulate advertising in a way that would align commercial interests with editorial ones. It's clear that action is needed: Accurate, substantive news is on the brink of extinction, and it's not all the social networks' fault.
Bershidsky, a Bloomberg View contributor, is a Berlin-based writer.