[Image caption: An example of fake news that was a top search result on Google.]
Facebook chief executive Mark Zuckerberg took to the social network's website over the weekend to dispute allegations that "fake news" had tilted the election for Republican Donald Trump. "More than 99% of what people see is authentic," he wrote, adding it was "extremely unlikely hoaxes changed the outcome of this election."
But such reassurances have buckled under mounting criticism. Late Monday, Zuckerberg acted, joining Google in taking the most serious steps yet to crack down on purveyors of phony stories by cutting off a critical source of funding: the ads that online platforms have long funneled to creators of popular content.
The move has raised new questions about long-standing claims by Facebook, Google and other online platforms that they have little responsibility to exert editorial control over the news they deliver to billions of people, even when it includes outright lies, falsehoods or propaganda that could tilt elections.
Such claims became increasingly unsustainable amid reports that News Feed and Trending Topics, two core Facebook products, had promoted a number of false, misleading and fantastical political stories, such as an article saying Pope Francis had endorsed Donald Trump, which was shared by more than 100,000 users. There were "vote online" memes assuring Democrats in Pennsylvania that they could cast their ballots from home, and a widely shared news release claiming Hillary Clinton's health disqualified her from serving as president.
Over the weekend, the No. 1 Google hit for the search "final election count" was an article from a little-known site claiming that Donald Trump had won the popular vote by 700,000 votes. (Clinton won the popular vote.)
Facebook, Google and other Web companies have sought to walk a fine line: They don't want to get into the practice of hiring human editors, which they think would make them vulnerable to criticisms of partisan bias and stray from their core business of building software. Yet outsiders, as well as some within Silicon Valley, are increasingly clamoring for technology giants to take a more active role in policing the spread of deceptive information.
"It is very difficult for Facebook to say they are not a gatekeeper when they drive such an enormous share of the attention of most news consumers across the world," said Joshua Benton, director of the Nieman Journalism Lab at Harvard University. "They need to figure out some editorial mechanism; with their scale comes responsibility."
When Facebook detects that more people than usual are clicking on any given story, the company's software algorithms instantaneously spread and promote that story to many other users in the network - enabling articles to "go viral" in a short period of time and making it harder to catch false news before it spreads widely.
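That feedback loop can be pictured with a short sketch. What follows is a hypothetical illustration only, assuming a simple clicks-versus-baseline heuristic; Facebook's actual News Feed ranking is proprietary, and every name, class and threshold here is invented for the example.

```python
# A minimal sketch of the virality-driven promotion described above.
# All identifiers and numbers are hypothetical, not Facebook's real code.

from collections import defaultdict

BOOST_THRESHOLD = 3.0  # promote when clicks run 3x the story's usual rate


class ViralityPromoter:
    def __init__(self):
        # Expected clicks per interval for each story (assumed baseline model).
        self.baseline_clicks = defaultdict(lambda: 1.0)
        # Clicks actually observed in the current interval.
        self.recent_clicks = defaultdict(int)

    def record_click(self, story_id: str) -> None:
        self.recent_clicks[story_id] += 1

    def stories_to_promote(self) -> list[str]:
        # A story whose observed click rate far exceeds its baseline gets
        # surfaced to many more users; nothing in this loop checks whether
        # the story is true, which is why a hoax can outrun verification.
        return [
            story_id
            for story_id, clicks in self.recent_clicks.items()
            if clicks / self.baseline_clicks[story_id] >= BOOST_THRESHOLD
        ]
```

The point of the sketch is the design choice it makes visible: promotion is triggered by engagement alone, before any verification step, so speed of spread and difficulty of fact-checking rise together.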
The moves by Google and Facebook this week take aim at false stories but don't affect Facebook's News Feed or Google Search rankings, the places where fake news has actually spread. Instead, the tech giants are seeking to exert financial leverage that could prompt sites to clean up their act. Publishers make money when Google and Facebook help place ads on their websites, and they share that revenue with the technology giants. Companies found to be in violation of the policy risk losing a major source of ad dollars.
Nearly 1.2 billion people log on to Facebook every day; almost half of Americans rely on the social network as a source of news, according to the Pew Research Center. Google accounts for roughly 40 percent of traffic to news sites, according to Parse.ly, a startup that analyzes Web traffic data for news publishers.
Google spokeswoman Andrea Faville declined to detail how the company would determine the difference between false and accurate information going forward. "We use a combination of automated systems and human review," she said.
Facebook spokesman Tom Channick emailed a statement about the new policy: "In accordance with the Audience Network policy, we do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news. While implied, we have updated the policy to explicitly clarify that this applies to fake news. Our team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance."
Facebook, Google and other sites have struggled to find automated solutions to clamp down on fake news, because there is not always a clear line between true and false news online, said a former Facebook employee who worked on the News Feed product and spoke on the condition of anonymity because he did not want to burn bridges with the company.
The most influential sources of political misinformation on Facebook are not Macedonian fake-news sites or satirical pages but the thousands of partisan news outlets, pages and blogs that derive their traffic from News Feed, the employee said. Because those stories aren't strictly "true" or "false," it's difficult to design a software algorithm that can reliably sort them.
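A toy sketch helps show why. Assuming a naive blacklist filter (all claims and headlines below are invented for illustration, and no real system works this simply), software can match fabricated statements against known debunked facts, but a slanted, technically accurate headline gives it nothing to match:

```python
# A toy illustration, not any company's real system, of why a binary
# true/false filter struggles: outright fabrications can be checked
# against a list of debunked claims, but misleading-yet-factual framing
# contains no falsehood for software to detect.

KNOWN_FALSE_CLAIMS = {
    "pope francis endorses donald trump",       # fabricated outright
    "trump won the popular vote by 700,000",    # contradicted by the count
}


def looks_fabricated(headline: str) -> bool:
    text = headline.lower()
    return any(claim in text for claim in KNOWN_FALSE_CLAIMS)


# The filter catches the outright hoax...
assert looks_fabricated("BREAKING: Pope Francis endorses Donald Trump")
# ...but a partisan headline that asserts nothing checkable sails through,
# and no lookup table can rule its framing "true" or "false."
assert not looks_fabricated("Clinton's record raises serious questions")
```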
Some civil liberties experts said it was dangerous to push Facebook to take on a greater editorial role. "If we wouldn't trust the government to curate all of what we read, why would we ever think that Facebook or any one company should do it?" said Jonathan Zittrain, faculty director at the Berkman Klein Center for Internet and Society at Harvard.
Over the past year, Facebook has struggled to strike a balance. In May, the company was accused by former employees of suppressing news stories that had a conservative bent in its trending-news section, which appears in the upper-right corner of a user's Facebook page. In an effort to quell the criticism, Zuckerberg met with conservative leaders and launched an internal investigation into bias at the social network. (The investigation, conducted by Facebook and not by an outside firm, found no evidence of anti-conservative bias.)
The company also published a set of editorial guidelines that detail how human editors and algorithms work together to pick which stories should appear in Trending Topics.
The controversy over trending news reflects Facebook's challenges in curating content on its platform. Facebook had quietly used a handful of journalists, hired as freelance contractors, to help curate the Trending section. Those contractors were replaced in late August by a team of engineers who defer most editorial decisions to the Trending section's algorithms. The social network still has an editorial team that manages trending news, and it also allows users to flag hoaxes or fake stories in its News Feed product.
In recent months, Zuckerberg has repeatedly emphasized that Facebook is a technology company, not a media company. Still, "there is more we can do here," he said in his weekend blog post. He said he was proud of Facebook's role in the election and hoped to have more to share soon. The challenge, he wrote, is that any major changes run the risk of introducing unintended side effects or biases.
"This is an area where I believe we must proceed very carefully," he wrote, adding, "Identifying the 'truth' is complicated."