Facebook, Twitter and YouTube face a number of challenges on and after Election Day. Photo / Shira Inbar, The New York Times
The sites are key conduits for communication and information. Here's how they plan to handle the challenges facing them before, on and after Election Day.
Facebook, YouTube and Twitter were misused by Russians to inflame US voters with divisive messages before the 2016 presidential election. The companies have spent the past four years trying to ensure that this November isn't a repeat.
They have spent billions of dollars improving their sites' security, policies and processes. In recent months, with fears rising that violence may break out after the election, the companies have taken numerous steps to clamp down on falsehoods and highlight accurate and verified information.
We asked Facebook, Twitter and YouTube to walk us through what they have done, are doing and plan to do before, on and after Election Day. Here's a guide.
Facebook

Before the election

Since 2016, Facebook has poured billions of dollars into beefing up its security operations to fight misinformation and other harmful content. It now has more than 35,000 people working in this area, the company said.
One team, led by a former National Security Council operative, has searched for "coordinated inauthentic behavior" by accounts that work in concert to spread false information. That team, which delivers regular reports, will be on high alert Tuesday (Wednesday NZ time). Facebook has also worked with government agencies and other tech companies to spot foreign interference.
To demystify its political advertising, Facebook created an ad library so people can see what political ads are being bought and by whom, as well as how much those entities are spending. The company also introduced more steps for people who buy those ads, including a requirement that they live in the United States. To prevent candidates from spreading bad information, Facebook stopped accepting new political ads on October 20.
At the same time, it has tried to highlight accurate information. In June, it rolled out a voter information hub with details on when, how and where to register to vote, and it is promoting the feature atop News Feeds through Tuesday. It has also said it will act swiftly against posts that try to dissuade people from voting, has limited the forwarding of messages on its WhatsApp messaging service and has begun working with Reuters on how to handle verified election results.
Facebook has kept making changes right up to the last minute. Last week, it said it had turned off political and social group recommendations and temporarily removed a feature on Instagram's hashtag pages to slow the spread of misinformation.
Election Day
On Tuesday, an operations center with dozens of employees — what Facebook calls a "war room" — will work to identify efforts to destabilise the election. The team, which will work virtually because of the coronavirus pandemic, has already been in action and is operating smoothly, Facebook said.
Facebook's app will also look different Tuesday. To prevent candidates from prematurely and inaccurately declaring victory, the company plans to add a notification at the top of News Feeds letting people know that no winner has been declared until election results are verified by news outlets like Reuters and The Associated Press.
Facebook also plans to deploy, if needed, special tools that it has used in "at-risk countries" like Myanmar, where election-related violence was a possibility. The tools, which Facebook has not described publicly, are designed to slow the spread of inflammatory posts.
After the election
After the polls close, Facebook plans to suspend all political ads from circulating on the social network and its photo-sharing site, Instagram, to reduce misinformation about the election's outcome. Facebook has told advertisers that they can expect the ban to last for a week, although the timeline isn't set in stone and the company has been publicly noncommittal about the duration.
"We've spent years working to make elections safer and more secure on our platform," said Kevin McAlister, a Facebook spokesman. "We've applied lessons from previous elections, built new teams with experience across different areas and created new products and policies to prepare for various scenarios before, during and after Election Day."
Twitter
Before the election
Twitter has also worked to combat misinformation since 2016, in some cases going much further than Facebook. Last year, for instance, it banned political advertising entirely, saying the reach of political messages "should be earned, not bought."
At the same time, Twitter started labelling tweets by politicians if they spread inaccurate information or glorify violence. In May, it added several fact-checking labels to President Donald Trump's tweets about Black Lives Matter protests and mail-in voting, and restricted people's ability to share those posts.
In October, Twitter began experimenting with additional techniques to slow the spread of misinformation. The company added context to trending topics and limited users' ability to quickly retweet content. The changes are temporary, although Twitter has not said when they will end.
The company also used push notifications and banners in its app to warn people about common misinformation themes, including falsehoods about the reliability of mail-in ballots. And it expanded its partnerships with law enforcement agencies and secretaries of state so they can report misinformation directly to Twitter.
In September, Twitter added an Election Hub where users can find curated information about polling, voting and candidates. The company has said it will remove tweets that call for interference with voters or polling places, or that intimidate people to dissuade them from voting.
"The whole company has really been mobilised to help us prepare for and respond to the types of threats that potentially come up in an election," said Yoel Roth, Twitter's head of site integrity.
Election Day
On Tuesday, Twitter's strategy is twofold: One team will use algorithms and human analysts to root out false claims and the networks of bots that spread them, while another team highlights reliable information in the Explore and Trends sections of its service.
Twitter plans to add labels to tweets from candidates who claim victory before the election is called by authoritative sources. At least two news outlets will need to independently project the results before a candidate can use Twitter to celebrate his or her win, the company said.
People looking for updates Tuesday will be able to find them in the Election Hub, Twitter said.
After the election
Twitter will eventually allow people to retweet again without prompting them to add their own context. But many of the changes for the election — like the ban on political ads and the fact-checking labels — are permanent.
YouTube
Before the election
For Google's YouTube, it wasn't the 2016 election that sounded a wake-up call about the toxic content spreading across its website. That moment came in 2017 when a group of men drove a van into pedestrians on London Bridge after being inspired by YouTube videos of inflammatory sermons from an Islamic cleric.
Since then, YouTube has engaged in an often confusing journey to police its site. It has overhauled its policies to target misinformation, while tweaking its algorithms to slow the spread of what it deems borderline content — videos that do not blatantly violate its rules but butt up against them.
It has brought in thousands of human reviewers to examine videos to help improve the performance of its algorithms. It has also created a so-called intelligence desk of former analysts from government intelligence agencies to monitor the actions of foreign state actors and trends on the internet.
Neal Mohan, YouTube's chief product officer, said that he held several meetings a week with staff to discuss the election, but that there was no last-minute effort to rewrite policies or come up with new approaches.
"Of course, we're taking the elections incredibly seriously," he said in an interview. "The foundational work that will play a really major role for all of this began three years ago when we really began the work in earnest in terms of our responsibility as a global platform."
Before Tuesday, YouTube's home page will also feature links to information about how and where to vote.
Election Day
On Tuesday, Mohan plans to check in regularly with his teams to keep an eye on anything unusual, he said. There will be no "war room," and he expects that most decisions to keep or remove videos will be clear and that the usual processes for making those decisions will be sufficient.
If a more nuanced decision is required around the election, Mohan said, it will be escalated to senior people at YouTube, and the call will be made as a group.
YouTube said it would be especially sensitive about videos that aimed to challenge the election's integrity. YouTube does not allow videos that mislead voters about how to vote or the eligibility of a candidate, or that incite people to interfere with the voting process. The company said it would take down such videos quickly, even if one of the speakers was a presidential candidate.
As the polls close, YouTube will feature a playlist of live election results coverage from what it deems authoritative news sources. While YouTube would not provide a full list of the sources, the company said it expected the coverage to include news videos from the major broadcast networks, as well as CNN and Fox News.
After the election
Starting Tuesday and continuing as needed, YouTube will display a fact-check information panel above election-related search results and below videos discussing the results, the company said. The information panel will feature a warning that results may not be final and provide a link to real-time results on Google with data from the AP.
Google has said it will halt election advertising after the polls officially close. The policy, which extends to YouTube, will temporarily block any ads that refer to the 2020 election, its candidates or its outcome. It is not clear how long the ban will last.