Facebook monitors election-related content around the world from its war room. Photo / AP
A clock on the wall counts down the hours and minutes until zero day. Monitors display live television feeds and data dashboards. Headshots of VIPs and a map of the battle zone are pinned to a noticeboard.
It could be a military command post, but this is Facebook's new election "war room" - a hastily assembled facility at its California headquarters where up to 400 employees keep an eye on the world's democratic contests.
Analysts and engineers confer in an atmosphere of studious hush, watching for signs of malicious activity.
One wall shows a silent feed from another Facebook office in Washington, ready to become a conference call at a moment's notice. The opposite wall bears information about Brazil's election.
The room is staffed 20 hours a day, rising to 24 as polls approach. The aim is to spot misinformation driven by foreign spies or domestic charlatans as it develops - and, if possible, to stop it.
Facebook used to deny the impact of the falsehoods its algorithms helped spread. In November 2016, Mark Zuckerberg claimed it was "pretty crazy" to think that fake news influenced the US presidential election.
But since then, under pressure from governments and the media, his company has changed course, rewriting policies, hiring an investigations team and doubling its "safety and security" staff to 20,000. The war room is part of that effort - and a symbol of Facebook's hesitant acceptance that it wields global power.
"We've been hard at work for the last two years to make sure that we're more prepared than in 2016," said Samidh Chakrabarti, the company's head of civic engagement. "We're committed to getting it right this time ... everyone across the company has a deep sense of responsibility."
The war room is built for speed. Its staff are drawn from 20 departments across Facebook and its subsidiaries, representing thousands of colleagues, and are ready to work together and act fast. Systems scan constantly for floods of foreign political content, spikes in reports from users, or other clues.
Anomalies trigger an alarm and are flagged on a "situation board". Investigators hunt the culprits while moderators decide if content breaks rules. If so, it can be removed. If not, third-party fact-checkers can tag it with a health warning.
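Facebook has not described these systems in technical detail, but the basic idea - comparing the latest hour of user reports against a recent baseline and raising an alarm on a sharp spike - can be sketched in a few lines of Python. The function name, threshold and figures below are purely illustrative, not Facebook's actual code:

    from statistics import mean, stdev

    def is_report_spike(hourly_reports, threshold=3.0):
        """Hypothetical sketch, not Facebook's real system: flag the latest
        hour if user reports exceed the recent baseline by `threshold`
        standard deviations."""
        baseline, latest = hourly_reports[:-1], hourly_reports[-1]
        if len(baseline) < 2:
            return False  # too little history to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return latest > mu  # flat history: any rise counts as anomalous
        return (latest - mu) / sigma > threshold

    # Illustrative numbers: a quiet stretch of reports, then a sudden surge
    history = [12, 9, 11, 10, 14, 8, 13, 95]
    if is_report_spike(history):
        print("Anomaly flagged for the situation board")

In practice any such check would feed a human queue rather than act on its own, which matches the workflow described above: an alert goes to the situation board, investigators hunt the culprits and moderators make the final call.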
During the first round of Brazil's election, for example, staff spotted a fast-spreading story that falsely claimed protests would delay the polls. It was swiftly removed.
But Facebook is limited by its own technology. Much of the fake news in Brazil's election travels not on Facebook but on WhatsApp, where messages are end-to-end encrypted.
Facebook is also limited by its policies. Since 2016 it has applied a range of sanctions to news deemed misleading by outside fact-checkers.
It uses AI to go after the fakers' bogus accounts: between October 2017 and March 2018 it blocked or removed almost 1.3 billion fake accounts.
But Facebook still resists banning fake news outright, insisting instead that it wants to police behaviour.
Renee DiResta, a digital propaganda expert, thinks this is the right approach, allowing Facebook to avoid "allegations of bias [and] censorship" that stricter curbs on fake news could bring.
Finally, Facebook is limited by its business model. "The problem is the social ecosystem was designed to help like-minded people find each other and help content go viral ... effective for propagandists," said DiResta.
Facebook's next tests will be the Brazilian run-off on October 28, then the US midterm elections on November 6. If the war room performs well, it may become permanent.
Either way, its existence shows that Facebook has finally acknowledged its role as a global speech regulator, part of an endless, thankless war against spies, trolls and clickbait merchants.
"This is going to be a constant arms race. This is our new normal," said Katie Harbath, Facebook director of government outreach.