The strategies and tools being deployed by Facebook, TikTok and other companies during this year's US midterm vote often resemble tactics developed to deal with misinformation in past elections: partnerships with fact-checking groups, warning labels, portals with vetted explainers, and post removals and user bans.
Social media platforms have attempted pre-bunking before, though those efforts did little to slow the spread of false information. Most were also not as detailed — or as entertaining — as the videos used in the researchers' studies.
Twitter said this month it would try to "enable healthy civic conversation" during the midterm elections in part by reviving pop-up warnings, which it used during the 2020 election. Warnings, written in multiple languages, will appear as prompts placed atop users' feeds and in searches for certain topics.
The new paper details seven experiments with almost 30,000 total participants. The researchers bought YouTube ad space to show users in the United States 90-second animated videos aiming to teach them about propaganda tropes and manipulation techniques. One million adults watched one of the ads for 30 seconds or longer.
The users were taught about tactics such as scapegoating and deliberate incoherence, or the use of conflicting explanations to assert that something is true, so that they could spot lies. Researchers tested some participants within 24 hours of seeing a pre-bunk video, and found a 5 per cent increase in their ability to recognise misinformation techniques.
One video opens with a mournful piano tune and a little girl grasping a teddy bear, as a narrator says, "What happens next will make you tear up." Then the narrator explains that emotional content compels people to pay more attention than they otherwise would, and that fearmongering and appeals to outrage are keys to spreading moral and political ideas on social media.
The video offers examples, such as headlines that describe a "horrific" accident instead of a "serious" one, before reminding viewers that if something they see makes them angry, "someone may be pulling your strings."
Beth Goldberg, one of the paper's authors and the head of research and development at Jigsaw, a technology incubator within Google, said in an interview that pre-bunking leans into people's innate desire to not be duped.
"This is one of the few misinformation interventions that I've seen at least that has worked not just across the conspiratorial spectrum, but across the political spectrum," Goldberg said.
Jigsaw will start a pre-bunking ad campaign on YouTube, Facebook, Twitter and TikTok at the end of August for users in Poland, Slovakia and the Czech Republic, meant to head off fearmongering about Ukrainian refugees who entered those countries after Russia invaded Ukraine. It will be done in concert with local fact-checkers, academics and disinformation experts.
The researchers do not have plans for similar pre-bunking videos before the midterm elections in the United States, but they are hoping other tech companies and civil groups will use their research as a template for addressing misinformation.
However, pre-bunking is not a silver bullet. The tactic was not effective on people with extreme views, such as white supremacists, Goldberg said. She added that elections are tricky to pre-bunk because people hold such entrenched beliefs. The effects of pre-bunking also fade, lasting only a few days to a month.
Groups focused on information literacy and fact-checking have employed various pre-bunking strategies, such as a misinformation-identifying curriculum delivered over two weeks of texts, or lists of bullet points with tips such as "identify the author" and "check your biases." Online games with names like Cranky Uncle, Harmony Square, Troll Factory and Go Viral try to build players' cognitive resistance to bot armies, emotional manipulation, science denial and vaccine falsehoods.
A study conducted in 2020 by researchers at the University of Cambridge and at Uppsala University in Sweden found that people who played the online game Bad News learned to recognise common misinformation strategies across cultures. Players in the simulation were tasked with amassing as many followers as possible and maintaining credibility while they spread fake news.
The researchers wrote that pre-bunking worked like medical immunisation: "Preemptively warning and exposing people to weakened doses of misinformation can cultivate 'mental antibodies' against fake news."
Tech companies, academics and nongovernmental organisations fighting misinformation have the disadvantage of never knowing what lie will spread next. But professor Stephan Lewandowsky of the University of Bristol, a co-author of Wednesday's paper, said propaganda and lies were predictable, nearly always drawn from the same playbook.
"Fact-checkers can only rebut a fraction of the falsehoods circulating online," Lewandowsky said in a statement. "We need to teach people to recognize the misinformation playbook, so they understand when they are being misled."
This article originally appeared in The New York Times.
Written by: Nico Grant and Tiffany Hsu
Photographs by: Scott McIntyre
© 2022 THE NEW YORK TIMES