“Even many conspiracy theorists will respond to accurate facts and evidence — you just have to directly address their specific beliefs and concerns,” said Rand, a professor at the Massachusetts Institute of Technology’s Sloan School of Management.
“While there are widespread legitimate concerns about the power of generative AI to spread disinformation, our paper shows how it can also be part of the solution by being a highly effective educator,” he added.
The researchers examined whether AI large language models such as OpenAI’s GPT-4 Turbo could use their ability to access and summarise information to address persistent conspiratorial beliefs. These included claims that the September 11 2001 terrorist attacks were staged, that the 2020 US presidential election was fraudulent and that the Covid-19 pandemic was orchestrated.
Almost 2,200 participants shared conspiratorial ideas with the LLM, which generated evidence to counter the claims. These dialogues cut participants’ self-rated belief in their chosen theory by an average of 20%, an effect that persisted for at least two months after talking to the bot, the researchers said.
A professional fact-checker assessed a sample of the model’s output for accuracy, finding 99.2% of the LLM’s claims to be true and 0.8% misleading, the scientists said.
The study’s personalised question-and-answer approach is a response to the apparent ineffectiveness of many existing strategies to debunk misinformation.
Another complication with generalised efforts to target conspiratorial thinking is that actual conspiracies do happen, while in other cases sceptical narratives may be highly embellished but based on a kernel of truth.
One theory about why the chatbot interaction appears to work well is that it has instant access to any type of information, in a way that a human respondent does not.
The machine also dealt with its human interlocutors in polite and empathetic terms, in contrast to the scorn sometimes heaped on conspiracy theorists in real life.
Other research, however, suggested the machine’s mode of address was probably not an important factor, Rand said. He and his colleagues had done a follow-up experiment in which the AI was prompted to give factual correction “without the niceties” and it worked just as well, he added.
The study’s “size, robustness, and persistence of the reduction in conspiracy beliefs” suggested a “scalable intervention to recalibrate misinformed beliefs may be within reach”, according to an accompanying commentary also published in Science.
But possible limitations included difficulties in responding to new conspiracy theories and in coaxing people with low trust in scientific institutions to interact with the bot, said Bence Bago of the Netherlands’ Tilburg University and Jean-François Bonnefon of the Toulouse School of Economics, who co-authored the commentary.
“The AI dialogue technique is so powerful because it automates the generation of specific and thorough counter-evidence to the intricate arguments of conspiracy believers and therefore could be deployed to provide accurate, corrective information at scale,” said Bago and Bonnefon, who were not involved in the research.
“An important limitation to realising this potential lies in delivery,” they added. “Namely, how to get individuals with entrenched conspiracy beliefs to engage with a properly trained AI program to begin with.”
Written by: Michael Peel
© Financial Times