From the archives: Deep fakes are becoming ever more ubiquitous on the social media landscape. AI-generated pornographic images and video, featuring Taylor Swift, recently deluged social media – especially X, the site formerly known as Twitter. X took hours to remove the nonconsensual deepfake porn and eventually blocked searches for the singer’s name. Writing in the Guardian, author Jill Filipovic labelled deepfake porn “a potent new weapon for harassment”. It has renewed calls for toughening up the laws around AI, particularly when it is used for sexual harassment.
In this 2019 feature from the New Zealand Listener archives, Gavin Ellis investigates how message manipulation using bots, algorithms and now, AI software is making it harder to know what’s real – and threatening democracy itself.
The Word of the Year for 2019 will be “disinformation”. It is a natural extension of two of 2018’s Words of the Year: “toxic” and “misinformation”. Toxic misinformation has the ingredients that combine to produce disinformation.
Why not simply “fake news”? [Former] US President Donald Trump has appropriated that to undermine media he does not like. So journalists and academics have settled on “disinformation” to describe this dangerous form of “alternative facts” that has the potential to undermine civic institutions and democracy itself. The Oxford Dictionary defines it as information intended to mislead. We could add another layer: the identity of the perpetrator is often disguised.
Disinformation is not new. In 32BC, Octavian (later Emperor Augustus) used what was almost certainly a fake will to shaft his rival for leadership in Rome. According to the will, Mark Antony would bequeath large tracts of Rome’s territory in the Eastern Mediterranean to his children by Cleopatra. It was a classic piece of propaganda that labelled him a traitor. What is new is the use of social-media platforms and artificial intelligence to create an environment in which almost two-thirds of people - including New Zealanders - in the 2018 Edelman Trust Barometer international survey did not know how to tell good journalism from rumour or falsehood.
Why will disinformation be front and centre? The answer is simple: in Europe this year there will be 13 parliamentary and 10 presidential elections, including elections to the European Parliament.
Canada and Australia will hold federal elections and in Asia there will be national elections in India, Indonesia and the Philippines and Japan will elect half the upper house in the Diet.
Israel and South Africa will also hold general elections. So, too, will North Korea, but it is safe to say that all the disinformation in that campaign will be state-generated on behalf of Kim Jong-un. In the US, manoeuvring will be well under way for the presidential primaries, with the first four ballots in February next year.
Rapid Alert System
In December, the European Commission produced a 10-point action plan to meet what it called an urgent need to preserve the integrity of member states’ electoral systems and infrastructure ahead of the elections. The plan includes a rapid alert system to address disinformation campaigns and additional resources and personnel for regional strategic communication taskforces.
Last year, we saw the shape of things to come. Venezuela had been mired in disinformation since socialist Hugo Chávez rose to power in the 1990s. His successor Nicolás Maduro’s control and coercion of the media meant he could manipulate information at will in the widely discredited May election. However, his opponents - including those in the US – also used social media to spread disinformation.
During widespread unrest a year before the election, a tweet falsely claimed jailed opposition leader Leopoldo López had died in custody. A petition organised by academics and writers called for an end to disinformation in the election. It was ignored.
In Brazil, the campaign that led to the election of far-right, pro-gun, pro-torture populist Jair Bolsonaro was mired in disinformation from right and left. The Guardian reported that Comprova, a monitoring project set up by 24 media organisations, had investigated 110 alleged false news stories on Facebook and WhatsApp, a closed messaging group that Comprova’s executive editor, Sérgio Lüdtke, admitted was almost impossible to monitor. “We see only some of this and we know that’s not representative, it’s just an indication,” he said. “We know we cannot stop the tsunami.”
Disinformation was widespread in the lead-up to the widely discredited Bangladesh general election late last year. Facebook and Twitter both removed fake news-site accounts they said were “linked to individuals associated with the Bangladesh government”.
The Bangladesh Telecommunications Regulatory Commission blocked 3G and 4G mobile services - preventing uploading of pictures and videos - and slowed internet traffic. An official told Agence France-Presse, “We have done it to prevent propaganda and misleading content spreading on the internet.” Opposition sources said the move was to hamper their election campaigning. In early December, the commission blocked the website of the leading opposition group, the Bangladesh Nationalist Party, along with 53 news websites and portals (including several pro-BNP sites) that it said spread “obscene” and malicious content.
Not all elections will be influenced by disinformation, but Russian interests are certain to mount destabilisation campaigns in countries such as Poland, Romania and Ukraine.
Canada could be targeted by China if the US-requested arrest of Huawei’s chief financial officer, Meng Wanzhou, is not resolved to its satisfaction. Beijing is expert in the use of disinformation.
Foreign actors will not be the sole suspects and sometimes “disinformation” will provide a useful excuse. Some countries will face internal assaults, and some will come from within established organisations.
Bloomberg reported that Amit Shah, the head of Indian Prime Minister Narendra Modi’s Bharatiya Janata Party, has encouraged his organisation’s social-media volunteers to spread viral messages supporting the government ahead of the Indian general election. He told a rally of the volunteers in Rajasthan:
“We are capable of delivering any message we want to the public, whether sweet or sour, true or fake.”
Voters are obvious targets. Disinformation was brought sharply to public attention during the 2016 US presidential election campaign: from the improbable Pizzagate – a pedophile ring supposedly run by the Democratic Party – to voters being told they could vote online. The perpetrators ranged from money-hungry Macedonians cashing in on social media’s programmatic advertising (the false stories were crafted to become click magnets and the creators received a share of the revenue from the advertisements that automatically attached themselves) to the shady Internet Research Agency in St Petersburg.
The “agency” operates at one remove from the Kremlin as a private company. Investigations by the New York Times and the New Yorker identify it as Russia’s foremost troll factory. The Mueller investigation into Russian interference in the 2016 election has named its funder as Yevgeny Prigozhin, a [now deceased] Russian oligarch known as “Putin’s chef”, not only because one of his companies provides catering at Kremlin functions, but also because Prigozhin is believed to have cooked up a number of clandestine missions for the Russian leader.
The Mueller investigation issued an indictment against Prigozhin and his Concord company. It identified him as chief architect of the agency. Late last month, the US Justice Department accused Russia of leaking Mueller investigation documents to discredit the case against Prigozhin and Concord. The leaks presented the documents as the sum total of the case against Concord when, in fact, a US federal judge has allowed sensitive information - so sensitive it is kept on a server disconnected from the internet - to be kept from the Russians.
Russian Roulette
Last July, a British House of Commons interim report on disinformation and “fake news” pulled no punches in accusing the Russian government of a concerted campaign to undermine not only US but also European democratic processes. It followed Prime Minister Theresa May accusing Russia of meddling in elections and planting “fake news” in an attempt to “weaponise information” and sow discord in the West.
In addition to the activities of the Internet Research Agency, disinformation is spread through news agencies Russia Today and Sputnik News. The US Government estimates that 126 million people were exposed to Facebook pages linked to Russian interests in the presidential campaign. A joint study by Berkeley and Swansea universities identified more than 156,000 Russian Twitter accounts related to Brexit. That study also found that in the final 48 hours of the campaign, those accounts posted more than 45,000 tweets.
Disinformation campaigns are not solely aimed at ballot boxes. Britain was also a target in the wake of the Novichok nerve agent attack in Salisbury. Security services detected at least 38 “false information narratives” promulgated by Russia.
Following allegations of chemical weapons attacks in Syria, one Twitter account – claiming that the chemical weapons attack on Douma had been falsified – sent 100 posts a day over a 12-day period and reached 23 million users before it was suspended.
Another account reached 61 million users with 2300 posts over the same 12-day period. Both were linked to Russia. The accounts were also used to discredit the White Helmets humanitarian group in Syria. By last December, the Internet Research Agency’s role had been firmly established.
In the first major analysis of the agency’s operations in the US, Oxford University’s computational propaganda research project drew three main conclusions:
· Between 2015 and 2017, more than 30 million US users shared the agency’s malicious Facebook and Instagram posts. This total was boosted by multiplatform strategies involving a range of social media.
· Its activity was specifically designed to polarise the US public and interfere in elections. It encouraged extreme right-wing groups to be more confrontational.
· Its activity did not stop once it was exposed. In fact, its activity and engagement increased and covered a wider range of policy issues.
The Kremlin is seen as the disinformation bogeyman by US and European officials and by the media. However, they are aided, if not abetted, by Western social-media platforms whose washing-of-hands makes them appear to be modern incarnations of a certain Roman Governor of Judea.
Social Media’s Complicity
The House of Commons report noted the connection between the social media spread of disinformation and the unwillingness of operators like Facebook and Google to accept responsibility for providing the pipelines through which it was transported.
The committee that produced the report expressed carefully couched fury at the concerted efforts of the social-media companies to protect their own interests at the expense of getting to the root of the disinformation problem.
Although Facebook, Google and Twitter have moved to remove some of the false accounts used to plant disinformation - Twitter confirmed last June that it was conducting a mega purge and eliminating a million fake and suspicious accounts a day - the British government’s response to the Commons report in October was that the social-media companies were not doing enough.
Facebook and its kind earn little sympathy. They deserve to be accused of a form of third-party complicity by failing to build into their systems the checks and balances to prevent their misuse.
However, it may be possible to grant them a thimbleful of understanding because the inherent characteristics of disinformation – and the fact that its form is changing so fast – mean its detection and suppression are becoming more difficult.
The false stories produced by bogus news sites and promulgated through Facebook and Twitter before the US presidential election now appear rather crude. Hindsight does provide us with insight but, even as they surfaced, there were ways in which the falsehoods could be outed. Some, after all, were a little too obvious: the Pope’s endorsement of presidential candidate Trump was a falsehood too far.
Newsrooms were provided with services that might be crudely but accurately described as bullshit detectors. Online services such as TinEye were developed to determine whether images were real or doctored – by doing a form of reverse image search, checking back through image search engines such as Google to find full or partial matches. Services such as Storyful were set up to do verification checks on trending stories using tried-and-true journalistic techniques. Verification may be as simple as checking with the people mentioned in the story to ascertain whether they had actually said or done what was being attributed to them.
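TinEye’s matching technology is proprietary, but the principle can be sketched in a few lines of Python using a perceptual hash: two files containing the same photograph, even after resizing or light retouching, produce hashes that differ in only a few bits. The file names below are purely illustrative, and the thresholds are rough rules of thumb rather than anything TinEye itself uses.

```python
# A toy illustration of perceptual-hash image comparison. This is not how
# TinEye works internally; it is only the general principle. The sketch
# assumes two local image files exist ("original.jpg", "suspect.jpg").
from PIL import Image          # pip install Pillow
import imagehash               # pip install ImageHash

def hash_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the perceptual hashes of two images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b     # imagehash overloads "-" as Hamming distance

if __name__ == "__main__":
    distance = hash_distance("original.jpg", "suspect.jpg")
    if distance == 0:
        print("Images are (near-)identical.")
    elif distance <= 10:
        print(f"Distance {distance}: likely the same image, possibly cropped or doctored.")
    else:
        print(f"Distance {distance}: probably different images.")
```

It is a toy compared with searching billions of indexed images, but it shows why even a cropped or recoloured copy of a known picture can still be traced back to its source.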
Most of the falsehoods that were produced in the run-up to the US election, and during European elections and the Brexit referendum, were detected and debunked.
The British government’s response to the Commons report noted that it had not seen evidence of the successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. It did not, however, define “successful”.
Disinformation aimed at the UK may not have led to mass shifts of opinion, but it seldom seeks to achieve what Adolf Hitler thought possible in Mein Kampf: that, by repetition and a clear understanding of psychology, you could prove to the masses that a square was in fact a circle. Rather, modern disinformation seeks to discredit right angles among people already predisposed towards circles. This is not preaching to the converted, although they, too, will be willing recipients. It is a sophisticated targeting of what Indiana University researchers have identified as three different types of bias that are susceptible to manipulation. Giovanni Luca Ciampaglia and Filippo Menczer, of the university’s observatory on social media, have developed tools to show people how cognitive, social and machine bias can aid the spread of disinformation. Cognitive bias emerges from the way the brain copes with information overload to prioritise some ideas over others.
“We have found that steep competition for users’ limited attention means that some ideas go viral despite their low quality – even when people prefer to share high-quality content,” they wrote. They added that the emotional connotations of a headline were a strong driver.
They found that when people connect directly with their peers via social media, the social biases that guided how they chose their friends also influenced the information they chose to see. It was also a significant factor in favourably evaluating information from within their own “echo chamber”. And these preferences are fed by the machine - the algorithms that determine what people see online. The internet is not simply a system of highways on which we may choose to drive. It is an organism that mines data to build a profile of every driver and passenger on the system and to feed on their wants and preferences. At the very least, its users are in semi-autonomous vehicles and, at worst, they have no control over the car whatsoever.
“These personalisation technologies are designed to select only the most engaging and relevant content for each individual user,” Ciampaglia and Menczer said. “But in doing so, it may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.”
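That machine bias can be made concrete with a deliberately simplified feed ranker (not any platform’s actual algorithm, just a hypothetical sketch) that scores posts by the engagement they are already attracting plus their overlap with topics a user has previously liked. Even this crude loop pushes the “circle” believer further towards circles.

```python
# A deliberately simplified, hypothetical feed ranker - not any platform's
# real algorithm - showing how optimising for engagement and similarity to
# past likes narrows what a user sees.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: set[str]        # e.g. {"politics", "vaccines"}
    engagement: float       # shares/likes the post is already attracting

def rank_feed(posts: list[Post], liked_topics: set[str]) -> list[Post]:
    """Order posts so the most 'engaging and relevant' appear first."""
    def score(post: Post) -> float:
        overlap = len(post.topics & liked_topics)   # similarity to past likes
        return post.engagement + 2.0 * overlap      # weight 'relevance' heavily
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    user_likes = {"circles"}   # a user already predisposed towards "circles"
    posts = [
        Post("Squares are secretly circles", {"circles"}, engagement=5.0),
        Post("Independent fact-check of the claim", {"fact-check"}, engagement=1.0),
    ]
    for post in rank_feed(posts, user_likes):
        print(post.text)       # the congenial falsehood ranks first
```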
Data is accessed by disinformation sources, and algorithms are used to identify groups (say, people predisposed to circles) into which disinformation can be seeded and sent on its merry viral way. Algorithmic selection ensures the message reaches the “right people”. Disinformation campaigns may then be fed by social botnets that allow massive proliferation of messages. These automated social-media accounts create and move huge amounts of material that unsuspecting users believe is legitimate. This is how 45,000 tweets were created in the final two days of the Brexit campaign. A high percentage of the Twitter accounts eliminated after the US presidential election were operated by bots, and the FBI found new bot accounts were created before the recent mid-term elections. The bureau believed many emanated from St Petersburg.
Milking bias is all the easier when the disinformation meets four criteria. According to Ben Nimmo, of the digital forensic lab at think tank Atlantic Council, a successful fake story has emotional appeal, a veneer of authority, an effective insertion point into the online space and an amplification network such as Twitter or Facebook.
And disinformation has something else working for it: we humans seem to prefer fake over fact when it is presented in ways that trigger those biases. The Massachusetts Institute of Technology has studied rumour cascades.
These are rumour-spreading patterns that have a single origin and an unbroken chain of retweeting or reposting. The study found that falsehood reached far more people than the truth. Whereas the truth rarely spread to more than 1000 people, the top 1% of false-news cascades routinely spread to between 1000 and 100,000 people. It took the truth about six times as long as falsehood to reach 1500 people. In other words, we now have scientific proof that Jonathan Swift was right: “Falsehood flies, and truth comes limping after it.”
Quantum leap forward
To date, most disinformation has been believed by those who want to believe it. Others have rejected it, because they do not want to believe it and because it has been relatively easy to discredit its content. That, however, is about to change. Artificial intelligence and machine learning did reasonably credible service in creating and spreading disinformation, but there were telltale signs that these messages were created by bots. Tweets could be checked on Botometer, a joint project of the Network Science Institute and the Centre for Complex Networks and Systems Research at Indiana University, which used about 1200 features to characterise the suspected account’s profile, friends, social network structure, activity patterns, language and sentiment. Facebook posts often contained stilted, formulaic language. The use of bots simply to share disinformation was harder to detect.
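Botometer’s roughly 1200 features and its trained models are well beyond a magazine illustration, but a crude version of the same idea can be sketched with a handful of account signals. The thresholds below are illustrative guesses for the sake of the example, not Botometer’s method.

```python
# A toy, rule-of-thumb bot score using a handful of account features.
# Botometer's real model uses roughly 1200 features and supervised machine
# learning; the thresholds below are illustrative guesses, not its method.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    has_default_avatar: bool

def bot_score(acct: Account) -> float:
    """Return a crude 0-1 score; higher means more bot-like."""
    signals = [
        acct.posts_per_day > 100,                       # inhuman posting rate
        acct.account_age_days < 30,                     # freshly created
        acct.following > 10 * max(acct.followers, 1),   # follows far more than it is followed
        acct.has_default_avatar,                        # no profile customisation
    ]
    return sum(signals) / len(signals)

if __name__ == "__main__":
    suspect = Account(posts_per_day=240, account_age_days=12,
                      followers=15, following=3200, has_default_avatar=True)
    print(f"Bot score: {bot_score(suspect):.2f}")       # prints 1.00 for this account
```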
But now, artificial intelligence has allowed disinformation to take a quantum leap forward to the point where it is no longer possible to tell whether what you see and hear is real. The threat these so-called deep fakes pose is so serious that the US Department of Defence has tasked one of its agencies with finding ways of detecting fake video and audio.
The threat stems from software that takes alarmingly small amounts of authentic material - as little as 3.7 seconds of audio and 300-2000 images from a short video clip - to create a visual message in which words have, quite literally, been put in someone else’s mouth. Facial and body movements will be indistinguishable from the real thing. In August, a team led by the Max Planck Institute for Informatics in Germany revealed a system called Deep Video Portraits. In contrast to existing approaches restricted to manipulations of facial expressions only, it was the first to transfer the full 3D head position, head rotation, facial expression, eye gaze and eye blinking from a source actor to a portrait video of a target actor. In the first test of the technology, the team showed real and manipulated videos of Vladimir Putin, Theresa May and Barack Obama to two groups from North America and Europe. More than half thought the manipulated videos were real and 65% thought the altered images of Putin were authentic. Perhaps as a symptom of a post-truth age, only 80% thought the real videos were authentic.
In parallel with the German-led research, the University of Washington has been perfecting lip-sync software that allows a third party to script what a person will say in a deep fake video. Somewhat naively, the American researchers see “a range of important practical applications”, including allowing hearing-impaired people to lip-read on their smartphones and providing new ways for Hollywood to seek box-office success.
One of the more worrying aspects of such research is the speed with which it is perfecting the software to create flawless fakes. The Defense Department’s Advanced Research Projects Agency has spent US$68 million but has so far found only limited ways to detect deep fakes. Matt Turek, head of the agency’s media forensics project, said in an ABC News interview - carried on the agency’s Facebook page that defensively labelled it “a real piece” - that deep-fake detection was a “bit of a cat-and-mouse game”.
“A lot of times there are some indicators that you can see, particularly if you are trained or used to looking at them. But it is going to get more and more challenging over time ... We are looking at sophisticated indicators of manipulation, from low-level information about the pixels to metadata associated with the imagery, the physical information that is present in the images or media and then comparing it [to] information that we know about the outside world.”
One indicator identified by the agency was blinking. In many of the fakes it examined, manipulated images of people did not blink in a natural way. However, the German research is rapidly overcoming that anomaly. Its programme transfers not only eye movement but authentic blinking rates from the source to the deep fake. And it hasn’t finished. The research paper concludes: “We see our approach as a step towards highly realistic synthesis of full-frame video content under control of meaningful parameters. We hope that it will inspire future research in this very challenging field.”
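The blinking cue itself is simple enough to sketch. Assuming some face-tracking tool has already extracted six landmark points around each eye in every frame (that step is not shown here), the “eye aspect ratio” drops sharply when the eye closes, so counting those dips gives a blink rate that can be compared with the human norm of roughly 15 to 20 blinks a minute. This is only an illustration of the indicator the agency described, not its detector.

```python
# A toy blink-rate check, assuming per-frame eye landmarks have already been
# extracted by some face-tracking tool (not shown). This is not the defence
# agency's detector, only an illustration of the blinking cue it described.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmark points around one eye, in the usual ordering."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blink_rate(eye_frames: list[np.ndarray], fps: float,
               closed_threshold: float = 0.2) -> float:
    """Count closed-then-open transitions and return blinks per minute."""
    blinks, closed = 0, False
    for eye in eye_frames:
        if eye_aspect_ratio(eye) < closed_threshold:
            closed = True
        elif closed:            # eye has reopened: one blink completed
            blinks += 1
            closed = False
    minutes = len(eye_frames) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Usage idea: a talking-head clip whose blink rate falls far below the human
# norm of roughly 15-20 blinks per minute is worth a closer look.
```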
It will almost certainly inspire ever more realistic deep fakes that may rob us of one of our most basic assumptions: that – in combination - we can believe our own eyes and ears. When we see a video of Obama, we expect it to be a captured version of what American philosopher John Searle calls “direct realism”: the camera as a surrogate for our own eyes. Perhaps it is inevitable that we are even less equipped to question the validity of a machine-created moving image than we are an AI-driven chatbot that can mimic human responses in text.
Yes, the word of the year 2019 will be disinformation. You just may not recognise it when you see it.
Contagious & deadly
From Myanmar to Brazil, falsehoods spread insidiously and take a tragic toll.
INDIA
WhatsApp, the Facebook-owned encrypted messaging service, was held accountable by the Indian Government for 33 deaths in mob violence associated with false stories of child abduction.
MEXICO
It was also blamed for the spread of a similar falsehood in Mexico that led to two men being burnt to death by a mob. The following day, a mob pulled a man and a woman from their truck in a rural area and beat and burnt them, despite the pair’s pleas of innocence.
The man died at the scene, and the woman in hospital.
BRAZIL
A mass yellow fever immunisation campaign in Brazil has been compromised by disinformation, including one post, shared 300,000 times, claiming that side effects of the vaccine (used for decades with no serious issues) had killed a teenage girl. A total of 1257 confirmed cases and 394 deaths from yellow fever were reported in Brazil between July 2017 and June last year.
MYANMAR
Facebook accounts run by Myanmar military personnel targeted the Rohingya Muslim minority. Human rights groups blame the anti-Rohingya disinformation campaign for inciting murders, rapes and the largest forced human migration in recent history.
More than 700,000 people have fled Rakhine state and a UN mission estimated 10,000 Rohingya have died.
Gavin Ellis is a weekly media commentator on RNZ National’s Nine to Noon. He attended a recent workshop on disinformation in Taipei as a guest of the Taiwan Foundation for Democracy and the American Institute in Taiwan.
This feature was originally published in the February 16, 2019 issue of the New Zealand Listener.