Twitter carried “dehumanising, sexist, deeply misogynist tropes” following the resignation of Jacinda Ardern as Prime Minister, continuing what researchers call the “significant deterioration of platform integrity” since Elon Musk took over.
The Disinformation Project found the platform - which has 520,000 users in New Zealand - was embracing content previously found on the lightly-moderated, fringe-associated Telegram, which has been home to alt-right and terror-connected groups.
That included a “dominant visual trope” which targeted Ardern over her appearance. Dr Sanjana Hattotuwa, a researcher at The Disinformation Project, said it was a trope that had been found to dominate discussions on Telegram but was now “mirrored on Twitter – itself a significant development, and signal of how far, and fast Twitter’s platform integrity has deteriorated.”
When resigning, Ardern said online abuse had an impact but was “not the basis of my decision”. Those farewelling Ardern have highlighted the level of vitriol aimed at the former prime minister and her family.
The new research follows a separate study of marginal social media channels - which have little or no moderation and are often associated with the alt-right - that showed Ardern copped an overwhelming level of abuse compared to others studied.
The new research indicates how mainstream such abuse has become and reflects comments from NZ Security Intelligence Service director general Rebecca Kitteridge about how the rising tide of violent talk online makes it hard to find those who are actually dangerous.
Work by The Disinformation Project, published in November, recorded “grave and growing concerns” around chilling effects and the normalisation of threats and harassment, which were most commonly directed at women.
The new work by Hattotuwa studied 4185 Twitter accounts that published almost 54,000 tweets, which were retweeted an average of 960 times. The research also studied Facebook and Telegram.
He said searching Ardern’s first or last name alongside a range of derogatory terms never failed to produce a result - and sometimes thousands of results - featuring “dehumanising, sexist, deeply misogynist tropes, and dangerous speech frames”.
Hattotuwa said the “rapid capture” didn’t reflect the weight of additional content beyond text, which included memes, cartoons, animated GIFs, deepfakes and posters.
He said there were specific “inflection points” at which the abuse spiked in volume, including Ardern’s speech to Nato, her United Nations speech on disinformation and her Harvard address.
Some of the content reflected themes pushed by Russian disinformation and misinformation operations.
Those who posted reflected beliefs they had adopted and research they believed supported those views, even though urging people to “do your own research” often created pathways to “incredibly harmful sources” that presented information that lacked context or was manipulated.
Hattotuwa said those comments and themes were supercharged by the way social media was designed to work. “What’s new is the algorithmic amplification of psychosis.”
The consequence - as has been seen over the past few years - was the melding together of previously disconnected and disparate communities that had each pushed their own issues in isolation.
As they merged, it brought together elements that had not previously connected, while also pushing such dialogue into mainstream spaces and exposing general society to elements it would not previously have encountered or spent time with.
“Not everyone is far-right or a violent extremist but they are very active in domestic communities.”
Hattotuwa offered an example: the oft-used images, accompanying abusive posts, of Ardern wearing a hijab while meeting the whanau of those killed and injured in the March 15 attack in Christchurch.
The intent, he said, was to show her as “woke” and “anti-Christian” through “visual code” intended to communicate that she was a “traitor” who was more partial to Muslims.
Those images carried an underlying message that existed before the pandemic “but finds stronger, and wider expression after the pandemic’s networking potential (over social media), and cross-fertilisation of content, and commentary on anti-vaxx communities”.
It combined to give new life to old prejudices and reactions “through newly networked, highly motivated, and extremely agitated communities across the country, online, and offline”.
Hattotuwa said the research was part of a picture which showed New Zealand was on an “accelerated path towards entropy in society as a result of the pandemic”.
He quoted Nobel Prize laureate Maria Ressa who spoke of how technology had brought extremism to politics, impacting on “facts, truth, trust”. She said: “Without these three we have no shared reality, we cannot solve any problem together, and we cannot have democracy.”
Hattotuwa said sweeps of social media had already detected characterisation of incoming Prime Minister Chris Hipkins which was intended to undermine his standing.
That included references to Hipkins as a “manlet” which, Hattotuwa said, was intended to isolate him from the perception of an “alpha male who in this country is Pakeha and white and the inability of anyone in Government to match up to that”.
Hattotuwa said he expected the abuse to be less for a cis-gendered white male but the underlying hate would not dissipate because it was embedded in communities and would endure.
He said that would be particularly the case with women, people of colour and minority groups - “especially anyone seeking, voted into, or appointed to public office”.
“I don’t think many Kiwis realise how much things have already changed irrevocably. Whoever comes into power, that sentiment is not going to go away. It is entrenched, it is expanding and it is normative.”
As security increased around the figures who were the focus of the most anger, those whose access to them was frustrated would look for “proxies”, such as academics, activists and journalists who did not have the same ability or resources to improve their safety.
He said a lesser volume of hate directed at such people could have a greater impact on their wellbeing, which could lead to chilling effects online, self-censorship, trauma and even “offline consequences involving acts of violence, abuse, antagonism, stalking, arson, robbery, and assault”.
Twitter has been contacted for comment about Hattotuwa’s research. It has yet to respond.