If Covid-19 wasn’t enough, we’re faced with widespread socio-economic hardship and inequities, ongoing international conflicts, civil unrest, an imminent environmental crisis, and fears technology will supersede everyone and everything.
A month-long delay to form a government seemed darkly comical in contrast.
Misinformation v disinformation
While technology has enhanced access to information and connectivity, albeit in a tapping-away-in-the-dark-by-yourself kind of way, it has come with a very real factual cost.
Putting aside the fact I’ll be out of a job thanks to the advent of AI, I’m referring to the rise of misinformation and disinformation.
Misinformation refers to misleading information created or disseminated without malicious intent, whereas disinformation describes false content that’s designed to confuse or manipulate audiences. It’s the digital version of a cult, if you like.
A 2018 Unesco report described disinformation as acts of fraud that should be treated “for what they are”, as a particular category of phoney information.
“Powerful new technology makes the manipulation and fabrication of content simple and social networks dramatically amplify falsehoods peddled by states, populist politicians, and dishonest corporate entities, as they are shared by uncritical publics,” the report said.
What’s the harm in falsehoods?
Last month, the Disinformation Project released a paper looking at open-source and quantitative data from a range of social media platforms and other content between June and September.
It found racism and white supremacist ideologies had been amplified, normalised, and spread through disinformation across mainstream groups and individuals.
It manifested in the targeting of high-profile wāhine Māori online, the dissemination of Holocaust denial content, and an increase in accessing illegal footage of the Christchurch masjidain attacks, for example. The footage had been distributed more this year than immediately after the attacks in 2019.
This is despite the Films, Videos, and Publications Classification Amendment Act 2021, which criminalised the livestreaming of objectionable videos.
Kantar research in July found 8 per cent of survey respondents believed threatening violence was an acceptable way to achieve change, for example.
While 72 per cent of respondents felt misinformation posed a serious threat to democracy, 81 per cent held at least one factually incorrect belief listed in the survey.
Those who wave the “freedom of expression” flag in defence of offensive rhetoric (at best) often forget free speech sits alongside freedom from discrimination and specific hate speech provisions in the Human Rights Act.
The Labour Government pledged sweeping changes to broaden the list of protected groups following the Royal Commission of Inquiry into the mosque terror attacks in 2019.
Former Prime Minister Chris Hipkins canned the reforms altogether earlier this year.
In a statement in February, Human Rights Commissioner Paul Hunt said it was a very sad day that such a straightforward amendment to the legislation had been dropped, instead giving way to “often misinformed and opportunistic political debate”.
Alternative regulation routes
The Harmful Digital Communications Act aims to deter, prevent, and mitigate harm caused to individuals by digital content.
People who’ve suffered serious emotional distress resulting from threatening, grossly offensive, indecent, false, discriminatory, private, or personal content can report it to Netsafe.
Netsafe can’t force the producer of harmful content to remove it, nor can it unmask or identify someone using fake accounts or profiles. Instead, Netsafe uses advice, negotiation, mediation, and persuasion to resolve complaints.
If that process fails, a complainant can seek an order from the District Court.
Failing to comply with an order from the court carries a prison sentence of up to six months or a fine of up to $5000. The act also contains a criminal offence, prosecuted by police, which carries a two-year prison sentence or a fine of up to $50,000.
What if the content isn’t harmful to an individual per se, such as disinformation that’s designed to manipulate audiences and favour dangerous ideologies over tried and tested pillars of democracy?
Down with ‘fake news’
This year’s Kantar research also revealed 54 per cent of respondents who strongly believed in misinformation had avoided or stopped consuming mainstream media.
AUT research this year found general trust in news declined from 45 per cent in 2022 to 42 per cent. Almost 69 per cent avoided news often, sometimes, or occasionally.
Distrust in the media is as complicated as it is fraught.
Declining budgets, increasing demands thanks to the relentlessly fast-paced internet, working conditions, and workforce retention are major issues, sure, but I’d argue the situation is exacerbated by the rise of disinformation.
For example, “fake news” as a slogan is not only misleading, it’s also an oxymoron. The “news” is inextricably linked to information that’s verified and in the public interest.
A journalist who fails to meet those standards could find themselves without a job (most outlets have codes of ethics) and in breach of Media Council principles and the Broadcasting Act. The Broadcasting Standards Authority may issue an order and a fine of up to $5000, for example.
Compare this to keyboard warriors spouting misinformation or disinformation with abandon. It’s the wild, wild west.
Instead, the Department of Internal Affairs (DIA) released its Safer Online Services and Media Platforms document for public consultation this year. It proposed cracking down on social media giants (and other platforms) through a code of practice and creating a new industry regulator to cover all things media (news included).
Will the free-speech-inclined hodgepodge government rise to the occasion? Good grief, I certainly hope so.
Sasha Borissenko is a freelance journalist who has reported extensively on the legal industry.