"While many of us have seen that disinformation, and dismissed it as conspiracy theory, a small portion of our society have not only believed it - they have acted on it in an extreme and violent way," the PM continued.
"We have a difficult journey in front of us to address the underlying cause of what we have seen here today."
Social media platforms have played a part in the spread of Covid-19 misinformation and its ugly cousins, hate speech and polarisation, as newsfeed algorithms send people down rabbit holes, recommending ever more content that echoes and amplifies their prejudices or false beliefs.
There have been various efforts by the likes of Facebook and Twitter to clamp down on misinformation, remove content and promote links to factual sources.
But the social networks' publish-first, police-later model makes misinformation difficult to stamp out.
We've seen Ardern's Facebook livestreams swamped by anti-vax comments.
Misinformation runs far beyond Covid, too. A person who works in the telco industry tells me it's often a slow, painful process to get misinformation posts removed - and that on the rare occasions he succeeds, the perpetrator often simply reposts the same content, and this time it stays up.
While reluctant to introduce policy changes, such as disabling livestreaming altogether, the social media platforms say they have added more human and AI filters to fight misinformation, banned many accounts and groups, and become more proactive in promoting links to valid pandemic information and applying warning labels, among other measures.
But a major piece of research by Auckland and Victoria University researchers - the Disinformation Project - carried out between August 17 and November 5 last year, analysed hundreds of millions of data points and found there was "a sharp increase in the popularity and intensity of Covid-19-specific disinformation and other forms of 'dangerous speech' and disinformation" over the 12 weeks of the Delta lockdown on Facebook, Telegram, Instagram, Twitter and other platforms. It's disturbing that misinformation was multiplying so deep into the pandemic. (The project did note that anti-vaxxers also used more traditional channels, such as misinformation leaflets delivered to letterboxes.)
And just yesterday, amid the mayhem at Parliament, we saw another vivid example of misinformation out of control, as protester and Facebook "influencer" Chantelle Baker relayed the theory on her livestream, without any evidence, that police had set fire to tents - a potent and flatly false claim that literally inflamed the situation. Video evidence shows protesters starting the blaze, but that didn't stop Baker's alternate reality spreading like wildfire across social media.
It's not just anti-vaxxers, racists and 5G conspiracy theorists. Hostile state actors have emerged as a major misinformation force, with Russia accused of interference in the past two US elections through fake accounts.
And, in the here and now, the Kremlin-funded Russia Today stands accused of spreading misleading propaganda about the Ukraine conflict.
As Russian tanks rolled across the border, Facebook and YouTube did block Russia Today from running ads with its content. But as one commentator noted, RT is not a 16-year-old gamer looking to make a couple of bucks. And eliminating the ads arguably only made RT's social content more appealing.
Facebook did slap RT with misleading information warning labels, however. And this week it joined other social media firms in blocking RT content in the EU (here, on the broadcasting side, Sky TV pulled Russia Today from its lineup on February 27).
Too often, there's a Wild West feeling. Today, Kate Hawkesby detailed how a scammer is pretending to be her on Facebook as a lure to win "$20,000 in cash prizes".
If Hawkesby wants to try to hold Facebook to account, she should pack a lunch. Australian business magnate Andrew Forrest has been battling the social network over cryptocurrency scam posts using his image since 2019, while, after a five-year fight, Facebook finally settled with Irish broadcaster Miriam O'Callaghan over false and misleading advertisements.
So where should the PM start if she really wants to clean up this mess?
Across the Tasman, we've seen Scott Morrison's government take a series of front-foot measures, implementing a new law that sees social media companies risk a fine of up to 10 per cent of their revenue, or up to three years' jail for their executives, if they fail to remove harmful content from their platforms in a reasonable time.
But on this side of the Tasman, NZ Council For Civil Liberties chairman Thomas Beagle both worried about over-reach and a risk to free speech, and criticised the Aussie clamp-down as rushed and impractical. I'd have to agree with his latter point. What is a "reasonable" amount of time? Could you imagine the US extraditing Mark Zuckerberg to serve jail time in Canberra? And while big fines make big headlines, as we've seen in the EU, time and time again, Big Tech buries them under years of endless appeals.
Instead, I keep coming back to a point made by NZRise co-founder Don Christie: New Zealand already has lots of laws that apply to online content and the companies that publish it, from the Harmful Digital Communications Act and Privacy Act (expanded in December 2020 to explicitly cover offshore entities doing business in NZ) to libel and copyright laws, and rules around areas like name suppression, and paying taxes locally on profits earned in New Zealand. We just have to do a better job of enforcing them.
Some argue that social media companies are neutral platforms. But their algorithms decide what you see in your newsfeed, and in what order, which makes them publishers - and publishers who follow a live-content model that, among other things, does not involve the costs that local media outlets incur as they vet content for accuracy. It's not a level playing field.
Ardern has pointed out, correctly, that it's difficult to rein in the social media companies without co-ordinated global action.
But while the 2019 Christchurch Call conference in Paris held the promise of being a turning point in the global regulation of social media, it never gained much impetus. Facebook CEO Mark Zuckerberg and the US Government skipped it, it produced only a very broad series of commitments to try harder in various areas and modest sums for research, and what little momentum there was soon dissipated.
And, regardless, we've seen across the ditch that unilateral action can be effective. Last year, the Morrison government pushed Big Tech companies to pay for news - the logic being that the tech multinationals' ad dominance undermined mainstream media and the flow of authenticated information and balanced debate, which in turn weakened democracy.
Big Tech pushed back, with Facebook and Google playing hardball as they removed Australian news content from their feeds. But it just wasn't a good look, with US and EU regulators looking on, and ultimately they came to the party (NZ media companies are in the early stages of an effort to negotiate a pay-for-news arrangement with Big Tech).
On February 22 this year, Australia introduced a Code of Practice on Disinformation and Misinformation. It's not perfect, but it's a start. It's put misinformation on the table at a time when the ACCC (Australia's equivalent of the ComCom) has just kicked off another inquiry into the power of Big Tech and potential new rules.
Here, there's generally been a much lighter touch.
Netsafe - the lead agency for the Harmful Digital Communications Act - released a draft internet safety code shortly before Christmas.
It was big on buzzwords and short on detail, specific measures or consequences.
Mandy Henk - the head of Tohatoha, which advocates for a more equitable internet, and works on initiatives to curb hate speech and misinformation online - said it looked like a box-ticking exercise.
She also saw a potential conflict of interest in that social media companies would part-fund a new administrator (likely to sit with Netsafe) who would enforce the code.
Meanwhile, InternetNZ public policy lead Andrew Cushen had no issue with the social media companies having input into the code. But he did question why they were the only outside parties involved in shaping the draft.
While others could potentially influence the code through public submissions, affected community groups should have been involved from the ground up, Cushen said.
Earlier, on an online call to discuss the code, Meta Australia-New Zealand policy director Mia Garlick said Facebook encouraged governments and government agencies like Netsafe to set online safety policy. The social network welcomed clarification of the ground rules in each jurisdiction in which it operated.
And it definitely sends a mixed message to Big Tech when our Government complains about social media misinformation while at the same time shovelling money at Facebook and its peers. Ardern says the Crown is following the audience, and it's easy to make a case for online ads pushing the public health theme. But it's also a little perverse that the more misinformation there is on social media, the more the Ministry of Health has to spend on social media ads to combat it.
And Meta (to give Facebook its new corporate name) is one of the NZ Super Fund's largest equity holdings. It recently reported a $375m stake in the social network - and the end of its campaign for governance reform at the company.
I'm not sure why New Zealand treats social media with kid gloves.
But a visitor from Mars would definitely note that our Prime Minister has benefited hugely from Facebook. It doesn't work out for every politician, but as a natural communicator, Ardern has been able to use Facebook and Instagram for unmediated access to the public.
That's a powerful tool.
But also one that can get out of hand.
When her livestream comments were flashmobbed by anti-vax messages during Delta, Ardern shrugged off the misinformation, saying the benefits of social media outweighed its drawbacks.
Surveying the carnage on Parliament's front lawn yesterday, she seemed to be starting to have some doubts.
This is where we came in.
"One day it will be our job to try and understand how a group of people could succumb to such wild and dangerous mis-and disinformation," Ardern said.
If we don't want a repeat of the Beehive riot, drop "one day" and substitute "today".