There was a lot of strong rhetoric in Jacinda Ardern's speech at Harvard last week - and one glaring omission.
The Prime Minister chewed out social media companies for not being more responsible, and asked internet users to be more conscious and careful of how they consume online content.
"But there was very little reference to what the Government could or should do," says Internet NZ interim chief executive Andrew Cushen.
"Amid the call for social media to do more and for users to not be keyboard warriors, what was missing was the commitment to Government action from our Prime Minister."
That's not to say that nothing has happened, both domestically in New Zealand and globally via the Christchurch Call.
But amid all the tough talk over the years, the online experience has changed little for the users of Facebook, Twitter, Instagram and YouTube, where echo chambers abound and video clips from the March 15 terrorist attack can still be found.
Does 'please be responsible' work?
Demanding more from online platforms is common for Ardern every time she speaks about the terrorist attack in Christchurch.
"Let's start with transparency in how algorithmic processes work and the outcomes they deliver," she said in her Harvard address.
"But let's finish with a shared approach to responsible algorithms – because the time has come."
But the platforms don't appear to be listening, says Cushen. "I don't see how you can get the likes of responsible algorithms just by asking nicely."
The algorithms are designed to maximise user engagement. They can be harmless, but can also lead to deep, dark rabbit holes of increasingly extreme content. The Royal Commission into March 15, for example, noted how Brenton Tarrant was radicalised online.
So what's the alternative to asking nicely?
The Government can't require the companies to change the basis for how they make money. How would that even work, given the national scope of a new law and the transnational nature of these companies?
Global regulation with buy-in from the countries where the companies are based would have more heft. Facebook boss Mark Zuckerberg has even asked for it, and it was considered in the lead-up to the signing of the Christchurch Call in Paris three years ago.
The problem was, and is, that it threw up more questions than answers. What's the right level of regulation? Would enough countries sign up? In a fast-evolving online world, would the regulations still be relevant by the time consensus was reached?
This is why the Call is voluntary. Voluntary isn't the same as ineffective: it has led to an improved global response, which has already been deployed three times to restrict the online spread of terrorist content.
But it's up to the companies to self-police when it comes to their algorithms.
"There's no real incentive, really, for social media to solve that because that's counter to their commercial, shareholder-driven interests," says Cushen.
They simply unleash them, make billions of dollars, and throw in some extras in an attempt to limit how dangerous they can be. For example, Facebook pushes interventions into the feeds of users who search for neo-Nazi content.
The effectiveness of such interventions is debatable; a Facebook redirecting pilot - from November 2019 to March 2020 - found 57,523 relevant user searches, but in 96 per cent of cases the user didn't click on the intervention.
'They are the publisher, not just the postman'
The omission in Ardern's Harvard speech contrasts with her words in the days following the March 15 terror attack, when she signalled change to New Zealand's legal framework.
"We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher, not just the postman. There cannot be a case of all profit no responsibility," she told the House on March 19.
The subtext was that online platforms will be more liable for the content they publish. At the time, the "safe harbour" provisions in the Harmful Digital Communications Act provided protection from legal liability.
That has changed, though not in a way that an internet user might notice.
An update to the Films, Videos, and Publications Classification Act means that, since February this year, the chief censor can make an interim decision about material that's likely to be "objectionable", and hence illegal to publish or distribute.
This allows online platforms to remove such content more quickly without any doubt that it crosses the threshold of illegality. So far only two interim orders - for the video of the shooting in Buffalo, New York, and the gunman's manifesto - have been issued.
Takedown notices for such content can also be issued, and $200,000 fines can be dished out to online platforms if they fail to comply.
These changes override the "safe harbour" provision, aligning the legal framework in New Zealand with other countries including Australia and Germany, where large fines can also be issued.
But they don't widen the scope of what's illegal, so they have no impact on harmful material that falls short of being objectionable.
Hate speech, misinformation and disinformation
Terrorist attacks are easily captured by what's "objectionable", but disinformation, misinformation and hate speech are much greyer areas.
A year ago, the Government announced intentions to widen the scope of hate speech protections beyond race to include - potentially - religious belief, sexual orientation, marital status and even political opinion.
But that stirred the free speech pot to the point where the work has been delayed, drawing criticism that it's taking too long.
Led by Internal Affairs Minister Jan Tinetti and the Department of Internal Affairs (DIA), the review is looking at how to minimise harmful content on any platform, including online.
Tinetti's Cabinet paper about the review includes misinformation and disinformation as examples of harmful content alongside, for example, material that threatens someone's privacy, or is racist. The paper also stresses the importance of freedom of expression and freedom of the press.
"The threshold for justifying limitations on freedom of expression will remain appropriately high," it says.
Recommendations are not expected this year. A report to Cabinet is due in October, followed by a second round of public consultation.
To prepare for the review, the DIA commissioned a report into what happens in Australia, Canada, Ireland and the UK, including when online material is deemed harmful enough to be blocked or removed.
In Australia and Canada, for example, the decision rests with a digital safety commissioner. In the UK and Ireland, a court order would be needed.
Unsurprisingly, a common issue in every jurisdiction was "how to balance attempts to limit harmful content and hate speech with ensuring freedom of speech". Another was defining what is and isn't harmful.
"An underdeveloped or unclear definition of what counts as harmful content can be difficult to enforce and lead to over-censorship or even self-censorship," the report says.
"Finding an effective balance between clarity and adaptability is a key, but difficult, task."
Such issues have no obvious solution.
Says Cushen: "I'm not going to tell you that Internet NZ has the magic wand that balances misinformation, disinformation and free speech. I don't know what that looks like."
What else is happening?
The Government launched a social cohesion programme 11 months ago, starting with seeking community feedback.
This was recommended by the March 15 Royal Commission, which was damning; it found "limited political leadership and public discussion of social cohesion", no overarching strategy, and "limited and poor" community engagement processes.
The minister in charge of the programme, Priyanca Radhakrishnan, will take a paper to Cabinet later this month about how to strengthen social cohesion.
"It's part of the wider context of work the Government is doing to prevent the polarisation and disenfranchisement that can lead to violence and extremism," she told the Herald.
The DIA also supported a report to better understand the online extremist landscape in New Zealand. It found 315 extremist accounts that collectively published more than 600,000 posts during 2020.
Overall it's a small presence, but not an unnoticed one; those accounts provoked a response or reaction from the public over eight million times. And far-right Facebook pages in New Zealand have more followers per capita (757 per 100,000 internet users) than Australia (399), Canada (252), the US (233) and the UK (220).
In the meantime, social media companies continue to self-police for potentially harmful content - including misinformation and disinformation.
That's not to say they do nothing; Twitter and Facebook, for example, both banned Donald Trump after the Capitol riot in January last year.
But self-policing also led to anti-vaxxer Chantelle Baker's Facebook broadcasts pulling greater engagement than mainstream media on certain days during the 23-day protest at Parliament, according to a report by the Disinformation Project.
Despite Ardern's rhetoric and what's happened behind the scenes over the years, this reinforces how little our experience of the internet has changed since those horrific shootings in Christchurch more than three years ago.