Prime Minister Jacinda Ardern co-chairing the virtual summit to mark the second anniversary of the Christchurch Call. Photo / AP
ANALYSIS
In the days after the horrific shootings in Christchurch on March 15, 2019, Prime Minister Jacinda Ardern spoke about the issue that led to the Christchurch Call.
"We cannot simply sit back and accept that these platforms just exist and that what is said on them is not theresponsibility of the place where they are published," she said.
"They are the publisher. Not just the postman. There cannot be a case of all profit no responsibility."
Social media platforms had - and have - massive global reach, and were so focused on chasing profit that they were careless about the content they hosted.
That content might take a strong grip on a user's attention while also steering them down a dangerous path; Brenton Tarrant, for example, was inspired in part by what he watched on YouTube.
It's now been two years since the Call was created, hailed by Ardern as unprecedented and decried by sceptics as a big talkfest that has achieved nothing.
It's easy to equate the stream of press releases echoing limitless good intentions with a big talkfest. It's also easy to find some horrible video online and declare that the Call has failed to eliminate terrorist and violent extremist content.
But these aren't the right yardsticks of success. The likes of 4chan are never going to sign up to the Call's voluntary commitments, and even if such a site were shut down, something else would spring up to host the darkest corners of the web.
Ardern gives us a better gauge of the Call's impact by viewing it through her profit-vs-responsibility lens.
Firstly, there are the platforms themselves - what have they done to self-regulate, even if that meant hurting their bottom lines?
The March 15 livestreamed video went viral. The platforms are now part of response protocols that didn't exist on March 15 in any effective way.
If it were to happen again, a series of responses would kick into action so governments and companies could coordinate to quickly identify and take down content - including content that's been tweaked to specifically avoid detection.
These crisis protocols have been used twice since March 15, and though they won't capture all relevant content and the responses could have been quicker, they're a vast improvement on what existed beforehand.
Social media platforms have also trialled ways to intervene when users search for white supremacist or neo-Nazi content.
A Facebook Redirect Programme pilot - from November 2019 to March 2020 - logged 57,523 relevant searches. In 96 per cent of cases the user didn't click on the intervention; of the 2288 users who did, 25 went on to start a conversation with a group like Life After Hate.
This isn't breathtakingly high uptake, but if it steered even one of those 25 away from the path Tarrant took, it's an intervention worth having.
Online platforms have also responded to the added scrutiny with better policing of their content and tighter rules around livestreaming. Can they go further? Of course. Would they have moved this far without the Call? Probably not.
The main area where progress has been limited is the work on algorithm outcomes, which Ardern has pushed for from the start and is pushing for once again.
Progress here relies on the tech companies researching and sharing what happens on their own platforms, and although it is hoped that the US joining the Call will make a difference, this remains very much a watch-this-space issue.
Secondly, there are individual countries' laws - what are governments doing to force tech companies to behave responsibly?
If an online platform has legal immunity for the content it hosts, a government can't step in, and we end up relying on the likes of Facebook and YouTube to do the right thing.
When Ardern made her "not just the postman" comments, there were gaps in New Zealand law about who was liable for social media content - but a law change is now making its way through Parliament.
If it had been in place before March 15, Facebook could have been served with a takedown notice and fined $200,000 if it failed to act in time.
Germany, Australia and the UK have also moved to introduce specific laws to make online platforms liable if they fail to take down illegal content within certain timeframes.
The US could again make a huge difference because most major tech companies are based there - but section 230 of the US Communications Decency Act gives those platforms legal immunity.
Yesterday Ardern declined to press for reform in this space, but Democratic and Republican lawmakers are trying to amend s230. One suggestion is to revoke immunity for online platforms that use algorithms to boost user engagement without the user specifically opting in.
Any potential reform is still a way off, but the fact that the conversation is ongoing can only be a good thing.
Finally, there are the users themselves - protecting them from those who would stir up hate, and empowering them to recognise online disinformation or content that could radicalise them.
This kind of preventative work to build resilient societies and resilient citizens is no less important, but it is not readily measurable.
Which, in a sense, is representative of the Call as a whole: the benefits may not always be tangible or easy to see, but that doesn't mean it has no value.