Twitter chief executive Jack Dorsey with Prime Minister Jacinda Ardern during their September 9 meeting at Parliament. Photo / @Jack
Social networks put a number of new safeguards and restrictions in place, but have largely resisted sweeping changes in the six months since the mosque massacres.
Hate-content researcher Eric Feinberg complains that external pressure - such as initially impressive-sounding advertiser boycotts - quickly melted away after only very modest concessions made by Facebook and others.
And local tech industry leader Don Christie complains that while good first steps were taken to coordinate global action - always something that was going to take some time - government agencies sent the wrong signal by quickly returning to Facebook as a promotional tool.
He also complains about the degree to which "the Government is using Facebook Live as a primary distribution channel. There was a big government announcement a couple of weeks ago [Chris Hipkins' forum on vocational education reform] that required a Facebook account to interact."
Google-owned YouTube has taken the most concrete steps. It blocked searches in the immediate aftermath of the shootings. And in May, on the eve of the Christchurch Call summit, it removed its livestream feature for all mobile users - bar those with 1000 or more followers. The step means live video is no longer an option for most YouTube users, and that it will be cumbersome and time-consuming for any banned user to create a new, livestreaming-capable account.
Facebook chief executive Mark Zuckerberg resisted calls to disable livestreaming. In his first post-Christchurch shootings interview, on April 7, he also opposed the idea of introducing a delay, which he said would "fundamentally break what livestreaming is for people. Most people are livestreaming, you know, a birthday party or hanging out with friends when they can't be together".
However, in May, Facebook did introduce new "one-strike" policies that could see individual users banned from its livestreaming service for 30 days or more if they posted hate or terror content - and potentially banned from the social network altogether if they repeatedly violated its posting policies. Many nationalist-themed Facebook groups have been removed since March 15.
Facebook also reinforced its human and AI defences against terror content, including new measures such as a filter that listened for gunshot-like noises - but with copies of the alleged gunman's clip still appearing (Feinberg found multiple fresh copies even this week), it concedes the effort is a work in progress.
As part of the Christchurch Call agreement, Facebook put US$7.5m ($11.7m) toward research with three US universities to help stamp out terror and hate content online.
Twitter has not changed its free-for-all livestreaming policy, but in July it introduced new "hateful conduct" guidelines that tightened rules around what could be posted - with hate speech against religious groups used in examples. Chief executive Jack Dorsey met with Ardern in Wellington earlier this week. The PM said the pair discussed the persistence of 8Chan or 8Chan-style content on Twitter, but with no concrete measures announced, the meeting seemed largely cosmetic.
Met PM @jacindaardern at the Beehive today for a followup discussion on the Christchurch call. Also my first time in New Zealand. Kind folks and beautiful environment. pic.twitter.com/XQWdPrHrf4
In the immediate aftermath of March 15, our big three telcos - Spark, Vodafone and 2degrees - took the unprecedented step of blocking Kiwis' access to 8Chan and other sites hosting the alleged shooter's video - though they also stressed they did not want to play the role of deciding which content New Zealanders can and can't see.
In April, InternetNZ, which administers the NZ domain, said it had put "emergency measures" in place that "lock" a local website address with harmful content. During August, InternetNZ head Jordan Carter said although his organisation had the ability to make content inaccessible, he didn't want to play sheriff. The Government needed to clarify if the Chief Censor, Police or another agency - or combination of agencies - should make a call about which content to block, and when.
Also in April, Australia's Parliament passed a tough new law that allows for fines of up to 10 per cent of a tech giant's revenue, or up to three years' jail for its executives, if they fail to take "swift" action against "abhorrent" content.
But NZ Council For Civil Liberties chairman Thomas Beagle told the Herald the new legislation was poorly thought-through and would likely prove ineffective.
Beagle cautioned against over-reach. He said New Zealand's current laws were enough to deal with online wrongdoing, and noted that multiple people in NZ had been arrested for sharing the alleged gunman's clip (and one, Philip Arps, is now jailed).
In terms of modifying the behaviour of social networks themselves, he favoured globally coordinated action. That put him on the same page as the government, which has so far not made any unilateral moves against Facebook and others, as has happened across the Tasman.
Australian Prime Minister Scott Morrison has joined Ardern in calling for globally coordinated action, raising the topic of social networks at G7 and Five Eyes meetings.
However, so far, no proposals have emerged that would fundamentally change the major social networks' business model, which relies on speed and volume of user content.