Facebook says relatively few people watched the alleged Christchurch shooter's 17-minute livestream - but also acknowledges that stamping out copies is an ongoing effort.
"In the time that it was actually live, fewer than 200 people viewed it," the social network's vice president for global policy Monika Bickert told the Herald.
"And in the time that it took us to remove any version of that initial video from Facebook, fewer than 4000 people total - including the initial number who saw it live - were able to view that video."
Bickert, who is based at Facebook's head office in Menlo Park, California, said Facebook's AI (artificial intelligence) systems did not detect the gunman's broadcast.
"Our technology didn't find this, and there were no reports by users during the livestream," she said.
The first alert came from NZ law enforcement "within an hour," she said, at which point it was "taken down within minutes."
Although the livestream, which began at 1.40pm on Friday, was seen by only a relatively small number of people by Facebook standards, that was enough for it to be copied, then spread virally through Google-owned YouTube, Twitter and other platforms.
"The video has been distributed elsewhere and yes we're looking to find the new versions of it," Bickert said.
In the first 24 hours after the shooting, there were 1.5 million attempts to upload the shooter's video, or edited variants of it, to Facebook. Some 1.3 million were blocked at the point of upload, and the rest were tracked down and removed soon after.
Bickert said Facebook was working with rival social media companies, and also proactively seeking out variants of the clip elsewhere on the web, which made it easier to block them.
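Facebook has not detailed the mechanics, but matching re-uploads of a known clip is typically done by fingerprinting: every variant that is found, on Facebook or elsewhere, is reduced to a hash, and new uploads are checked against that blocklist before they go live. The sketch below is purely illustrative, with hypothetical names; it uses a plain cryptographic hash, which only catches byte-identical copies, whereas production systems use perceptual hashes that survive re-encoding and editing.

```python
# Illustrative sketch only - not Facebook's actual pipeline.
import hashlib

# Hypothetical blocklist of fingerprints for known variants of the clip,
# including fingerprints shared by partner platforms.
known_variant_hashes = set()

def fingerprint(video_bytes: bytes) -> str:
    """Reduce a video file to a fixed-size fingerprint.

    A real system would use a perceptual hash that is robust to
    cropping, re-encoding and watermarking; SHA-256 stands in here.
    """
    return hashlib.sha256(video_bytes).hexdigest()

def register_variant(video_bytes: bytes) -> None:
    """Add a newly discovered variant, e.g. one found elsewhere on the web."""
    known_variant_hashes.add(fingerprint(video_bytes))

def should_block_upload(video_bytes: bytes) -> bool:
    """Check an incoming upload against the blocklist before it goes live."""
    return fingerprint(video_bytes) in known_variant_hashes
```

Under this model, every copy hunted down elsewhere on the web widens the blocklist, which is why proactively seeking out variants makes new uploads easier to block.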
Earlier today, Privacy Commissioner John Edwards - who sees distribution of the clip as an "egregious" violation of victims' privacy rights - called on Facebook to hand over to NZ Police the identity of every person who shared the video, which could put them in line for stiff fines or even jail time.
"I'm not weighing in on his specific proposal at all," Bickert told the Herald early this afternoon.
"We have to follow the law," she said. In general, Facebook did not proactively hand information to police unless there was "something like an imminent threat of violence".
Edwards told the Herald, "The over-arching question is: Why did they launch Facebook Live [in 2015] without adequate mechanisms to prevent this kind of predictable abuse?"
Bickert would not give a yes or no answer when asked whether Facebook had put enough safeguards in place before it launched Facebook Live. She said no service was perfect, but that Facebook Live's AI and other safeguards were being improved all the time.
'Public safety' argument for keeping livestreaming
Questions have also been raised about whether a commercial motivation to maintain Facebook Live has made the company unwilling to budge on the feature.
And the Washington Post wrote, "Friday's uncontrolled spread of horrific videos - a propaganda coup for those espousing hateful ideologies - also raised questions about whether social media can be made safer without undermining business models that rely on the speed and volume of content uploaded by users worldwide."
But while there have been suggestions that Facebook Live and various video sharing mechanisms should have been temporarily disabled in the aftermath of the Christchurch attack, Bickert said the livestreaming feature also had an important role to play in public safety.
She said last year Facebook's AI and other alert systems picked up around 3400 self-harm or suicide threats made through Facebook Live.
"We see a real-time safety benefit. And part of that is allowing people to share information about human rights abuses, holding government authority accountable … and we've seen people turn to Facebook Live because they're thinking of taking their own life. We're able to identify that either through a user report or technology - and more than 60 times a week alert authorities to someone who needs help."
There were no policy changes planned as a result of the Christchurch livestream, she said.
"The policy is already in place, this already violates our policy," she said.
"We will continue to work on improving our technology as we have steadily been.
She said a few years ago, most hate speech or terrorist-related content was identified through user reports. Now 99 per cent was found via automated systems. But she said with more than 1 million Facebook Live posts per day, it was still challenging to identify all questionable material.
Ardern: 'Horrendous'
It remains to be seen whether assurances from Bickert and other Facebook executives will be enough to satisfy the Government.
"It's our view that it cannot, should not, be distributed, available, able to be viewed. It is horrendous," Ardern told reporters this morning about the Christchurch clip.
"While they've given us those assurances, ultimately the responsibility does sit with them.
"I want them, very much, to focus on making sure that [the video] is unable to be distributed," the PM said.
GCSB Minister Andrew Little said social media would be on the agenda at the next Five Eyes meeting, while the chief executives of Spark, Vodafone and 2degrees have released a joint letter calling for greater action from social media platforms to remove objectionable content.