Facebook's failure to automatically shut down the gunman's March 15 livestream because it was not gruesome enough is further evidence that change is needed, Prime Minister Jacinda Ardern says.
That may include increased transparency around the technology that Facebook and other social media platforms, including Twitter and Instagram, use to detect harmful content.
The accused gunman livestreamed his attack on the Al Noor Mosque on March 15; 50 people were killed in the attacks on two Christchurch mosques.
Brian Fishman, Facebook's policy director for counter-terrorism, reportedly told US Congress members that its algorithm did not initially detect the massacre livestream because there was "not enough gore".
Ardern said this underscored her earlier call for more to be done to protect against harmful content.
"You'd be hard-pressed to find anyone who would say that that video did not deserve to be removed immediately, that the fact it's been shared so extensively is hugely damaging."
She has been pushing for a global co-ordinated response to harmful content on social media platforms, and she said that should include the platforms themselves.
"One of the issues we have is there is not a lot of transparency around the technology that is used, the algorithms that are used, the methodology that is used, or what is possible within these platforms.
"That's why it's often very hard to critique the way these companies have responded. There's an argument to be made around transparency in that regard.
"There is no place in society for that video."
Ardern has been critical of Facebook, saying in the days after the attack: "We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of a place where they are published. They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility."
According to the Daily Beast, Fishman's comments came during a March 27 briefing between representatives of Facebook, Google, Twitter and Microsoft and the House Homeland Security Committee.
One Congress member reportedly said the video was so violent, it looked like footage from the video game Call of Duty.
Facebook declined to comment because the meeting was behind closed doors, the Daily Beast said.
Facebook has previously addressed criticism of its artificial intelligence systems by saying they are based on "training data, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video".
"This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems.
"However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."
Facebook said it removed about 1.5 million videos of the attack globally in the first 24 hours, with more than 1.2 million of those blocked at upload.
Twitter's head of legal, policy and trust, Vijaya Gadde, said the social media firm had removed 20,000 tweets since the attacks, but admitted it "feels like a leaky bucket".
In an op-ed piece at the end of March, Facebook founder Mark Zuckerberg called for governments to be more active and to regulate four areas: harmful content, election integrity, privacy and data portability.
Facebook has also responded to the Christchurch shootings by saying it will ban praise, support and representation of white nationalism and white separatism.
Justice Minister Andrew Little has fast-tracked a review of the Human Rights Act, which would look at hate speech and possible hate crime laws.