Other countries have also responded, with Australia bringing in a tough new law, and Britain referencing what happened in Christchurch in its White Paper on online harm.
But Facebook has said it doesn't want content like the gunman's livestreamed video on its site either.
So what is Facebook doing, what can it do better, and is the Paris summit on Thursday (NZT) needed at all?
How Facebook tries to keep us safe
Facebook and other social media platforms have their own community standards about suitable content, but those standards did nothing to prevent the footage from being livestreamed and then uploaded 1.5 million times.
In an ideal world, no company would be able to provide a service unless it knew its products were safe. For Facebook, that would mean artificial intelligence (AI) blocking every piece of content that was potentially harmful.
In this video's case, 80 per cent - or 1.2 million uploads - were caught automatically by Facebook's algorithms.
About 900 variations of the video were made to sidestep AI detection - by, for example, adding a colour filter.
In basic terms, the AI looks at signals in the content itself - shapes, colours, sounds and sights - as well as metadata such as location or device information.
It can also look for data within a certain range; for nudity, for example, it can scan for human shapes within a colour range that captures likely human skin tones, and then flag content to be reviewed by a human moderator.
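To make that colour-range idea concrete, a toy sketch of such a check might look like the code below. This is not Facebook's system; the skin-tone bounds and the 30 per cent threshold are invented for the example.

```python
# Toy illustration of a colour-range check that flags frames for human review.
# The bounds and threshold are illustrative assumptions, not Facebook's values.
import numpy as np

SKIN_LOW = np.array([80, 40, 30])     # hypothetical lower RGB bound for skin tones
SKIN_HIGH = np.array([255, 200, 170]) # hypothetical upper RGB bound
FLAG_THRESHOLD = 0.30                 # flag a frame if >30% of pixels fall in range

def frame_needs_review(frame: np.ndarray) -> bool:
    """frame: H x W x 3 uint8 RGB image. Returns True if this toy heuristic
    would queue the frame for a human moderator."""
    in_range = np.all((frame >= SKIN_LOW) & (frame <= SKIN_HIGH), axis=-1)
    skin_fraction = in_range.mean()
    return skin_fraction > FLAG_THRESHOLD
```

A real system would combine many such signals and learn the thresholds from data rather than hard-coding them, but the flag-then-review pattern is the same.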
The AI becomes more sophisticated as it is trained on more examples. There are thankfully very few March 15-type videos, but that scarcity also means the technology for detecting them is still relatively undeveloped.
One of Facebook's technicians told a congressional briefing in the United States that the footage from the March 15 video wasn't gruesome enough for the AI to pick it up easily.
A less crude way of saying it would have been to talk about the colour-matching technology, which might have looked for splashes of certain colours that were lacking in the gunman's video.
Facebook also said it used audio-matching AI to find variations of the video, but this handed the playbook to users wanting to share it; copies sprang up that had no sound, or had a music soundtrack added.
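The weakness is easy to see in principle: an audio fingerprint is built only from the soundtrack, so a silent or re-dubbed copy produces a completely different signature even though the pictures are identical. The sketch below is a generic, simplified fingerprint, not Facebook's actual matching technology; the window size, band count and similarity cut-off are arbitrary choices for illustration.

```python
# Rough audio fingerprint: for each one-second window, record which frequency
# band carries the most energy, then compare fingerprints position by position.
import numpy as np

def audio_fingerprint(samples: np.ndarray, sample_rate: int,
                      window_s: float = 1.0, bands: int = 8) -> np.ndarray:
    window = int(sample_rate * window_s)
    peaks = []
    for start in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        band_energy = [chunk.sum() for chunk in np.array_split(spectrum, bands)]
        peaks.append(int(np.argmax(band_energy)))
    return np.array(peaks)

def likely_same_audio(fp_a: np.ndarray, fp_b: np.ndarray) -> bool:
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return False  # one clip has no audio at all, so no match is possible
    return (fp_a[:n] == fp_b[:n]).mean() > 0.8  # arbitrary similarity cut-off
```

Strip the audio and the fingerprint is empty; overlay music and the peaks move, so the match fails - which is exactly how the muted and re-soundtracked copies evaded detection.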
The other AI challenge was that the footage came from a helmet-mounted camera, making it look similar to first-person shooter computer games. As Facebook scanned for possible copies of the gunman's video, footage from legitimate gamers was flagged for review.
Because the technology does not yet prevent all inappropriate material from being broadcast, Facebook has between 15,000 and 20,000 human moderators to help out.
But Facebook has 2.3 billion monthly users who upload 3000 years' worth of "watch time" every day across the world. How many moderators would it take to ensure an infallible filter?
Similarly, YouTube has 1.9 billion users who upload 300 hours of video every minute, while 500 million tweets are sent on Twitter every day.
Facebook's willingness to stop content going viral has also been questioned, given that its business model is based on engaging content. It has been accused of sluggish responses to the danger of echo chambers on its site that could lead to radicalisation.
And even if Facebook already had all the technology to block another mass shooting video, we wouldn't necessarily know because these are all commercially sensitive secrets.
This leads into another concern: transparency. Facebook releases information about content that gets blocked, but there is no independent oversight, leaving a credibility question over all of its claims.
One of the aims of the Christchurch Call is to have tech companies examine and invest in their AI technology, and also to share more data with government authorities to help them block violent content.
But it will be a voluntary framework, so whether tech companies comply will ultimately be up to them.
Governments that sign the Christchurch Call will be expected to take measures to ban objectionable content, but it is questionable how effective this will be given Facebook's global reach.
This move was portrayed as an effort to tackle hate speech before it could erupt into something more destructive.
More options for Facebook to improve safeguards include changing the way content is reported, the way it is reviewed, and the guidelines moderators use to make decisions.
All reported livestream content on Facebook gets put in a priority queue to be reviewed by a moderator.
Because the March 15 livestream had already ended by the time Facebook was alerted to it (by police, not by its own algorithms or a human moderator), Facebook now puts recently finished livestreams into the priority queue as well. The content stays online until it is reviewed.
There have also been suggestions for stopping a video from going viral, such as putting a more robust digital fingerprint on every video so that copies are easier to identify.
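One common approach to that kind of fingerprint is a perceptual hash, which - unlike an exact file hash - barely changes when a video is re-encoded or lightly filtered, so the colour-filtered variants mentioned earlier would still land close to the original. The sketch below shows a generic difference hash (dHash) on a single frame; it illustrates the technique in general, not Facebook's actual matching system, and the 10-bit cut-off is an illustrative choice.

```python
# Perceptual difference hash (dHash) of one video frame: downsample, compare
# neighbouring pixels, and pack the comparisons into a 64-bit integer. Copies
# are matched by Hamming distance rather than exact equality.
import numpy as np

def dhash(gray_frame: np.ndarray, size: int = 8) -> int:
    """gray_frame: 2-D array of pixel intensities. Returns a 64-bit hash."""
    h, w = gray_frame.shape
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size + 1).astype(int)
    small = gray_frame[np.ix_(rows, cols)]          # crude (size x size+1) thumbnail
    bits = (small[:, 1:] > small[:, :-1]).flatten()  # 64 left-vs-right comparisons
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def frames_look_like_copies(frame_a: np.ndarray, frame_b: np.ndarray) -> bool:
    # Treat frames as copies if their hashes differ in at most 10 of 64 bits.
    return hamming(dhash(frame_a), dhash(frame_b)) <= 10
```

Because the hash depends on relative brightness between neighbouring pixels rather than exact colours, a tinted or slightly cropped copy usually stays within a few bits of the original - the property a more efficient fingerprint would rely on.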
While Facebook chief operating officer Sheryl Sandberg said livestreaming safeguards would be explored, chief executive Mark Zuckerberg has already said that putting a delay on livestreams would fundamentally break the service.
YouTube, which at one stage was dealing with a new upload of the video every second, responded to March 15 by limiting livestreaming to channels with at least 1000 subscribers.
Should Facebook also limit livestreaming to pages that have a certain number of followers? Should such restrictions only apply to livestreams, or to all videos?
What can Paris achieve?
Facebook, Microsoft, Twitter, and YouTube have already committed to blocking people from using their platforms to boost support for terrorism through the Global Internet Forum to Counter Terrorism, which they formed in 2017.
It voluntarily binds them to a commitment that Ardern is seeking: to invest heavily in the technology to remove inappropriate content.
YouTube said 98 per cent of violent extremism content is currently flagged by machine-learning algorithms.
Between July 2017 and December 2017, 274,460 Twitter accounts were permanently suspended because they promoted terrorism.
Facebook said 99 per cent of Isis and Al Qaeda-related terror content was blocked before it could be uploaded. If uploaded, Facebook said 83 per cent of terror content was taken down within an hour.
But the mere fact that the March 15 terror attack was still livestreamed and shared on Facebook and other platforms strongly suggests more needs to be done. Ardern herself was inadvertently exposed to the video.
While Zuckerberg and US President Donald Trump are not attending Paris, there will still be several world leaders and representatives of all major tech companies.
It is a solid launching pad for further action. Whether that transpires remains to be seen.