One of the Call's main aims is to eliminate online terrorist material, the kind that was watched inadvertently by millions of people - including Prime Minister Jacinda Ardern herself - in the aftermath of Brenton Tarrant's terrorist attack.
And while the global response triggered by the Buffalo attack worked more efficiently and comprehensively than it had previously, millions of people still saw footage of the shooting.
This highlights the improvements since the Call was established, but also how difficult it is to stop not only the online spread of such material, but also the radicalisation that can lead to it.
There are many similarities between the Buffalo video and Tarrant's: they were both livestreamed on online platforms from a head-mounted camera on a lone attacker, who was driven by a hateful ideology expressed in a published manifesto.
Ten people were killed in the Buffalo shooting, and three others were injured; 11 of the victims were black.
The Buffalo manifesto cited the Christchurch terrorist attack as inspiration, and said the Buffalo shooter had watched Tarrant's footage and read his manifesto.
Tarrant livestreamed his attack on Facebook, which was watched by under 200 people. It took 12 minutes after his 17-minute livestream ended for Facebook to take down the video. By then, it had been viewed about 4000 times.
In the following 24 hours, Facebook's AI blocked 1.2 million attempts to upload the footage, and 300,000 uploads were taken down after they were posted.
In the hours after Tarrant's attack, YouTube reported tens of thousands of video clips - re-cut and re-packaged to avoid AI detection - were being uploaded at a rate of one per second. Twitter users also described frantic attempts to stop videos in their feeds from autoplaying.
Meanwhile, a series of unsatisfactory phone calls from the Beehive to tech companies exposed the lack of a tested, co-ordinated response to the viral online spread of Tarrant's footage.
A crisis response system was already in place, but according to Ardern, "not well developed, and not in a way that meant it did what it needed to do".
What changed for Buffalo?
The global, collaborative response that the Christchurch Call put in place kicked into gear. The independent Global Internet Forum to Counter Terrorism (GIFCT) activated its top-level alert - called the Crisis Incident Protocol (CIP) - within two and a half hours of the attack. This meant digital signatures of the footage were created and added to its hash-sharing database, so online platforms could more easily identify and remove the content.
Just over 24 hours later, GIFCT members - which include Facebook, YouTube, Twitter and Amazon, whose platform Twitch hosted the Buffalo livestream - had added 870 distinct items to the database, including 740 images and 130 videos.
"We do know that this attack has not had the same level of online impact [as Christchurch] in terms of the quantity of material shared, the virality of it, the ability to gain momentum and critical mass to overwhelm online platforms," says Paul Ash, the Christchurch Call co-ordinator at the Department of Prime Minister and Cabinet.
"The companies involved have much better detection systems in place and a much stronger ability to hash that content quickly, and share those hashes across 18 different platforms."
The Buffalo shooter said he chose Twitch because Facebook's rules could limit the video's reach. Twenty-two people watched his livestream, and Twitch says it was removed within two minutes of the first shots being fired.
But the footage was copied and republished elsewhere. One copy, according to the Washington Post, was viewed more than 3 million times on little-known site Streamable, which was also linked to in a Facebook post that had more than 46,000 shares; the post wasn't removed for more than 10 hours, the Post reported.
Ash concedes there is no hard data showing how much less widely the Buffalo video was watched compared to Tarrant's.
"I'm not saying that means there has been no spread, or that there are not communities that have sought to use this material for promulgating the messages - as happened in Christchurch. But it is much, much harder for them to get the same sort of traction.
"You may see a post that's been viewed a large number of times. That doesn't necessarily tell you much about the time any user spends viewing it, or whether it just appeared in their feed. Sometimes those may actually have a very small viewership, or not be viewed at all. We don't have those statistics."
Ongoing improvements
Ash is adamant the collaborative, global response has improved - meaning the Buffalo video would have been more widely available online had the response been the same as it was for Tarrant's video.
The new CIP response has been used three times since Christchurch.
There was the shooting in Halle, Germany, in October 2019, which was livestreamed on Amazon's Twitch and viewed about 2200 times. It took about five hours between the earliest communication among GIFCT members and the activation of the CIP.
Then there was the shooting in Glendale, Arizona, in 2020, when a CIP was declared about 90 minutes after the first shots were fired. And then, Buffalo.
"I'm confident the measures are in place to identify the content quickly, and to then put in place the hash signatures and share those across the platforms," Ash says.
"The Christchurch attack - we know it had a global scale and impact unlike anything that we'd seen before by way of a terrorist or violent extremist content online. And I've seen nothing to suggest that what has happened [since Buffalo] has approached anywhere near that level of prevalence or viewership at this point in time."
He adds that while the response framework has improved, a core part of the Christchurch Call's work is prevention.
"We've still got quite a bit of work to do to understand how a young person with a rifle finds themselves in this position, and how the radicalisation process has occurred. What are the conditions off- and online that contribute to that? How, as a Call community, can we build some better tools to try and prevent it?"
This is the crux of the issue. Tech companies' bottom lines are built on engagement, and more extreme content is more likely to be engaging.
And while those companies are throwing resources at trying to prevent users from falling down rabbit holes of increasingly extreme content, they've shown no inclination to change the algorithms that create those rabbit holes in the first place.
They even have legal protection in the US, where platforms are generally not liable for the content they host.
"That's proving very tricky," Ash says when asked about these legal protections, and tech companies' unwillingness to change their business models.
"The major platforms have put in place a range of tools to try and address the problem. They continue to evolve those, they continue to work with us constructively on the Call, and we continue to work constructively with them.
"I'm encouraged that we've got quite a bit of momentum around algorithmic questions across a wide range of those involved in the problem. We see this as a priority area, and one that leaders and the Call community will not take their eyes off."
And then there are the online platforms - such as Streamable - that aren't part of the Call, and aren't moderated, but can be linked to via the major platforms.
Says Ash: "That's going to be, increasingly, a subject of discussion across supporters of the Christchurch Call."