Britain's Home Secretary Sajid Javid said on Twitter that "enough is enough":
"You really need to do more @YouTube @Google @facebook @Twitter to stop violent extremism being promoted on your platforms. Take some ownership. Enough is enough," he posted.
Damian Collins, the UK MP who chairs the Digital, Culture, Media and Sport Committee in the House of Commons, said there needs to be "a serious review" of why the companies' attempts to police the content weren't more effective: "It's very distressing that the terrorist attack in New Zealand was live streamed on social media & footage was available hours later. There must be a serious review of how these films were shared and why more effective action wasn't taken to remove them."
The growing international outcry could be a game-changer for Silicon Valley companies wary of more regulation.
Other countries, particularly in Europe, have been adopting tougher rules when it comes to hate speech - and it's likely that the toughest restrictions on the technology companies' content moderation practices will continue to be outside the United States.
Countries such as Germany and the United Kingdom are setting penalties for companies that fail to remove harmful content.
In Germany, regulators can fine companies that fail to remove illegal content within 24 hours.
In the United Kingdom, ministers are planning to establish a new technology regulator that could dole out fines in the billions if companies such as Facebook or Google (which owns YouTube) fail to remove harmful content from their platforms.
Actions regulators in those countries take could set the tone globally for how governments address the proliferation of violent content on social media.
There could also be action in the US. The sheer volume of videos spread across various social networks could reignite debate over whether Congress needs to update a decades-old law that shields companies from legal liability for content posted on their platforms.
Less than six months ago, in the wake of the massacre at a Pittsburgh synagogue, hate speech linked to the attack rekindled debate in Congress over whether Section 230 of the Communications Decency Act needed to be updated.
The provision generally protects tech companies from legal action over content that people have posted on their websites. Senator Mark Warner, a Democrat, said last year this law might need an overhaul.
"I have serious concerns that the proliferation of extremist content - which has radicalised violent extremists ranging from Islamists to neo-Nazis - occurs in no small part because the largest social media platforms enjoy complete immunity for the content that their sites feature and that their algorithms promote," Warner, the top Democrat on the Senate Intelligence Committee, said. He did not comment this weekend on whether he would renew this charge after the Christchurch attack.
The industry has largely resisted any changes to the law. As Washington Post writer Tony Romm said on Twitter in the hours following the Christchurch shooting: "At what point will US lawmakers just say 'enough' and strip these platforms of CDA 230 protections in response to the mass proliferation of videos from a shooting? I mean that - like what is it actually going to take for that convo to happen despite the intense industry lobbying."
Under previous political pressure, the companies have already invested in better policing harmful content, from improved algorithms to expanded ranks of human content moderators. But expect renewed questions from policymakers across the world over whether these investments were enough.
Tech companies "have a content-moderation problem that is fundamentally beyond the scale that they know how to deal with," Becca Lewis, a researcher at Stanford University and the think-tank Data & Society, said. "The financial incentives are in play to keep content first and monetisation first."