Efforts by Prime Minister Jacinda Ardern and French President Emmanuel Macron to bring the world together against online extremism and terrorism face a major obstacle: the President of the United States of America.
The dangers posed by social media are certainly global, but the companies at the centre of the mess are American – and that doesn't bode well for any hope of constraining them through regulation.
First off, the US already has a long history of not playing ball with the rest of the world. The superpower has withdrawn from or declined to ratify the International Criminal Court, the Kyoto Protocol, the Comprehensive Nuclear-Test-Ban Treaty and, most recently, the Paris Climate Accord.
Each of those was an attempt to deal with a major global issue, but in each instance the US went its own way. If the threat of nuclear apocalypse wasn't quite enough to bring the US on side, what hope do a few social media posts have?
Under a President who has pledged to put America first and has shown few qualms about raising international tensions through a series of trade fights, there's little indication the US will suddenly adopt a global approach when it sits down with G7 leaders next month.
If recent reports out of the US are anything to go by, the country is already charging down a different path from the rest of the world.
At a private meeting on Tuesday, President Donald Trump met Twitter CEO Jack Dorsey to discuss the importance of "protecting the health of the public conversation ahead of the 2020 US elections". This followed numerous accusations of bias, with Trump claiming conservative voices were being silenced amid social media companies' efforts to clamp down on extremist content and fake accounts. While the rest of the world is concerned about how little is being done to curb extremism online, the message from the most powerful politician in the US seems to be that social media's censorship efforts sometimes go too far and that tech giants wield too much power over public discourse.
When these two worldviews meet next month, the result could be a collision rather than a collaborative show of unity.
Where do we draw the line?
As calls for stronger regulation of social media grow louder, free speech advocates are becoming equally vocal about the impact any regulatory effort might have on online freedom.
Ardern isn't oblivious to that debate and was quick to distance the G7 meeting from any consideration of free speech.
"This isn't about freedom of expression," she said during a press conference on Wednesday.
"This is about preventing violent extremism and terrorism online. I don't think anyone would argue that the terrorist on the 15th of March had a right to livestream the murder of 50 people. And that is what this call is specifically focused on."
These points are all valid, but any steps to clamp down on the publishing or distribution of content will invariably lead to questions about what is and isn't appropriate. Even viewed in a vacuum, Ardern's specific focus poses complex questions: What do we define as terrorism? What types of groups do we allow on social media? And do we continue to permit anyone in the world to livestream whenever they please?
The TED stage recently provided a glimpse of how seemingly simple problems become complex when they play out on social media. When asked why Twitter doesn't simply get rid of Nazis, Dorsey explained that even something as widely reviled as Nazism can be difficult to define on a platform.
"We're in a situation right now where that term is used fairly loosely, and we just cannot take any one mention of that word accusing someone else as a factual indication that they should be removed from the platform," Dorsey said.
Like the rest of the tech world, the Twitter boss is betting heavily on the potential of artificial intelligence to eventually allow truly offensive and harmful content to be weeded out efficiently and effectively.
But until that happens, many of the tough decisions on social media are being outsourced to human eyes, like those of the content moderators crammed into Mumbai offices and charged with identifying fake news on Facebook in the lead-up to the Indian election.
Social media organisations get away with such a bare-minimum response to fake news and extreme content only because they have long been able to hide behind the argument that they are platforms, not publishers. That protection remains enshrined in Section 230 of the Communications Decency Act passed by Congress in 1996, and there have been few indications of it changing any time soon.
Without the burden of having to employ the staff numbers necessary to run a media company responsibly, the big tech companies' profits have soared. And the tech giants aren't simply going to sit back and watch their margins being ripped apart by the efforts of a few foreign governments.
Facebook's Mark Zuckerberg has already called in reinforcements, this week appointing Jennifer Newstead as general counsel, tasked with defending Facebook as international pressure mounts.
Newstead isn't your run-of-the-mill lawyer. She was plucked straight out of Trump's State Department and previously played a major role in selling the Patriot Act to the US Congress.
That controversial piece of legislation was developed in the aftermath of the September 11 attacks and greatly increased the US government's ability to conduct surveillance on its own citizens.
If anything, this sends a strong message that Facebook won't be a pushover as it starts to work with governments on reshaping social media in a world that no longer accepts the hands-off approach.