Authoritarian-leaning countries have long worked to rein in social media when it challenged their ability to control information. But over the last year, more democratic governments have started to target social-media sites, considering new regulations to stamp out disinformation during elections and to prevent their use as rallying points for hatred and extremism.
Google and Twitter declined to comment, and Facebook did not immediately respond to requests for comment. Over the weekend, however, Facebook said: "People rely on our services to communicate with their loved ones and we are committed to maintaining our services and helping the community and the country during this tragic time."
In Sri Lanka, government officials ordered the social-media blackout within hours after bombs exploded in churches and hotels, killing nearly 300 people. Before the order took effect, researchers said they saw hoaxes spreading online that misidentified those behind the attack and misstated the number of people killed.
It marked the second time in as many years that Sri Lanka sought to prevent citizens from accessing social media out of fear that misinformation could stoke ethnic unrest.
For a week in March 2018, the government blocked access to Facebook and its apps Instagram and WhatsApp, along with the messaging app Viber, as it sought to curb hateful posts against Muslims while riots spread across the central part of the country.
Government officials at the time said that Facebook had "been used to destroy families, lives and private property," and sharply rebuked the company for failing to act swiftly to take down harmful content.
"I can't say whether it's right or wrong, but it shows for sure the concern about misinformation in this part of the world," said former FBI agent Clinton Watts, who studies misinformation for the Foreign Policy Research Institute.
"The shutdown keeps communication from accelerating misinformation and organising attacks," he said. "Essentially the shut down might slow things down so authorities can get a handle on what's happening before violence spins out of control."
In doing so, Sri Lanka became the first country this year to shut down social media in response to a national incident, said Alp Toker, the executive director of NetBlocks, a London-based digital rights group that estimates there have been 60 incidents of full or partial online shutdowns since 2015.
In general, Toker said there is a "growing desire" on the part of governments to "have the last say on whether social media is censored, blocked or restricted."
Strict restrictions long have been in place in authoritarian countries, such as China, where Facebook and Google remain banned. YouTube, meanwhile, has been periodically blocked in more than two dozen countries since the service was founded nearly 15 years ago, including an incident in 2007 when a Turkish court ordered the removal of videos critical of the country's founder.
More recently, countries such as Russia have sought to criminalise the spread of "fake news." These laws can serve as a "pretext for enforcing against political dissidents or journalists," said Emma Llanso, the director of the Centre for Democracy and Technology's Free Expression Project.
In the United States, social-media giants benefit from the First Amendment's guarantee that government will keep its hands off speech. "The real solution is for social media companies to do a better job in removing hate speech and not allowing their platforms to incite violence," said Democratic Representative Ro Khanna, who represents a portion of Silicon Valley.
But the desire to regulate social media has gained global appeal, even in countries with strong protections for free expression, due to Silicon Valley's recent missteps.
Meddling by Russian agents on Facebook and Twitter helped divide the public during the 2016 US presidential election and Britain's Brexit campaign. Automated accounts, or bots, falsely amplified online campaigns in an attempt to sow discord around sensitive political topics.
Often, the consequences have been deadly: The United Nations has linked hate speech on social media, including Facebook, with the mass killings in Burma, where a top general had long used the site to spread false information about the Rohingya, a Muslim minority Burmese authorities refuse to recognise as citizens.
The deadly attack on two mosques in Christchurch last month illustrated that Facebook and Google continue to struggle to take down harmful content. Both companies' human reviewers and artificial intelligence tools could not keep up with users who sought to upload videos of the violent attack.
The failings of these social-media sites prompted New Zealand to propose a sweeping new law that would give regulators the power to order the removal of harmful online content and to impose tough fines on tech giants that fail to heed those warnings.
European governments have moved in a similar direction, introducing new laws targeting hate speech and proposals to remove terrorist content. Germany began implementing a tough online anti-hate speech law in 2018.
A sweeping plan put forward in the United Kingdom this month would impose steep fines and other penalties for social-media sites that don't swiftly remove a range of offending content, from violent videos to disinformation. Tech giants have lambasted the UK blueprint as a threat to users' ability to communicate unfettered online.
"I think across the world we've seen online material left unregulated for a considerable period, and I think that's what we're seeking to remedy here," Jeremy Wright, the UK secretary of state for digital, culture, media and sport, in a recent interview with the Washington Post. "If we can put into place a system of regulation that is sensible. . . we won't be the only country to want to do that."
The global blowback represents a landmark shift in political opinion nearly 10 years after experts credited Silicon Valley with being a critical element of the Arab Spring.
In 2010, a Google employee named Wael Ghonim foreshadowed the role social media would play in the pro-democracy uprisings when he created a Facebook page to commemorate a fellow Egyptian who had been killed by police. The page, "We are all Khaled Said," garnered hundreds of thousands of followers and helped springboard the Tahrir Square protests that toppled the country's longtime dictator, Hosni Mubarak.
Top executives, including Facebook CEO Mark Zuckerberg and Google's Eric Schmidt, invoked such stories to portray their platforms as forces for democratic values, freedom and global good. Ghonim himself received a standing ovation during a company-wide town hall at the time.
"It really put the focus on the liberating qualities of technology," said Freedom House's Abramowitz.
Those benefits haven't dissipated, he said, and activists around the world continue to take advantage of Silicon Valley's powerful social-media tools. But, he added: "In the last number of years, there's been a greater focus on the detrimental side effects of this explosion of technology."