British regulators on Sunday unveiled a landmark proposal to penalise Facebook, Google and other tech giants that don't stop the spread of harmful content online, marking a major new regulatory threat for an industry that's long dodged responsibility for what its users say or share.
The aggressive new plan - drafted by the United Kingdom's leading consumer-protection authorities and blessed by Prime Minister Theresa May - targets a wide array of web content, including child exploitation, false news, terrorist activity and extreme violence. If the plan is approved by Parliament, UK watchdogs would gain unprecedented powers to issue fines and other punishments when social media sites don't swiftly remove the most egregious posts, photos or videos from public view.
Top British officials said their blueprint would amount to "world-leading laws to make the UK the safest place in the world to be online." The document raises the possibility that the top executives of major tech companies could be held directly liable for failing to police their platforms. It even asks lawmakers to consider whether regulators should have the ability to order internet service providers and others to limit access to some of the most harmful content on the web.
Experts said the idea could potentially limit the reach of sites such as 8chan, an anonymous message board where graphic, violent content often thrives and which played an important role in spreading images of last month's mosque attack in New Zealand.
"The Internet can be brilliant at connecting people across the world - but for too long these companies have not done enough to protect users, especially children and young people, from harmful content," May said in a statement.
For Silicon Valley, the UK's rules could amount to the most severe regulatory repercussion the tech industry has faced globally for not cleaning up a host of troubling content online. The sector's continued struggles came into sharp relief last month, after videos of the deadly shooting in Christchurch, New Zealand, proliferated online, despite heightened investments by Facebook, Google and Twitter in more human reviewers - and more powerful tech tools - to stop such posts from going viral.
The March shooting prompted Australia to adopt a content-takedown law of its own, and it has emboldened others throughout Europe to consider similar new rules targeting the tech industry. The wave of global activity stands in stark contrast to the United States, where a decades-old federal law shields social media companies from being held liable for the content posted by their users. US lawmakers also have been hesitant to regulate online speech out of concern that doing so would violate the First Amendment.
"The era of self-regulation for online companies is over," UK Digital Secretary Jeremy Wright said in a statement Sunday.
In response, Facebook highlighted its recent investments to better spot and remove harmful content, adding that the UK's proposal "should protect society from harm while also supporting innovation, the digital economy and freedom of speech." Twitter said it would work with the government to "strike an appropriate balance between keeping users safe and preserving the internet's open, free nature." Google declined to comment.
The UK's fresh call for regulation reflects a deepening scepticism of Silicon Valley in response to a range of recent controversies, including Facebook's role in the country's 2016 referendum to leave the European Union. British lawmakers learned after the vote that an organization created by Brexit supporters appeared to have links to Cambridge Analytica, a political consultancy that improperly accessed Facebook data on 87 million users to help clients hone their political messages.
The revelation sparked a broad inquiry in Parliament, where lawmakers unsuccessfully demanded testimony from Facebook CEO Mark Zuckerberg. In the aftermath, many lawmakers have called for strict new regulation of the social networking giant and its peers.
"There is an urgent need for this new regulatory body to be established as soon as possible," said Damian Collins, the chairman of the Digital, Culture, Media and Sport Committee in the House of Commons. He said the panel would hold hearings on the government's proposal in the coming weeks.
For now, the UK's plan comes in the form of a white paper that eventually will yield new legislation. Early details shared Sunday proposed that lawmakers set up a new, independent regulator tasked with ensuring companies "take responsibility for the safety of their users." That oversight - whether by a new agency or part of an existing one - would be funded by tech companies, potentially through a new tax.
The agency's mandate would be vast, from policing large social-media platforms such as Facebook to the forums and comment sections of smaller websites. Much of its work would focus on content that could be harmful to children or pose a risk to national security. But regulators ultimately could play a role in scrutinizing a broader array of online harms, the UK said, including those "that may not be illegal but are nonetheless highly damaging to individuals or threaten our way of life in the UK." The document offers a litany of potential areas of concern, including hate speech, coercive behaviour and underage exposure to legal content, such as dating apps that are meant for people over age 18.
Many details, such as how regulators would define harmful content and how long companies would have to take it down, have yet to be determined. UK regulators also said they would prod tech companies to be more transparent with users about the content they take down, and why.
"Despite our repeated calls to action, harmful and illegal content - including child abuse and terrorism - is still too readily available online," said Sajid Javid, the UK's home secretary. "That is why we are forcing these firms to clean up their act once and for all."