It is virtually impossible to pin down the legal jurisdictions that govern the likes of Facebook given the global reach of social media platforms. Photo / AP
ANALYSIS:
The Christchurch Call summit in Paris tomorrow (NZT) will look at how to make tech companies more responsible for the content they host.
But a key issue will be whether it has any teeth, given that it will be a voluntary framework.
The global reach of social media platforms makes it difficult to pin down the legal jurisdictions that govern them.
Someone in New Zealand could post something that could be classed as violent terrorist content – the type of content the Christchurch Call in Paris is specifically addressing.
That content might then be shared, downloaded and then uploaded to different platforms by users in different countries. Which law then applies?
Facebook users were reportedly governed by Irish law, given that their agreements were with Facebook's Irish subsidiary. But when tougher privacy rules came into effect in Europe last year, Facebook tweaked its terms and conditions so that users outside the US, Canada and the EU would instead be governed by California law.
New Zealand Privacy Commissioner John Edwards has challenged this, saying Facebook in New Zealand should be subject to New Zealand privacy law.
And he has added that Facebook cannot be trusted to self-regulate, calling Facebook morally bankrupt.
The confusion over jurisdiction is partly why Prime Minister Jacinda Ardern is seeking a global solution, though global regulation – which Facebook boss Mark Zuckerberg has called for – is not the aim of the Paris summit.
As Ardern has pointed out, there is no such thing as an international court for social media, and establishing one would take years of work, if enough broad agreement could even be reached to give it a strong mandate.
A duty of care
The Christchurch Call is expected to commit governments that sign on to look at adopting and enforcing laws that ban violent content.
Several countries have moved to establish a duty of care, or something similar, for social media platforms, making the platforms liable for the content they host.
Following the March 15 attacks, Australia passed a law requiring the removal of the most violent online content. Failure to remove such content is punishable by up to three years' jail and a fine of up to A$10.5 million, or 10 per cent of the tech company's annual turnover.
The UK is currently considering measures that would hold the top executives of social media companies liable for failing to police their platforms.
France has proposed laws that would appoint a government regulator to oversee social media platforms, and punish them if they host hate speech and violent content.
Under Germany's NetzDG law, online content hosts have 24 hours to take down "manifestly unlawful" posts or face a fine of up to €50 million.
A far tighter one-hour deadline for terrorist content is being considered by the European Commission. Tech companies that fail to meet it could face a fine of 4 per cent of their global turnover for the previous year.
The commission already has a voluntary code of conduct to combat hate speech, which Facebook, Twitter, YouTube and Microsoft have signed up to.
US law does not impose a duty of care and, perhaps crucially, the US is yet to confirm whether it will be represented in Paris despite New Zealand efforts to include it.
No duty of care in New Zealand
New Zealand is similar to the US in that the legal framework allows social media platforms to escape liability for the content they host.
New Zealand laws make it illegal to incite violence or racial disharmony, or subject someone to unlawful discrimination. But there was a gap in the law for digital media, which was filled by the 2015 Harmful Digital Communications Act (HDCA).
It covers digital content including social media posts, texts and email, but it is aimed at online bullying and harassment rather than terrorist content.
The escape clause in the HDCA is the "safe harbour" provision, which gives online content hosts up to 96 hours to act – a timeframe seen as balancing the need for a free and open internet against the harm that content can cause.
Under the provision, a person can complain to Netsafe about material held by an online content host. The host then has 48 hours to contact the material's author, who has a further 48 hours to respond – the two 48-hour windows making up the 96-hour safe harbour.
Social media platforms that follow this process cannot be held civilly or criminally liable. Even if they don't follow the process, liability may still fall on the material's author.
Offences are punishable by up to two years' jail, or a maximum fine of $50,000 for individuals or $200,000 for companies – far more lenient than the penalties in Germany or Australia for terrorist content.
Internet NZ chief executive Jordan Carter told the Herald that Internet NZ did not have a position yet about whether New Zealand needed a specific law about online terrorist content, but he was personally open to the idea.
Privacy Commissioner John Edwards supports the idea, saying it was time for social media platforms to take more responsibility for the content they host.
A paper called Anti-social Media, released by the Helen Clark Foundation, said the legal framework on digital harm was fragmented, and proposed a duty of care with substantial penalties to encourage compliance.
The Government is yet to consider the issue because the focus has been on Paris.
Because the Christchurch Call will be a voluntary framework, it invites cynicism about whether it will change anything, especially in light of the scandals that have exposed Facebook's apparent inability to police itself properly.
Ardern would argue that the call would at least improve the status quo, and that in itself makes it worthwhile.
Five Facebook scandals
• Myanmar: A UN report found Facebook played a "determining role" in the human rights violations committed against the Rohingya population by spreading disinformation and hate speech.
• Cambridge Analytica: The company used Facebook users' data in multiple political campaigns, including Donald Trump's successful 2016 presidential bid. Facebook said 87 million users' data was affected.
• Privacy: Facebook reportedly shared users' personal data with device companies including Apple, Amazon, Microsoft, and Blackberry. Facebook also had data arrangements with dozens of companies, including a Russian internet giant.
• Russian interference: Facebook was used by Russian agents to manipulate public opinion, including about US politics, with misleading information.
• Photos: A programming bug gave up to 1500 apps access to the photos of 6.8 million Facebook users, whether those photos had been shared or not.