Tohatoha and InternetNZ, which also criticised a draft released in December, remain unimpressed with the final version of the code. Photo / 123rf
A new online safety code signed into effect today by Netsafe, NZTech, Meta (owner of Facebook, Instagram and WhatsApp), YouTube owner Google, Twitch owner Amazon, Twitter and TikTok has drawn criticism from user-advocacy groups.
Tohatoha calls it an attempt to avoid regulation, and the real change regulation would bring, while InternetNZ has labelled it disappointing and Muslim community leader Aliya Danzeisen says it's vague and lacks bite (more from all below).
Signatories to the Aotearoa New Zealand Code of Practice for Online Safety and Harms say it is a world-first, and obliges tech companies to actively reduce harmful content on relevant digital platforms and services in New Zealand as the country grapples with what Netsafe calls a 25 per cent increase in complaints about harmful content over the past year.
But Tohatoha and InternetNZ, which also criticised a draft released in December, remain unimpressed with the final version, which leaves key elements as a work-in-progress.
The code provides for a complaints mechanism for the public, but this has yet to be set up, and there are no details of how it will work, or how it could address longstanding user concerns that social media platforms are hard to reach and slow to respond.
And there is no detail on what sanctions will be imposed if signatories fail to remove content within an unspecified "reasonable time".
The code's administrator will fill in those blanks within the next six months as a governance framework is put together.
While Netsafe gets 95 per cent of its funding from the Ministry of Justice and the Ministry of Education, the role of the administrator for the new code is funded by the industry.
The administrator was named this morning, and it is not a person but an organisation: NZTech, an industry group representing multinational and local tech companies, headed by Graeme Muller.
Netsafe - the approved agency for the Harmful Digital Communications Act - began working on the code in April last year.
Its then CEO Martin Cocker held talks with Meta, Google, Amazon, Twitter and TikTok before Netsafe released a draft in December.
InternetNZ public policy manager Andrew Cushen said while the draft had gone to public consultation, submitters could only tweak a code that the industry had had a strong hand in creating.
Instead, affected community groups should have been involved from the ground-up, Cushen said.
And Mandy Henk, chief executive of anti-hate speech and disinformation group Tohatoha said, "I'm concerned that, as written, the [draft] code creates a financial conflict of interest. It's hard to see how the code could be fairly administered for internet users if the administrator is dependent on funding from the industry."
The final version of the code failed to address Henk's concerns.
"This code looks to us like a Meta-led effort to subvert a New Zealand institution so that they can claim legitimacy without having done the work to earn it," she said this morning.
"In our view, this is a weak attempt to pre-empt regulation – in New Zealand and overseas – by promoting an industry-led model that avoids the real change and real accountability needed to protect communities, individuals and the health of our democracy, which is being subjected to enormous amounts of disinformation designed to increase hate and destroy social cohesion."
Nor did she agree with the choice of administrator. "NZTech is a technology industry advocacy group that lacks the legitimacy and community accountability to administer a Code of Practice of this nature. They have no human rights expertise or experience leading community engagements. While we have no qualms with what they do, they are not impartial or focused on the needs of those who are harmed by these platforms," Henk said.
"NetSafe, as the Approved Administrator for the Harmful Digital Communications Act, should not be involved in creating industry codes of practice. This is a conflict of interest as it aligns them too closely with the companies impacted by the HDCA and increases the risk of regulatory capture," Henk said.
"This code is a distraction from their core work of administering the act, which is crucially important. NetSafe's focus should be on serving the New Zealand public and enhancing the safety of every New Zealander who uses the internet.
"We badly need regulation of online content developed through a government-led process. Only government has the legitimacy and resourcing needed to bring together the diverse voices needed to develop a regulatory framework that protects the rights of internet users, including freedom of expression and freedom from hate and harassment."
Cushen - now acting chief executive of InternetNZ - said while Netsafe sought wider input during the submissions process, it was too little, too late.
"We remain disappointed with the process to get here, which started with online services rather than communities. While the process has involved more discussion lately, this has been only after concerns about the nature of community involvement were raised, and late in the code's development," he told the Herald this morning.
"This code approach must not become an end-run around community input and regulation by those services. We expect the Government to address the need for engagement and regulation, and online services to engage constructively with that.
"In the meantime, we will be monitoring whether the code is helping or hurting the overall response to the Internet issues that arise for communities in Aotearoa."
New Netsafe chief executive Brent Carey stood by the big social platforms' role in helping to create the code, and the fact the administrator is funded by the signatories. He said both elements were standard practice for a self-regulatory code. The code supplemented rather than replaced Netsafe's work around the HDCA. "Multi-stakeholder events" were held to gather more feedback over a 10-week period after the initial draft, Carey said.
The Netsafe CEO said the code had been designed with input from civil society and interest groups, and would be monitored by a new multi-stakeholder governance group.
Carey said cooperation between the six companies and various stakeholders has been essential in establishing an online safety framework for New Zealanders.
The Netsafe CEO likened the code to the voluntary Christchurch Call, and said it was a way to get immediate change.
(The Department of Internal Affairs is currently assessing new regulations for media and online content. But consultation is not expected to open until September, and it could take a year from then to get a paper before Cabinet.)
Muslim community leader Aliya Danzeisen was unimpressed by that argument, posting this morning, "Oh please. The companies could have done this for the last two decades."
Danzeisen added in follow-up comments to the Herald, "As a person who has received multiple threats and online verbal attacks, this code of conduct is the most basic of guidelines. There need to be real consequences for companies that fail to moderate their products and services - criminal and civil penalties with bite that will motivate them to act."
NZTech's Muller said, "This unique collaborative approach toward creating a better digital environment for all Kiwis is just the start, and as more organisations join and sign up to the code we will be in a much better place as a country to ensure our experience on the Internet is as safe as possible."
While voluntary, the code builds on solid online safety principles from New Zealand, Australia and the EU - including bringing to New Zealand the same regime on mis- and disinformation currently in operation in Australia, Carey said.
Carey said that, on top of being closely evaluated, each company will publish annual reports on its progress in adhering to the code, be subject to sanctions for breaches of its code commitments (a regime which, as mentioned, is a work in progress) and take part in a public complaints mechanism (ditto).
"The reports will provide an opportunity for consumers to protect their interests and the public to scrutinise action being taken by a company if it fails to meet its commitments under the code," he said.
Independent scrutiny. Maybe
The code says the signatories can seek an independent evaluation of their efforts - but as with other provisions, that is voluntary. They might, they might not.
Muller said, "The code will be a living document, it can be amended biannually and we hope the governance framework will enable it to evolve alongside local conditions, while at the same time respecting the fundamental rights of freedom of expression.
"We've long supported calls for regulation to address online safety and have been working collaboratively with industry, government, and safety organisations to advance the code."
Meta head of public policy for New Zealand and Pacific Islands Nick McDonnell said, "This is an important step in the right direction and will further complement the Government's work on content regulation in the future. We're looking forward to working with the stakeholders to ensure the code sets in place a framework to keep Kiwis safe across multiple platforms by preventing, detecting, and responding to harmful online content."
'Window dressing'
For Henk, the code is lacking from start to finish.
"Any code of practice should begin by acknowledging the extreme power imbalance between users and the platforms. This code and the process that led to it shows that those behind it have no awareness of this inequity and are not interested in rebalancing that power differential," the Tohatoha chief executive said.
"This code talks a lot about transparency, but transparency without accountability is just window dressing. In our view, nothing in this code enhances the accountability of the platforms or ensures that those who are harmed by their business models are made whole again or protected from future harms."