The big social media companies have signed up to the code. Photo / AP
ANALYSIS
An online safety code initially developed by Netsafe, NZTech and the major social media platforms is under fire for being toothless and vague.
Amnesty International Aotearoa New Zealand, Inclusive Aotearoa Collective Tāhono (IACT), and the InternetNZ-funded Tohatoha have banded together to promote a “fix the code” petition.
Netsafe - the “approved agency” under the Harmful Digital Communications Act 2015, making it the first port of call for anyone who wants to complain about harmful social media content or the likes of online bullying - started working on the safety code in 2021.
It roped in Facebook and Instagram owner Meta, Google-owned YouTube, Amazon-owned Twitch, plus TikTok, Twitter and others for their input.
The result was the Aotearoa New Zealand Code of Practice for Online Safety and Harms, which emerged last year.
The groups behind the petition say “the code’s self-regulation is not credible” and that it “fails to ensure independent oversight of the signatories” (Meta, TikTok et al).
They say the code “invokes Te Ao Māori, but the content is generic”. The petition says it lacks detail specific to Aotearoa - indeed, it is light on detail full stop. While its descriptions of harmful content and desired outcomes are framed in terms everyone can readily agree on, it lacks specifics on the likes of how soon social media platforms should act on harmful content, and what happens if they fail to adequately address a complaint.
And the petitioners maintain the signatories’ community engagement has “not been effective nor credible”.
Industry group NZTech was named the administrator for the safety code, and in turn appointed Carrie Stoddart-Smith on contract as the interim director for the code.
NZTech chief executive Graeme Muller told the Herald this week that independent oversight was on the way - albeit outside the target of introducing it within six months of the code’s signing on July 25 last year.
“We are focused on establishing a diverse, credible and effective oversight committee to support the continued implementation of the code and to ensure that industry builds cohesion around addressing online safety and harm concerns, and that internet users in Aotearoa New Zealand have a positive experience on signatory services,” Muller said.
When?
“The oversight committee structure and governance process has been designed and we are aiming to complete the process of filling the seats on the committee by the end of April.”
Asked for specific examples of social media firms taking action against harmful content or behaviour under the code, Muller pointed to the “baseline” reports that the social media companies published under the code late last year. The reports list the firms’ own content policies and education efforts, and several reiterate the various provisions of the code, but are very light on local enforcement - particularly in terms of anything directly tied to the code.
Twitch only made one reference to NZ, and that was in passing. Ironically, Twitter - where new owner Elon Musk has so enthusiastically swung the axe - provided some of the only local stats on blocked accounts. (Muller says NZTech has been able to maintain “excellent lines of communications” with Twitter and its peers through various cutbacks). Meta also provided dedicated NZ stats.
Muller says more detail is on the way, and that everything is going to plan.
“At this stage the signatories of the code have submitted their baseline reports documenting what they are doing to protect New Zealanders’ online safety. Later in the year when they submit their first annual report we will have visibility of any work they have completed in response to the code,” the NZTech boss told the Herald.
Earlier, Netsafe chief executive Brent Carey said the code, modelled on similar efforts in Europe and elsewhere, was only ever intended as voluntary self-regulation. (Carey - who took the reins part-way through the process - did acknowledge it was a misstep not to involve affected community groups in the first draft, but said there was public consultation before the “Version 1.0” release.)
Muller picked up on that theme this week, saying the code was an “industry-led” effort to foster initiatives to prevent or reduce harmful content. “This could include, for example, identifying where company product fixes are needed, recommending technical changes to relevant company procedures, or investing in independent research on emerging issues or technologies that could impact online harm and safety protocols in a rapidly changing online environment.” The code was meant as a complement to the legal system - which could bare its teeth where necessary - not a replacement.
“From our perspective, the code does not - and indeed cannot - exempt signatories from meeting their obligations under new or existing laws and regulations, and the signatories already cooperate with authorities in meeting their obligations,” Muller said.
“The code actually recognises that there are a range of laws or regulatory arrangements that already exist in Aotearoa New Zealand to address harmful content online, which may overlap with some of the commitments covered by the code.
“However, many of the issues raised by concerned stakeholders are outside the scope of the code and we consider these would be better addressed to the Government as they undertake their content regulatory review of online harm.”
The Government’s Content Regulatory Review rivals the online safety code for a process that has dragged on for years.
The content review - designed to update our laws around harmful content for the digital age - was announced in April 2021. A draft is due to go to cabinet in July, if it avoids the policy bonfire that has already consumed a related hate speech bill. A draft would then go to public consultation through to November, with legislation possible in 2024.
Netsafe gets one-off funding boost
When Carey took over Netsafe in mid-2022, he noted that reports of cyberbullying to the agency had increased 25 per cent each year since 2020, while its budget had remained static. Around 95 per cent of Netsafe’s funding comes from the Ministry of Education and the Ministry of Justice, with the balance from private sources.
This week, the Netsafe CEO revealed a funding boost.
“On November 1, Netsafe and the Ministry of Education finalised an agreement to strengthen cyber security and digital services in kura and schools for $691,135,” Carey said. The contract - which is a one-off - increases Netsafe’s funding for its current financial year, ending June 30, to around $4.5m.
The new funding will go towards measures including classroom-friendly materials, bite-sized interactive modules designed for Year 9-11 students, and the recruitment of 13 young people from across New Zealand to Netsafe’s Youth Action Squad, who are designing a Youth Digital Safety toolkit “by young people, for young people”.
Separately, Carey has launched Netsafe Labs, charged with analysing abusive content on social media, developing a tool to “remove the risk of vicarious trauma affecting those who remove harmful content”, and reviving DDB’s “Re:Scam” project - an AI-powered chatbot designed to reply to scammers, eating up their time by leading them down the garden path. Under an earlier iteration, launched in 2017, a victim could forward a scammer’s message to Re:Scam, which would then ask the scammer a never-ending series of questions.