Prime Minister Jacinda Ardern after attending the big Friday prayer meeting in Hagley Park a week after the Christchurch mosque attacks. Photo / File
Dame Jacinda Ardern has pitched the Christchurch Call as a model for managing the risks posed by artificial intelligence.
In an opinion piece for the Washington Post, Ardern says managing artificial intelligence requires a collaborative approach of the sort developed after the 2019 mosque massacre.
“There is no shortage of calls for AI guardrails - but no one seems able to tell us exactly how to build them,” she said.
Ardern described artificial intelligence as the issue which had disrupted her plan to “step back from the fray”.
She said it offered “huge benefits for humanity” but also carried risks.
“I want to know the destination for these tools and what they will mean for democracy and humanity.”
Ardern said answers to those questions were not currently available and were hard to find while a fragmented landscape - academia, government, industry and others - each sought its own solutions to managing artificial intelligence.
The past month had seen “blueprints for governing AI” coming from “every corner of Big Tech”.
Ardern said she did not have an answer to that question but believed the sort of oversight that would find it should be modelled on the response to the terror attack in Christchurch that killed 51 people.
In that instance, she said, “exploitation of technology” allowed the attacker to livestream his attack for 17 minutes, with images proliferating across the world.
“Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.”
Ardern said French President Emmanuel Macron was also “grappling with the connection between violent extremism and technology” and had agreed to join in “crafting a call to action.
“We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.”
The result was the Christchurch Call to Action, which today had a membership of 120 partners, including governments, tech companies and civil society organisations.
Ardern said the “large-scale collaboration” saw leaders meet annually to determine priorities, which were pursued throughout the year by the Call Secretariat of French and New Zealand officials.
“All members are invited to bring their expertise to solve urgent online problems,” she said.
Ardern said the approach had bolstered the ability of governments and communities to respond to attacks like the one in New Zealand. New “crisis-response protocols” saw the 2022 Buffalo attack livestream stopped within two minutes, with footage quickly removed across platforms.
There were new “trust and safety measures” that prevented livestreaming of violent extremist content. The industry-founded Global Internet Forum to Counter Terrorism was strengthened with dedicated funding and staff.
She said it was also dealing with “intransigent problems”, including through a partnership with companies and researchers to better understand algorithms and improve ways to stop online radicalisation to violence.
“In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.”
Ardern said the Christchurch Call had always seen the need to meet the challenges created by artificial intelligence.
“It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI.”
She said the Christchurch Call had led to a collaboration in which governments, researchers and tech companies took responsibility for dealing with violent extremism, with real-world results.
“After this experience, I see collaboration on AI as the only option. The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems.
“There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.”
The model for governance of artificial intelligence “already exists” in the Christchurch Call, she said, “and it works”.