Who's Zooming who? Scammers are using AI for everything from writing emails with better English to scraping company videos to generate "deepfake" video versions of staff. Some experts see a role for regulation in stamping out scams and helping to boost general adoption of the technology. Photo / Getty Images
Research commissioned by InternetNZ revealed 72 per cent of New Zealanders are concerned AI will be used for malicious purposes and without regulation.
Malicious use is already happening. In November, Zuru boss Nick Mowbray sounded the alarm after his chief financial officer received a Microsoft Teams video call from a realistic-looking Mowbray - down to clothes he regularly wore - asking for money to be transferred. In fact, it was a deepfake video (with the fake “Mowbray” claiming audio problems and communicating via text - which was ropey enough to raise a red flag with the CFO).
On the regulatory front, the EU’s sweeping Artificial Intelligence Act - which calls for risk-based assessments of new AI technologies - came into force in December. In the United States, President Joe Biden issued a lengthy executive order last October to establish new standards to maximise the benefits of AI while safeguarding against its risks - including an order for Big Tech firms to share the results of their AI safety tests, directions for various privacy protections, and an order for federal agencies to create digital “watermarks” to guarantee the authenticity of government or private sector content. Australia has started public consultation on how AI should be regulated.
The Herald asked Technology Minister and Attorney-General Judith Collins if any AI regulation was in the works here.
“This Government is committed to getting New Zealand up to speed on AI. We have a cross-party AI caucus, which is due to meet soon. Its first step will be providing feedback on the AI framework we are developing to support responsible and trustworthy AI innovation in government, which the public should expect to hear more on in the coming months,” Collins said.
“There will be no extra regulation at this stage.”
Don’t go it alone
“It’s natural for New Zealanders to have concerns about such a powerful and fast-moving technology,” Brainbox Institute director Tom Barraclough said.
“Regulatory settings we implement now will have a substantial influence on whether AI does more good than harm.
“By working with close partners like the EU and engaging in international forums, New Zealand can amplify its overall influence and learn from implementation elsewhere with modifications for our domestic priorities.
“If government agencies are clear and transparent about what steps they may be taking or planning to take, then that helps everyone respond more effectively.”
No AI laws in NZ
“There are no AI-specific laws in NZ so far,” the Prime Minister’s chief science adviser Professor Juliet Gerrard said in July last year.
“The only AI-specific policy is the Algorithm Charter, which most Government agencies have signed up to.”
While most Government agencies are signatories to the charter, their day-to-day approaches vary widely - from the Ministry of Business, Innovation and Employment’s (MBIE) ban on staff using ChatGPT and similar tools, to agencies actively experimenting with them under a variety of in-house guidelines.
Use the laws we already have, in most instances
Barraclough was monitoring AI developments long before the technology hit the mainstream.
In 2019, he co-authored a study for The New Zealand Law Foundation on deepfakes, which highlighted that NZ already has multiple laws and guidelines that cater to the risk - primarily the Crimes Act, which covers deception used for gain; the Harmful Digital Communications Act, which covers deception used for malice; and the Privacy Act, because “the wrong personal information is still personal information”.
He warned against a knee-jerk response that would put human rights and free speech at risk.
Deepfake porn legislation needed
An exception is deepfake porn, which recently hit global headlines when Taylor Swift’s image was co-opted for an X-rated video, and which has also been used by high school students against their peers as AI makes deepfakes ever easier to create.
“These extreme harms require a carefully designed, fit-for-purpose legal response – which New Zealand currently lacks. This response must involve the explicit criminalisation of non-consensual pornographic deepfakes,” Brainbox Institute fellow Bella Stuart wrote in February.
“Unfortunately, while New Zealand has several offences targeting image and communication-based harms, they all fail to adequately capture this emergent phenomenon.
“We cannot simply wait and see whether a judge is willing to apply these inadequate existing offences in ways which are both unnatural, and inconsistent with Parliamentary intentions.
“To vindicate victims’ interests and deter creation of this harmful content, the distribution of non-consensual deepfake pornography must be explicitly and comprehensively criminalised through a for-purpose offence.”
Toothless
Earlier, chief science adviser Gerrard, with her more general overview, noted that although the Privacy Act 2020′s 13 principles have provisions that cover generative AI, “these may be difficult to enforce” because fines for violations of the Act are capped at $10,000 - compared to EU penalties of up to €20m or 4 per cent of an organisation’s revenue. (The previous government snubbed a submission by then-Privacy Commissioner John Edwards to allow $1m fines.)
Terminator
Gerrard also canvassed AI fears at the apocalyptic end of the spectrum.
“There are also gaps created by new technologies. For example, lethal autonomous weapons are not addressed by any current New Zealand law,” she said.
Australia 12th, NZ 42nd
InternetNZ chief executive Vivien Maidaborn said the internet evolves at a rate that can be hard to keep up with, and it will keep presenting us with new challenges, like AI.
“We need our Government to be thinking about what guidelines, policies and laws are required to keep us on the cutting edge.”
“My big concern is that we won’t identify how fundamentally this will change our society and get ahead of it. We call on the Government to start the process of developing guidelines, policies and laws.”
Microsoft, Amazon, Google and other tech firms in the AI space have told the Herald they favour regulation to complement their own “guard rails”.
Maidaborn said the 2023 Government AI Readiness Index ranked New Zealand 42nd in the world, well behind Australia in 12th - a position buoyed by the creation of the A$1 billion Critical Technologies Fund to boost the adoption of artificial intelligence and other cutting-edge tech.
In discussions
MBIE’s 53-page BIM (Briefing to the Incoming Minister) for Collins made two references to AI.
One said, “MBIE is in discussion with United States counterparts to develop strategic co-operation regarding Antarctica, artificial intelligence and quantum technologies. These discussions are linked with the United States-New Zealand Strategic Technology Dialogue centred on national security and defence R&D.”
The other, which followed a paragraph on the liberalisation of genetic engineering rules, where MBIE saw enabling legislation leading to “a high growth, high productivity sector”, said: “Other sectors where we see the potential for similar enabling regulatory regimes and focused innovation policies include continuing work with the aerospace sector and the medical technology sector. Artificial intelligence is another area in which many other advanced economies are moving to develop governance and regulatory regimes, to assure consumers and the public that the technology is being used responsibly, and to provide visible permission space and general guidance to companies to expand and develop new technologies and applications.”
“It usually takes governments a while on this type of thing,” Mowbray told the Herald this morning.
In the meantime, his firm had taken several steps against future deepfake scams - from its CFO giving all staff a candid account of the bogus video call, to new security measures.
The Brainbox Institute says there are several “achievable tweaks”, including security questions for staff to authenticate themselves.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.