The ministry was worried staff could put sensitive information into the technology which could later resurface. “MBIE has a fundamental responsibility as a government agency to protect the information provided to us by the people we support,” MBIE chief information security officer David Habershon told the Herald.
The ban was billed as a temporary pause while guidelines were developed. And it wasn’t out of line with what was happening in parts of the private sector, here and offshore. A number of large banks and technology companies, including Amazon, Apple and Samsung, put ChatGPT bans in place around the same time.
But five months on, it’s still in place.
“MBIE continues to block access to the ChatGPT website as well as other similar sites until we can ensure we have the appropriate security and privacy measures in place to protect our data and systems,” Habershon told the Herald this week.
“MBIE cannot give a timeframe at this stage for when the appropriate security and privacy measures will be in place.”
Already public service advice, of sorts
“MBIE’s position is a little surprising given the publication only a couple of months ago of tactical guidance on the appropriate use of GenAI by the public service, as well as generative AI guidance from the Privacy Commissioner,” said Simply Privacy principal Frith Tweedie, who specialises in AI governance and privacy.
“Hopefully this ongoing caution indicates MBIE is working on its own internal AI governance settings to ensure staff use ChatGPT and other forms of AI responsibly.”
“I agree that it’s important that government agencies are especially conservative in how they look at privacy implications of generative AI use. They face unique risks with the sensitive data they hold,” said Matt Ensor, who chairs the AI Forum NZ’s working group on generative AI.
“But as generative AI’s capabilities and customer expectations rise, every day without AI adoption leads to more missed opportunities for improving sector performance.”
At least three different approaches
No AI laws have been passed in NZ, in contrast to the AI Act due to come into force in the European Union by the end of this year, or President Joe Biden’s executive orders on AI protections. And none is on the immediate horizon. Neither major party had an AI policy going into the election.
In the absence of legislation, government departments have taken at least three different approaches. Super ministry MBIE has its ban; some have adopted the public service guidelines issued by the DIA, GCSB and Stats NZ; while others, like the Parliamentary Service and Te Whatu Ora Health NZ, have adopted their own protocols.
‘We strongly recommend ...’
In July, the offices of the Government’s chief digital officer Paul James (who sits with the Department of Internal Affairs), the chief data steward Mark Sowden (who falls under Statistics NZ), and chief information security officer Andrew Hampton (also director-general of the GCSB) teamed up to create a document billed as “Joint guidance ... on responsible and trustworthy use of Generative Artificial Intelligence across the New Zealand Public Service”.
The document (online here) says AI “could offer many benefits for the public service” which could include productivity and efficiency gains, “innovation from access to ‘big-data’ based insight and improved policy development” through “fuller data insight” and “improved service design and delivery through targeting and personalisation”.
The guidance also says: “We strongly recommend that you don’t use GenAI tools for data classified at ‘sensitive’ or above. The risks for security and potential impacts if sensitive or above datasets were to be compromised could be catastrophic for our society, economy and public services.”
And: “We also strongly recommend that you avoid inputting personal data, including client data, into GenAI tools, and exercise extreme caution if personal information is involved.”
It also recommends public servants “avoid inputting information into GenAI tools that would be withheld under an Official Information Act request” or using GenAI as a “shadow” helpdesk for IT problems.
MPs and parliamentary staff
Those in the Beehive are allowed to use ChatGPT but are asked to “refrain” from various activities.
Parliamentary Service chief executive Rafael Gonzalez-Montero said GenAI guidance for MPs and staff included:
- Refraining from using their @parliament.govt.nz address within these tools
- Refraining from inputting any private or confidential information, and not using it to write code for Parliament’s digital environments – as well as being careful with any information or detail put in
- Verifying the output with other reliable sources, especially if it is being used for decision-making or communication with the public
- Clearly marking any outputs or results that have been AI-generated.
“Members and staff are encouraged to approach our cyber-security team for any advice relating to ChatGPT and other generative AI systems.”
Hospital staff and clinicians
Te Whatu Ora Health New Zealand, which took over the running of our hospitals and other public health services after the shift from 20 district health boards, says its AI policy is a work in progress.
“The message to our kaimahi [workers] is to only use ChatGPT and other generative AI systems with caution,” the group manager of emerging health technology, Jon Herries, said.
“Our data is taonga – so we encourage them to treasure it and protect it. That means don’t feed private or unpublished information to public artificial intelligence tools like ChatGPT. And definitely don’t trust them for clinical decisions, patient care or advice, or documentation.
“We recognise the speed with which AI is moving and the potential for its use in the health sector, both good and bad. Te Whatu Ora is still identifying how to best use these platforms safely.
“To this end, we are working with our AI expert advisory group to understand how we best manage access to AI in ways that align with our strategic intent and values, as well as evolving legal and community standards.”
Te Whatu Ora’s full policy is online here.
Queensland models a safe-adoption strategy
The AI Forum NZ’s Ensor, whose day job is running Beca spin-out FranklyAI, has some advice for our civil servants and lawmakers about how to proceed from here. And he says we only have to look across the Tasman for a safe-adoption model.
“There’s a big difference between generative AI tools that have access to organisational information, and those that do not,” he said.
“The most successful moves I am seeing are introducing the latter. This enables staff to experiment, learn and get comfortable with how generative AI works without fear of inadvertently sharing information or breaching policy.”
The Queensland state government is a good example, having recently rolled out Qchat, which gives its teams access to generative AI in a safe way, Ensor said.
Testing policy, adding diversity
“For Government, my particular interest is in a use case of generative AI that enables any policy to be tested against the perspective of hundreds of different community personas,” Ensor said.
“It provides diversity of thought far outside the lived experience of policy analysts, and brings voices to the table that otherwise would be left behind. While it might not replace consultation, it provides a tool for Government to better understand how to tweak services and initiatives to more quickly achieve better outcomes.
“Departments’ uptake of AI could be more important than we think, as politicians too could now use generative AI to test the alignment of agencies’ strategies against their own policy objectives.”
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.