AI is doing work for councils but Government is cautious about technology

Some local authorities are already putting artificial intelligence (AI) and ChatGPT to use as government departments take a “cautious but optimistic” approach to the technology. The Ministry of Business, Innovation and Employment has blocked the use of AI to “protect” its data and systems.
“AI has the potential to modernise public services in New Zealand, benefiting everyone,” a government spokesperson said.
“Ultimately, each agency decides how to use digital technologies like AI based on their specific context and the needs of their customers.
“These agencies are interested in using AI to enhance public services, and we provide guidance, resources, and support for collaboration.”
Agencies were in the early stages of trialling AI and planned to grow their use over the next two years.
“We are seeking to take a balanced and agile approach across the system to enable safe AI innovation, but with flexibility to change as the tech and our context evolves.”
New Zealand was guided by existing laws and regulations such as the Human Rights Act and the Privacy Act.
Ministries not using AI and those that are
The Ministry of Business, Innovation and Employment (MBIE) has blocked staff from using ChatGPT and similar ungoverned, open generative AI tools.
Chief data officer Puawai Wereta said it had a fundamental responsibility as a government agency to protect the “information provided to us by the people we support”.
MBIE would continue to block AI until it could ensure the appropriate security and privacy measures to “protect our data and systems”.
The ministry did use several solutions involving older machine-learning forms of AI where they made sense; for example, patent searching in the Intellectual Property Office.
A Stats NZ spokeswoman said it had no AI initiatives underway.
Kāinga Ora technology director Jan Serfontein said it was not using AI.
“We’re staying current with what’s being done across the sector so we can understand the implications of AI and where its use may warrant further investigation in future”.
A Ministry for Primary Industries spokesperson said it did not use any AI technologies. It was in the process of developing a workforce policy.
Ministry of Social Development information general manager Hannah Morgan said it allowed limited use of generative AI tools like ChatGPT, but these were not used as part of any business processes. It permitted AI tools for ideation only, such as brainstorming or outlining.
This did not include developing content for reports, policy advice or briefings, or any ministry or client information, she said.
There were no confirmed plans for future uses of generative AI.
What some councils are doing with AI
Tauranga City Council general manager corporate services Alastair McNeil said it used ChatGPT and similar AI tools for tasks such as generating text or content for reports, emails, presentations and images.
It had created a Generative AI policy to provide support and guidance.
It had strict checks and balances in place to mitigate potential risks, especially around data protection and privacy.
“As the technology evolves we will continue to investigate Generative AI and how best to incorporate this powerful tool into our daily operations.”
Bay of Plenty Regional Council digital manager Evaleigh Rautjoki-Williams said it was using AI tools. AI had the potential to unlock opportunities for future-ready ways of working.
“Our initial focus is using it for internal efficiencies, such as summarising information and generating ideas. This enables us to learn about the capabilities alongside potential risks and issues before providing public-facing services using AI components.”
Microsoft Copilot was available to all staff and was a relatively low-risk and intuitive tool for Generative AI.
“We have established an AI staff network to build internal expertise which is made up of key individuals from various teams, and includes digital, legal, privacy, and security representatives.”
Western Bay of Plenty District Council chief executive John Holyoake said it had no set goals around using AI.
“We are watching this space with interest and are keen to be a ‘fast follower’, rather than an early adopter.”
AI could potentially create greater efficiencies but there were privacy concerns.
Rotorua Lakes Council chief executive Andrew Moraes said AI and GenAI had the potential to enhance services and functions and create efficiencies “but the risks, as well as the benefits, need to be well understood and managed”.
In March it finalised a policy to guide the use of GenAI by council staff, with a preference for council-managed GenAI services.
It was looking at AI initiatives such as automated meeting minute transcription and voice-activated digital assistants similar to Siri and Alexa.
Slow adoption of AI a disadvantage
University of Waikato Artificial Intelligence Institute director Professor Albert Bifet said the government could be disadvantaged if councils were quicker to adopt AI.
“Councils often have more flexibility to try new technologies and can implement them quickly.”
The government could miss out on improving efficiency and services.
“Slow adoption might lead to inconsistencies in AI use across different levels of government, causing inefficiencies and unequal service quality. It’s important for the government to adopt AI proactively to ensure consistent and high-quality public services.”
AI would be important in the future workplace and could make public services more efficient, simplify administrative tasks, and improve decision-making through data analysis.
He acknowledged privacy concerns were valid since the government held a lot of personal information about people.
“The fear that AI could control the government is understandable, but AI itself doesn’t have the ability to control anything. It is a tool created and controlled by humans. However, if not managed properly, AI systems could potentially make decisions that significantly impact people’s lives.”
“This is why it’s crucial to have strong regulations, transparency, and oversight in place.”
Privacy and human rights
The Privacy Act protects people’s personal information and applies to the use of AI.
A Privacy Commission spokesperson said while the technologies brought significant benefits to individuals and agencies, they also introduced novel privacy risks that needed to be managed. Examples included the leaking of personal information in AI outputs; impersonation and fraud; non-consensual fake images, audio and video; and automated decisions based on incorrect information or biased processing.
It had developed guidance to help ensure agencies using AI tools met their obligations.
“We also need to ensure the Privacy Act can keep up with AI uptake over time”.
A Human Rights Commission spokesperson said in practice it could be hard for individuals to know whether an AI-based system had discriminated against them.
“New Zealand is yet to implement specific legislation that directly addresses the particular privacy and human rights challenges emerging from AI and sophisticated algorithmic technology.”
The Commission supported calls for legislation relating to AI in Aotearoa, to ensure that people’s right to privacy and rights to equality and non-discrimination could be adequately protected.
Such legislation should be grounded in te Tiriti o Waitangi, they said.
AI in other countries
- Australia has released AI ethics principles and tactical guidance for its public service.
- Canada and the European Union are taking a risk-based approach.
- The United Kingdom is releasing principles-based advice to civil servants on the use of GenAI, investing in AI research hubs, and training regulators in different sectors to address the risks while harnessing the opportunities.