Only 12% of public servants surveyed said they had received training in the use of AI at work. Image / Getty Creative
There are fears AI is being widely used by public servants who have not been trained in the new technology.
A Public Service Association survey of 900 members found:
55% said AI was in use in their workplace
94% said their use of AI systems was self-initiated, as opposed to being required as part of a work process
58% had not been told what information cannot be uploaded to public AI platforms, such as ChatGPT
Only 12% of respondents using AI said they had received training on the use of AI at work
The PSA said the survey also found a “low level” of concern about jobs being replaced by AI.
But national secretary Kerry Davies said these figures indicated there was a clear and pressing need for employers to engage with workers on using AI in a planned way.
Victoria University AI expert Dr Andrew Lensen said it was striking how much the uptake of AI was organic.
“This is both good and bad. Grassroots use of AI is a really good way to discover how AI could be used most effectively across an organisation.
“But a lack of oversight comes with risks of inappropriate or unethical use.”
Lensen said the survey also highlighted a desperate need for more AI training and literacy.
“High-quality training is crucial for the public sector, where misusing AI, even inadvertently, can lead to biased and unfair outcomes for citizens.
“While the public sector is starting to progress in its development of multiple AI frameworks and guidelines, we really need to see more emphasis on education, too. Even the difference between ‘generative’ and ‘predictive’ AI and the different pitfalls they have is not commonly understood.”
AI governance and privacy expert Frith Tweedie said it was concerning that, of the 12% of public servants who had received artificial intelligence training, only 30% had covered AI and privacy, and only 24% AI ethics and principles.
“Training has to be about more than just getting the most value out of AI — it also has to address Gen AI’s limitations so people understand how to use these tools safely and responsibly.
“If people aren’t made aware of both the opportunities and limitations of Gen AI tools, then there is real scope for harm and other unintended consequences. For example, privacy, security and confidentiality violations, accuracy problems (aka “hallucinations”) and outputs that perpetuate societal biases."
Tweedie — a former technology lawyer who is now a co-director of Simply Privacy and sits on the elected board of the AI Forum NZ — added, “When I do risk assessment work for clients, Gen AI training and guidance is always a key risk mitigant. This needs to be addressed for the public sector as soon as possible”.
Memia director and Fast Forward Aotearoa author Ben Reid said that beyond governance guidelines, all organisations have their work cut out with the practical side of using AI.
“AI continues to advance at breakneck speed internationally — with no signs of slowing down. For example, recent research from Metr found that the duration of complex tasks AI can complete is doubling every seven months. If this trend continues, then by 2026, AI will take a few seconds to complete work which would take a human one hour to achieve — and things will keep on escalating from there.
“Against this exponentially advancing background, it’s no wonder that AI skills training needs aren’t being met by traditional employers.
“By the time anyone’s rolled out a training course, the technology will have moved on,” Reid said.
“In my experience, the only way you learn this stuff is by constantly engaging with the latest releases on a daily basis, and just carving out time to be curious and experiment — which potentially involves higher levels of risk and time than hierarchical public sector organisations are comfortable with or have the capabilities to govern.”
Internal Affairs: Training on the way
The Government says more training and an advisory panel are on the way.
The Department of Internal Affairs is setting the rules around how the public sector uses artificial intelligence.
A spokesperson for Internal Affairs head Paul James — who doubles as the Government chief digital officer — said, “An AI training approach for the Public Service is in development with the GCDO and Public Service Commission, in addition to the AI training agencies provide and will provide as part of their own AI programmes”.
Other GCDO-led initiatives under way “to further support and upskill public service agencies and their employees” include an AI community of practice for digital practitioners in the public service, the development of a toolkit of additional resources, the establishment of an Expert Advisory Panel for Public Service use of AI, and the development of a Public Service AI Assurance model.
Currently, the public service as a whole operates under broad-stroke guidelines, put together, in part, by Statistics NZ, while those who work in the Beehive are kept on a tighter leash by the Parliamentary Service, which in February banned the made-in-China DeepSeek AI.
More PSA survey findings
7% said they use AI or automated decision-making to inform decisions
62% said use of AI might undermine public trust in Government
8% said there was a process in place in their workplace to raise issues with suspected AI decision-making failures
43% said they did not feel confident raising concerns about issues with AI systems if they encountered them in their work
And, unlike across the Tasman, there has yet to be any new funding to boost AI uptake in the public or private sectors, although some existing resources have been reallocated to support the new technology.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.