ACC has been using a secret model to predict which claimants pose a higher risk of staying on its books for longer.
Public insurer ACC is using a secret computer model to predict how long clients will be on its books, then sorting and targeting those it considers a risk.
The software was built using the private information of thousands of ACC clients without their knowledge - and possibly without their consent.
It has had no public scrutiny or oversight from agencies such as the Privacy Commissioner, despite having potential mass privacy implications.
Critics say the lack of transparency is unacceptable, particularly considering the corporation's poor track record with sensitive information.
ACC was responsible for one of New Zealand's worst privacy breaches, sending details of 9000 claims to a person who should not have received them in 2012.
Experts are also concerned the model could be discriminatory - for example treating similar injury claims differently depending on a person's ethnicity or gender.
"We need to ask ourselves if this is something that is acceptable to us as New Zealanders and we need to ask how it was allowed to be developed," said specialist ACC lawyer Warren Forster, from advocacy group Acclaim Otago.
"We must have transparency and system oversight. People have a right to know what is being done with their data by the Government."
Predicting risk through the use of computer algorithms is an approach increasingly used by public agencies around the world, particularly those that deal with large amounts of data.
New Zealand's most notable case so far has been at the Ministry of Social Development, which is creating a tool to predict which children are at risk of abuse. That project has been subject to intense public scrutiny.
Documents show ACC has used risk modelling since at least 2004. It told the Herald its current version was built by a contracted consultant in 2014, using historic claims data.
It says the model was designed to improve customer experience by allowing it to proactively contact clients who need more help, and assign claims more quickly.
The algorithm uses a variety of client and claim information - such as a person's age or the site of their injury - to make its predictions about which clients need more help, what type of case manager they should have, and how long they are likely to take to recover.
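ACC has not published the model, so its exact form is unknown. As a purely illustrative sketch, a claim-duration predictor of the kind described might look something like the following; the fields (age, injury site, claim type), the training data and the choice of a standard regression library are all assumptions, not details released by ACC.

# Illustrative sketch only - ACC's actual model, features and method are not public.
# Assumes hypothetical claim fields and a generic regressor predicting expected
# claim duration in days from historic claims data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingRegressor

# Made-up historic claims: the outcome column is how long each claim actually lasted.
claims = pd.DataFrame({
    "age": [34, 58, 27],
    "injury_site": ["knee", "lower_back", "wrist"],
    "claim_type": ["sprain", "fracture", "sprain"],
    "duration_days": [42, 180, 21],
})

features = ["age", "injury_site", "claim_type"]
pipeline = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["injury_site", "claim_type"])],
        remainder="passthrough")),
    ("model", GradientBoostingRegressor()),
])
pipeline.fit(claims[features], claims["duration_days"])

# A new claim is scored for its expected duration; staff would see an estimate like this.
new_claim = pd.DataFrame([{"age": 45, "injury_site": "knee", "claim_type": "sprain"}])
print(pipeline.predict(new_claim))

In a system like this, the prediction is only as good as the historic data it was trained on, which is why experts stress testing for accuracy and bias.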
Exactly how it works is unclear - despite new best practice data guidelines advocating transparency - because the model is not public. ACC would not release details to the Herald.
It would also not answer questions about whether the model has been checked for bias.
Overseas, similar systems have been found to contain heavy racial bias - for example a sentencing tool in the United States that was biased against black people.
They can also have issues with accuracy - the New Zealand tool built to predict children at risk of abuse, for example, was wrong 50 per cent of the time - so constant revision is now considered best practice.
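ACC would not say whether its model has been checked in this way. One common check - sketched below with entirely made-up figures and a hypothetical demographic label, not ACC data or ACC's method - is to compare prediction error across groups; a large gap would suggest similar claims are being treated differently depending on who lodges them.

# Illustrative sketch only - compares prediction error by group using made-up
# predicted and actual claim durations (in days).
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "B", "B"],   # hypothetical demographic label
    "predicted": [40, 60, 40, 60],       # model's estimated duration
    "actual":    [45, 55, 80, 120],      # what actually happened
})
results["abs_error"] = (results["predicted"] - results["actual"]).abs()

# Mean absolute error per group; a large gap flags possible bias needing revision.
print(results.groupby("group")["abs_error"].mean())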
It is unclear what kind of revision ACC completed as it did not answer those questions either.
Additionally, the corporation was unable to provide any information about what ethical guidelines were provided to staff using the model's predictions about their clients.
ACC said staff could see an estimate of the client's likely recovery duration, and used that to ensure "proactive management" and introduce "interventions" where recoveries were not progressing as expected.
Forster said that was a huge concern, because ACC staff and managers were driven by exit targets - the time in which they could get their clients off the books.
"Targets combined with predictions could drive unlawful and unethical behaviour, such as exiting people from the scheme when they still need ACC's help," Forster said.
He did not believe ACC had clients' consent to use their data in the model - unless there were consent forms he had not seen - and said that was unfair.
Professor Rhema Vaithianathan, co-director of the Centre for Social Data Analytics at AUT, said it wasn't a question of public insurers doing a "bad thing" by using algorithms, but whether they were fair and transparent in the way they did it.
"Fairness means treating like with like, so two people in the same situation treated the same, and two people in different situations being treated appropriately," she said.
Transparency required that people should know to what extent the algorithm was making a decision, and where a human was involved.
"That's crucial with public agencies, where some of these decisions are crucial to people's lives," Vaithianathan said.
Privacy Commissioner John Edwards said he didn't know enough about the model to comment, but that he would be concerned by a model that had not been sufficiently tested for accuracy and bias, or that was relied on too heavily as a substitute for human decision-making.
ACC Minister Michael Woodhouse did not respond to requests for comment.
What the ACC model does
Uses computer modelling to predict how long claimants are likely to be on its books, and which ones will need extra help, so managers can closely monitor their cases
What experts say it should have
Checks and balances for bias, ethical and legal oversight, constant revision, public transparency, consent to use client data
What it doesn't have
It's not clear which checks have been completed, as the model and its workings are not public. The Privacy Commissioner was unaware of its creation, and it's unknown whether clients are aware of, or have consented to, their data being used in the system when they lodge a claim.