Ask how to use risk-modelling system, not why to trash it.
Criticism of the Government's white paper on child abuse and its reliance on a risk-modelling tool prepared by the University of Auckland was predictable. However, it is important to see that concerns about the implementation strategies set out in the white paper - making risk scores available to "professionals across the sector", with the attendant security risks, for instance - may not be good reasons for rejecting the risk-modelling tool itself.
The attraction of risk modelling turns on predictive power and the opportunity it provides for early and targeted intervention. That promise must not be lightly put aside. The tool uses a de-identified data set linking administrative records from the Social Welfare and Cyfs systems. The resulting focus on children whose families have received benefits raises important ethical issues, touched on below. The basic justification for that focus, however, is found in records showing that 11,878 children (5.4 per cent) born between 2003 and 2006 were maltreated by age 5, and that 9816 of them (83 per cent) had had a benefit spell before turning 2 and would therefore have received a risk score under the Auckland tool.
Patrick Kelly, head of New Zealand's main child abuse unit and a member of a 2009 expert panel on child abuse, points out that existing approaches already select families for targeted intervention on the basis of a short list of known risk factors, and asks, "why not just implement that?" The answer lies in the Auckland tool's promise of greater predictive power. If that promise is well founded (and evidence that it is can be found on the MSD website), it must not be ignored.
Some critics object that automated predictive risk modelling will lead social workers to trust computers rather than their own judgment. But the tool does not purport to replace social workers or to shift them off the front line. It may have the opposite effect. Many risk-assessment tools rely on social workers correctly identifying and reporting risk factors. Such "operator-driven" tools are significantly threatened by what we might crudely call data-entry problems. Better, one might think, to leave the complex and time-consuming tasks of data collection and validation to an automated system and its designers (properly informed by social workers), freeing frontline staff to exercise experience and judgment in deciding how best to respond to the validated risk assessments the tool delivers.