But sentencing is a task AI may be able to perform instead. After all, AI systems are already used to predict some criminal behaviour, such as financial fraud. Before considering the role of AI in the courtroom, then, we need a clear understanding of what it actually is.
AI simply refers to a machine behaving in a way that humans identify as “intelligent”. Most modern AI is machine learning, where a computer algorithm learns the patterns within a set of data. For example, a machine learning algorithm could learn the patterns in a database of houses on Trade Me in order to predict house prices.
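To make that concrete, here is a minimal sketch of such a model in Python, using the scikit-learn library. The houses and prices below are invented for illustration; they are not real Trade Me listings.

```python
from sklearn.linear_model import LinearRegression

# Invented listings: [floor area in m2, number of bedrooms] and price in NZD
houses = [[80, 2], [120, 3], [150, 4], [95, 3], [200, 5]]
prices = [550_000, 750_000, 900_000, 650_000, 1_200_000]

model = LinearRegression()
model.fit(houses, prices)  # learn the pattern linking features to price

# Predict the price of an unseen 110 m2, three-bedroom house
print(model.predict([[110, 3]]))
```

The algorithm is never told what makes a house expensive; it infers that from the examples, which is the essence of machine learning.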
So, could AI sentencing be a feasible option in New Zealand’s courts? What might it look like? Or could AI at least assist judges in the sentencing process?
Inconsistency in the courts
In New Zealand, judges must weigh a number of mitigating and aggravating variables before deciding on a sentence for a convicted criminal. Each judge uses their discretion in deciding the outcome of a case. At the same time, judges must strive for consistency across the judicial system.
Consistency means similar offences should receive similar penalties in different courts with different judges. To enhance consistency, the higher-level courts have prepared guideline judgments that judges refer to during sentencing.
But discretion works the opposite way. In our current system, judges should be free to individualise the sentence after a complete evaluation of the case.
Judges need to factor in individual circumstances, societal norms, the human condition and the sense of justice. They can use their experience and sense of humanity to make moral decisions, and sometimes even to change the law.
In short, there is a “desirable inconsistency” that we cannot currently expect from a computer. But there may also be some “undesirable inconsistency”, such as bias or even extraneous factors like hunger. Research has shown that in some Israeli courts, the percentage of favourable decisions drops to nearly zero before lunch.
The potential role of AI
This is where AI may have a role in sentencing decisions. We set up a machine learning algorithm and trained it using 302 New Zealand assault cases, with sentences between zero and 14.5 years of imprisonment.
Based on this data, the algorithm built a model that can take a new case and predict the length of a sentence.
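We won't reproduce our full pipeline here, but a common way to build this kind of text-based predictor is to turn each case's written summary into word and phrase features and fit a linear model against the sentence lengths. The sketch below shows that general approach, not our exact method; the two example cases are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the written summaries of the 302 assault cases
case_texts = [
    "The offender struck the victim repeatedly and used a firearm ...",
    "A young person assaulted a taxi driver during a dispute ...",
]
sentences_months = [48, 18]  # the sentences actually imposed, in months

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # single words and two-word phrases
    Ridge(),  # a linear model, so each phrase gets an interpretable weight
)
model.fit(case_texts, sentences_months)  # learn phrase-to-sentence patterns
```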
The beauty of the approach we used is that the model can explain its predictions: it quantifies the phrases it weighs most heavily when calculating a sentence.
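With a linear model like the one in the sketch above, one simple way to do this is to read off the weight attached to each phrase; our actual explanation method may differ in detail.

```python
import numpy as np

# Pull the fitted pieces back out of the pipeline
vectoriser = model.named_steps["tfidfvectorizer"]
ridge = model.named_steps["ridge"]

phrases = vectoriser.get_feature_names_out()
weights = ridge.coef_  # positive weights push the predicted sentence up

order = np.argsort(weights)
print("towards longer sentences:", phrases[order[-5:]])
print("towards shorter sentences:", phrases[order[:5]])
```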
To evaluate our model, we fed it 50 new sentencing scenarios it had never seen before. We then compared the model’s predicted sentence length with the actual sentences.
The relatively simple model worked quite well. It predicted sentences with an average error of just under 12 months.
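Continuing the sketch above, measuring that average error takes only a few lines; the held-out cases here are again invented placeholders standing in for our 50 test scenarios.

```python
from sklearn.metrics import mean_absolute_error

# Invented held-out cases the model has never seen
held_out_texts = [
    "The offender threatened the victim with a weapon ...",
    "An assault arose from an argument that began on Facebook ...",
]
held_out_months = [30, 12]  # the sentences actually imposed

predicted = model.predict(held_out_texts)  # model fitted in the earlier sketch
print(f"average error: {mean_absolute_error(held_out_months, predicted):.1f} months")
```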
The model learned that words or phrases such as “sexual”, “young person”, “taxi” and “firearm” correlated with longer sentences, while words such as “professional”, “career”, “fire” and “Facebook” correlated with shorter sentences.
Many of the phrases are easily explainable – “sexual” or “firearm” may be linked with aggravated forms of assault. But why does “young person” weigh towards more time in prison, and “Facebook” towards less? And how does an average error of 12 months compare with the variation among human judges?
The answers to those questions are possible avenues for future research. But the model is already a useful tool for understanding sentencing better.
The future of AI in courtrooms
Clearly, we cannot test our model by employing it in the courtroom to deliver sentences. But it gives us an insight into our sentencing process.
Judges could use this type of modelling to understand their sentencing decisions, and perhaps remove extraneous factors. AI models could also be used by lawyers, providers of legal technology and researchers to analyse the sentencing and justice system.
Maybe the AI model could also help create some transparency around controversial decisions, for example by showing the public that a seemingly surprising sentence, such as a rapist receiving home detention, may not be particularly unusual.
Most would argue that the final assessments and decisions on justice and punishment should be made by human experts. But the lesson from our experiment is that we should not be afraid of the words “algorithm” or “AI” in the context of our judicial system. Instead, we should be discussing the real (and not imagined) implications of using those tools for the common good.
· Andrew Lensen is a lecturer in Artificial Intelligence at Te Herenga Waka - Victoria University of Wellington
· Marcin Betkier is a lecturer in Law at Te Herenga Waka - Victoria University of Wellington