Authorities worldwide are seeing more AI-generated sex abuse imagery, prompting legal adaptations to prosecute offenders effectively.
Men in Auckland, Christchurch and Wellington have recently faced sentencing for illegal computer-generated imagery.
A detective tasked with helping investigate the thousands of child abuse imagery tips New Zealand authorities receive each year says the number of cases involving AI images is on the rise.
Warning: This article contains descriptions of cases involving child sexual abuse material.
The images were not real but nevertheless disturbing.
Among the images created with the help of artificial intelligence (AI) was a sexually explicit scene involving a deepfake of a pre-pubescent Harry Potter character and an unrelated scene involving adult bestiality.
The defendant, who has name suppression, was sentenced to community detention and intensive supervision during an appearance late last year in Auckland District Court. One week earlier, in Wellington District Court, a man was granted a discharge without conviction after telling police he was being blackmailed by a hacker for six AI-generated anime-style images that were on his computer.
And another man was sentenced in Christchurch District Court last week to three and a half years’ imprisonment for making, possessing, distributing and exporting child sexual abuse material – with roughly half of the 1345 objectionable publications in his possession having been computer generated.
Authorities around the world are seeing an influx of such cases as AI-generation programmes become increasingly sophisticated, outputting photorealistic content. It has prompted a scramble among lawmakers in many jurisdictions to create bespoke laws that allow them to prosecute holders of such imagery as easily as those caught with images depicting real victims.
“The rise of artificial intelligence is altering the landscape of child harm,” the United Nations Interregional Crime and Justice Research Institute said in a 52-page report released in September. “One of the most pressing dangers facing the global child protection ecosystem is AI’s effects on child sexual abuse material.”
In New Zealand, child exploitation material is investigated by three agencies: New Zealand Police, Customs and the Department of Internal Affairs.
Detective Senior Sergeant Kepal Richards – who works with the Online Child Exploitation Across New Zealand team, dubbed Oceanz – says prosecutors in New Zealand are able to work with the existing law.
The Films, Videos, and Publications Classification Act forbids any publication that “promotes or supports” child sexual exploitation, meaning a specific victim does not need to be identified. Violating it can result in a sentence of up to 10 years' imprisonment and a fine of up to $50,000.
The law “puts New Zealand in a better place than some of our international partners”, Richards told the Herald.
According to the United Nations report, about 18% of countries around the world are trying to catch up to the technology – particularly over the past two years – by considering legislation to address such images directly.
The images include some generated entirely from text descriptions, others created by feeding in photos of real people to place them in sexualised scenarios, and videos that have been described as “rapidly becoming indistinguishable” from real-life exploitation footage.
Lawmakers appear to agree that all such images are harmful.
“Some offenders might attempt to justify their actions by telling themselves and others that AI-generated sexual abuse imagery of children is less harmful as there is no real-life victim but that isn’t the case,” Richards said. “The viewing of these types of images reinforces sexual attraction to children and when shared promotes or supports the exploitation of young persons ... Those found making, possessing, or distributing it can expect to be held accountable for their actions.”
Statistics gathered so far have suggested AI cases are still coming in at a trickle.
The United States-based National Center for Missing and Exploited Children, which co-operates with New Zealand law enforcement to refer cases, reported receiving 4700 tips regarding AI-generated child sexual exploitation material globally in 2023. That figure was dwarfed by the 36 million tips overall.
But authorities locally and internationally have reported a “steep uptick” over the past three years as AI generators hit the mainstream.
US law enforcement agencies have reported finding AI-generated material in about 50% of seizures, according to the United Nations report. Chainalysis, which tracks cryptocurrency on the dark web, has reported that the cost of black market images has “collapsed” – thought to possibly be driven by the “deluge” of AI content.
Of those 4700 tips to the National Center for Missing and Exploited Children, more than 70% were said to be found on traditional online platforms like Meta and X.
“Despite the fact that it is still in its nascency, generative AI has quickly evolved to facilitate nefarious activity and has increased mainstream access,” researchers noted in the United Nations report, adding that it’s difficult to get an accurate count of such cases due to the hidden nature of such offending. “Still, all indicators point to a growing volume of content, indicating that more children are being harmed ... It is likely that perpetrators will more easily be able to victimise more children – and adults – from the comfort of their computers. This increase in sexualisation of children may reduce barriers for other interested perpetrators, kicking off a new epidemic of child sexual exploitation.”
United Kingdom-based watchdog the Internet Watch Foundation sounded the alarm about the issue in 2023 after finding thousands of AI child exploitation images on a dark web forum. In an update to the report, the agency recently noted some increasingly troubling trends.
Images can increasingly be found on commercial sites, and those remaining on the dark web appear to be depicting more severe abuse, “indicating that perpetrators are more able to generate complex ‘hardcore’ scenarios”, the report states.
Anecdotally, the flow of AI cases in New Zealand also seems to be at a trickle so far.
Richards said AI imagery is increasing “but by no means is it as prevalent as the sharing of real-life sexual child abuse material we see currently”.
One fear, he said, is that such imagery will bog down investigations as it becomes increasingly difficult to differentiate between AI and real-life images. While both are prosecutable, finding and helping sexual abuse victims will always be an important priority.
“This can take away considerable time and resources of investigators who are trying to detect and safeguard real children both here in New Zealand and internationally,” he said.
One of the most concerning trends in AI image generation, Richards said, is “nudify” apps, which take images of clothed people and generate deepfake sexually explicit versions. The trend, he said, “serves as a timely reminder to parents about posting innocent images of their children online”.
“It’s important that privacy settings are appropriate and reviewed often, and that you only share the images with people you know and trust, as once on the internet, people with bad intentions can, with the use of AI technology, objectify your child for sexual gratification.”
Rise of the ‘nudify’ apps
And it’s far from an underground, dark web-only phenomenon.
Melbourne-based RMIT University Professor Nicola Henry recently cited a “landmark” lawsuit in which the San Francisco City Attorney took aim at 16 “nudify” websites. Those sites were visited over 200 million times just in the first half of 2024, with one site boasting: “imagine wasting time taking her out on dates, when you can just use [redacted site] to get her nudes”.
In that same period, there had been a 2400% increase in advertising for such apps on social media, she noted in a scholarly article for The Conversation. She suggested governments take proactive measures to block access to such sites but acknowledged such a tactic could easily be overcome with a VPN.
The Australian Government took measures to specifically address non-consensual deepfakes last year, passing a law that carries a punishment of up to six years’ imprisonment for sharing such material and up to seven years for creating it.
In announcing the passage of the law, Attorney-General Mark Dreyfus described the sharing of such material as “a damaging and deeply distressing form of abuse”.
“This insidious behaviour can be a method of degrading, humiliating and dehumanising victims,” he explained. “Such acts are overwhelmingly targeted towards women and girls, perpetuating harmful gender stereotypes and contributing to gender-based violence.”
In New Zealand, internet safety agency Netsafe has advocated for an amendment to the current law banning the posting of intimate visual recordings without consent to clearly include AI-generated content.
“Given the rapid pace of technological change, some laws may need updates to explicitly cover deepfake and AI-generated content,” the agency said on its website.
However, both the Harmful Digital Communications Act and the Films, Videos, and Publications Classification Act offer protections, the agency said.
Last year, police, Customs and the Department of Internal Affairs received almost 20,000 tips regarding potential child exploitation materials in New Zealand. Most of those were referrals from the National Center for Missing and Exploited Children, but they also included information from other international partners and locally generated tips, Richards said.
“These referrals are triaged daily by the three agencies and an investigative, prevention or intelligence response formulated,” he explained.
Most AI cases, he said, seem to involve people who are downloading pre-existing images rather than creating their own.
That was the case in November when an Epsom resident with name suppression appeared before Auckland District Court Judge Nevin Dawson for sentencing. He was supported in court by his mother and his wife, who had left him after learning of his illegal activity but attended nonetheless.
“It’s a great pity to see you here today,” Dawson told the 53-year-old. “You’ve never been before the court before. I hope I never see you again.”
Prior to sentencing, the defendant voluntarily began attending weekly psychotherapy sessions, defence lawyer Sarah Baird told the judge, noting that her client was assessed as having a lower-than-average risk of re-offending via images and a very low risk of escalating to offending involving the grooming of children.
All but four of the 650 images of real children found on his devices were deemed “Category C”, the lowest level, which excludes scenes of rape or sadism. That, paired with a discount for her client’s guilty plea, made a compelling argument for community detention rather than home detention so he could continue his employment, Baird said.
The judge agreed, noting that the defendant said he regretted the day he “stumbled across” the dark web.
“I accept your remorse is genuine because you have made serious efforts at rehabilitation,” he said.
But another defendant in Christchurch wasn’t so fortunate last week.
Customs began investigating the 35-year-old in June after a tip from the National Center for Missing and Exploited Children that he had been uploading illegal images to social media from a location in New Zealand.
A search warrant was executed at his home days later, and authorities found images that included bestiality, defecation, rape and torture. The haul also included “hyper-realistic computer-generated videos of children being abused”, Simon Peterson – chief Customs officer for the Child Exploitation Operations Team – said in a statement.
“The spread of computer-generated and AI-generated child sexual abuse imagery is of real concern but, as this case shows, it is a crime which Customs takes seriously and that we will investigate and prosecute,” he said.
In addition to the three-and-a-half-year sentence, the defendant was placed on the sex offender register.
Several similar cases remain pending in Auckland.
Craig Kapitan is an Auckland-based journalist covering courts and justice. He joined the Herald in 2021 and has reported on courts since 2002 in three newsrooms in the US and New Zealand.