Fake influencer Lil Miquela has already illustrated how powerful computer-generated imagery can be.
In the secret recesses of the internet – the darkweb, where drugs, guns and sex are for sale – there are chatrooms where paedophiles gather to swap appalling images of abuse.
Last month, a new Home Office national strategy to tackle child sexual abuse (CSA) noted that 10 of the worst darkweb sites have 3.45m registered user accounts around the world, with up to 240,000 of them in the UK.
The police's Child Abuse Image Database (CAID), deployed to identify victims and trace offenders, now contains 16.8 million indecent images and 323,700 videos.
Around the world, however, authorities often face a problem: to get into these darkweb chatrooms and hunt down abusers, undercover officers must win their quarries' trust by providing images of abuse themselves.
"The darkweb is particularly difficult to police," says Abhilash Nair, academic and author of a recent book entitled The Regulation of Internet Pornography. "To get into some of these forums, you need to offer something to be let in."
Now German police are doing just that, after the approval of new laws allowing them to use hyper-realistic – but fake – computer-generated images depicting abuse in sting operations against paedophiles.
Understandably, the move has created deep unease. Before the law was passed, opposition MP Stephan Thomae complained that "the goal should be to eliminate child pornography material from the internet, not to enrich it with computer-generated material." Green Party politicians rejected the proposal entirely, on the basis that it was not ethical to fight crime by committing crime.
"It is ethically fraught," says Nair, who says that while police in Britain can legitimately access child abuse images to pursue investigations, "that does not extend to using virtual imagery as bait."
But hard as such ethical dilemmas are, experts say we are only just beginning to confront the problems posed as computer-generated images become ever more photo-realistic, and are deployed in ever more sophisticated and immersive platforms, including virtual reality.
"These CGI child porn images are synthetic media, which uses artificial intelligence to generate human-like characters," says Dudley Nevill-Spencer, founder of the Virtual Influencer agency, which builds digital characters for brands, managing "their life arc, story and content development on behalf of clients."
The most famous synthetic "influencer", Lil Miquela, is a photo-realistic, computer-created fashion model who debuted in 2016 complete with her own backstory – "she" comes from the suburbs of Los Angeles. But though that background is fictional, "her" deployment and value on social media, as a cool young champion of brands including Prada and Calvin Klein, is very real. Lil Miquela's Instagram account currently boasts 3m followers.
Now, the tools to create armies of Lil Miquelas are in everyone's hands. "Three years ago, we needed to spend a lot of money on very expensive CGI software or processing power," says Nevill-Spencer. "Now, I've got 19-year-olds in Berlin creating characters that you would have seen in the movies three years ago, using cloud software you can almost download for free. It is incredible."
The market for virtual people is currently being driven by businesses, he says, looking for cool characters to promote their wares, or for patient, tireless customer-service agents – like Jamie, a hyper-realistic rep created by ANZ Bank, and Samsung's Neons.
But few doubt that the market for such "virtual humans" will eventually extend to the retail market too, with private customers boasting walking, talking, hyper-real virtual versions of themselves, just as many now have digital profiles on social media.
"Is the law ready for this?" asks Edina Harbinja, Senior Lecturer in Media/Privacy Law at Aston University. "Obviously not."
The UK's forthcoming Online Harms Bill is set to deal with issues such as bullying and abuse online. But, says Harbinja, "these are almost analogue harms compared to what we can imagine in the future".
She points to the legal conundrum posed by the recent revelation that Microsoft has been granted a patent in America for a chatbot that would draw on a dead person's social media posts and other data to create a digital version of them that relatives could interact with.
"But are these digital creations an extension of your identity, with human rights to privacy and dignity?" asks Harbinja. "Or are they an extension of data, to be owned, like property, and commercialised? This is where the legal battle will be fought."
To many of us, the idea of communicating with such characters may seem bizarre. In announcing that the chatbot was an old project which the company was not planning to pursue, Tim O'Brien, Microsoft's general manager of AI programs, tweeted: "Yes, it's disturbing."
But how weird you find it, suggests Nevill-Spencer, probably depends on your age. "Anyone under 24 doesn't find anything remotely weird about it. They've been talking to Alexa forever." Older people, he claims, adapt fast. "The moment after they've had their first engagement with a virtual human, that's it, their squeamishness is gone."
Indeed, some studies – like a US military effort to help veterans deal with post-traumatic stress – show that far from clamming up, people can be more willing to interact with virtual humans than the real thing.
That willingness to trust and confide may cause other problems. In a virtual world, who guarantees that people are who they say they are, or that they are not sponsored to sell you something?
"As so often with technology there isn't a regulatory body that is common in the production of digital humans," says Nevill-Spencer. "It doesn't exist."
He calls for transparency: "There has to be disclosure. You need to know who the people are behind the avatars, how old they are; to label brands' characters so it's clear their motivation is to sell a product; or if there's a created character, who is the creator?"
The law may only catch up with this brave new world, says Harbinja, when it begins to affect us all. "Look at AI. Concern started when it started making discriminatory decisions about human populations. When it becomes widespread, then the law comes in." That may be sooner than we think. "It's not whether but when they will be widespread," she says of virtual humans.
The clearest mediums for that to occur are virtual reality, which immerses goggle-wearing users in a fantasy world, and augmented or mixed reality, which overlays digital effects and information on our real surroundings, like a heads-up display. Not for nothing do the latest smartphones feature "Light Detection and Ranging" technology, known as lidar, which can map the dimensions of rooms and other physical spaces.
"With VR we will be transcending offline and online worlds quite seamlessly," says Harbinja. "This line between our physical self and our digital virtual self will be increasingly blurred and that's why we will be wrestling with these difficult questions."
Will our avatars belong to us, or to the platforms on which they exist? Will they just represent data, or a vital sliver of our selfhood?
"[Facebook CEO Mark] Zuckerberg's obsession is synthetic media," says Nevill-Spencer, "His goal is to remove the telephone as the device that connects you and replace it with glasses, which overlay the virtual world onto the real world. Eyewear becomes the new operating system. Apple is doing the same thing. What's going to populate these virtual worlds? Virtual characters."
Such characters, he thinks, will soon beguile and charm us, befriending us as well as preying on our credulity and trust – or, eventually, on that of our own digital avatars. Indeed, such virtual relationships, for good and ill, are inevitable.
Most of us are hardwired to treat fellow humans with respect and decency – so much so that we shrink from violence against even anthropomorphised objects.
"It creates an emotional relationship with a humanoid looking subject. You can't control it," says Nevill-Spencer. "This has only just got started."