As artificial intelligence becomes more commonplace, an “insidious” issue is on the rise – which one victim says is as traumatic as a sexual assault.
Deepfake media – a portmanteau of ‘deep learning’ and ‘fake’ – superimposes a person’s face or likeness onto an existing image or video. It’s been used to enhance both the entertainment and gaming industries. But there are now mounting concerns over its potential to create sexually explicit content without the consent of those depicted.
Such was the experience of Blaire, a Twitch streamer who goes by the handle ‘QTCinderella’ online, when she received dozens of messages in February about an “adult video” of her that had gone viral on a pornographic website.
The issue? The 28-year-old had never created such a video – and was distraught to learn her face had been pasted onto another woman’s body, making it look as if she was genuinely engaging in the depicted act. She had become a victim of deepfake porn.
While image-based abuse – commonly referred to as ‘revenge porn’ – remains a concern, especially among young women, experiences like Blaire’s show that sexual content no longer needs to be produced in the first place for people to share it.
Dr Emma Jane, one of the world’s leading academic experts on digital misogyny and gender-based violence, told news.com.au non-consensual deepfake pornography “absolutely” sits on the spectrum of sexual violence.
It’s a form of abuse referred to in academia as technology-facilitated gender-based violence (TFGBV), and one with a “depressingly long history”. Three decades after the first widely reported rape in the digital world – a 1993 incident in a text-based online community called LambdaMOO – Dr Jane said “it’s incredibly dispiriting that we’ve made so little progress”.
‘An insidious form of victim-blaming’
Contributing to that lack of progress is the entrenched belief that TFGBV doesn’t constitute sexual abuse because it doesn’t occur in “real” life.
Considering “our daily interactions increasingly involve a merging of the digital and the physical”, Dr Jane said, “I no longer think it’s useful to try to make stark distinctions between our online and offline lives”.
“This is one reason it’s not okay to tell women who’ve been targeted for TFGBV (like deepfake pornography) that the solution is to simply ‘take a little break’ from using the internet,” she added.
“It’s on par with suggesting they stay locked down in their homes rather than risk venturing outdoors, and constitutes an insidious form of victim-blaming.”
Telling women who’ve been targeted for non-consensual deepfake pornography that it’s not “real”, Dr Jane continued, could be considered a form of gaslighting.
“Unfortunately, it has many parallels with the way sexual violence and gendered harassment are downplayed in our broader culture. Victim blaming and shaming is prevalent everywhere,” she said.
“Debates about whether sexual violence is ‘real’ if it occurs online are often used as an opportunity to further attack those who speak publicly about their experiences … TFGBV can have a severe – and embodied – impact on targets that affects their psychological health, their livelihoods, their reputation and their physical safety.”
She pointed to a 2020 study by the Economist Intelligence Unit (EIU), which found that of women who had experienced online violence in the previous year, 7 per cent lost or had to change their job, 35 per cent reported mental health issues, one in 10 experienced physical harm as a result of online threats, and almost nine in 10 reported restricting their online activity in a way that limited their access to employment, education, healthcare and community.
The sexual nature of deepfakes reflects rape culture on a global scale. As Blaire told Vice’s Motherboard, the harassment in the wake of her incident was relentless and resurfaced trauma from her past.
“You feel so violated … I was sexually assaulted as a child, and it was the same feeling,” she said.
“Where you feel guilty, you feel dirty, you feel like, ‘What just happened?’. And it’s bizarre that it makes that [trauma] resurface. I genuinely didn’t realise it would.”
Maya Higa, one of the women whose likenesses Twitch streamer Atrioc (Brendan Ewing) was recently caught viewing in deepfake pornography, shared a similar perspective in a harrowing statement on Twitter.
“In 2018, I was inebriated at a party and I was used for a man’s sexual gratification without my consent,” she wrote.
“Today, I have been used by hundreds of men for sexual gratification without my consent. The world calls my 2018 experience rape. The world is debating over the validity of my experience today.”
‘Not something we can simply arrest our way out of’
As is often the case with sexual abuse and harassment, the path to justice for victims of TFGBV is not an easy one. Not only does the costly, time-consuming burden of legal recourse fall on them, but the process is further complicated by the fact that most people sharing abusive images online do so anonymously and are hard to pin down.
In the UK, the government recently amended its Online Safety Bill in a bid to crack down on deepfakes; in the US, only Virginia and California have laws that ban faked and deepfaked revenge porn.
In Australia, eSafety Commissioner Julie Inman Grant told news.com.au “deepfakes are covered under eSafety’s image-based abuse scheme, which is the only scheme of its kind in the world”.
“Any Australian whose images or videos have been altered to appear intimate and are published online without consent can contact eSafety for help to have them removed … We have a 90 per cent success rate in achieving these takedowns, primarily from overseas sites,” Inman Grant said.
Dr Jane said the issue of deepfake pornography is “not something we can simply arrest our way out of”.
“These are problems involving complex combinations of people and technology, so there’s never going to be a single simple solution,” she said.
“I do, however, think an important first step is for governments and individuals to take a good hard look at the way the business models of social media giants like Meta and Alphabet facilitate and exacerbate these sorts of problems.”
Inman Grant agreed.
“Innovations to help identify, detect and confirm deepfakes are advancing, and technology companies have a responsibility to incorporate these into their platforms and services,” she said.
“Antidotes to these risks need urgent investment and innovation now, as they are lagging behind the rapid proliferation of these technologies and the online harms they are likely to engender.”
Image-based abuse is a breach of the Online Safety Act 2021, and under the Act perpetrators can be fined or, in some jurisdictions, face jail time. Any Australian whose images or videos have been altered to appear sexualised and are published online without consent can contact eSafety for help to have them removed.