In “You Need to Calm Down”, the 2019 hit Taylor Swift penned with New Zealand songwriter and producer Joel Little, she vented at the faceless hordes taking potshots at her online from behind their keyboards.
“Say it in the street, that’s a knock-out. But you say it in a Tweet, that’s a cop-out,” sang Swift, several years before her 2023 Eras world tour became the highest-grossing in history, minting her as a billionaire in the process.
All of that recent success, which cemented Swift’s position at the top of the music streaming charts, seems to have further infuriated a small cohort of internet users who have a dark fixation with the 34-year-old pop sensation. Late last month, deepfake pornographic images of Swift, generated by artificial intelligence, started to circulate on social media platforms in high volumes.
On X, formerly known as Twitter, one image went viral, amassing 47 million views before X’s moderators finally suspended the account and temporarily blocked searches under Swift’s name.
Many AI experts thought 2024 would be the year of the political deepfake. Last month, the New Hampshire attorney-general’s office opened an investigation into a fake robocall featuring Joe Biden’s voice, which placed as many as 25,000 automated calls to mobile phones urging voters to skip the New Hampshire Republican Party primary election.
But the Swift deepfakes have triggered something bigger and point to the rotten heart of the problem – the vast majority of AI-generated deepfakes are pornographic in nature, and usually depict women.
The antipathy towards Swift is wrapped up in other parochial themes – she endorsed Biden for the 2020 presidential election, and her appearances at NFL games, where her boyfriend Travis Kelce takes the field for the Kansas City Chiefs, have become internet memes in their own right. Swift seems to send Trump-supporting, MAGA-hat-wearing men into a lather, and that fury has taken a grotesque turn towards misogyny and toxic masculinity.
AI image-generating applications are good and getting better at melding a person’s face onto a body. Videos are a bit clunky and more time-consuming to produce, but it’s just a matter of time before a new iteration of AI makes them seamless. The deepfake deluge is coming.
I’ve little faith that the Taylor Swift Act, introduced by Missouri state representative Adam Schwadron last week, will have much of a chance of gaining traction in the state’s House of Representatives. The bill would offer legal safeguards against AI deepfake violations, plugging at state level a gap left by federal law.
Even the horror of the live-streamed Christchurch massacre didn’t spur legislative changes to hold digital platforms to account for distributing objectionable content. The Big Tech lobby in Washington is highly effective at fending off attempts to regulate the industry. A patchwork of AI laws is emerging in the US at state level, but federal reform in everything from data privacy to hate speech has progressed at a glacial pace.
The UK government last year introduced the Online Safety Act, which specifically outlaws the publication of unauthorised deepfakes. The Swift deepfakes would appear to be the first big test of the legislation, under which tech company executives can be held criminally responsible and face major fines if they are deemed not to have acted responsibly in combating deepfakes.
Swift has no shortage of money or powerful political friends, and her millions-strong global fanbase of “Swifties” could help her power a major push for change. For the many women whose lives have been ruined by deepfakes, harnessing her star power could be the best chance they have to fight back.