The manipulated images and audio have not spread far beyond the confines of 4chan, Siegel said. But experts who monitor fringe message boards said the efforts offered a glimpse at how nefarious internet users could employ sophisticated AI tools to supercharge online harassment and hate campaigns in the months and years ahead.
Callum Hood, head of research at the Center for Countering Digital Hate, said fringe sites such as 4chan — perhaps the most notorious of them all — often gave early warning signs for how new technology would be used to project extreme ideas. Those platforms, he said, are filled with young people who are “very quick to adopt new technologies” such as AI to “project their ideology back into mainstream spaces.”
Those tactics, he said, are then often adopted by users on more popular online platforms.
Here are several problems resulting from AI tools that experts discovered on 4chan — and what regulators and technology companies are doing about them.
Artificial images and AI pornography
AI tools such as Dall-E and Midjourney generate novel images from simple text descriptions. But a new wave of AI image generators is built for the purpose of creating fake pornography, including by removing clothes from existing images.
“They can use AI to just create an image of exactly what they want,” Hood said of online hate and misinformation campaigns.
There is no federal law banning the creation of fake images of people, leaving groups such as the Louisiana parole board scrambling to determine what can be done. The board opened an investigation in response to Siegel’s findings on 4chan.
“Any images that are produced portraying our board members or any participants in our hearings in a negative manner, we would definitely take issue with,” said Francis Abbott, executive director of the Louisiana Board of Pardons and Committee on Parole. “But we do have to operate within the law, and whether it’s against the law or not — that has to be determined by somebody else.”
Illinois expanded its law governing revenge pornography to allow targets of nonconsensual pornography made by AI systems to sue creators or distributors. California, Virginia and New York have also passed laws banning the distribution or creation of AI-generated pornography without consent.
Cloning voices
Late last year, ElevenLabs, an AI company, released a tool that could create a convincing digital replica of someone’s voice saying anything typed into the program.
Almost as soon as the tool went live, users on 4chan circulated clips of a fake Emma Watson, a British actor, reading Adolf Hitler’s manifesto, “Mein Kampf.”
Using content from the Louisiana parole board hearings, 4chan users have since shared fake clips of judges uttering offensive and racist comments about defendants. Many of the clips were generated by ElevenLabs’ tool, according to Siegel, who used an AI voice identifier developed by ElevenLabs to investigate their origins.
ElevenLabs rushed to impose limits, including requiring users to pay before they could gain access to voice-cloning tools. But the changes did not seem to slow the spread of AI-created voices, experts said. Scores of videos using fake celebrity voices have circulated on TikTok and YouTube — many of them sharing political disinformation.
Some major social media companies, including TikTok and YouTube, have since required labels on some AI content.
President Joe Biden issued an executive order in October asking that all companies label such content and directed the Commerce Department to develop standards for watermarking and authenticating AI content.
Custom AI tools
As Meta moved to gain a foothold in the AI race, the company embraced a strategy to release its software code to researchers. The approach, broadly called “open source,” can speed development by giving academics and technologists access to more raw material to find improvements and develop their own tools.
When the company released Llama, its large language model, to select researchers in February, the code quickly leaked onto 4chan. People there put it to various ends: They tweaked the code to lower or eliminate guardrails, creating new chatbots capable of producing antisemitic ideas.
The effort previewed how free-to-use and open-source AI tools can be tweaked by technologically savvy users.
“While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness,” a spokesperson for Meta said in an email.
In the months since, language models have been developed to echo far-right talking points or to create more sexually explicit content. Image generators have been tweaked by 4chan users to produce nude images or provide racist memes, bypassing the controls imposed by larger technology companies.
This article originally appeared in The New York Times.
Written by: Stuart A. Thompson
Photographs by: Daniel Zender
©2024 THE NEW YORK TIMES