Jennifer Watkins, with her husband, Bruce, and their twin sons, Ben and Flynn, at their home in Australia. One son made a YouTube video that got Watkins locked out of all her Google accounts. Photo / Adam Ferguson, The New York Times
Google has a zero-tolerance policy for child abuse content. The scanning process can sometimes go awry and tar innocent individuals as abusers.
When Jennifer Watkins got a message from YouTube saying her channel was being shut down, she wasn’t initially worried. She didn’t use YouTube, after all.
Her 7-year-old twin sons, though, used a Samsung tablet logged into her Google account to watch content for children and to make YouTube videos of themselves doing silly dances. Few of the videos had more than five views. But the video that got Watkins in trouble, which one son made, was different.
“Apparently it was a video of his bottom,” said Watkins, who has never seen it. “He’d been dared by a classmate to do a nudie video.”
Google-owned YouTube has artificial intelligence-powered systems that review the hundreds of hours of video that are uploaded to the service every minute. The scanning process can sometimes go awry and tar innocent individuals as child abusers.
The New York Times has documented other episodes in which parents’ digital lives were upended by naked photos and videos of their children that Google’s AI systems flagged and that human reviewers determined to be illicit. Some parents have been investigated by police as a result.
The “nudie video” in Watkins’ case, uploaded in September, was flagged within minutes as possible sexual exploitation of a child, a violation of Google’s terms of service with serious consequences.
Watkins, a medical worker who lives in New South Wales, Australia, soon discovered that she was locked out of not just YouTube but all her accounts with Google. She lost access to her photos, documents and email, she said, meaning she couldn’t get messages about her work schedule, review her bank statements or “order a thickshake” via her McDonald’s app — which she logs into using her Google account.
Her account would eventually be deleted, a Google login page informed her, but she could appeal the decision. She clicked a Start Appeal button and wrote in a text box that her 7-year-old sons thought “butts are funny” and were responsible for uploading the video.
“This is harming me financially,” she added.
Children’s advocates and lawmakers around the world have pushed technology companies to stop the online spread of abusive imagery by monitoring for such material on their platforms. Many communications providers now scan photos and videos saved and shared by their users to look for known images of abuse that have been reported to authorities.
Google also wanted to be able to flag never-before-seen content. A few years ago, it developed an algorithm — trained on the known images — that seeks to identify new exploitative material; Google made it available to other companies, including Meta and TikTok.
Once an employee confirmed that the video posted by Watkins’ son was problematic, Google reported it to the National Center for Missing and Exploited Children, a nonprofit that acts as the federal clearinghouse for flagged content. The centre can then add the video to its database of known images and decide whether to report it to local law enforcement.
Google is one of the top reporters of “apparent child pornography,” according to statistics from the national centre. Google filed more than 2 million reports last year, far more than most digital communications companies, though fewer than the number filed by Meta.
(It is hard to judge the severity of the child abuse problem from the numbers alone, experts say. In one study of a small sampling of users flagged for sharing inappropriate images of children, data scientists at Facebook said more than 75 per cent “did not exhibit malicious intent.” The users included teenagers in a romantic relationship sharing intimate images of themselves, and people who shared a “meme of a child’s genitals being bitten by an animal because they think it’s funny.”)
Apple has resisted pressure to scan iCloud for exploitative material. A spokesperson pointed to a letter the company sent to an advocacy group this year, expressing concern about the “security and privacy of our users” and reports “that innocent parties have been swept into dystopian dragnets.”
Last fall, Google’s trust and safety chief, Susan Jasper, wrote in a blog post that the company planned to update its appeals process to “improve the user experience” for people who “believe we made wrong decisions.” In a major change, the company now provides more information about why an account has been suspended, rather than a generic notification about a “severe violation” of the company’s policies. Watkins, for example, was told that child exploitation was the reason she had been locked out.
Regardless, Watkins’ repeated appeals were denied. She had a paid Google account, allowing her and her husband to exchange messages with customer service agents. But in digital correspondence reviewed by the Times, agents said the video still violated company policies, even if it was the oblivious act of a child.
The draconian punishment for one silly video seemed unfair, Watkins said. She wondered why Google couldn’t give her a warning before cutting off access to all her accounts and more than 10 years of digital memories.
After more than a month of failed attempts to change the company’s mind, Watkins reached out to the Times. A day after a reporter inquired about her case, her Google account was restored.
“We do not want our platforms to be used to endanger or exploit children, and there’s a widespread demand that internet platforms take the firmest action to detect and prevent CSAM,” the company said in a statement, using a widely used acronym for child sexual abuse material. “In this case, we understand that the violative content was not uploaded maliciously.” The company had no answer for how users could escalate a denied appeal, short of emailing a Times reporter.
Google is in a difficult position trying to adjudicate such appeals, said Dave Willner, a fellow at Stanford University’s Cyber Policy Center who has worked in trust and safety at several large technology companies. Even if a photo or video is innocent in its origin, it could be shared maliciously.
“Paedophiles will share images that parents took innocuously or collect them into collections because they just want to see naked kids,” Willner said.
The other challenge is the sheer volume of potentially exploitative content that Google flags.
“It’s just a very, very hard-to-solve problem regimenting value judgment at this scale,” Willner said. “They’re making hundreds of thousands, or millions, of decisions a year. When you roll the dice that many times, you are going to roll snake eyes.”
He said Watkins’ struggle after losing access to Google was “a good argument for spreading out your digital life” and not relying on one company for so many services.
Watkins took a different lesson from the experience: Parents shouldn’t use their own Google account for their children’s internet activity, and should instead set up a dedicated account — a choice that Google encourages.
She has not yet set up such an account for her twins. They are now barred from the internet.