Meta chief executive Mark Zuckerberg and his team drove the Facebook parent company's efforts to capture young users and misled the public about the risks, according to more than a dozen lawsuits filed by United States state attorneys-general. Illustration / Pablo Delcan, The New York Times
In April 2019, David Ginsberg, a Meta executive, emailed his boss, Mark Zuckerberg, with a proposal to research and reduce loneliness and compulsive use on Instagram and Facebook.
In the email, Ginsberg noted that the company faced scrutiny for its products’ impacts “especially around areas of problematic use/addiction and teens”. He asked Zuckerberg for 24 engineers, researchers and other staff, saying Instagram had a “deficit” on such issues.
A week later, Susan Li, now the company’s chief financial officer, informed Ginsberg that the project was “not funded” because of staffing constraints. Adam Mosseri, Instagram’s head, ultimately declined to finance the project, too.
The email exchanges are just one slice of the evidence cited in more than a dozen lawsuits filed since last year by the attorneys-general of 45 American states and the District of Columbia.
The states accuse Meta of unfairly ensnaring teenagers and children on Instagram and Facebook while deceiving the public about the hazards. Using a co-ordinated legal approach reminiscent of the United States government’s pursuit of Big Tobacco in the 1990s, the attorneys-general seek to compel Meta to bolster protections for minors.
A New York Times analysis of the states’ court filings — including roughly 1400 pages of company documents and correspondence filed as evidence by the state of Tennessee — shows how Zuckerberg and other Meta leaders repeatedly promoted the safety of the company’s platforms, playing down risks to young people, even as they rejected employee pleas to bolster youth guardrails and hire additional staff.
In interviews, the attorneys-general of several states suing Meta said Zuckerberg had led his company to drive user engagement at the expense of child welfare.
“A lot of these decisions ultimately landed on Mr Zuckerberg’s desk,” said Raul Torrez, the Attorney-General of New Mexico. “He needs to be asked explicitly, and held to account explicitly, for the decisions that he’s made.”
The state lawsuits against Meta reflect mounting concerns that teenagers and children on social media can be sexually solicited, harassed, bullied, body-shamed and algorithmically induced into compulsive online use.
Last week, Dr Vivek Murthy, the US Surgeon-General, called for warning labels to be placed on social networks, saying the platforms present a public health risk to young people.
His warning could boost momentum in Congress to pass the Kids Online Safety Act, a bill that would require social media companies to turn off features for minors, like bombarding them with phone notifications, that could lead to “addiction-like” behaviours. (Critics say the bill could hinder minors’ access to important information. The News/Media Alliance, a trade group that includes the Times, helped win an exemption in the bill for news sites and apps that produce news videos.)
In May, New Mexico arrested three men who were accused of targeting children for sex after, Torrez said, they solicited state investigators who had posed as children on Instagram and Facebook. Torrez, a former child sex-crimes prosecutor, said Meta’s algorithms enabled adult predators to identify children they would not have found on their own.
Meta disputed the states’ claims and has filed motions to dismiss their lawsuits.
In a statement, Liza Crenshaw, a spokesperson for Meta, said the company was committed to the wellbeing of young people and had many teams and specialists devoted to youth experiences.
She added that Meta had developed more than 50 youth safety tools and features, including limiting age-inappropriate content and restricting teenagers under 16 from receiving direct messages from people they don’t follow.
“We want to reassure every parent that we have their interests at heart in the work we’re doing to help provide teens with safe experiences online,” Crenshaw said. The states’ legal complaints, she added, “mischaracterise our work using selective quotes and cherry-picked documents.”
But parents who say their children died as a result of online harms challenged Meta’s safety assurances.
“They preach that they have safety protections, but not the right ones,” said Mary Rodee, a primary school teacher in Canton, New York, whose 15-year-old son, Riley Basford, was sexually extorted on Facebook in 2021 by a stranger posing as a teenage girl. Riley died by suicide several hours later.
Rodee, who sued the company in March, said Meta had never responded to the reports she submitted through automated channels on the site about her son’s death.
“It’s pretty unfathomable,” she said.
The push to win teenagers
Meta has long wrestled with how to attract and retain teenagers, who are a core part of the company’s growth strategy, internal company documents show.
Teenagers became a major focus for Zuckerberg as early as 2016, according to the Tennessee complaint, when the company was still known as Facebook and owned apps including Instagram and WhatsApp. That spring, an annual survey of young people by investment bank Piper Jaffray reported that Snapchat, a disappearing-message app, had surpassed Instagram in popularity.
Later that year, Instagram introduced a similar disappearing photo- and video-sharing feature, Instagram Stories. Zuckerberg directed executives to focus on getting teenagers to spend more time on the company’s platforms, according to the Tennessee complaint.
The “overall company goal is total teen time spent”, wrote one employee, whose name is redacted, in an email to executives in November 2016, according to internal correspondence among the exhibits in the Tennessee case. Participating teams should increase the number of employees dedicated to projects for teenagers by at least 50%, the email added, noting that Meta already had more than a dozen researchers analysing the youth market.
In April 2017, Kevin Systrom, Instagram’s chief executive, emailed Zuckerberg asking for more staff to work on mitigating harms to users, according to the New Mexico complaint.
Zuckerberg replied that he would include Instagram in a plan to hire more staff, but he said Facebook faced “more extreme issues”. At the time, legislators were criticising the company for having failed to hinder disinformation during the 2016 US presidential campaign.
Systrom asked colleagues for examples to show the urgent need for more safeguards. He soon emailed Zuckerberg again, saying Instagram users were posting videos involving “imminent danger”, including a boy who shot himself on Instagram Live, the complaint said.
Two months later, the company announced that the Instagram Stories feature had hit 250 million daily users, dwarfing Snapchat. Systrom, who left the company in 2018, didn’t respond to a request for comment.
Meta said an Instagram team developed and introduced safety measures and experiences for young users. The company didn’t respond to a question about whether Zuckerberg had provided the additional staff.
‘Millions’ of underage users
In January 2018, Zuckerberg received a report estimating that four million children under 13 were on Instagram, according to a lawsuit filed in federal court by 33 states.
Facebook’s and Instagram’s terms of use prohibit users under 13. But the company’s sign-up process for new accounts enabled children to easily lie about their age, according to the complaint. Meta’s practices violated a federal children’s online privacy law requiring certain online services to obtain parental consent before collecting personal data, like contact information, from children under 13, the states allege.
In March 2018, the New York Times reported that Cambridge Analytica, a voter-profiling firm, had covertly harvested the personal data of millions of Facebook users. That set off more scrutiny of the company’s privacy practices, including those involving minors.
Zuckerberg testified the next month at a Senate hearing, “We don’t allow people under the age of 13 to use Facebook”.
Attorneys-general from dozens of states disagree.
In late 2021, Frances Haugen, a former Facebook employee, disclosed thousands of pages of internal documents that she said showed the company valued “profit above safety”. Lawmakers held a hearing, grilling her on why so many children had accounts.
Company executives knew that Instagram use by children under 13 was “the status quo”, according to the joint federal complaint filed by the states. In an internal chat in November 2021, Mosseri acknowledged those under-age users and said the company’s plan to “cater the experience to their age” was on hold, the complaint said.
In its statement, Meta said Instagram had measures in place to remove under-age accounts when the company identified them. Meta has said it has regularly removed hundreds of thousands of accounts that could not prove they met the company’s age requirements.
Fighting over beauty filters
A company debate over beauty filters on Instagram encapsulated the internal tensions over teenage mental health — and ultimately the desire to engage more young people prevailed.
It began in 2017 after Instagram introduced camera effects that enabled users to alter their facial features to make them look funny or “cute/pretty”, according to internal emails and documents filed as evidence in the Tennessee case. The move was made to boost engagement among young people. Snapchat already had popular face filters, the emails said.
But a backlash ensued in the autumn of 2019 after Instagram introduced an appearance-altering filter, Fix Me, which mimicked the nip/tuck lines that cosmetic surgeons draw on patients’ faces. Some mental health experts warned that the surgery-like camera effects could normalise unrealistic beauty standards for young women, exacerbating body-image disorders.
As a result, Instagram in October 2019 temporarily disallowed camera effects that made dramatic, surgical-looking facial alterations — while still permitting obviously fantastical filters, like goofy animal faces. The next month, concerned executives proposed a permanent ban, according to Tennessee court filings.
Other executives argued that a ban would hurt the company’s ability to compete. One senior executive sent an email saying Zuckerberg was concerned about whether the data showed real harm.
In early 2020, before an April meeting with Zuckerberg to discuss the issue, employees prepared a briefing document on the ban, according to the Tennessee court filings. One internal email noted that employees had spoken to 18 mental health experts, each of whom raised concerns that cosmetic surgery filters could “cause lasting harm, especially to young people”.
But the meeting with Zuckerberg was cancelled. Instead, the chief executive told company leaders that he was in favour of lifting the ban on beauty filters, according to an email he sent that was included in the court filings.
Several weeks later, Margaret Gould Stewart, then Facebook’s vice-president for product design and responsible innovation, reached out to Zuckerberg, according to an email included among the exhibits. In the email, she noted that as a mother of teenage daughters, she knew social media put “intense” pressure on girls “with respect to body image”.
Stewart, who subsequently left Meta, did not respond to an email seeking comment.
In the end, Meta said it barred filters “that directly promote cosmetic surgery, changes in skin color or extreme weight loss” and clearly indicated when one was being used.
Priorities and youth safety
In 2021, Meta began planning for a new social app. It was to be aimed specifically at children and called Instagram Kids. In response, 44 attorneys-general wrote a letter that May urging Zuckerberg to “abandon these plans”.
“Facebook has historically failed to protect the welfare of children on its platforms,” the letter said.
Meta subsequently paused plans for an Instagram Kids app.
By August, company efforts to protect users’ wellbeing had become “increasingly urgent” for Meta, according to another email to Zuckerberg filed as an exhibit in the Tennessee case. Nick Clegg, now Meta’s head of global affairs, warned his boss of mounting concerns from regulators about the company’s impact on teenage mental health, including “potential legal action from state AGs”.
Describing Meta’s youth wellbeing efforts as “understaffed and fragmented”, Clegg requested funding for 45 employees, including 20 engineers.
In September 2021, the Wall Street Journal published an article saying Instagram knew it was “toxic for teen girls”, escalating public concerns.
An article in the New York Times that same month mentioned a video that Zuckerberg had posted of himself riding across a lake on an “electric surfboard”. Internally, Zuckerberg objected to that description, saying he was actually riding a hydrofoil he pumped with his legs and wanted to post a correction on Facebook, according to employee messages filed in court.
Clegg found the idea of a hydrofoil post “pretty tone deaf given the gravity” of recent accusations that Meta’s products caused teenage mental health harms, he said in a text message with communications executives included in court filings.
Zuckerberg went ahead with the correction.
In November 2021, Clegg, who had not heard back from Zuckerberg about his request for more staff, sent a follow-up email with a scaled-down proposal, according to Tennessee court filings. He asked for 32 employees, none of them engineers.
Li, the finance executive, responded a few days later, saying she would defer to Zuckerberg and suggested that the funding was unlikely, according to an internal email filed in the Tennessee case. Meta didn’t respond to a question about whether the request had been granted.
A few months later, Meta said that although its revenue for 2021 had increased 37% to nearly US$118 billion ($192b) from a year earlier, fourth-quarter profit plummeted because of a $10b investment in developing virtual reality products for immersive realms, known as the metaverse.
Explicit videos involving children
Last autumn, the Match Group, which owns dating apps like Tinder and OkCupid, found that ads the company had placed on Meta’s platforms were running adjacent to “highly disturbing” violent and sexualised content, some of it involving children, according to the New Mexico complaint.
Meta removed some of the posts flagged by Match, telling the dating giant that “violating content may not get caught a small percentage of the time”, the complaint said.
Dissatisfied with Meta’s response, Bernard Kim, the chief executive of the Match Group, emailed Zuckerberg with a warning, saying his company could not “turn a blind eye”, the complaint said.
Zuckerberg didn’t respond to Kim, according to the complaint.
Meta said the company had spent years building technology to combat child exploitation.
Last month, a judge denied Meta’s motion to dismiss the New Mexico lawsuit. But the court granted a request to drop Zuckerberg, who had been named as a defendant, from the case.
Written by: Natasha Singer
Natasha Singer, who covers children’s online privacy, reviewed several thousand pages of legal filings in states’ lawsuits against Meta for this article.