WARNING: This story discusses self-harm and may be distressing.
Instagram-owner Meta is “fundamentally misleading” the public about the safety of its picture-sharing app for teens, a former executive-turned-whistleblower has said.
Arturo Béjar, 52, who left Meta disillusioned in late 2021, went public with his concerns in testimony to US senators a fortnight ago.
He told US politicians that his teenage daughter, Joanna, and her friends had for years received unwanted sexual advances on Instagram, which can be used by those aged 13 and over. Béjar said the company had failed to act on his concerns.
In an interview with the UK’s Daily Telegraph, Béjar, who worked in Silicon Valley for more than 25 years, said he went public after privately raising concerns to senior Facebook and Instagram executives, including boss Mark Zuckerberg.
He said: “I realised if I had not done anything at that point nothing would change.”
He claims his warnings were ignored, leaving him lamenting the company’s inaction.
Béjar highlights the tragic case of Molly Russell, the British 14-year-old who died in 2017, as an example of how Meta’s policies can leave teenagers open to harm.
Molly had saved 2,100 posts related to depression or self-harm on her phone, many of which were not in breach of Instagram’s rules, a coroner was told last year. The coroner ruled Molly had died from “an act of self-harm whilst suffering from depression and the negative effects of online content”.
Béjar said many Instagram accounts may post “awful, but not violating content” under the current rules, leaving teenagers with no recourse to complain or seek help.
After our interview, he said he had been “thinking about Molly Russell all day”. He said: “If you look at the kind of content that Molly was browsing, what percentage was violating?
“While the company talks about their safety features, what tools do they give teens when they get recommended content that is unwanted?”
Instagram’s overall design does not encourage teenagers to report things that make them uncomfortable, according to Béjar.
Teens may fear reporting other users because it will reduce their engagement, he adds, while the app’s reporting tools are too convoluted for younger users.
Executives at the US$850 billion tech giant were well aware of the potential harms caused by Instagram, he claimed, because he had warned them himself.
His team’s research found that as many as 13 per cent of 13-to-15-year-olds on Instagram had received an unwanted sexual advance in just one week. At least 6.7 per cent of these children had also seen distressing self-injury posts in the last seven days.
In October 2021 – having spent months compiling his research – Béjar wrote an email to Zuckerberg expressing his concerns, but was ignored.
Two years later, he says little has changed. He told senators that there was “no way, so far as I or teenagers I know can determine” to report unwanted sexual advances easily within the app.
Asked whether he felt Instagram was currently appropriate for a 13-year-old, Béjar said: “No it is not, it categorically is not.”
Béjar has worked in the technology industry since he was 15, beginning with IBM in Mexico City before a chance meeting with Apple co-founder Steve Wozniak.
Wozniak sent him an Apple computer and invited him to visit Silicon Valley. He later supported Béjar while he studied at King’s College London, before Béjar returned to California.
A security expert, he joined the company then known as Facebook in 2009, becoming an engineering director in the company’s Protect and Care team.
He left in 2015 to spend more time with his children. But by 2019 he had grown concerned that his daughter, who was 14 at the time, was receiving unwanted sexual advances on Instagram.
Béjar returned to Meta as a consultant to work on safety tech, but found his efforts were ignored.
His second stint at the company coincided with a series of damaging leaks about the social network, compiled by the Wall Street Journal. Data scientist Frances Haugen ultimately went public in 2021 as the source of the leaks and handed a dossier of information to US senators.
At the heart of her claims was research that appeared to show Meta, then known as Facebook, knew Instagram was making teenage girls feel worse about themselves.
The company strongly denied the claims. Zuckerberg said in October of that year: “At the heart of these accusations is this idea that we prioritise profit over safety and well-being. That’s just not true.”
As Haugen was going public, Béjar was pushing privately for change. He flagged his concerns to Sheryl Sandberg, Facebook’s operations chief, and Instagram lead Adam Mosseri.
On the same day Haugen gave her own testimony to US politicians, he sent Zuckerberg a private email containing detailed research showing teenagers were experiencing more harm on Instagram than previously thought, a finding he described as “tragic and breathtaking”. The memo was titled: “Gap in our understanding of harm and bad experiences.”
His concerns were brushed off and Zuckerberg never replied.
Béjar described the weak response from executives as “heartbreaking”, as he had believed they simply did not know the scale of the problem. Now his superiors knew, but still little was being done to stop the harm.
He said: “I had first-hand experience of them ignoring what can be described as statistically significant research,” which suggested millions of teenagers were experiencing safety issues while using Meta’s apps.
Meta discloses figures on incidents of hate speech, bullying and harassment on its social networks as part of regular transparency reports.
However, Béjar argues the numbers are “misleading and misrepresenting” the problem as they disclose just a “fraction” of the true harm.
The vast majority of negative experiences on Instagram do not break its rules, he said, and even when content does break the rules, it may not be reported.
His research while at the company found that half of Instagram users had a bad experience in the previous seven days, but just 1 per cent of those incidents led to a formal report to the company, and only 2 per cent of those complaints ended with action being taken.
In response to detailed questions from The Telegraph, a Meta spokesman said it was “absurd to suggest” there was “some sort of conflict” between its study of users’ “perception” of Instagram, and its transparency reports.
“Prevalence metrics and user perception surveys measure two different things. We take actions based on both and work on both continues to this day,” the spokesman said.
In the UK, tech giants will soon be compelled to provide in-depth transparency data to Ofcom as part of their obligations under the Online Safety Bill. Béjar said social media companies should be compelled to better collect and publicise data on how many children get unwanted sexual advances on their apps.
Meta says teenagers’ accounts are automatically set to private when they sign up, while people over the age of 19 cannot send messages to teens who do not follow them. Meanwhile, new features mean people a user does not follow can only send a single invitation to chat.
Meta, which rebranded from Facebook shortly after Haugen’s whistleblowing claims emerged in 2021, has been pushing to add more encryption to its messaging services, including Instagram’s “DMs”.
This has prompted a backlash from child safety campaigners, who fear it will make it harder to catch potential abusers.
Béjar said encrypted messages are “really important” for many people but he was less sure if they were appropriate for children.
“Should younger people have encrypted content? I don’t believe that they should.”
Béjar’s claims come as Meta faces a series of legal challenges brought by dozens of US states over allegations it has used “manipulative features” to addict children to its sites. It is also being sued by the US regulator, the Federal Trade Commission, over claims its control of Instagram is anti-competitive.
After he was compelled to testify in one of the lawsuits, Béjar said he became “distraught” because he came to believe that so little had been done to tackle the harms he had worked to uncover.
He resolved to be “the most helpful person I can” to regulators and officials looking into the company.
Asked whether he felt children’s safety was a top priority at Meta, which reported a net profit of US$22.5 billion in the 12 months to June, Béjar said dozens of researchers had been laid off from its Instagram wellbeing team since he left.
“That tells you a lot about priorities.”
Responding to Béjar’s Senate appearance, a Meta spokesman said: “Every day countless people inside and outside of Meta are working on how to help keep young people safe online.
“Working with parents and experts, we have also introduced over 30 tools to support teens and their families in having safe, positive experiences online. All of this work continues.”