With more teens experiencing depression, anxiety, self-harm, and other mental health problems, the amount of content dedicated to these issues on social media has exploded. This can be a source of support and validation, but one former heavy user tells Investigations Editor Alex Spence it can also push vulnerable young people into dark rabbit holes that compound their problems.
Warning: This article discusses suspected suicide, self-harm, and other mental health problems and could be distressing for some people. If you need help, contact Lifeline on 0800 543 354 or text 4357 (HELP)
It was her 21st birthday, but Anna* didn’t feel like celebrating.
Anna’s teenage years had been devastated by depression and post-traumatic stress. She’d spent years in the care of mental health services, with several admissions to psychiatric facilities. On several occasions, overwhelmed by anguish, she had injured herself so badly she needed hospital treatment to tend to the physical wounds. She’d tried several times to kill herself.
Now, in September 2019, she was again unravelling. At a moment when Anna should’ve been emerging hopefully into adulthood, she couldn’t help ruminating on what she’d missed — relationships, academic qualifications, the life milestones that her peers took for granted. Looking ahead, she saw only more darkness.
“I just felt so alone,” Anna recalls.
Anna picked up her phone, opened Instagram, and began creating a post aimed at the only people she felt she could truly confide in: the few hundred followers of a private account that she used to document her daily struggles with mental illness.
A heavy user of social media, Anna had accounts on multiple platforms that reflected different aspects of her life. Instagram was the app she used most often. In a public-facing profile, she presented a curated version of herself that she was comfortable sharing with parents, teachers, and acquaintances. But she spent far more time on a locked account dedicated to her mental health problems that she allowed only a select group to access.
On this account, Anna didn’t hold back. It was a place where she could vent her most intimate feelings to a tight community of young women who, she trusted, could relate to her suffering.
She selected a selfie in which she looked pale and exhausted. Then she started typing.
“I look how I feel,” she told her followers. “It was meant to be a good day, and it has at times been good, but it has also been a day of grief. At the age of 12, this is not how I imagined my life to be.”
Anna’s teenage years coincided with a couple of seismic trends that, in little more than a decade, have transformed the experience of growing up in New Zealand.
She and her peers didn’t know the world before the internet. They came of age when smartphones were ever-present and social media the dominant means of communication, which has brought radical changes in the ways teenagers interact with each other, express their identities, and occupy their leisure time — and to the social harms they’re exposed to.
At roughly the same time, there’s been an alarming rise in the number of children and adolescents experiencing poor mental health.
Between 2012 and 2019, according to the latest national Youth2000 survey, the number of secondary school students with significant symptoms of depression jumped from 13 per cent to 23 per cent. Nearly a quarter of students in the latest survey reported having self-harmed in the past year, while 21 per cent said they’d seriously thought about suicide.
These trends have intersected in profound and often disturbing ways.
As more young people have struggled with anxiety, depression, and other mental health conditions, the amount of content dedicated to these issues on digital platforms has exploded.
Social networks became the places teens turned to for (not always accurate) information about symptoms or conditions they might be experiencing and advice on how to deal with them. They became the places to go to find others experiencing similar pain, to vent about how lonely and depressed they were feeling, to complain about the treatment they were receiving, and to share stories of hope and recovery.
It’s difficult to quantify the scale of this material because it is so diffuse, spread across multiple platforms that share limited data about their operations, and often hidden from public view. Last year, a survey by New Zealand’s Classification Office found that 16 per cent of respondents aged 16-29 had come across material online (not just on Instagram) that encouraged self-harm such as cutting or burning. Slightly more had viewed material promoting anorexia. One in five had seen content encouraging suicide.
To better understand this phenomenon, the Weekend Herald spoke to several people including Anna who have been frequent consumers of this content, along with parents, researchers, clinicians, health officials, and regulators. (Anna spoke on the condition that we don’t use her real name or specific details that could identify her, including her location.) We also reviewed dozens of academic studies and examined thousands of posts on several platforms.
At its best, the content we examined was positive and uplifting, raising awareness of often-stigmatised conditions, encouraging people to seek help, and inspiring recovery. But there was also a staggering amount of much darker material, which critics say spreads misinformation about conditions and treatments and promotes self-destructive behaviour.
“It’s really unhealthy,” Anna says. “I just don’t think people understand how graphic it is, and how toxic.”
Critics worry about the aggregate effect of this material on a minority of vulnerable teens — that it could push them into deep, dark rabbit holes that compound their emotional difficulties and complicate their recovery.
Frances Haugen, a former Facebook product manager turned whistleblower, who in 2021 revealed internal company documents showing that Instagram’s own teenage users said it was bad for their mental health, has described Instagram as having an “escalator” effect that exposes users to more and more extreme content.
“When it comes to kids, you can start with a blank Instagram account, no friends, no interests, and just do some very innocuous searches like healthy eating, healthy recipes,” Haugen said in an interview last year with the Centre for Humane Technology, a think tank. “Just by clicking on the first five pieces of content each day, within a couple of weeks [you’re] being taken to pro-anorexia content. If you’re mildly depressed, it can push you towards self-harm content. These algorithms escalate. It’s an escalator that goes up evermore.”
Instagram’s parent company, Meta, which also owns Facebook and WhatsApp, says it is working to improve protections and support for its younger users. “We do not allow content that encourages or promotes suicide or self-harm, and we will remove it as soon as we are made aware of it, through a combination of in-app reporting and proactive technologies we use to detect it,” a spokesperson said in a statement.
“Our policies do not allow graphic self-harm content such as cutting in any circumstances,” Meta added. “We are committed to removing this type of content, but it will take time while we build new technology to find it and stop it being recommended. We are working together with youth and mental health experts on any changes, to help ensure we get it right.”
This is welcome, critics say, but much more needs to be done — and not just by tech companies. Policymakers and public officials have also been too slow to respond to the rapidly evolving risks, despite promises to improve youth mental health and suicide prevention.
In recent months, a series of developments have made the problem impossible to ignore. In September, a landmark case in Britain drew global attention when a coroner found that the negative effects of online content contributed to the death of a 14-year-old girl named Molly Russell. Tech companies are also facing growing scrutiny in the US in Congress, the media, and the courts.
In New Zealand, the Herald revealed in October that the coroner’s office is conducting an inquiry into the suspected suicides in 2019 of three young women who were linked on Instagram. Anna was connected to two of them.
Anna was 12 when it first occurred to her that deliberately inflicting pain on yourself is a thing that some people do to suppress mental anguish.
She says she arrived at this discovery on Tumblr, a microblogging platform.
This was in the early 2010s, when popular platforms such as Instagram, WhatsApp, Snap, and Pinterest were still fledgling operations and TikTok had not yet been founded. Tech companies had yet to encounter the public and regulatory backlash that would come later amid accusations that their pursuit of ever-greater user engagement was also amplifying social conflict.
In an early warning sign that there was a downside to the optimistic vision championed by social media’s founders, some platforms had been flooded with posts promoting various forms of self-harm. It got so bad that Tumblr introduced restrictions on material that “urges readers to cut or mutilate themselves; embrace anorexia, bulimia, or other eating disorders; or commit suicide”.
At this time, Anna was just beginning a slide into mental illness that would end up consuming her adolescence. Scared, lonely, and confused, she looked to the internet to help her make sense of what she was going through.
“I found what I was feeling,” she says.
To an impressionable, troubled preteen, the material Anna found on Tumblr was revelatory. It provided an explanation for the disturbing thoughts she was experiencing and a sense of validation that she “wasn’t getting anywhere else”. But it drew her into a dark online world that she now believes was detrimental to her mental well-being.
After first learning about self-harm online, Anna says she became addicted to it. At 13, she tried for the first time to kill herself and was admitted to a psychiatric facility, beginning a long journey through the mental health system.
By the mid-2010s, Instagram had become the main social outlet for Anna and her peers. Using a private account that gave no outward clues as to her identity or location, she connected with dozens of other young women who had been involved in mental health services, some of whom she met first offline in psychiatric facilities.
(“If you’ve been in a respite facility, if you’ve been in a psychiatric facility, if you’ve been in an outpatient programme, odds are you have an Instagram profile,” says another woman who has been involved in this community.)
“I wanted to connect with people who felt the same way,” Anna says.
In this closed community, a fringe of a fringe, she found people with a range of conditions: depression, anorexia, borderline personality disorder, chronic self-harm. What they had in common was a severe level of distress and a “craving for validation”, Anna says. Most had negative experiences of public services and felt they had nowhere else to go for support.
Anna spent hours a day on the app, sharing details with followers of her experiences in health care, self-harm incidents, suicidal urges, and doubts about the future with a level of intimacy and frankness she had never been able to summon with her doctors and counsellors.
“I’m such a failure,” she wrote on one occasion. “I’m really trying but it’s not working.”
“I f***ed up,” she posted one day after she’d been taken to an emergency department after self-harming.
“Life is s***,” she wrote another time with a selfie showing her in a hospital bed with a nasogastric tube attached. “I’m so depressed at the moment, and I have very little hope.”
Posting was a release, but over time Anna began to feel uneasy about the amount of time she was spending online and the impact it was having on her.
It wasn’t just the eventual psychic toll of being immersed for hours a day in her own and others’ misery that came to bother her.
Among the concerns that have been expressed about Instagram’s potential impact on young people’s mental health is that it amplifies pressure on teens to conform, exacerbates peer conflict, and encourages users to post more and more provocative content in pursuit of likes and followers. Anna says she saw all of those dynamics play out in her network.
In seeking affirmation, she felt that many in the group tended to overshare painful events such as self-harm incidents and suicide attempts. That could be distressing for a viewer dealing with their own trauma; for someone trying to avoid self-harming, it could prompt a relapse.
While Anna never doubted for a moment that her friends’ anguish was real, she worried that social media at times turned suffering into a kind of performance. It encouraged unhealthy comparison and competition. It incentivised people to amp up their posts to get more attention.
“It definitely does become competitive,” she says. “You see someone’s in hospital and then you know that there’ll be one or two or three more people that end up in hospital pretty shortly after because it’s almost contagious.”
Anna quickly adds that she’s not judging her peers harshly for this. “I don’t think any of the people doing it are being malicious or anything,” Anna says. “I think it’s all about a need for validation, a need for people to accept them.”
Anna says she always tried to be mindful when she was posting about her experiences that it could be damaging for others to read. She says she wrote about self-harm incidents because she wanted to warn others about the potential long-term consequences and discourage them from trying it. At the time, she thought she was treading a careful line, raising awareness without triggering anyone. But she has come to doubt that. Now she thinks it was unwise to have shared as much as she did.
“I think now it’s not so helpful to anyone, really,” she says.
The biggest concern about teenagers sharing content relating to mental health online is a phenomenon social scientists call “suicide contagion”.
It is well-established that people who are vulnerable to suicidal urges could be motivated to hurt themselves if they’re exposed to suicidal behaviour by others. This could be direct, for example the suicide of a family member, friend, or neighbour, or indirect, say if they read a news story about a celebrity taking their own life. (Concerns about copycat behaviour have prompted some jurisdictions, including New Zealand, to impose strict rules on depictions of suicide by broadcasters and news publishers.)
The worst scenario is a “cluster”, in which several people kill or attempt to kill themselves within a short time of each other.
In late 2019, authorities became aware that three young Kiwi women who were connected on Instagram died by suspected suicide, as the Herald first reported in October.
Among them was Cassandra Fausett, a 17-year-old from South Auckland who endured a “horrendous” two-year spiral into mental illness that resulted in numerous suicide attempts, police callouts, and hospital admissions.
Cassandra had a private Instagram account through which she shared intimate details of these traumatic experiences with a select group of young women — including Anna — right up until the moment she died. Cassandra posted pictures and a video on social media moments before she is suspected of taking her own life in September 2019, according to her parents.
Cassandra’s mother, Caroline, says social media provided Cassandra with a safe space to talk about her mental illness and that the contacts she made on the Instagram network were an important source of support. But there’s also evidence that social media wasn’t always beneficial to her state of mind. According to Cassandra’s medical notes, she first self-harmed at 13 after she “learnt about this through social media and decided to try it and see if it helped manage how she was feeling and found that it helped distract her from what was distressing her”.
And Cassandra herself expressed ambivalence about her social media use at times. In one of her final posts on her private Instagram account, she wrote: “This account is holding me back in my recovery. I’m probably going to delete or god knows what maybe aim to use it less idrk. But anyway I want this account to be a safe place for me to share my thoughts and feelings and make genuine friends. I honestly don’t know who I can trust though. If you want to stay on my account comment or dm me. I will be removing most people so if you want to stay/care about my journey lmk!”
Cassandra’s death was followed weeks later by that of another young woman in the community, Georgia, a 21-year-old from the South Island. (The Herald agreed to identify her only by her first name at the request of her family, to protect their privacy.) A third suspected suicide is being investigated by the coroner but that person’s identity is not known.
The deaths prompted an outpouring of grief in the community, Anna says. “Everyone was very, very upset and sad and shocked.”
She recalls a few users withdrawing after that, having decided it was detrimental to their mental health, but that drew “a lot of backlash from people who were still entrenched”. Most kept posting, including Anna.
When authorities investigating the deaths became aware that the young women were connected online, it sparked a flurry of activity across several government agencies.
Worried that others might try to take their own lives, officials held a series of conversations in the run-up to Christmas about how to respond. Arran Culver, a senior psychiatrist at the Ministry of Health (he has since been promoted to acting deputy director general for mental health and addiction services) emailed Caroline Fausett asking for access to Cassandra’s Instagram contacts, so they could figure out who else was in the network.
Police visited some of Cassandra’s friends and urged them to delete material from their phones, Caroline says. Clinical Advisory Services Aotearoa (CASA), a unit that specialises in responding to potential suicide clusters, was brought in to support users who could be identified and located.
However, sources familiar with the conversations said the authorities soon reached the limits, technically and legally, of what they could do about the community. The private settings made the network difficult to penetrate, and even if they could identify users they didn’t have the authority to stop anyone posting material that wasn’t illegal.
Officials at the Ministry of Health’s Suicide Prevention Office held discussions with Instagram’s parent company. The coroner’s office launched a joint investigation into the deaths, which is ongoing. But as the months went by, the government’s urgency to tackle the issue appeared to wane. Multiple sources across the mental health sector said the initial concerns didn’t lead to the formulation of a robust cross-government strategy in this area, and that it doesn’t seem to be a priority for health officials.
Matthew Tukaki, head of the Ministry of Health’s Suicide Prevention Office, says it is still on the radar. “We are concerned about this online content and communication about self-harm because, for example, it can increase self-harm urges and behaviour and sharing of details about methods,” Tukaki said in a statement.
Among the measures Tukaki says the SPO has taken in this area are adopting a service to monitor social media for self-harm and suicide-related material; holding regular meetings with government agencies about addressing harmful content; and developing guidance for young people, families, and schools.
“Educating young people, as well as the adults in their lives, including parents, whanau, teachers, clinicians and other support people about how to keep safe online is an important strategy in this space,” Tukaki says.
A year after Cassandra’s death, Anna lost another friend in the Instagram community, a young woman to whom she had also been close offline. The death affected her deeply.
By then, Anna’s growing doubts about her social media use had reached a tipping point. One day, she and another friend in the community had a long, searching conversation in which they both reflected on why it had absorbed them so intensely and whether it had been good for them.
Life generally was getting better for Anna. She had found a team of mental health professionals that made her feel supported in a way that she hadn’t experienced in the system before. And she had landed a part-time job that she found fulfilling and challenging.
Her mental health wasn’t yet fully recovered, but it was “certainly a lot better than it used to be”.
As Anna’s state of mind improved, she found herself using her Instagram account less and less. Eventually, she stopped posting, and then withdrew from the community.
Now that she has pulled back from that world, Anna says she is determined that others don’t fall down the same rabbit hole that she did.
She believes parents, educators, health professionals, regulators, and government officials need to do a lot more to protect young people online. Social media is a fact of modern life, an essential part of teenage interaction, and young people can’t be blamed for seeking help on it. She says there needs to be vastly more support and services available for them offline.
“It’s on us as a society to come up with solutions and alternatives so that no one needs to go there,” she says.
“People are struggling, and they want to find somewhere where they do fit in, belong, and are accepted.”
Anna still has a presence on Instagram, but when she goes on the platform now it’s usually for short periods at the end of the day.
She started a new account, populated by content that mostly has nothing to do with mental illness, and it is a much cheerier place than the endless stream of despair she was once immersed in. Dog memes and sports clips are her diversions now. She says she is trying to focus on constructive offline activities — work, spending time with friends, therapy. Some days she doesn’t open the app at all.
* Not her real name
Help us investigate
This article is the latest in a series that has examined the rise in mental distress among young people and the state of services that support them. If you have information about this topic, please contact Investigations Editor Alex Spence at alex.spence@nzme.co.nz. Because of the volume of correspondence, we cannot reply to all the responses we receive but we will read all of them. We will not publish your name or identify you as a source unless you want us to.
Where to get help
If it is an emergency and you or someone else is at risk, call 111.
For counselling and support
Lifeline: Call 0800 543 354 or text 4357 (HELP)
Suicide Crisis Helpline: Call 0508 828 865 (0508 TAUTOKO)
Need to talk? Call or text 1737
Depression helpline: Call 0800 111 757 or text 4202
For children and young people
Youthline: Call 0800 376 633 or text 234
What’s Up: Call 0800 942 8787 (11am to 11pm) or webchat (11am to 10.30pm)
For help with specific issues
Alcohol and Drug Helpline: Call 0800 787 797
Anxiety Helpline: Call 0800 269 4389 (0800 ANXIETY)
OutLine: Call 0800 688 5463 (0800 OUTLINE) (6pm-9pm)
Safe to talk (sexual harm): Call 0800 044 334 or text 4334
All services are free and available 24/7 unless otherwise specified.
For more information and support, talk to your local doctor, hauora, community mental health team, or counselling service. The Mental Health Foundation has more helplines and service contacts on its website.