Australian Prime Minister Anthony Albanese has supported calls to limit children’s access to social media, and Opposition leader Peter Dutton has pledged to implement a ban on under-16s using the platforms.
Australia’s government has also pledged $6.5 million to trial age verification technology.
Public discourse about implementing a social media ban for those under the age of 16 highlights the genuine concerns many New Zealanders have about the online experiences our young people are having.
More can, and should, be done to protect all users from online harms, but a blanket age ban is not a silver bullet - not from a public policy perspective, and not for our young people.
Netsafe, as the country’s independent online safety charity and the approved agency under the Harmful Digital Communications Act, knows first-hand the challenges adults and young people alike face online.
In the 2024 financial year, we received more than 28,000 reports of online harm, and we’ve been helping Kiwis navigate the digital world for 25 years, providing practical tools, advice and support.
Delaying exposure to social media for an additional three years (13 being the current minimum age required by most social media providers and 16 the age being discussed) does not in itself eliminate the risks young people face online.
Nor does it do anything practical to help young people manage those risks, whether they are on a platform legitimately or not.
We hear from parents and caregivers worried about cyberbullying, exposure to harmful content, privacy breaches, or how much time their children spend online.
However, when we speak to young people, their worries look quite different. They’re more focused on navigating friendships, dealing with romantic relationships in the digital space, and managing their mental health and self-esteem.
The gap between what concerns parents and caregivers and what concerns young people adds to the complexity of how we respond and help address these issues.
We know many children are given internet-enabled devices before the age of 13.
It is primarily parents and caregivers who control that decision, but it’s also one parents tell us leads to social media habits, and outcomes, over which they quickly feel they have no control.
Implementing an age ban (how to enforce one is another topic) may seem like a quick fix, but it ignores a few crucial realities.
Firstly, we can’t assume that because a child doesn’t have a phone at school or social media at home, they’re shielded from all online risks. The public debate should be about helping young people develop the skills needed to self-regulate, manage distractions and make informed decisions.
Secondly, it’s equally important to acknowledge young people can have positive online experiences too. Many are connecting with causes, getting help, demonstrating skills, learning, or finding communities they identify with online. Not everyone has a safe neighbourhood, a supportive family, or feels included in their classroom.
Social media platforms have a moral responsibility to make their spaces safer and more positive for young users and there is a huge amount of further work needed.
But if we simply ban younger users, we remove the ongoing pressure on these companies to improve their platforms.
The vital pillar in this broader debate about online safety is education.
Digital literacy must be a fundamental part of a young person’s education, and the status quo simply isn’t cutting it.
Finland has embedded digital literacy into its national curriculum and primary-aged children are taught critical thinking skills - from learning to distinguish fact from fiction to knowing how to spot an AI-generated image.
Young people need to be taught the “rules of the road” online, just like they’re taught to read and write. With enhanced media literacy skills, young people can understand the risks, recognise potential hazards and keep themselves safe online.
A recent survey of more than 2000 Kiwi parents showed that more than 74% wanted more information about how to keep their kids safe online. Time-poor parents prefer practical tools such as how-to guides and one-page top tips.
Every day we speak to parents and caregivers who are unaware of the parental controls at their disposal for managing screen time and social media use, and we encourage all parents to visit our website for practical advice.
At Netsafe, we’re committed to developing and delivering practical and useful resources to bring families together, empowering both parents and young people.
This year alone, we’ve launched a website of resources for teachers, enhanced our chatbot Kora, and released a free programme for primary-aged children using internet-connected devices at home. We’ve created guides explaining the parental controls available on popular platforms like Instagram, TikTok and Snapchat.
Professor Daniel Johnson from the ARC Centre of Excellence for the Digital Child recently described a potential ban on social media as “problematic at best and arguably reckless”.
He says it would either drive social media use underground, where the risks are greater, or cut young people off from the services and networks critical to their health and wellbeing.
Netsafe shares this view, and that of many other experts and academics across the online safety sector.
With better education, stronger parental controls and more accountability from social media platforms, we can ensure that young people have the skills they need to thrive in the digital world.
If we focus too much on shielding our young people from the digital environment rather than protecting them within it, we risk creating entire generations ill-equipped for the contemporary world - its challenges and its opportunities.