The business models of tech companies entice their users into harmful echo chambers that can eventually lead to the kind of tragedy that occurred in Christchurch on March 15, tech experts have told Prime Minister Jacinda Ardern.
Center for Humane Technology co-founders Tristan Harris and Aza Raskin attended the Voices For Action meeting with Ardern in Paris today in the lead-up to tomorrow's Christchurch Call summit.
At the summit, several countries and tech companies will for the first time sign up to a commitment to prevent terrorist and violent extremist content from being hosted online, including on Facebook, Instagram, Twitter, and Google-owned YouTube.
Part of the commitment is collaboration between governments and tech companies, which could lead to examining the broader issue of how the business models of social media platforms contribute to harm.
"Imagine a spectrum. On the left-hand side you have the calm science, and on the other side you have the radicalising crazy stuff," said Harris, who used to be Google's design ethicist.
"YouTube wants to tilt you to the crazy side of the spectrum, no matter where on the spectrum you start on. A teen girl watching a dieting video, they recommend anorexia videos because they're more extreme. If you start someone on a 9/11 news video, they recommend 9/11 conspiracy theories.
"You can hire 10,000 content moderators to stop the bad content, but you've still tilted the entire playing field with 2 billion people using it."
He said 70 per cent of what was watched on YouTube came from recommendations based on viewers' online behaviour.
"It's not like these people are searching for white nationalism videos. They land on something and get sucked in by recommendations.
"Getting people to question things and saying, 'let me tell you the full truth' - that's the first step of walking down that white nationalism rabbit hole."
Harris said the companies were focused on the "extractive attention economy".
"They privately profit, and any harm that shows up doesn't incur on their balance sheet.
"How are all these systems designed? Are they designed to tilt the world towards stronger democracies, towards truth, towards the strong mental health of children, or is it tilting in the opposite direction? That's the broader conversation that needs to happen."
Raskin, who used to be head of user experience at Mozilla, said the business model led to radicalisation, teen depression and isolation, and fake news.
"All of these feel like disparate problems, but in fact they're all driven by this ruthless desire to crawl down our brain stems to hijack human nature to try and get our attention.
"We are dosing humanity with 700 million hours of this kind of hate. What do we expect to happen?"
Collaborative work springing from the Christchurch Call could lead to governments and tech companies sharing data on the coded and ambiguous language that extreme alt-right groups use to entice people towards radicalisation.
Extremist groups use coded language to make content appear less controversial and avoid being flagged as potentially harmful.
Harris said the Christchurch Call was a great first step and a cause for optimism, but added that narrowly dealing with online violent extremism was like "trying to save coral reefs ... and not seeing climate change".
He said tech companies were also best placed to see how the system worked, and therefore how harm could be avoided.
"They are the ones who know the data about what got recommended to whom, and which clusters of videos tended to push people down different pathways."
Tech companies will not change their business models without public pressure, he said.
"The more people know, the more pressure they will be under."