COMMENT: The Christchurch Call summit has made specific progress, with tech companies and world leaders signing an agreement to eliminate terrorist and violent extremist content online.
The question now is how we collectively follow up on its promise.
The summit in Paris began with the statement that the terror attack in Christchurch two months ago was "unprecedented". But one of the benefits of this conversation happening in such a prominent fashion is that it draws attention to the fact this was not the first time social media platforms have been implicated in terrorism.
It was merely the first time a terrorist attack in a Western country was broadcast via the internet. Facebook played a significant role in the genocide of Rohingya Muslims in Myanmar, as covered in the Frontline documentary The Facebook Dilemma. And a 2018 study by Karsten Müller and Carlo Schwarz demonstrated a link between Facebook use and violence against refugees in Germany.
I hope attention now turns to the fact that social media platforms profit from indifference to harassment, and from harassment itself. Dealing with these problems falls squarely within the realm of corporate responsibility, yet the platforms have done nothing about them in the past.
Online communities whose primary purpose is to terrorise the people they target have existed for many years, and social media companies have ignored them. Anita Sarkeesian was targeted in 2012 after drawing attention to how women are represented in video games. She chronicled the abuse she received on Twitter in a single week, including threats of rape and murder.
Twitter did nothing.
When the Paris summit began, I hoped pressure from governments and the threat of regulation would prompt movement from social media companies, but I wasn't optimistic.
I expected social media companies would claim that algorithmic solutions could magically fix everything without human oversight, despite the fact that such systems can be, and are, gamed by bad actors.
I also thought the discussion might turn to removing anonymity.
Mainly, I thought there would be general, positive-sounding statements from tech companies about how seriously they were taking the summit, without concrete details of their plans.
I'm pleased to be wrong. The discussion has already raised specific and vital elements. As the Herald reported: "Tech companies have pledged to review their business models and take action to stop users being funnelled into extremist online rabbit holes that could lead to radicalisation. That includes sharing the effects of their commercially sensitive algorithms to develop effective ways to redirect users away from dark, single narratives."
The underlying business model of social media platforms has been part of the problem with abuse and harassment on their services. A great deal of evidence suggests algorithms designed in pursuit of profit are also fuelling radicalisation towards white supremacy.
Researcher Rebecca Lewis highlights that YouTube's business model is fundamental to the ways the platform pushes people towards more extreme content.
I never expected the discussions to get so specific that tech companies would explicitly put their business models on the table. That is promising, but the issue will be what happens next.
New Zealand Super Fund chief executive Matt Whineray has said an international group of 55 investment funds, worth US$3.3 trillion ($5 trillion), will put its financial muscle to the task of following up these initiatives and ensuring accountability. My question is how solutions and progress are going to be defined.
Social media companies have committed to greater public transparency about their setting of community standards, particularly around how people uploading terrorist content will be handled. But this commitment in the Christchurch Call agreement doesn't carry through to discussions of algorithms and business models.
Are social media companies going to make their recommendation algorithms open source and allow scrutiny of their behaviour?
That seems unlikely, given how fundamental those algorithms are to each company's business model. They are likely to be regarded as vital corporate property.
Without that kind of openness it's not clear how the investor group will judge whether any progress towards accountability is being made.
While the Christchurch Call has made concrete progress, we must collectively keep up the pressure.
That means pursuing transparent accountability through whatever means we can.
One specific step would be wider adoption of ethical best practice for covering extremist content in the news.
There is evidence that not naming the perpetrator makes a difference, and the guidelines New Zealand media adopted for the coverage of the trial of the Christchurch accused are another step in the right direction.