Police officers near Al Noor Mosque after it was attacked in March. Photo / Adam Dean, The New York Times
A video game that uses footage of the Christchurch massacre to put Muslims in a gunman's crosshairs. Memes featuring the face and weapons of the man charged in that New Zealand attack. Messages on online forums that glorify him as patron saint of the far right.
New Zealand has worked hard to keep the name of Brenton Tarrant, the man charged with killing 51 Muslims in Christchurch, out of the news and to restrict the spread online of the hateful ideology he is accused of promoting. But the footage, games, memes and messages that still populate the dark corners of the global internet underline the immensity of the task, especially for a small country like New Zealand.
"The internet is a very complex and rough environment, and governments, especially small governments, don't have as many cards as they would like to play," said Ben Buchanan, a cybersecurity expert who teaches at Georgetown University.
Shortly after the March 15 attack, Prime Minister Jacinda Ardern declared that she would never utter the accused gunman's name and that she would do whatever she could to deny him a platform for his views.
A few days later, the New Zealand government banned the sharing or viewing of a 74-page manifesto that the accused is believed to have written. The country also declared it a crime to spread the video purporting to show the massacre; more than a dozen people have been officially warned or charged.
Ardern followed those actions with an effort, which she branded the Christchurch Call, to enlist tech companies like Facebook, Google, Twitter and YouTube to do more to curb violent and extremist content. In an op-ed, Ardern noted that her government could change gun laws and tackle racism and intelligence failures but that "we can't fix the proliferation of violent content online by ourselves."
Seventeen countries and the European Commission, as well as eight large tech companies, have signed on to her call. And late last week, leaders at the Group of 20 summit in Osaka, Japan, issued their own appeal to tech companies, declaring in a statement that "the rule of law applies online as it does offline."
But, if anything, the appetite for material connected to the Christchurch attack continues to grow, said Ben Decker, the chief executive of Memetica, a digital investigations consultancy.
Facebook said that an apparent livestream of the Christchurch attack was viewed by fewer than 200 users, but that videos of the attack posted later were watched by 4,000 others, and that the platform blocked more than 1 million uploads in the days after the assault. It is unclear how many uploads have been attempted in the months since.
The video game adapting the purported Christchurch footage is still being shared online. Modelled on other so-called first-person-shooter games, it tracks a gunman who enters a mosque, drawing a gun and killing anyone in his path.
In the days leading up to a court appearance by the accused last month, during which he pleaded not guilty to charges that included murder and terrorism, memes featuring him spiked across message boards, Decker said. Scores of boards are devoted to the accused.
And on the day the accused was due in court, a user on Reddit announced a plan to attack a mosque in Texas, vowing to follow the example of "our lad." Many users flagged it to the police, and no attack occurred.
"You have these toxic communities trying to infect more mainstream congregations with xenophobia, Islamophobia and threats of mass violence," Decker said. "The fact that it moves across platforms allows users to notify law enforcement. It definitely is a tale of two internets."
Decker was among the consultants the New Zealand authorities met with as Ardern prepared to travel to Paris in May to issue her Christchurch Call. One question she has grappled with is how far New Zealand, an island nation of just under 5 million people, will go to keep the rest of the world at bay.
After the Christchurch attack, local internet service providers suspended access to websites that hosted videos of the shooting and apologised for the censorship, even as they acknowledged that they could not completely prevent users from viewing the material.
"We appreciate this is a global issue; however, the discussion must start somewhere," the companies said in a statement addressed to the heads of Facebook, Google and Twitter. "We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content."
The press in New Zealand has also imposed restrictions on itself. As news outlets have prepared to cover the trial, which is scheduled for May, they have voluntarily agreed to limit coverage of anything that could amplify white supremacist ideology, including the manifesto.
That manifesto has already had an impact beyond New Zealand's shores. In April, a gunman entered a synagogue 40km from San Diego, killing one person and injuring three others. The suspect claimed to have been inspired by the Christchurch shootings, had reportedly posted his own manifesto online and may have tried to livestream the shooting.
Senator Josh Hawley, R-Mo., has introduced a bill to amend legislation that protects tech companies from liability for content posted by their users.
8Chan, which cooperated with law enforcement after the Christchurch attack, has criticised the bill, saying that any erosion of the legislation is "an affront to liberty and freedom of speech online."
Ardern has said she hopes that less mainstream platforms will become more open to stamping out extremist content if the major platforms can reach a consensus on the issue.
Given the free speech considerations, and the gargantuan task that tech companies face in monitoring online speech, there has been a focus on the role that artificial intelligence could play in blocking hateful content, including at a House hearing late last month.
But Buchanan, the Georgetown expert, who attended the hearing, told the committee that automated systems alone would not be able to solve the problem.
Alex Stamos, a former chief security officer at Facebook and now the director of the Stanford Internet Observatory, said at the hearing that there were several steps that tech companies could take to address extreme content online, including being more transparent.
"While there is no single answer that will keep all parties happy, the platforms must do a much better job of elucidating their thinking processes and developing public criteria that bind them to the precedents they create with every decision," Stamos said.
"There remain many kinds of speech that are objectionable to some in society but not to the point where huge, democratically unaccountable corporations should completely prohibit such speech," he added. "The decisions made in these gray areas create precedents that aim to serve public safety and democratic freedoms but can also imperil both."