The ability of Internet users to spread a video of Friday's slaughter in New Zealand marked a triumph — however appalling — of human ingenuity over computerised systems designed to block troubling images of violence and hate.
People celebrating the mosque attacks that left 50 people dead were able to keep posting and reposting videos on Facebook, YouTube and Twitter despite the websites' use of largely automated systems powered by artificial intelligence to block them. Clips of the attack stayed up for many hours and, in some cases, days.
This failure has highlighted Silicon Valley's struggles to police platforms that are massively lucrative yet also persistently vulnerable to outside manipulation despite years of promises to do better.
Friday's uncontrolled spread of horrific videos — a propaganda coup for those espousing hateful ideologies — also raised questions about whether social media can be made safer without undermining business models that rely on the speed and volume of content uploaded by users worldwide. In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media.
"It's an uncontrollable digital Frankenstein," said Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology.
Those pushing videos of Friday's attack made small alterations — such as changing the colour tones or length — to the shooting video originally live-streamed by the alleged killer himself through his Facebook page. Such tricks were often enough to evade detection by the artificial-intelligence systems that some of the world's most technologically advanced companies designed to block such content.
But for all of the investment in such technology, even more has gone into making social media platforms powerful springboards for delivering images, sounds and words to as many people as possible — for the purpose of generating advertising revenue that fuels profits measured in the tens of billions of dollars each year.
"The only good thing that's come out of this is that it laid bare the lie that the big tech companies can solve this problem with AI because they really cannot," said Cathy O'Neil, author of "Weapons of Math Destruction," about the societal impact of algorithms. "Nobody knows how to counteract this stuff. They should stop pretending they do."
Mia Garlick, the head of communications and policy for Facebook in Australia and New Zealand, said the company would "work around the clock to remove violating content using a combination of technology and people." Garlick said the company is now also removing edited versions of the video that do not show graphic violence.
Twitter did not respond to a request for comment Monday, and Reddit declined to comment, but both have described working hard over several days to remove objectionable content related to the shooting. For Reddit, that included forums named "gore" and "watchpeopledie."
A YouTube executive, in an exclusive interview with The Washington Post, acknowledged that the platform's systems were overwhelmed and promised to make improvements. "We've made progress, but that doesn't mean we don't have a lot of work ahead of us, and this incident has shown that," said Neal Mohan, YouTube's chief product officer.
Facebook, YouTube and Twitter have scrambled before and largely failed to contain odious content on their platforms. Nor is this the first time killers have used social media to deliver images of their crimes.
But nobody before had staged a mass-casualty attack in a way so clearly calculated to spread virally across social media. Many critics worry that because it has happened once, it almost certainly will happen again.
Micah Schaffer, a technology policy consultant and a former director at YouTube, said that over the past decade social media companies have designed software to more effectively promote content to broad audiences. Those design choices, he said, have made it easier for content to spread online.
"Back in 2007, when I was at YouTube, if a video on the homepage was a success, that meant getting hundreds of thousands of views," Schaffer said. "These videos were handpicked, and we scrutinized each one. Compare that today to the recommendation algorithm sending millions of views to videos with no human intervention. To me, that's just irresponsible."
Those who study social media say that slowing the spread of appalling videos might require the companies to change or limit some features that help spread stimulating content. Those include powerful search and recommendation algorithms, nearly instantaneous uploads and autoplay.
The companies' losing battle to keep content in check already is having financial consequences. Facebook's stock recorded its steepest drop of the year, falling more than 3 per cent on Monday. In a note to clients, Needham & Co. analyst Laura Martin blamed the negative effects of "horrific images ... that are technologically difficult to block at the 100 per cent level and which hurt [Facebook's] brand."
Shares in Google's parent company, Alphabet, slid less than 1 per cent.
The day of the New Zealand shooting, authorities say, the alleged gunman, Brenton Tarrant, posted on the anonymous message board 8chan that he would "live stream the attack via facebook" and included a link to his Facebook page, along with links to a number of other sites where an anti-Muslim manifesto and other documents were stored.
Within minutes of this announcement, a flood of anonymous posters was cheering him on and rallying to save and re-upload the video to give it maximum distribution online: "I am downloading it"; "Stream saved"; "SAVE THIS S*** NOW"; "GRAB THEM WHILE YOU CAN."
One poster wrote that the gunman had "delivered" on his pledge: "I just saw him kill so many" people, using an obscene term for Muslims. "Nice shootin," another wrote. 8chan did not respond to requests for comment.
"Human nature has always had evil and horrific elements to it. But we're not asking for the platforms to solve humanity — that's not the issue," said Mike Ananny, an associate professor of communication at the University of Southern California. "These companies have the advertising monopolies, the eyeball-attention monopolies. … We have to expect them to be a much more responsible player than they are."
When the tech giants want to block a troubling video, they add a digital fingerprint of the original to a vast internal blacklist so that their systems can quickly recognize a copy whenever it resurfaces. This type of "hashing" technology is a key reason companies can automatically flag or block child pornography, terrorist propaganda and copyrighted material before it spreads widely online.
But their algorithms depend on pattern recognition, and even the most sophisticated systems are easily deceived. Anyone hoping to spread an otherwise blocked video — movie pirates and creators of extremist content alike — can cut out shorter snippets or change the video's playback speed, colouring, sound or dimensions, then upload it again.
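The weakness is easy to see with exact file hashing. The short Python sketch below is purely illustrative, not the platforms' actual systems: a cryptographic hash flags only byte-for-byte copies and misses a lightly altered file, while even a crude, tolerance-based fingerprint still matches it. The function names and the toy histogram fingerprint are invented for this example.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Cryptographic hash: any change to the file yields a completely different value."""
    return hashlib.sha256(data).hexdigest()

def coarse_fingerprint(data: bytes, buckets: int = 16) -> list:
    """Toy stand-in for a perceptual fingerprint: a coarse histogram of byte
    values (as percentages), which barely moves under small edits."""
    hist = [0] * buckets
    for b in data:
        hist[b * buckets // 256] += 1
    total = max(len(data), 1)
    return [round(100 * h / total) for h in hist]

def similar(fp_a, fp_b, tolerance=5):
    """Call it a match if every histogram bucket differs by at most `tolerance` points."""
    return all(abs(a - b) <= tolerance for a, b in zip(fp_a, fp_b))

# Stand-ins for the original upload and a lightly altered re-upload
# (e.g. re-encoded with slightly shifted colour values).
original = bytes(range(256)) * 1000
altered = bytes((b + 1) % 256 for b in original)

print(exact_fingerprint(original) == exact_fingerprint(altered))           # False: the exact hash no longer matches
print(similar(coarse_fingerprint(original), coarse_fingerprint(altered)))  # True: the coarse fingerprint still matches
```

Production systems rely on far more sophisticated perceptual and video fingerprints built to tolerate re-encoding, cropping and colour shifts, but that very tolerance leaves room for determined uploaders to keep probing for alterations that slip past it.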
Facebook said that it blocked 80 per cent of the New Zealand first-person shooting videos from being uploaded within the first 24 hours but that roughly 300,000 videos slipped through — enough to put them within reach of millions of people online. The company also joined its peers, including Twitter and YouTube, in sharing information about terrorist propaganda as part of the industry-led Global Internet Forum to Counter Terrorism. On Monday, the group said it had identified "more than 800 visually-distinct videos related to the attack," adding: "This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online."
U.S. law largely shields major tech platforms from being held liable for the content users post to their sites. Silicon Valley sees this decades-old legal shield — known as Section 230, after the provision of the 1996 Communications Decency Act that created it — as the reason major social media sites have flourished without having to defend themselves against lawsuits over user-generated content.
But frustrated lawmakers from both parties increasingly are wondering whether the time has come to craft new legislation to curb digital ills such as hate speech and terrorist propaganda.
"I think that we absolutely need to have a hearing and understand exactly what happened in the situation to determine the best solutions to prevent it from happening in the future," said Rep. Suzan DelBene (D-Wash.). "What happened in New Zealand is horrifying and heartbreaking, and that video should never be available online."
But a chief architect of Section 230, Sen. Ron Wyden (D-Ore.), urged caution. While saying that tech giants must be "far more vigorous about identifying, fingerprinting and blocking content and individuals who incite hate and violence," Wyden said that eliminating the section could have unintended consequences for free speech.
"So often in the wake of horrible events politicians grasp for knee-jerk responses that won't solve real problems, and may even make them worse," he added in a statement.
European policymakers have been more aggressive in holding technology giants responsible for what appears on their platforms. Germany, for example, began enforcing a law last year that requires social media sites to remove instances of hate speech within 24 hours. The European Union broadly has proposed rules that would give Facebook, YouTube and others an hour to remove terrorist content — or face hefty fines.
Facebook chief Mark Zuckerberg told congressional lawmakers last year that human moderators and AI systems would help the company solve some of its most intractable problems, including automatically flagging harmful videos.
But AI researchers have said that view ignores the many ways in which automated systems fall far short of human ability, including in understanding context and common sense. Today's algorithms excel at narrow tasks but are what AI engineers call frustratingly "brittle": even small changes to their tasks or training data can quickly cause them to fail.
Stephen Merity, a machine learning researcher in San Francisco, said tech companies do not want to use more drastic measures, such as tougher restrictions on who can upload or bigger investments in content-moderation teams, because of how they could alter their sites' usability or business model. "They don't want to do that, so they make these wild promises," he said.
But "you can't bank on future magical innovations," Merity said. "We're past the point of ceding the benefit of the doubt to these tech companies."