By Margaret Sullivan, The Washington Post's media columnist
Right from the twisted start, those who plotted to kill worshippers at two New Zealand mosques depended on the passive incompetence of Facebook, YouTube and other social media platforms.
They depended on the longtime priorities of the tech giants who, for years, have concentrated on maximising revenue, not protecting safety or decency.
They got it.
Many hours after the massacre, a horrific 17-minute video, showing a man in black firing a semiautomatic rifle at people fleeing the mosques and shooting into piles of bodies, could still be easily accessed on YouTube.
My colleague, Washington Post tech reporter Drew Harwell, summed up the social media disaster succinctly in a tweet: "The New Zealand massacre was live-streamed on Facebook, announced on 8chan, reposted on YouTube, commented about on Reddit, and mirrored around the world before the tech companies could even react."
It gets worse. The brutality that killed at least 49 people and wounded many others was fuelled and fomented on social media, inviting support and, no doubt, inspiring future copycats.
One of the suspects had posted a 74-page manifesto railing against Muslims and immigrants, making it clear he was following the example of those like Dylann Roof, who in 2015 murdered nine black churchgoers in Charleston, South Carolina.
All of it ricocheted around the globe, just as planned.
The platforms, when challenged on their role in viral violence, tend to say that there is no way they can control the millions of videos, documents and statements being uploaded or posted every hour around the world. They respond when they can, often with agonising slowness and far too late.
And they insist on presenting themselves not as media companies with some sort of gatekeeping or editing responsibility, but as mere platforms — places for their billions of users to do pretty much what they wish.
To the extent that the companies do control content, they depend on low-paid moderators or on faulty algorithms. Meanwhile, they put tremendous resources and ingenuity — including the increasing use of artificial intelligence — into their efforts to maximise clicks and advertising revenue.
This is far from the first time acts of violence have been posted in real time. Since Facebook launched its live-video tool in 2015, it has been used to broadcast murder, child abuse and every sort of degradation.
But the tragedy in New Zealand takes this dangerous — and largely untended — situation to a new level that demands intense scrutiny and reform.
Granted, there are tough issues here, including those involving free speech and the free flow of information on the internet.
Reddit, for one, often takes the view that its users deserve to be treated like grown-ups who can see what they want to see.
As its representatives on Friday closed down a forum called "watchpeopledie", where users commented on the massacre video, they sounded regretful:
"The video is being scrubbed from major social-media platforms, but hopefully Reddit believes in letting you decide for yourself whether or not you want to see unfiltered reality," the post said. "Regardless of what you believe, this is an objective look into a terrible incident like this."
Where are the lines between censorship and responsibility?
These are issues that major news companies have been dealing with for their entire existences — what photos and videos to publish, what profanity to include.
Editorial judgment, often flawed, is not only possible. It's necessary.
The scale and speed of the digital world obviously complicate that immensely. But "we can't help it" and "that's not our job" are not acceptable answers.
Friday's massacre should force the major platforms — which are really media companies, though they don't want to admit it — to get serious.
As violence goes more and more viral, tech companies need to deal with the crisis that they have helped create.