Like other social media companies, Twitter has once again found itself in a position akin to that of traditional newspaper editors, who wrestle with difficult decisions about how much to show their audiences. Though newspapers and magazines generally spare their readers from truly graphic images, they have made some exceptions, as Jet magazine did in 1955 when it published open-casket images of Emmett Till, a 14-year-old Black boy who was murdered in Mississippi, to illustrate the horrors of the Jim Crow-era South.
Unlike newspaper and magazine publishers, however, tech companies like Twitter must enforce their decisions on a huge scale, policing millions of users with a combination of automated systems and human content moderators.
Other tech companies like Facebook’s parent, Meta, and YouTube’s parent, Alphabet, have invested in large teams that reduce the spread of violent images on their platforms. Twitter, on the other hand, has scaled back its content moderation since Mr. Musk bought the site late last October, laying off full-time employees and contractors on the trust and safety teams that manage content moderation. Mr. Musk, who has described himself as a “free speech absolutist,” said last November that he would establish a “content moderation council” that would decide which posts should stay up and which should be taken down. He later reneged on that promise.
Twitter and Meta did not respond to requests for comment. A spokesman for YouTube said the site had begun removing video of the massacre, adding that it was promoting authoritative information sources.
Twitter never completely banned graphic content, even before Mr. Musk took over. The platform, for instance, has allowed images of people killed or wounded in the war in Ukraine, arguing that they are newsworthy and informative. The company sometimes places warning labels or pop-ups on sensitive content, requiring that users opt in to see the imagery.
While many users clearly spread the images of the massacre, including of the dead attacker, for shock value, others retweeted them to underscore the horrors of gun violence. “The N.R.A.’s America,” one tweet read. “This isn’t going away,” said another. The New York Times is not linking to the social media posts containing the graphic images.
Claire Wardle, the co-founder of the Information Futures Lab at Brown University, said in an interview that tech companies must balance their desire to protect their users with the responsibility to preserve newsworthy or otherwise important images — even those that are uncomfortable to look at. She cited as precedent the decision to publish a Vietnam War image of Kim Phuc Phan Thi, who became known as “Napalm Girl” after a photo of her suffering following a napalm strike circulated around the world.
She added that she favored keeping graphic images of newsworthy events online, behind some kind of overlay that requires users to choose to see the content.
“This is news,” she said. “Often, we see this kind of imagery in other countries and nobody bats an eyelid. But then it happens to Americans and people say, ‘Should we be seeing this?’”
For years, social media companies have had to grapple with the proliferation of bloody images and videos following terrible violence. Last year, Facebook was criticized for circulating ads next to a graphic video of a racist shooting rampage in Buffalo, N.Y., that was live-streamed on the video platform Twitch. The Buffalo gunman claimed to have drawn inspiration from a 2019 mass shooting in Christchurch, New Zealand, that left 51 people dead and was broadcast live on Facebook. For years, Twitter has taken down versions of the Christchurch video, arguing that the footage glorifies the violent messages the gunman espoused.
Though the graphic images of the Texas mall shooting circulated widely on Twitter, they seemed to be less prominent on other online platforms on Sunday. Keyword searches for the Allen, Texas, shooting on Instagram, Facebook and YouTube yielded mostly news reports and less explicit eyewitness videos.
Sarah T. Roberts, a professor at the University of California, Los Angeles, who studies content moderation, drew a distinction between traditional media companies and social media platforms, noting that the platforms are not bound by the ethics traditional journalists adhere to, including minimizing harm to the viewer and to the friends and family of the people who were killed.
“I understand where people on social media are coming from who want to circulate these images in the hopes that it will make a change,” Ms. Roberts said. “But unfortunately, social media as a business is not set up to support that. What it’s set up to do is to profit from the circulation of these images.”
Written by: Benjamin Mullin
© 2023 THE NEW YORK TIMES