Photo / Samar Abu Elouf, The New York Times
A video of a Hamas assailant firing his assault rifle at a car full of Israeli civilians has been viewed more than 1 million times on X, formerly known as Twitter, since it was uploaded Sunday.
A photograph of dead Israeli civilians, strewn across the side of a road in an Israeli kibbutz near the Gaza Strip, has been shared more than 20,000 times on X.
And an audio recording of a young Israeli woman’s desperate cries for help as she was being kidnapped from her home has been shared nearly 50,000 times on the platform.
Since Hamas launched a deadly cross-border attack into Israel over the weekend, violent videos and graphic images have flooded social media. Many of the posts have been seeded by Hamas to terrorise civilians and take advantage of the lack of content moderation on some social media sites — particularly X and Telegram — according to a Hamas official and social media experts interviewed by The New York Times.
The strategy mirrors efforts by extremist groups such as the Islamic State group and al-Qaida, which took advantage of the lack of guardrails at social media companies years ago to upload graphic footage to the internet. Social media companies reacted then by removing and banning accounts tied to those groups.
The issue has sprouted anew in the past week, particularly on X, where safety and content moderation teams have largely disbanded under Elon Musk’s ownership, and on Telegram, the messaging platform that does virtually no content moderation.
Israeli groups that monitor social media for hate speech and disinformation said graphic imagery often starts on Telegram. It then moves to X before finding its way to other social media sites.
“Twitter, or X as they are now called, has become a war zone with no ethics. In the information war being fought, it is now a place where you just go and do whatever you want,” said Achiya Schatz, director of FakeReporter, an Israeli organisation that monitors disinformation and hate speech.
In the past, his group has reported fake accounts or violent content to X, which would then remove the post if it violated its rules, Schatz said. Now, he added, there is no one at the company to talk to.
“Everyone we once worked with is gone. There is no one to reach at that company,” he said. “The information war on Twitter is gone, lost. There is nothing left to fight there.”
He added that platforms such as Facebook, YouTube and TikTok had been responsive to requests about removing graphic images and misinformation from their platforms, although the companies were being inundated with requests.
Telegram and X did not respond to requests for comment. Over the weekend, X’s safety team tweeted an update to its policies, stating that it was removing Hamas-affiliated accounts and had taken action on tens of thousands of posts.
Nora Benavidez, senior counsel at Free Press, a media advocacy group, said the state of discourse on X during the conflict was “the terrible but natural consequence of 11 months of misguided Musk decisions.”
She cited the rollback of policies against toxic content, cuts in staff and the prioritisation of subscription accounts, which “now allows, even begs for, controversial and incendiary content to thrive.”
Some subscription accounts have also been posting fake or doctored images, said Alex Goldenberg, the lead intelligence analyst at the Network Contagion Research Institute at Rutgers University.
Researchers have identified video game footage posted on TikTok that was passed off as actual combat footage. Old images from the civil war in Syria and a propaganda video from Hezbollah, the Lebanese Shiite militant organisation, have also been recirculated as new.
“It’s a problem across social media,” Goldenberg said.
Schatz said his organisation on Sunday identified a video of children in cages that had been viewed millions of times on X, amid claims that the children were Israeli hostages of Hamas. While the origins of the video aren’t clear, Schatz found versions of the video posted weeks ago on TikTok, and other researchers have discovered versions of the video on YouTube and Instagram claiming it was from Afghanistan, Syria and Yemen.
“We reported that the video was fake, and definitely not a current video from Gaza, but nobody at X responded,” Schatz said. “The real videos are bad enough without people sharing these fake ones.”
The effect of the videos has been stark. Some Israelis have begun avoiding social media for fear of seeing missing loved ones featured in graphic footage.
Sol Adelsky, an American-born child psychiatrist who has been living in Israel since 2018, said many parents had been advised to keep their children off social media apps.
“We are really trying to limit how much stuff they are seeing,” he said. “Schools are also giving guidance for kids to be off certain social media apps.” Some schools in the United States have also encouraged parents to tell their children to delete the apps.
Adelsky added that even with the guidance, a lot of unverified claims and frightening messages had made their way to people through messaging apps such as WhatsApp, which are popular among Israelis.
Fear and confusion are part of the strategy, according to a Hamas official who would speak only on the condition of anonymity.
The official, who used to be responsible for creating social media content for Hamas on Twitter and other platforms, said the group wanted to establish its own narratives and seek support from allies through social media.
When the Islamic State group published videos of beheadings on social media, he said, the footage served as a rallying cry for extremists to join its cause, and as psychological warfare against its targets. While he stopped short of saying that Hamas was following a playbook laid out by the Islamic State group, he called its social media strategy successful.