Twitter, rebranded as X after being taken over by Elon Musk a year ago, has seen an increase in harmful content. Photo / Getty Images
When Elon Musk bought Twitter a year ago, he said he wanted to create what he called a “common digital town square.” “That said,” he wrote, “Twitter obviously cannot become a free-for-all hellscape.” A year later, according to study after study, Musk’s platform has become exactly that.
Now rebranded as X, the site formerly known as Twitter has experienced a surge in racist, antisemitic and other hateful speech. Under Musk’s watch, millions of people have been exposed to misinformation about climate change. Foreign governments and operatives — from Russia to China to Hamas — have spread divisive propaganda with little or no interference.
Musk and his team have repeatedly asserted that such concerns are overblown, sometimes pushing back aggressively against people who voice them. Yet dozens of studies from multiple organisations have shown otherwise, demonstrating on issue after issue a similar trend: an increase in harmful content on X during Musk’s tenure.
The war between Israel and Hamas — the sort of major news event that once made Twitter an essential source of information and debate — has drowned all social media platforms in false and misleading information, but for Musk’s platform in particular the war has been a watershed. The conflict has shown in full how far the platform has descended into the “free-for-all hellscape” he promised advertisers he wanted to avoid on the day he officially took over last year.
“With disinformation about the Israel-Hamas conflict flourishing so dramatically on X, it feels that it crossed a line for a lot of people where they can see — beyond just the branding change — that the old Twitter is truly gone,” Tim Chambers of Dewey Square Group, a public affairs company that tracks social media, said in an interview. “And the new X is a shadow of that former self.”
The growing sense of chaos on the platform has already hurt Musk’s investment. People visited the website 5.9 billion times in September, down 13 per cent from the same month last year, according to data analysis firm Similarweb.
Advertisers have also fled, leading to a sizable slump in sales. Musk noted this summer that ad revenue had fallen 50 per cent. He blamed the Anti-Defamation League, one of several advocacy groups that have catalogued the rise of hateful speech on X, for “trying to kill this platform.”
Most of the problems, however, stem from changes that Musk instituted — some intentionally, some not. Studies about the state of X have been conducted over the past year by researchers and analysts at universities, think tanks and advocacy organisations concerned with the spread of hate speech and other harmful content.
Musk disbanded an advisory council focused on trust and safety issues and laid off scores of employees who addressed them. For a monthly fee, he offered users a blue checkmark, a label that once conveyed that Twitter had verified the identity of the user behind an account. He then used algorithms to promote accounts of uncertain provenance in users’ feeds. He removed labels that identified government and state media accounts for countries like Russia and China that censor independent media.
“The entire year’s worth of changes to X were fully stress-tested during the global news breaking last week,” Chambers said, referring to the conflict in Israel. “And in the eyes of many, myself included, it failed utterly.”
X remains one of the most popular social media platforms, trailing only Facebook’s 16.3 billion monthly visits and Instagram’s 6.4 billion, according to Similarweb. TikTok, which is rising in popularity among certain demographic groups, has roughly 2 billion visits each month. Despite voluble threats by disgruntled users to decamp to alternatives such as Mastodon, Bluesky or Meta’s rival Threads, none of those platforms has yet reached the critical mass needed to replicate the public exposure that X offers.
Keeping X at the centre of public debate is exactly Musk’s goal, one he describes at times with messianic zeal. The day after Hamas attacked Israel, Musk urged his followers to follow “the war in real time.”
He then cited two accounts notorious for spreading disinformation, including one that had falsely claimed in the spring that an explosion had occurred outside the Pentagon. Faced with a flurry of criticism, Musk deleted the post and later sounded chastened.
“As always, please try to stay as close to the truth as possible, even for stuff you don’t like,” he wrote. “This platform aspires to maximize signal/noise of the human collective.”
Musk, the prominent, outspoken executive behind Tesla and SpaceX, had been an avid Twitter user for years before taking it over, promoting his ventures and himself, at times with crude, offensive comments. During the Covid-19 pandemic, he sharply criticised lockdowns and other measures to slow its spread and began to warn of a “woke” culture that silenced dissent.
Among his first acts as owner was to reverse the bans on thousands of accounts, including those of users who had spread disinformation about Covid and the 2020 election campaign. Others included followers of the QAnon conspiracy theory and fringe characters known for racist, sexist and homophobic demagogy.
The impact was instantaneous. Researchers at Tufts, Rutgers and Montclair State universities documented spikes in the use of racial and ethnic slurs within hours of Musk’s acquisition. A coordinated campaign on 4chan, a notorious bulletin board, encouraged the use of a particular slur in what seemed to be a test of the new owner’s tolerance for offensive speech.
The prevalence of such offensive language has, according to numerous studies, continued unabated. “The Musk acquisition saw a sustained rise in hateful speech on the platform,” the authors of a peer-reviewed article in the Misinformation Review, a journal of the Harvard Kennedy School, wrote in August.
Worse, the article suggested, Musk’s changes appear to be boosting engagement with the most contentious users.
A month into Musk’s ownership, the platform stopped enforcing its policy against Covid-19 misinformation. Liberal watchdog group Media Matters later identified the 250 accounts with the most engagement on Covid-related tweets. Nine of the top 10 accounts were known anti-vaccine proponents, several of whom promoted unproven and potentially harmful treatments and attacked top public health officials.
Musk’s first summer as X’s boss also coincided with a rash of climate-related disasters around the world, including deadly heat waves, rampaging wildfires, torrential rains and intense flooding. Last month, a scorecard evaluating social media companies on their defences against climate-related falsehoods awarded X a single point out of a possible 21 (Meta, which owns Facebook and Instagram, was given 8 points).
The platform was “lacking clear policies that address climate misinformation, having no substantive public transparency mechanisms, and offering no evidence of effective policy enforcement,” according to the accompanying report from Climate Action Against Disinformation, an international coalition of more than 50 environmental advocacy groups.
Earlier this year, X said it would end free access to software that allowed researchers to collect and analyse data about the site. In a public letter, hundreds of researchers, journalists and civil society groups said that the new hurdles to data access would disrupt an array of public-interest projects and “reduce the very transparency that both the platform and our societies desperately need.”
Perhaps the most consequential change under Musk has been the evolution of his subscription plans. The blue checkmark that previously denoted verified accounts, often those of government agencies, companies and prominent users, could now be purchased by anyone for US$8 a month. The label that once conveyed authenticity is now up for grabs.
Reset, a nonprofit research organisation, discovered that dozens of anonymous accounts linked to the Kremlin had received the badge and were pushing Russian narratives on the war in Ukraine.
In April, Musk removed the blue badges from accounts that had been verified under the old system. New accounts impersonating public officials, government agencies and celebrities proliferated, causing confusion about which were real. The platform went on to reward those who paid by amplifying their posts over those of users without the badge.
The same month, the platform also removed the labels that identified official state media of countries like Russia, China and Iran. In the 90 days following the change, engagement with posts from the English-language accounts of those outlets soared 70 per cent, NewsGuard, a company that tracks online misinformation, reported in September.
Musk has now run afoul of the European Union’s newly enacted Digital Services Act, a law that requires social media platforms to restrict misinformation and other content that violates its rules within the bloc’s 27 nations.
A report commissioned by the union’s executive body warned in August that Musk’s dismantling of guardrails on the platform had resulted in a 36 per cent increase in engagement with Kremlin-linked accounts between January and May, mostly pushing Russia’s justifications for its illegal invasion of Ukraine last year.
After the war in Israel erupted, Thierry Breton, the European commissioner who oversees enforcement of the law, warned Musk in a letter — posted on X — that the company needed to address “violent and terrorist content that appears to circulate on your platform.”
Reset, the research organisation, reported on Friday that it had documented 166 posts its researchers considered antisemitic, many of which would appear to violate laws in several European countries, such as calls for violence against Jews and denial of the historical facts of the Holocaust. The posts accumulated at least 23 million views and 480,000 engagements.
Musk, who has remade the platform in his own image, sounded incredulous at the criticism, even as the company scrambled to delete accounts linked to Hamas and other terrorist groups. Two days later, he responded to an account, @KanekoaTheGreat, that the Anti-Defamation League has described as one of the most prominent purveyors of disinformation. The account had been removed from Twitter but was restored in December 2022 after Musk took over.
“They still haven’t provided any examples of disinformation,” Musk replied.