Messages of support for anti-racism are spreading from social media executives. The problem is the companies don't address how their platforms are weaponised by racists. Photo / Matt Chase, NYT
Shows of support from Facebook, Twitter and YouTube don't address the way those platforms have been weaponised by racists and partisan provocateurs.
Several weeks ago, as protests erupted across the nation in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, decrying racial bias and proclaiming that "black lives matter." Zuckerberg, Facebook's chief executive, also announced that the company would donate US$10 million to racial justice organisations.
A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the company's chief executive, pledged US$3 million to an anti-racism organisation started by Colin Kaepernick, the former NFL quarterback.
YouTube joined the protests, too. Susan Wojcicki, the company's chief executive, wrote in a blog post that "we believe Black lives matter and we all need to do more to dismantle systemic racism." YouTube also announced it would start a US$100 million fund for black creators.
Pretty good for a bunch of supposedly heartless tech executives, right?
Well, sort of. The problem is that, while these shows of support were well-intentioned, they didn't address the way that these companies' own products — Facebook, Twitter and YouTube — have been successfully weaponised by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements. It's as if the heads of McDonald's, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.
It's hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalised. In Tahrir Square in Cairo; Ferguson, Missouri; and Baltimore, activists used Twitter and Facebook to organise demonstrations and get their messages out.
But in recent years, a right-wing reactionary movement has successfully turned the tide. Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.
The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn't necessarily know it by scrolling through your social media feeds.
On Facebook, for example, the most popular post on the day of Zuckerberg's Black Lives Matter pronouncement was an 18-minute video posted by right-wing activist Candace Owens. In the video, Owens, who is black, railed against the protests, calling the idea of racially biased policing a "fake narrative" and deriding Floyd as a "horrible human being." Her monologue, which was shared by right-wing media outlets — and which several people told me they'd seen because Facebook's algorithm recommended it to them — racked up nearly 100 million views.
Owens is a serial offender, known for spreading misinformation and stirring up partisan rancour. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly 4 million followers on Facebook. So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and The Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.
In all, seven of the top 10 most-shared Facebook posts containing the phrase "Black Lives Matter" over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favourable, perhaps because its users skew younger and more liberal.) Facebook declined to comment. On Thursday, it announced it would spend US$200 million to support black-owned businesses and organisations and add a "Lift Black Voices" section to its app to highlight stories from black people and share educational resources.
Twitter has been a supporter of Black Lives Matter for years — remember Dorsey's trip to Ferguson? — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was actually run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter's trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of coordinated campaigns by far-right extremists.
A Twitter spokesman, Brandon Borrman, said, "We've taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world. From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change."
YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.
As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn't footage of a protest or a police killing, but a 4-year-old "social experiment" by viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an "All Lives Matter" sign in a predominantly black neighbourhood to prove a point about reverse racism.
A YouTube spokeswoman, Andrea Faville, said that Saladino's video had received less than 5 per cent of its views this year and that it was not being widely recommended by the company's algorithms. Saladino recently reposted the video to Facebook, where it has gotten several million more views.
In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Floyd's killing or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the country are being organised in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for police.
But these platforms aren't just megaphones. They're also global, real-time contests for attention, and many of the experienced players have gotten good at provoking controversy by adopting exaggerated views. They understand that if the whole world is condemning Floyd's killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there's likely a market for a video saying that white people are the real victims.
The point isn't that platforms should ban people like Saladino and Owens for criticising Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that empower racists on the internet, and to ask which features of their platforms are undermining the social justice movements they claim to support.
They don't seem eager to do so. Recently, The Wall Street Journal reported that an internal Facebook study in 2016 found that 64 per cent of the people who joined extremist groups on the platform did so because Facebook's recommendation algorithms steered them there. Facebook could have responded to those findings by shutting off group recommendations entirely, or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.
As a result, Facebook groups continue to be useful for violent extremists. This week, two members of the far-right "boogaloo" movement, which wants to destabilise society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, California. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they're still appearing in plenty of people's feeds.
Rashad Robinson, president of Color of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview this week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.
"What I see, particularly from Facebook and Mark Zuckerberg, it's kind of like 'thoughts and prayers' after something tragic happens with guns," Robinson said. "It's a lot of sympathy without having to do anything structural about it."
There is plenty more Zuckerberg, Dorsey and Wojcicki could do. They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don't involve racial slurs or organised hate groups. They could dismantle the recommendation systems that give provocateurs and cranks free attention, or change the way their platforms rank information. (The current model, which ranks information by how engaging it is, tends to amplify misinformation and outrage bait.) They could institute a "viral ceiling" on posts about sensitive topics, to make it harder for trolls to hijack the conversation.
I'm optimistic that some of these tech leaders will eventually be convinced — either by their employees of colour or their own consciences — that truly supporting racial justice means that they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices. But I'm worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.
So is Robinson, the civil rights organiser. A few weeks ago, he chatted with Zuckerberg by phone about Facebook's policies on race, elections and other topics. Afterward, he said that while Zuckerberg and other tech leaders generally meant well, he didn't think they truly understood how harmful their products could be.
"I don't think they can truly mean 'Black Lives Matter' when they have systems that put black people at risk," he said.