Hossain Gazi inside one of the huts he built to house Rohingya refugees, in West Bengal, India. Photo / Arka Dutta, The New York Times
Mohammad Salim, a Rohingya Muslim refugee, thought he had left genocidal violence and Facebook vitriol behind when he fled his native country, Myanmar, in 2013.
But lately, his new home, India's West Bengal state, has not felt much safer. And once again, Facebook is a big part of the problem.
During India's recent national elections, Salim said, he saw Facebook posts that falsely accused Rohingya Muslims of cannibalism go viral, along with posts that threatened to burn their homes if they did not leave India. Some Hindu nationalists called the Rohingya terrorists and shared videos on the social network in which the leader of India's governing Bharatiya Janata Party vowed to expel the minority group and other Muslim "termites." A week ago, new posts popped up falsely accusing the Rohingya of killing BJP workers in West Bengal.
"Many groups demonised us on Facebook and WhatsApp, and they succeeded in whipping up a strong anti-Rohingya passion in the state," Salim, 29, said in a recent interview in a village near Kolkata, West Bengal's capital.
He said he had quit selling fruit juice at local rail stations and was moving with his pregnant wife and two toddlers to a new, undisclosed location — their fourth home in the past 15 months — because he was afraid of being attacked by right-wing Hindus or arrested.
Salim's experience, echoed in interviews with other Rohingya Muslims who sought refuge in India, shows the widening, real-world repercussions of Facebook's failure to stop anti-Rohingya hate speech on its platform, an issue that the company's chief executive, Mark Zuckerberg, promised last year to solve.
For years, Facebook ignored dehumanising anti-Rohingya propaganda on its Myanmar pages, despite substantial evidence that it was leading to mass killings, rape and the destruction of villages. After United Nations investigators criticised Facebook last year for playing a "determining role" in the ethnic cleansing of the Rohingya and the flight of 700,000 refugees, Zuckerberg told the US Senate: "What's happening in Myanmar is a terrible tragedy, and we need to do more."
But anti-Rohingya hate speech and falsehoods have since spread to India, where Facebook has 340 million users. That is creating the potential for violence in tinderbox regions like West Bengal, a Hindu-majority state with a substantial Muslim population, where the BJP has stoked fears of Muslim "infiltrators" from Bangladesh. In total, the government estimates there are about 40,000 Rohingya in India.
"Hate speech and misinformation is adding fuel to the already existing hatred towards the Rohingyas," said Mariya Salim, an independent activist on minority and women's rights who lives in Kolkata. "It's not a secret that online calls for violence can easily turn into real-life threats."
Facebook said it had made progress in combating anti-Rohingya hate speech. The Silicon Valley company has assembled a team of 100 people who speak Burmese to review posts from Myanmar, which was formerly known as Burma. It banned some military accounts responsible for hate speech. And it said it had trained its algorithms to better detect hate speech globally, claiming that it now removes about two-thirds of such posts before anyone even complains about them.
"We don't want our services to be used to spread hate, incite violence or fuel tension against any ethnic group in any country — including the Rohingya in India," Facebook said in a statement. "We have clear rules against hate speech and credible threats of violence, and we use a combination of technology and reports to help us identify and remove such content."
Yet Facebook is limited in its ability to eradicate hate speech and false information. It relies heavily on users to report inappropriate posts and on third-party partners to assess falsehoods, which means only some of the offending material is caught. The company's employees and contractors often lack the linguistic and cultural knowledge necessary to gauge the offline risks posed by certain content. And Facebook's focus on individual posts means it can overlook the long-term impact of sustained hate campaigns.
"I think Facebook keeps thinking they can solve this within the bunker of their offices and not with the collaboration of the communities who are affected," said Thenmozhi Soundararajan, the founder of Equality Labs, a human rights group that tracks hate speech in India.
Anti-Rohingya hate speech can also be found on Twitter and YouTube. But Facebook is far more influential than those services in India.
Soundararajan said that such speech on Indian Facebook pages started to increase in early 2018 when the country held elections for the upper house of Parliament. It escalated late last year as the elections for the more important lower house of Parliament approached.
Dealing with anti-Rohingya content was made harder by the BJP, which is led by Prime Minister Narendra Modi. Hoping to win Hindu votes in states with large Muslim populations, like West Bengal, the party campaigned on a promise to expel Muslim "infiltrators" and to make India — which is about 80 per cent Hindu but constitutionally secular — into a Hindu nation. BJP supporters used false information and criticism of Rohingya refugees as shorthand for broader anti-Muslim sentiments, Soundararajan said.
She said she had warned Facebook officials last fall about the spike in anti-Rohingya hate speech and had provided specific examples. But they did little to address the problem, she said.
Since then, anti-Rohingya posts directed at Indians have circulated widely on Facebook. In one video, a gang of men from the BJP's militant wing brandishes knives and burns the effigy of a child. "Rohingyas, go back!" the men scream in English and Hindi. This month, dozens of Rohingya homes were burned in Jammu, where the video and similar ones were shot.
Facebook said it had decided not to remove the videos because they were posted by entities claiming to be news organisations and were not directly linked to violence.
Users also posted gruesome images of human arms and other body parts and falsely claimed that the Rohingya were cannibals. The images were often removed because they violated Facebook's rules against graphic violence and hate speech, yet they kept resurfacing.
Other videos inaccurately said that Rohingya Muslims had attacked BJP workers and beaten up a Hindu priest in West Bengal. Facebook said that after independent fact checkers disproved these claims, it buried those posts.
In a more subtle attack, two Indian actresses, Payal Rohatgi and Koena Mitra, championed the anti-Rohingya cause on Facebook and Twitter. Mitra accused Rohingya refugees of being terrorists and criminals. Facebook removed some images posted by Mitra after The New York Times inquired about them.
An extremist state lawmaker, Raja Singh, whose official Facebook page was banned in March over his anti-Muslim hate speech, set up another page weeks later. In one older video still on Facebook, he called the Rohingya "insects" and "worms" and said that they should be shot if they did not leave India voluntarily. The company said Singh had not violated its rules since his return.
Facebook said its efforts to fight hate speech were a work in progress.
"We still have a long way to go," said Rosa Birch, director of the company's strategic response team.
Birch's year-old team is figuring out how to tackle issues such as "divisive" posts that do not violate the social network's rules. It is also experimenting with new techniques for preventing violence, including a temporary restriction on the sharing of posts in Sri Lanka after Muslim-led terrorist bombings there last Easter.
In addition, Facebook said it was supplementing its 15,000 human content reviewers by teaming up with civil society groups in various countries to help it assess potentially violent or threatening speech. It declined to disclose the names of its partners.
For the Rohingya in India, those explanations are of little comfort.
Hossain Gazi, a social worker in West Bengal who built huts and rented homes last year to house several hundred Rohingya refugees, including Salim's family, said that after his efforts received some publicity, right-wing Hindu groups visited, took photographs and made threats against the Rohingya living there, both on Facebook and by phone.
"They even wrote in several social media posts that I was running a terrorist training camp for the Rohingya and the authorities should arrest and jail me," he said. All the Rohingya refugees soon left his camps, he said.
Abdul Goni, a Rohingya refugee who lived in India from 2012 until fleeing to Bangladesh last year, said that Rohingya Muslims had used WhatsApp, where messages are private, to circulate some of the threatening videos from right-wing Hindu groups and to warn one another of impending danger.
As for Facebook, which is more public, Goni said that many Rohingya had deactivated their accounts on the social network. Others have stayed on it to monitor what is being said about them but have hidden their location and erased videos and photos — anything that would link them to the Rohingya community.
Salim, who has since moved from his West Bengal location, said it was as if he had come full circle.
"My family fled violence in Burma and took refuge in India," he said. "We are being hounded again in this country."