"None of these changes are silver bullets," Brian Fishman, director of Facebook's dangerous organizations and individuals policy, said on Twitter. "There's still tons of work to do." But, he added, "there's a lot of progress under the hood and we wanted to provide insight into some of that work."
Some experts who study extremism online welcomed Facebook's expanded effort, especially the broader definition of terrorism. But they emphasised that the plan's effectiveness would depend on the details — where Facebook draws the line in practice and how the company reports on its own work.
"It's incredibly difficult to know exactly how these updates will play out in action, and oftentimes in the past, we've seen that the reality doesn't match the initial announcement," said Becca Lewis, a researcher at Stanford who studies extremist groups.
She added that Facebook would have to be comfortable with fewer people consuming content as it made these changes. "This is much tougher, in part because it would require social media platforms to grapple with their business models more fully," she said.
Evelyn Douek, a doctoral student at Harvard Law School who studies online speech legislation worldwide, said she would be looking to Facebook's future transparency reports, which will include data on extremist content, to see whether the changes make a difference.
"A lot of these reports can be 'transparency theater' where they give information and statistics, but without enough context or information to make them meaningful," she said. Though the announcements are promising, she added, "I'll withhold judgment until I actually see how they do it."
Facebook has long played up its ability to catch terrorism-related content. In the past two years, the company said, it has detected and deleted 99 per cent of extremist posts — about 26 million pieces of content — before they were reported to it.
But Facebook said it had mostly focused on identifying organisations like separatists, Islamic militants and white supremacists. It said it would now also designate as terrorists people and organisations that attempt violence against civilians, rather than defining terrorism only by violent acts intended to achieve political or ideological goals, as it had before.
Facebook added that the team leading its work to counter extremism on its site has grown to 350 people and includes experts in law enforcement, national security and counterterrorism, as well as academics studying radicalisation.
To identify more content relating to real-world harm, Facebook said it was updating its artificial intelligence to better catch first-person shooting videos. The company said it was working with American and British law enforcement authorities to obtain camera footage from their firearms training programs to help its AI learn what real, first-person violent events look like.
To divert people away from extremist content, Facebook said, it is expanding a program that redirects users searching for such posts to resources intended to help them leave hate groups behind. Since March, the company has channelled people who search for terms associated with white supremacy to resources like Life After Hate, an organisation that provides crisis intervention and outreach. Facebook said that people in Australia and Indonesia would now be rerouted to the organisations EXIT Australia and ruangobrol.id.
In a letter Tuesday to Rep. Max Rose of New York, chairman of the subcommittee on intelligence and counterterrorism of the House Committee on Homeland Security, Facebook also said it was "blocking links to places on 8chan and 4chan that are dedicated to the distribution of vile content." That includes all content from 8chan's notorious /pol/ board, a page known for trafficking in violent, racist speech.
The site has been offline since the El Paso shooting, in which 22 people were killed. Fredrick Brennan, 8chan's founder, said after the shooting that the site should be shut down. The owner of 8chan, Jim Watkins, testified before lawmakers in a closed-door hearing this month.
"We've seen terrorists post 8chan links to Facebook in an effort to bring widespread attention to mass shootings, which is why I'm encouraged to see Facebook's willingness to work with me and ban those links," Rose said. "We all need to do more to combat the spread of terrorism and keep our communities safe — Congress, tech companies, everyone."
Separately, Facebook has been developing an oversight board, referred to colloquially by outsiders as the Facebook Supreme Court, for more than a year. The company said Tuesday that the board would be made up of a "diverse" set of experts, each serving a three-year term, for a maximum of three terms.
Members will oversee and interpret how Facebook's content moderators enforce the company's existing community standards, can instruct Facebook to allow or remove content, and will be asked to uphold or reverse decisions on content removals. They will also issue "prompt" written explanations for their decisions.
"Building institutions that protect free expression and online communities is important for the future of the internet," Facebook's chief executive, Mark Zuckerberg, said in a statement. "We expect the board will only hear a small number of cases at first, but over time we hope it will expand its scope and potentially include more companies across the industry as well."
Written by: Davey Alba, Catie Edmondson and Mike Isaac
Photographs by: Jason Henry
© 2019 THE NEW YORK TIMES