Mike Schroepfer, Facebook's chief technology officer, was tearing up.
For half an hour, we had been sitting in a conference room at Facebook's headquarters, surrounded by whiteboards covered in blue and red marker, discussing the technical difficulties of removing toxic content from the social network. Then we brought up an episode where the challenges had proved insurmountable: the shootings in Christchurch.
In March, a gunman had killed 51 people in two mosques there and live streamed it on Facebook. It took the company roughly an hour to remove the video from its site. By then, the bloody footage had spread across social media.
Schroepfer went quiet. His eyes began to glisten.
"We're working on this right now," he said after a minute, trying to remain composed. "It won't be fixed tomorrow. But I do not want to have this conversation again six months from now. We can do a much, much better job of catching this."
The question is whether that is really true or if Facebook is kidding itself.
For the past three years, the social network has been under scrutiny for the proliferation of false, misleading and inappropriate content that people publish on its site. In response, Mark Zuckerberg, Facebook's chief executive, has invoked a technology that he says will help eliminate the problematic posts: artificial intelligence.
Before Congress last year, Zuckerberg testified that Facebook was developing machine-based systems to "identify certain classes of bad activity" and declared that "over a five- to 10-year period, we will have AI tools" that can detect and remove hate speech. He has since repeated these claims with the media, on conference calls with Wall Street and at Facebook's own events.
Schroepfer — or Schrep, as he is known internally — is the person at Facebook leading the efforts to build the automated tools to sort through and erase the millions of such posts. But the task is Sisyphean, he acknowledged over the course of three recent interviews.
That's because every time Schroepfer and his more than 150 engineering specialists create AI solutions that flag and squelch noxious material, new and dubious posts that the AI systems have never seen before pop up — and are thus not caught. The task is made more difficult because "bad activity" is often in the eye of the beholder and because humans, let alone machines, cannot agree on what that is.
In one interview, Schroepfer acknowledged after some prodding that AI alone could not cure Facebook's ills. "I do think there's an endgame here," he said. But "I don't think it's 'everything's solved' and we all pack up and go home."
The pressure is on, however. Last week, after widespread criticism over the Christchurch video, Facebook changed its policies to restrict the use of its livestreaming service. At a summit in Paris with President Emmanuel Macron of France and Prime Minister Jacinda Ardern, the company signed a pledge to re-examine the tools it uses to identify violent content.
Schroepfer, 44, is in a position he never wanted to be in. For years, his job was to help the social network build a top-flight AI lab, where the brightest minds could tackle technological challenges like using machines to pick out people's faces in photos. He and Zuckerberg wanted an AI operation to rival Google's, which was widely seen as having the deepest stable of AI researchers. He recruited PhDs from New York University, the University of London and the Pierre and Marie Curie University in Paris.
But along the way, his role evolved into one of removing threats and eliminating toxic content. Now he and his recruits spend much of their time applying AI to spotting and deleting death threats, videos of suicides, misinformation and outright lies.
"None of us have ever seen anything like this," said John Lilly, a former chief executive of Mozilla and now a venture capitalist at Greylock Partners, who studied computer science with Schroepfer at Stanford University in the mid-1990s. "There is no one else to ask about how to solve these problems."
Facebook allowed us to talk to Schroepfer because it wanted to show how AI is catching troublesome content and, presumably, because it was interested in humanising its executives. The chief technology officer often shows his feelings, according to many who know him.
"I don't think I'm speaking out of turn to say that I've seen Schrep cry at work," said Jocelyn Goldfein, a venture capitalist at Zetta Venture Partners who worked with him at Facebook.
But few could have predicted how Schroepfer would react to our questions. In two of the interviews, he started with an optimistic message that AI could be the solution, before becoming emotional. At one point, he said coming to work had sometimes become a struggle. Each time, he choked up when discussing the scale of the issues that Facebook was confronting and his responsibilities in changing them.
"It's never going to go to zero," he said of the problematic posts.
'What a Burden. What a Responsibility.'
One Sunday in December 2013, Clément Farabet walked into the penthouse suite at the Harrah's hotel and casino in Lake Tahoe, Nevada. Inside, he was greeted by Schroepfer and Zuckerberg.
Zuckerberg was shoeless. Over the next 30 minutes, the chief executive paced back and forth in his socks while keeping up a conversation with Farabet, an AI researcher at New York University. Zuckerberg described AI as "the next big thing" and "the next step for Facebook." Schroepfer, seated on the couch, occasionally piped up to reinforce a point.
They were in town to recruit AI talent. Lake Tahoe was the location that year for NIPS, an academic conference dedicated to AI that attracts the world's top researchers. The Facebook brass had brought along Yann LeCun, an NYU academic who is regarded as a founding father of the modern artificial intelligence movement, and whom they had just hired to build an AI lab. Farabet, who regards LeCun as a mentor, was also on their shortlist.
"He basically wanted to hire everybody," Farabet said of Zuckerberg. "He knew the names of every single researcher in the space."
Those were heady days for Facebook, before its trajectory turned and the mission of its AI work changed.
At the time, Silicon Valley's biggest tech companies — from Google to Twitter — were racing to become forces in AI. Internet companies had dismissed the technology for years. But at universities, researchers like LeCun had quietly nurtured "neural networks," complex mathematical systems that learn tasks on their own by analysing vast amounts of data. To the surprise of many in Silicon Valley, these arcane and somewhat mysterious systems had finally started to work.
Schroepfer and Zuckerberg wanted to push Facebook into that contest, seeing the rapidly improving technology as something the company needed to jump on. AI could help the social network recognise faces in photos and videos posted to its site, Schroepfer said, and could aid it in better targeting ads, organising its news feed and translating between languages. AI could also be applied to deliver digital widgets like "chatbots," which are conversational systems that let businesses interact with customers.
"We were going to hire some of the best people in the world," Schroepfer said. "We were going to build a new kind of research lab."
Starting in 2013, Schroepfer began hiring researchers who specialised in neural networks, at a time when the stars of the field were paid millions or tens of millions of dollars over four or five years. On that Sunday in 2013 in Lake Tahoe, they did not succeed in hiring Farabet, who went on to create an AI startup that Twitter later acquired. But Schroepfer brought in dozens of top researchers from places like Google, NYU and the University of Montreal.
Schroepfer also built a second organisation, the Applied Machine Learning team, which was asked to apply the Facebook AI lab's technologies to real-world applications, like facial recognition, language translation and augmented reality tools.
In late 2015, some of the AI work started to shift. The catalyst was the Paris terrorist attacks, in which Islamic militants killed 130 people and wounded nearly 500 in co-ordinated assaults in and around the French capital. Afterwards, Zuckerberg asked the Applied Machine Learning team what it might do to combat terrorism on Facebook, according to a person with knowledge of the company who was not authorised to speak publicly.
In response, the team used technology developed inside the new Facebook AI lab to build a system to identify terrorist propaganda on the social network. The tool analysed Facebook posts that mentioned the Islamic State or al-Qaida and flagged those that most likely violated the company's counterterrorism policies. Human curators then reviewed the posts.
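In rough terms, that is a two-stage pattern: software scores each post, and anything above a threshold lands in a queue for human reviewers. The Python sketch below illustrates the idea; the scoring function, threshold and queue are hypothetical stand-ins for illustration, not Facebook's actual system or code.

```python
# A minimal sketch of the flag-then-review pattern described above.
# The scoring function, threshold and queue are invented stand-ins.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    post_id: str
    text: str


@dataclass
class ReviewQueue:
    pending: List[Post] = field(default_factory=list)

    def enqueue(self, post: Post, score: float) -> None:
        # Hand the post to human curators; the software never acts alone here.
        print(f"Queued {post.post_id} for human review (score={score:.2f})")
        self.pending.append(post)


def propaganda_score(post: Post) -> float:
    """Toy scoring stand-in: a real system would use a trained text model."""
    terms = ("islamic state", "al-qaida", "al-qaeda")
    hits = sum(term in post.text.lower() for term in terms)
    return min(1.0, 0.5 * hits)


def triage(posts: List[Post], queue: ReviewQueue, threshold: float = 0.5) -> None:
    for post in posts:
        score = propaganda_score(post)
        if score >= threshold:
            queue.enqueue(post, score)


if __name__ == "__main__":
    queue = ReviewQueue()
    triage([Post("p1", "Analysis of the Islamic State's finances"),
            Post("p2", "Photos from my holiday")], queue)
```

The key design point, reflected in the sketch, is that the software only flags candidates; people make the final call.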
It was a turning point in Facebook's effort to use AI to weed through posts and eliminate the problematic ones.
That work soon gathered momentum. In November 2016, when Donald Trump was elected president, Facebook faced a backlash for fostering misinformation on its site that may have influenced voters and laid the groundwork for Trump's win.
Although the company initially dismissed its role in misinformation and the election, it started shifting technical resources in early 2017 to automatically identify a wide range of unwanted content, from nudity to fake accounts. It also created dozens of "integrity" positions dedicated to fighting unwanted content on subsections of its site.
By mid-2017, the detection of toxic content accounted for more of the work at the Applied Machine Learning team than any other task. "The clear No. 1 priority for our content understanding work was integrity," Schroepfer said.
Then in March 2018, The New York Times and others reported that the British political consulting firm Cambridge Analytica had harvested the information of millions of Facebook users without their consent to build voter profiles for the Trump campaign. The outcry against the social network mushroomed.
Schroepfer was soon called on to help deal with the controversy. In April 2018, he flew to London as the designated executive to face questions from a British parliamentary committee about the Cambridge Analytica scandal. He was grilled for more than four hours as members of Parliament heaped criticism on Facebook.
"Mr. Schroepfer, you have a head of integrity?" Ian Lucas, a Labour Party politician, said to the grim-faced executive during the hearing, which was live streamed around the world. "I remain unconvinced that your company has integrity."
"It was too hard for me to watch," said Forest Key, chief executive of a Seattle virtual reality startup called Pixvana, who has known Schroepfer since they worked together at a movie effects technology startup in the late 1990s. "What a burden. What a responsibility."
The challenge of using AI to contain Facebook's content issues was on — and Schroepfer was in the hot seat.
'Talking Engineers Off the Ledge of Quitting'
From his earliest days at Facebook, Schroepfer was viewed as a problem solver.
Raised in Delray Beach, Florida, where his parents ran a 1000-watt AM radio station that played rock 'n' roll oldies before switching to R&B, Schroepfer moved to California in 1993 to attend Stanford. There, he earned undergraduate and graduate degrees in computer science, mingling with fellow technologists like Lilly and Adam Nash, who is now a top executive at the file-sharing company Dropbox.
After graduating, Schroepfer stayed in Silicon Valley and went after thorny technical undertakings. He cut his teeth at a movie effects startup and later founded a company that built software for massive computer data centres, which was acquired by Sun Microsystems. In 2005, he joined Mozilla as vice president for engineering. The San Francisco nonprofit had built a web browser to challenge the dominance of Microsoft's Internet Explorer. At the time, few technical tasks were as large.
"Browsers are complex products, and the competitive landscape is weird," said Mike Shaver, a founder of Mozilla, who worked alongside Schroepfer for several years. "Even early on in his career, I was never worried about his ability to handle it all."
In 2008, Dustin Moskovitz, a co-founder of Facebook, stepped down as its head of engineering. Enter Schroepfer, who came to the company to take that role. Facebook served about 100 million people at the time, and his mandate was to keep the site up and running as its user numbers exploded. The job involved managing thousands of engineers and tens of thousands of computer servers across the globe.
"Most of the job was like a bus rolling downhill on fire with four flat tires. Like: how do we keep it going?" Schroepfer said. A big part of his day was "talking engineers off the ledge of quitting" because they were dealing with issues at all hours, he said.
Over the next few years, his team built a range of new technologies for running a service so large. (Facebook has more than 2 billion users today.) It rolled out new programming tools to help the company deliver Facebook to laptops and phones more quickly and reliably. It introduced custom server computers in data centres to streamline the operation of the enormous computer network. In the end, Facebook significantly reduced service interruptions.
"I can't remember the last time I talked to an engineer who's burned out because of scaling issues," Schroepfer said.
For his efforts, Schroepfer gained more responsibility. In 2013, he was promoted to chief technology officer. His mandate was to home in on new areas of technology that the company should explore, with an eye on the future. As a sign of his role's importance, he uses a desk beside Zuckerberg's at Facebook headquarters and sits between the chief executive and Sheryl Sandberg, the chief operating officer.
"He's a good representation of how a lot of people at the company think and operate," Zuckerberg said of Schroepfer. "Schrep's superpower is being able to coach and build teams across very diverse problem areas. I've never worked really with anyone else who can do that like him."
So it was no surprise when Zuckerberg turned to Schroepfer to deal with all the toxicity streaming onto Facebook.
Broccoli vs. Marijuana
Inside a Facebook conference room on a recent afternoon, Schroepfer pulled up two images on his Apple laptop computer. One was of broccoli, the other of clumped-up buds of marijuana. Everyone in the room stared at the images. Some of us were not quite sure which was which.
Schroepfer had shown the pictures to make a point. Even though some of us were having trouble distinguishing between the two, Facebook's AI systems were now able to pinpoint patterns in thousands of images so that they could recognise marijuana buds on their own. Once the AI flagged the pot images, many of which were attached to Facebook ads that used the photos to sell marijuana over the social network, the company could remove them.
"We can now catch this sort of thing — proactively," Schroepfer said.
The problem was that the marijuana-vs-broccoli exercise was a sign not just of progress but also of the limits Facebook was hitting. Schroepfer's team has built AI systems that the company uses to identify and remove pot images, nudity and terrorist-related content. But the systems do not catch everything, because unexpected content always appears, which means millions of posts featuring nudity, marijuana and terrorist material continue to reach Facebook users.
Identifying rogue images is also one of the easier tasks for AI. It is harder to build systems to identify false news stories or hate speech. False news stories can easily be fashioned to appear real. And hate speech is problematic because it is so difficult for machines to recognise linguistic nuances. Many nuances differ from language to language, while context around conversations rapidly evolves as they occur, making it difficult for the machines to keep up.
Delip Rao, head of research at AI Foundation, a nonprofit that explores how artificial intelligence can fight disinformation, described the challenge as "an arms race." AI is built from what has come before. But so often, there is nothing to learn from. Behaviour changes. Attackers create new techniques. By definition, it becomes a game of cat and mouse.
"Sometimes you are ahead of the people causing harm," Rao said. "Sometimes they are ahead of you."
On that afternoon, Schroepfer tried to answer our questions about the cat-and-mouse game with data and numbers. He said Facebook now automatically removes 96 per cent of all nudity from the social network. Hate speech was tougher, he said — the company catches 51 per cent of that on the site. (Facebook later said this had risen to 65 per cent.)
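If those figures are read as the share of removed content that Facebook's systems flagged before any user reported it, which is how such numbers are commonly framed, the arithmetic behind a rate like 96 per cent is simple. The counts in this short Python sketch are invented purely for illustration.

```python
# Back-of-the-envelope sketch of a "caught automatically" rate.
# The counts below are invented for illustration only.
def proactive_rate(flagged_by_ai: int, reported_by_users: int) -> float:
    total_removed = flagged_by_ai + reported_by_users
    return flagged_by_ai / total_removed if total_removed else 0.0


# Example: 9,600 posts caught by the systems and 400 caught only via
# user reports would give the 96 per cent figure quoted above.
print(f"{proactive_rate(9_600, 400):.0%}")
```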
Schroepfer acknowledged the arms-race element. Facebook, which can automatically detect and remove problematic live video streams, did not identify the New Zealand video in March, he said, because it did not really resemble anything uploaded to the social network in the past. The video gave a first-person viewpoint, like a computer game.
In designing systems that identify graphic violence, Facebook typically works backwards from existing images — images of people kicking cats, dogs attacking people, cars hitting pedestrians, one person swinging a baseball bat at another. But, he said, "none of those look a lot like this video."
The novelty of that shooting video was why it was so shocking, Schroepfer said. "This is also the reason it did not immediately get flagged," he said, adding that he had watched the video several times to understand how Facebook could identify the next one.
"I wish I could unsee it," he said.
Written by: Cade Metz and Mike Isaac
Photographs by: Peter Prato
© 2019 THE NEW YORK TIMES