Police attempt to clear people from outside a mosque in central Christchurch on March 15. Photos / AP file
Facebook said it struggled to identify the video of the Christchurch mosque shootings because the gunman used a head-mounted camera, which made it harder for its systems to automatically detect the nature of the footage.
"This was a first-person shooter video, one where we have someone using a GoPro helmet with a camera focused from their perspective of shooting," Neil Potts, Facebook's public policy director, told British MPs today.
Terror footage from a first-person perspective "was a type of video we had not seen before," he added.
Because of the nature of the video, Facebook's artificial intelligence - used to detect and prioritise videos that are likely to contain suicidal or harmful acts - did not work.
Potts was giving evidence to a committee of senior MPs in the UK as part of a parliamentary inquiry into hate crime. Representatives for Twitter, Google and YouTube also gave evidence.
Social media platforms such as Facebook have faced scrutiny after the man accused of killing dozens of people at two mosques in Christchurch live-streamed the murders over the internet.
The social media company came under sharp criticism for not taking the video down quickly enough and for letting it circulate and be re-uploaded to other platforms such as YouTube.
At congressional hearings in the US over the past two years, executives from Facebook and YouTube said they were investing heavily in artificial intelligence that would be able to find and block violent and graphic videos before anyone saw them.
In a blog post following the attack, Facebook said its AI systems are trained on many thousands of examples of content to detect particular types of text, imagery or video.
Potts was also chastised by the committee's chair, Labour MP Yvette Cooper, for not knowing of Neil Basu, the senior officer in charge of counter-terrorism policing in the UK.
"We've been told by the counter terrorism chief that social companies don't report to the police incidents that clearly are breaking the law," Cooper told Potts. "You may remove it, but you don't report it."
Potts responded that he was "not familiar with the person you mentioned, or his statement," and later apologised for not knowing him. He said, however, that Facebook doesn't report all crimes to police but does report "imminent threats."
"These are places where government could be giving us more guidance," Potts said.
The committee investigating hate crime is separate from the one that recently recommended the British Government take tougher measures to keep technology companies such as Facebook in check, following a year-long inquiry into fake news and its impact on elections.
Stephen Doughty, a Labour party MP, directed broad and strongly-worded criticism at all three witnesses.
"Your systems are simply not working and quite frankly it's a cesspit," he said, referring to the collective platforms' content. "It feels like your companies don't give a damn. You give a lot of rhetoric but you don't take action."
Marco Pancini, director of public policy for YouTube, responded that "we need to do a better job and we are doing a better job," adding that since an earlier hearing "we introduced a team that helps us better understand trends of violations of our policies by far-right organisations."
"That's all wonderful but they're clearly not doing a very good job," Doughty replied.