When mosque worshippers in Christchurch were killed last March, New Zealand was shocked to find footage of the attack had been livestreamed on the world's biggest social media website.
The video stayed up for hours, remaining in the feeds of Facebook users as they trawled through advertisements and autoplaying viral videos to connect with their friends and families.
Outrage followed, and questions began circulating about how something like this could have been allowed to happen.
"People are looking to understand how online platforms such as Facebook were used to circulate horrific videos... and we wanted to provide additional information from our review into how our products were used and how we can improve going forward," Facebook's vice president of product management Guy Rosen said in the days following the Christchurch shootings.
The video was first reported to Facebook almost half an hour after the livestream began - 12 minutes after it had ended - which Facebook said meant it wasn't removed as quickly as it would have been had it been flagged while still live.
"During the entire live broadcast, we did not get a single user report," Rosen said.
"This matters because reports we get while a video is broadcasting live are prioritised for accelerated review. We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground."
The livestream footage was later shared on the 8chan message board, allowing it to be reuploaded to Facebook - and removed again - a further 1.2 million times in the first 24 hours.
Two weeks after the attack, Facebook finally addressed the issue in a letter to the people of New Zealand from its chief operating officer, Sheryl Sandberg, telling Kiwis the company was "exploring" a number of options to stop the same thing happening again.
One of the options Facebook "explored" was a one-strike policy that could see a user's livestreaming privileges removed if they breached the platform's rules.
The flaw in this approach is obvious: mass shooters who broadcast their crimes on Facebook rarely get the chance to do it again, given they are usually captured or killed by police.
Last weekend, a Royal Thai Army officer repeatedly took to Facebook during an hours-long massacre that killed 29 people and wounded 58 others, including in a livestream in which he asked viewers whether or not he should surrender.
He was eventually killed by Thai commandos, a fact Thailand's public health minister Anutin Charnvirakul confirmed in a post on Facebook.
Facebook was quick to clarify to news.com.au that the "very short" livestream by the shooter did not contain any actual depictions of violence and as such was not classed as abhorrent violent material.
Facebook briefed the eSafety Commissioner, who agreed with that classification.
"We have removed the gunman's presence on our services and have found no evidence that he broadcasted this violence on FB Live. We are working around the clock to remove any violating content related to this attack. Our hearts go out to the victims, their families and the community affected by this tragedy in Thailand," the company that knows all of our names, friends and relatives told news.com.au through an unnamed spokesperson.
Facebook has a policy of removing content that praises, supports or represents mass shooters, and that policy was eventually used to take down the material once it was reported to the social media platform.
When a post is reported to Facebook, it gets reviewed by a team of 15,000 content reviewers (whether these reviewers are contractors or directly employed by Facebook is unclear; the company didn't answer when we asked).
These 15,000 people review content in more than 50 languages, meaning not every reviewer can assess every piece of reported content - unless Facebook has somehow found 15,000 people fluent in more than 50 languages who are willing to use that skill in a repetitive, traumatising and not especially well-paid job.
This means Facebook could have as few as 300 moderators for each language (though they could be spread proportionally across the platform's most dominant languages).
Facebook has 2.4 billion users around the world and one moderator for every 160,000 of them.
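For readers who want to sanity-check those figures, here is a minimal back-of-the-envelope sketch in Python using only the numbers cited above - roughly 15,000 reviewers, just over 50 languages and 2.4 billion users, all drawn from Facebook's own public statements rather than independent verification:

```python
# Back-of-the-envelope check of the moderation figures cited above.
# Assumed inputs (per Facebook's public statements, not independently verified):
reviewers = 15_000           # content reviewers
languages = 50               # "over 50" languages reviewed
users = 2_400_000_000        # roughly 2.4 billion users worldwide

# If reviewers were split evenly across languages:
reviewers_per_language = reviewers / languages   # ~300

# Average number of users each reviewer is notionally responsible for:
users_per_reviewer = users / reviewers           # ~160,000

print(f"~{reviewers_per_language:.0f} reviewers per language (even split)")
print(f"~{users_per_reviewer:,.0f} users per reviewer")
```

Both numbers are averages: if reviewers are actually allocated in proportion to how widely each language is used, coverage for smaller languages would be thinner still.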