Facebook founder Mark Zuckerberg has published a post promising to rid the platform of the vast amount of awful, extreme and distressing content posted to it every day.
Zuckerberg said he sets himself a personal challenge every year, and given his belief that the world feels more anxious and divided than ever, he wants to do his part by cleaning up the social media site.
"My personal challenge for 2018 is to focus on fixing these important issues. We won't prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we're successful this year then we'll end 2018 on a much better trajectory," he wrote on a post.
"This will be a serious year of self-improvement and I'm looking forward to learning from working to fix our issues together."
But Zuckerberg's task is not as easy as it sounds, and it requires the services of a small army of people to help moderate, among other things, the worst of the net.
Sarah Katz, 27, was tasked with searching out the worst of the web to keep it off Facebook, meaning she spent her days hunting online for pornography, racism and violence. It is widely considered the worst job in technology.
Despite receiving little training on how to handle the distress, Katz had to review as many as 8000 posts a day, which often included anti-Semitic speech, bestiality photos and child exploitation.
Hired by Facebook as a contractor, Katz was paid US$30 an hour and was also required to sign a waiver warning her about what she would encounter.
She has since quit her position, which is common for this type of role: turnover is so high that most content moderators last only a few months to a year.
It's also not uncommon for workers to quit on their first day, with some people even leaving for lunch and never returning.
Shaka Tafari was another worker tasked with moderating content on social media, having worked as a contractor at messaging app Whisper in 2016.
The 30-year-old said he was often subjected to graphic images of bestiality or people killing dogs, plus a plethora of rape references.
"I was watching the content of deranged psychos in the woods somewhere who don't have a conscience for the texture or feel of human connection," he told the Wall Street Journal.
Tafari said the nature of the job was cruel as managers remotely monitored the productivity of moderators, sending them messages to ask why they weren't working if they dwelled too long after reviewing a post.
The policing of content on social media sites is one of the fastest-growing jobs in the technology world, and while Facebook, YouTube and Google are working to develop algorithms and artificial-intelligence tools, humans remain the first and best line of defence.
The scale of the task is gruelling: the equivalent of 50 years of video is uploaded to YouTube each day, while Facebook receives more than a million user reports of dubious content daily.
In an attempt to stem the tide, Facebook has 7500 content reviewers and plans to more than double that number by the end of 2018.
"I am dead serious about this," chief executive Mark Zuckerberg said last November when talking on the issue.
To help contractors deal with the graphic content they are subjected to, Facebook requires that content moderators be offered as many as three face-to-face counselling sessions a year.
A YouTube spokeswoman said the platform had similar measures in place.
"We strive to work with vendors that have a strong track record of good working conditions, and we offer wellness resources to people who may come across upsetting content during the course of their work," she said.
Lance Ulanoff, chief correspondent and editor-at-large for tech site Mashable, likened the positions to working a 24-hour crisis hotline.
"It's very intense work," Ulanoff told The Post. "These are people who are looking specifically for language or images that might indicate self-harm, violence or anything that would indicate someone might harm others.
"These monitors are seeing potentially intense information on a constant basis. At the same time, that's what they signed up to do."
Ulanoff feels Facebook's response to the spate of violence and suicides broadcast on the site has been good thus far, saying it recognises the role it plays in the larger conversation.
"They're coming around to the idea that they have to become stewards of this content platform and maybe make people's lives better and improve their product at the same time," he said.
"Keeping track of these monitors and maybe refreshing the group every now and then is a good idea, but they had to do something and I think this is a very good step … They're doing what they need to do for these monitors, at least for now."
Former Microsoft online safety program employees Henry Soto and Greg Blauert also know the toll such work can take, and the two men are seeking damages from the company.
Both men are alleging negligence, disability discrimination and violations of the Consumer Protection Act.
The men claim Microsoft failed to warn them of the dangers of the job and did not provide adequate psychological support.
Microsoft disagreed with the plaintiffs' claims, saying it applies "industry-leading, cutting-edge technology" to identify questionable content and bans the users who shared that material. The company also said it has "robust wellness programs" to ensure employees who view the content are properly supported.
Soto claims his job exposed him to "many thousands of photographs and video of the most horrible, inhumane and disgusting content you can imagine", according to the lawsuit.
The former content moderator said he started having auditory hallucinations after seeing footage of a girl being abused and murdered, according to the suit, and ultimately went on medical leave in February 2015.
"Soto was embarrassed by his symptoms, which included panic attacks in public, disassociation, depression, visual hallucinations, and an inability to be around computers or young children, including, at times, his own son, because it would trigger memories of horribly violent acts against children that he had witnessed," the lawsuit reads.
Even though companies are developing technology so that humans no longer need to be the first line of defence, it will be a while yet before people can remove themselves from the dangers of these jobs.