To a computer algorithm, massive online bullying is successful user engagement with content, not human suffering.
When you look at how social networks with billions of dollars in the bank, thousands of developers and huge computer farms keep refining the algorithms that drive their sites, a scary picture emerges.
Last year, Facebook even patented a system that predicts which socioeconomic stratum you're in, to serve up better-targeted content and advertising.
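To give a feel for how such profiling works in principle (this is a toy illustration, not Facebook's actual patented method), a crude classifier needs only a handful of behavioural proxies to bucket users; every feature name, weight and threshold below is invented:

```python
# Illustrative only: a toy profiler, NOT Facebook's patented system.
# All feature names, weights and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class UserSignals:
    device_price_tier: int    # 0 = budget handset ... 2 = flagship
    international_trips: int  # travel check-ins in the last year
    home_owner_signals: int   # likes/posts suggesting home ownership

def socioeconomic_bracket(u: UserSignals) -> str:
    """Map a few behavioural proxies to a coarse ad-targeting bracket."""
    score = 2 * u.device_price_tier + u.international_trips + u.home_owner_signals
    if score >= 6:
        return "high"
    if score >= 3:
        return "middle"
    return "low"

print(socioeconomic_bracket(UserSignals(2, 3, 1)))  # -> "high"
```

A real system would use machine learning over thousands of signals rather than three hand-picked ones, but the principle is the same: mundane behavioural data maps onto sensitive categories.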
Are there technical countermeasures that limit exposure to bad things online? Yes, to a degree.
For instance, you could enrol kids' devices into a mobile device management (MDM) system which governs what can be viewed and done on tablets and smartphones, and which keeps logs of the activity on them.
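As a rough sketch of what such a policy can express, here is a hypothetical MDM-style ruleset and enforcement check in Python; the schema is invented for illustration and is not any vendor's actual format:

```python
# A hypothetical MDM-style policy and enforcement check, for illustration.
# Real systems (Apple MDM, Google Family Link, etc.) use their own schemas.
from datetime import datetime

POLICY = {
    "allowed_apps": {"YouTube Kids", "Kindle"},      # everything else is blocked
    "screen_time": {"start_hour": 7, "end_hour": 19},
    "log_activity": True,
}

def may_launch(app: str, now: datetime, log: list) -> bool:
    """Return True if the app is allowed at this time; log the attempt if required."""
    hours = POLICY["screen_time"]
    in_hours = hours["start_hour"] <= now.hour < hours["end_hour"]
    allowed = app in POLICY["allowed_apps"] and in_hours
    if POLICY["log_activity"]:
        log.append((now.isoformat(), app, "allowed" if allowed else "blocked"))
    return allowed

activity_log: list = []
print(may_launch("YouTube Kids", datetime(2018, 5, 1, 16, 0), activity_log))  # True
print(may_launch("YouTube", datetime(2018, 5, 1, 21, 0), activity_log))       # False
```

The catch is that the policy only binds devices enrolled in it, which leads directly to the problem below.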
Technical measures, filtering and time limits included, are imperfect, however.
They can be bypassed by determined users; for kids, getting around them becomes a challenge in itself, and these days there are so many other devices around offering unrestricted internet access.
Technical measures can also fool parents into thinking that all is well when it's not.
Thinking I'd limit what my son could watch on YouTube, I deleted the unrestricted app from his device and installed YouTube Kids, which does a good job of removing dross.
That move irked the boy, who quickly learnt that it was possible to get to YouTube via a web browser (which I couldn't delete). Worse, he shoulder-surfed and memorised my tablet's passcode, then borrowed the device: an elegant and impressive effort on his part to render my admittedly lame parental restrictions ineffective, and a wake-up call for me.
From my point of view, having child-specific versions of unrestricted software, like Facebook Messenger for under-13s, just isn't the right answer.
Instead of forcing parents, teachers and schools to waste time and money deploying ineffective solutions, perhaps it's time to restrict how effective social media companies' automated user-profiling systems, the ones that hook users, can be?
Applying a technological restriction at the source, putting limits on the power of the algorithms themselves rather than at each endpoint, would work.
It would push the responsibility for properly managing the immense power and reach of social media towards those who own it.
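What might a limit at the source look like? One hypothetical mechanism, sketched below, caps how much an opaque engagement-profiling score may outweigh a neutral chronological ordering of a feed; the cap value and both scoring signals are invented for illustration:

```python
# Hypothetical: capping how far personalised profiling can reorder a feed.
# `engagement_score` stands in for an opaque profiling model's output.

PERSONALISATION_CAP = 0.3  # ceiling on profiling influence (invented value)

def ranked_feed(posts, engagement_score):
    """Blend a neutral chronological order with a capped personalised score.

    posts: list of (post_id, timestamp) pairs, newest first.
    engagement_score: callable mapping post_id -> float in [0, 1].
    """
    n = len(posts)
    def blended(rank_and_post):
        rank, (post_id, _ts) = rank_and_post
        recency = 1.0 - rank / max(n - 1, 1)   # neutral, non-profiled signal
        personal = engagement_score(post_id)   # profiled signal, capped below
        return (1 - PERSONALISATION_CAP) * recency + PERSONALISATION_CAP * personal
    return [p for _, p in sorted(enumerate(posts), key=blended, reverse=True)]

posts = [("p3", "t3"), ("p2", "t2"), ("p1", "t1")]  # newest first
print(ranked_feed(posts, lambda pid: 0.9 if pid == "p1" else 0.1))
# -> [('p3', 't3'), ('p2', 't2'), ('p1', 't1')]: the cap stops the profiled
#    score from completely overriding the neutral ordering.
```

Even when the profiling model rates a post as maximally engaging, the cap prevents it from dominating the feed, which is the whole point of restricting algorithmic power rather than policing each device.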
This is a discussion that needs to extend beyond social media, too. Accurate, detailed user-profiling technology is spreading into other areas such as marketing, insurance and even politics, and it needs to be reined in before we become powerless to do anything about it.