Even now, after many of the comments have been moderated or deleted, plenty of questionable ones remain, including one that calls Facebook "Flakebook" for deleting copies of the alleged killer Brenton Tarrant's livestream.
As a regular Breitbart reader, the President would have known the likely tenor of comments.
Meanwhile, one of Trump's top advisors, Kellyanne Conway, made the ridiculous claim that Tarrant was an "eco-terrorist."
Ten hours later, Trump finally offered a more human response, tweeting a condemnation of the massacre and standing in support of NZ.
That's part of a pattern of the US president pausing before speaking out against racist violence and other racist incidents.
And when asked if he thought the Christchurch massacres reflected a rising global threat from white nationalism, he responded, "I don't, really. I think it's a small group of people with serious problems." It's hard to imagine he would have taken such a mild, mental health line if the shooter had been Muslim.
Another tweet I found galling was this one from Google-owned YouTube:
While it was good that YouTube was working to delete copies of Tarrant's livestream, many will question the site's policy of letting people upload video without any moderation. The same goes for Facebook and Twitter, which both hosted (now deleted) accounts the alleged killer used to generate the content that was copied to YouTube.
Read more: Tech companies scramble to remove Christchurch shooting video
Social media companies argue for free speech, but cynics will also see a profit motive. Blocking livestreams or reviewing all video content before it went live would dramatically increase their costs. The same goes for comments.
In the hours after the massacres, Spark, Vodafone and other NZ internet providers worked to block hate sites that were hosting copies of the livestreamed footage. But in the immediate aftermath of the attack, the problem wasn't dark web sites, it was mainstream social media platforms being used to share the sickening footage.
Facebook Australia NZ policy director Mia Garlick says, "Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement. We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again."
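To make the mechanism concrete: re-upload blocking of this kind works by fingerprinting each known-violating video and checking every new upload against a database of those fingerprints. The Python sketch below is a simplified illustration, not Facebook's actual system; the function names and in-memory set are assumptions, and the SHA-256 hash it uses only catches byte-identical copies, whereas production systems rely on perceptual hashing that survives re-encoding, cropping and watermarking.

```python
import hashlib

# Hypothetical in-memory "database" of fingerprints of known-violating videos.
# A real platform would use a persistent store and perceptual hashes that
# tolerate re-encoding; SHA-256 here only matches byte-identical files.
blocked_fingerprints: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint for a video file (illustrative SHA-256)."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_blocked_video(video_bytes: bytes) -> None:
    """Record a known-violating video so future copies can be caught."""
    blocked_fingerprints.add(fingerprint(video_bytes))

def screen_upload(video_bytes: bytes) -> bool:
    """Return True if an upload matches a blocked video and should be rejected."""
    return fingerprint(video_bytes) in blocked_fingerprints

# Example: once the original is registered, an identical re-upload is blocked.
original = b"...video bytes..."
register_blocked_video(original)
assert screen_upload(original) is True
assert screen_upload(b"a re-encoded copy would slip past this exact match") is False
```

As the last line hints, exact matching is easily defeated by trivial edits to the file, which is why the copies kept resurfacing faster than they could be taken down.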
A lot of measures, but they all amount to the ambulance at the bottom of the cliff.
Or as one Auckland man put it on Twitter:
For its part, YouTube faces a broader question about the new media "stars" whose channels it seems happy to carry. The Washington Post reported that, "At one point, the shooter [Tarrant] even paused to give a shout-out to one of YouTube's top personalities, known as PewDiePie, with tens of millions of followers, who has made jokes criticised as anti-Semitic and posted Nazi imagery in his videos."
Law change watered down
An update to the Privacy Act is currently making its way through Parliament.
Privacy Commissioner John Edwards, who has been constantly grappling with inappropriate content posted to Facebook and other platforms, asked for the power to fine organisations up to $1 million. MPs shot down that request. Maybe now they'll take a second look as the legislative update continues its way through Parliament - and take a wider look at how social media is policed.