And despite a number of efforts by the social media platforms to clamp down, it appears to be getting worse.
"To put that in a practical context, as 2021 has progressed, Netsafe is continuing to record a new 'high' in terms of the highest ever number of reports related to harmful digital communication," Cocker said.
Netsafe is the agency responsible for complaints laid under the Harmful Digital Communications Act, and for public education and assistance over issues like harmful content and online abuse.
Cocker and his team have direct lines to the big social media platforms and law enforcement, and can often make headway where an individual's own efforts to get any response from the likes of Facebook and Twitter go nowhere.
The code could prove a useful new tool in Netsafe's toolbox, but it could also be complicated by the Government's mooted hate-speech legislation, whose final shape is still unclear.
Council for Civil Liberties chairman Thomas Beagle has raised concerns that new restrictions on social media could hinder freedom of expression, harm democracy and prove impractical to implement.
However, with little detail about the mooted law change or Netsafe's code, the civil liberties advocate is boxing at shadows for now.
At this morning's briefing, detail was indeed limited.
Asked if the new code would include a provision on response times to complaints (a common gripe from Herald readers, who often receive nothing beyond a template reply, if that), Cocker said that would depend on how the code came together. While he was broadly in favour of the social media companies being encouraged to respond promptly to complaints about harmful content, the Netsafe boss said a set deadline such as 48 hours might not be feasible.
And on the question of what would happen to a social media company that ignored the code's provisions, Cocker again said it would depend on how the code came together. It was possible a company could be removed from the code, but that would be "extreme".
He did offer that an agency - possibly Netsafe, but to be confirmed once the code was finalised - would conduct regular audits of social media platforms' efforts to police harmful content.
Social media companies that did not meet their commitments under the code would be "identified and called out".
Cocker emphasised that the draft code would be put up for consultation over October and November, when answers to such questions would come into sharper focus.
On a Zoom call with media this morning, Facebook director of public policy for Australia and New Zealand Mia Garlick said the code hammered out with Netsafe would build on the Christchurch Call, which followed the 2019 mosque massacres and called on the industry to adopt a voluntary framework for dealing with harmful content.
Garlick said in some cases "we're still not finding harmful content fast enough". She said part of her company's ongoing effort to improve was more transparency around its initiatives.
Twitter's director of public policy for Australia and New Zealand, Kara Hinesley, said her company had also introduced transparency reports, and had simplified its harmful content rules.