COMMENT:
Another live-streamed mass murder, this time in Thailand, where a soldier went on a shooting rampage. That's something nobody needs to see, but how to prevent monstrous acts from being transmitted and viewed by people remains an impossible challenge.
It is true that the internet provides access to both great and awful material; however, bear in mind that there is no "raw" or unfiltered internet where anything goes.
On the contrary, most of us have filtered internet access already. Email would be even less usable than it is without spam filtering, and internet providers drop malicious connections with firewalls; that's considered best-practice network engineering, and it would be unsafe not to do so.
Google filters search results, and its Chrome browser blocks access to malicious sites. Facebook and Instagram happily let people and bots post horrendous disinformation, but any nude bits are a no-no.
In a nutshell, filtering out bad stuff on the internet works poorly. If it worked well, the internet would be a safe and secure environment, without hackers, ransomware criminals, viruses, government and corporate surveillance, you name it.
Current efforts to restrict access to Bad Things are struggling to keep up with malicious actors abusing technology, yet the government now appears to be looking for more of the same to stop objectionable content from finding its way to New Zealanders' devices.
Let's call the proposed new law what it is, namely censorship of the internet.
As we've seen with past "internet laws" like the Telecommunications Interception Capability and Security Act and the anti-file-sharing amendment to the copyright law, the government put the onus on providers to work out how to comply with the regulations, or face massive fines.
Censoring the internet shouldn't be the providers' job though.
For starters, the law would have to have a rock-solid definition of what constitutes Bad Things on the internet: material that is already banned, plus Bad Things that haven't yet popped up but should be filtered out when they do.
Some things everyone can agree are bad, but there are many areas that aren't clear-cut.
For example, I would be happy to see all anti-vaccine sites and Facebook groups blocked.
Children are dying because of the dangerous anti-science nonsense they spread, but even then I can guarantee that thousands of people would disagree and not want anti-vax disinformation banned. Ditto anti-1080, anti-5G and anti-fluoride campaigners, nutters advocating drinking bleach to cure novel coronavirus infections, and other crazed nonsense.
Asking internet provider staffers to make snap decisions as to what's a Bad Thing and what is not simply isn't fair on them. What if they get it wrong? They'd have to watch and hear horrible things, and risk being traumatised in the process.
Technically, devising censorship filtering would be complex and risky, and it would require constant monitoring for errors and problems.
On top of that, should providers become NetCops as well, keeping tabs on which users are actively trying to bypass censorship, and then report them to the authorities?
There's no single solution available that does all the required filtering effectively. Instead, providers would have to block and filter at several levels, each with its own set of problems.
You could block links or web addresses leading to objectionable content, but they are easy to change. You'd have to be careful not to drop traffic to, say, microsoft.com in the process.
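To make that concrete, here's a minimal sketch of how URL blocklisting tends to work, using a made-up blocklist and addresses. A one-character change to the path slips straight through, while blocking the whole hostname instead would be over-broad.

```python
# Minimal sketch of URL blocklist filtering (hypothetical blocklist and URLs).
from urllib.parse import urlparse

BLOCKLIST = {"badsite.example/stream/abc123"}  # exact host-plus-path entries

def is_blocked(url: str) -> bool:
    parsed = urlparse(url)
    return f"{parsed.hostname}{parsed.path}" in BLOCKLIST

print(is_blocked("https://badsite.example/stream/abc123"))  # True
print(is_blocked("https://badsite.example/stream/abc124"))  # False: one character changed
# Matching on hostname alone would catch the new path, but would also take
# down every legitimate page on that host, e.g. all of microsoft.com.
```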
Blocking internet protocol addresses is another way to do it, but the risk of collateral damage is enormous. The Australian Securities and Investments Commission meant well but got it wrong a few years ago, accidentally blocking access to 250,000 sites hosted on one IP address. Oops.
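A toy illustration of that shared-hosting problem, with hypothetical sites and a documentation-range IP address:

```python
# Toy illustration of IP-level blocking collateral damage. Shared hosting
# puts many unrelated sites behind one address (all names here are made up).
hosting = {
    "203.0.113.7": ["bad-actor.example", "school-newsletter.example",
                    "local-cafe.example", "community-sports.example"],
}

blocked_ips = {"203.0.113.7"}  # the block aimed at the one offending site

for ip in blocked_ips:
    for site in hosting[ip]:
        print(f"unreachable: {site}")  # everything on the address goes dark
```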
Content filtering relies on an agreement as to what constitutes Bad Things, and is subject to the Scunthorpe Problem, which needs no further explanation.
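For the record, a toy filter showing the problem in action, with a deliberately tame banned list:

```python
# Toy keyword filter demonstrating the Scunthorpe Problem: naive substring
# matching flags innocent words. The banned list here is hypothetical.
BANNED = {"sex"}

def is_objectionable(text: str) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

print(is_objectionable("Essex and Sussex council elections"))  # True, wrongly
```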
I'm told that one idea is to take the existing Department of Internal Affairs NetClean filter against child abuse material, and extend it to block objectionable content.
That was never within the scope of the filter. Pushing all of New Zealand's data traffic through it would create a single point of failure, one likely to be overwhelmed by gigabit-per-second fibre connections at home and fast 4G/LTE mobile networks.
Then there's the inconvenient fact that most internet connections are strongly encrypted.
This means content filters can't see what's being sent or viewed. To do that, data streams would have to be decrypted so that Bad Things could be identified. That is, you'd have to remove an essential security feature and put people's information and privacy at risk.
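A small standard-library sketch makes the point, using a simple XOR one-time pad as a stand-in for TLS (real connections use far stronger ciphers, but the effect on a passive filter is the same):

```python
# Sketch: why content filters can't keyword-match encrypted traffic.
import secrets

message = b"nothing objectionable here"
key = secrets.token_bytes(len(message))                   # one-time pad
ciphertext = bytes(m ^ k for m, k in zip(message, key))   # XOR "encryption"

print(b"objectionable" in message)     # True: plaintext is searchable
print(b"objectionable" in ciphertext)  # all but certainly False: just noise
# To inspect the content, a middlebox would have to strip or break the
# encryption, exposing everyone's data and privacy in the process.
```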
It is almost impossible to moderate and censor content when millions or even billions of people can create whatever they like and post it on social media. That's even harder for live streaming when there's no clue in advance as to what will be broadcast.
The tech solution touted here is machine learning and artificial intelligence, which are getting better all the time at recognising images and audio.
Taking that idea to the devices people use to create content: even mid-range smartphones now contain powerful AI processors. Software that uses those AI chips to recognise objectionable images, videos and, for example, hate speech, and to block them from being recorded, is entirely possible.
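As a hedged sketch of what that on-device gating might look like: classify_frame and the threshold below are hypothetical stand-ins, not any vendor's actual API.

```python
# Sketch of on-device AI gating before a frame is recorded or streamed.
import random

def classify_frame(frame: bytes) -> float:
    """Hypothetical classifier: probability the frame is objectionable."""
    return random.random()  # placeholder for an NPU-accelerated model

THRESHOLD = 0.9  # assumed policy threshold, tuned by whoever sets the rules

def record(frame: bytes) -> bool:
    if classify_frame(frame) >= THRESHOLD:
        return False  # block: don't write the frame to storage or the stream
    return True       # allow

print(record(b"\x00" * 1024))  # with the placeholder model, result is random
```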
Would such tech be infallible? No, and there's every chance AI-based censorship on phones would be abused by governments to restrict freedom of expression and track users.
Not taking action is not an option, and we'll probably end up with a broad-brush general requirement that leaves the difficult techie bits for someone else to work out.
Whatever comes, the sad thing is that none of it addresses the most important issue: what the hell is wrong with people who post and watch awful stuff?