To what extent should Facebook, Twitter and others be censoring content and people, and how do you balance harmful content and free speech?
We don't generally comment much on these sorts of things for two reasons. Firstly, ITP is a broad church and it's outside the areas that we - as an organisation - generally take a position on. Secondly, TechBlog's editor Paul Brislen usually covers this pretty well!
But there's a hugely important principle at stake and I think our community, as the largest tech community in New Zealand, should be thinking about all of this very carefully.
Let's start by thinking about three fundamental questions:
• What is the internet?
• Who owns it?
• Who is responsible for regulating it?
At this point our Network Admins might start talking about networks and routers and root DNS servers and so on, and that's certainly true from a technical or structural perspective. But the reality is, the internet is more than this. In fact, no person, company or government owns the internet; as one website puts it, it's owned by humanity itself.
But increasingly, what geeks like us think of as "the internet" differs from what the public - especially the younger public, who never saw that side of it - understand it to be.
Some of us see ones and zeroes, some see infrastructure, and some see immense possibility - both commercially and socially. But for many (most?) people, the internet means Facebook, Instagram, YouTube, TikTok, Twitter and Google. And maybe Netflix and a few others as well.
In short, the mindset has shifted as internet access has become commoditised.
Just like we think of electricity as the stuff that makes our devices and lights work (rather than the vast network of electricity generation and distribution), the internet - for most people - has become what we do with it. And a huge part of that is on social media networks.
Now, if we accept that at least some regulation of "the internet" is necessary to prevent the worst of humanity creating victims online, just as we expect society's behaviour to be regulated to prevent people becoming victims offline, whose job is it to do so?
Putting aside some very significant jurisdictional challenges*, the answer has to be exactly the same as in the offline world: the Government, via properly enacted laws and regulations.
Which raises a fairly crucial and fundamental conundrum.
Once a particular "service" such as Facebook has reached the point where it's basically part of the fabric of the internet - and by extension, society itself - as many of the services above have done, who is responsible for regulation of content?
To put it another way, we don't expect the power company to regulate what we do with electricity. We also don't expect "products and services" that use electricity to regulate our use - it would be daft to expect either the power company or heat-lamp providers, for example, to detect or prevent an individual setting up an illegal growing operation in their basement - or to cut off electricity because of it.
And it would certainly be daft for power companies or lines companies to start making up their own rules about what you can do with their power - over and above what's illegal or unsafe - and take action if you breach those rules. Quite rightly, there'd be a huge outcry.
Yet this is exactly what's happening more and more with "the internet".
Let's deal with illegality first and accept that - arguably - mainstream social media and other companies have some sort of obligation to at least attempt to block illegal content.
It's not unreasonable to conclude that it's for the Government to regulate to the extent that content or behaviour is illegal, regardless of whether it's online or offline. Unless you're an anarchist, that's a reasonable position to take, and it's not too big a jump to then expect providers to have a role in preventing the publication of content that is clearly illegal.
Some will argue with that, but let's set it aside for a moment and think about immoral, outrageous, offensive or unpopular - but not illegal - content.
Increasingly - and especially over the past few weeks since the brief occupation of the US Capitol building - social media companies like Twitter and Facebook have appointed themselves judge, jury and executioner on what content is acceptable on their platforms, in both "public" and "private" groups, and have taken action against hundreds of thousands of people whose expressed views fall outside what they deem acceptable.
And there's that conundrum.
On one hand, it's their network so their rules. On the other, these companies have grown to the point that they're now a core part of the internet itself - and their actions have a significant impact on everyone. They're now in a position to - and arguably do - shape public opinion by shadow-banning or outright banning views they don't approve of.
Given the scale of these services, this means it's now up to a very small group of extremely rich and powerful unelected men (mostly) to decide what is acceptable speech and content for society as a whole, above and beyond what the law already deems unacceptable.
Surely we all have to agree that that's a problem.
And we're not just talking about speech. For example, these services have the proven ability to massively influence election outcomes and much more. At the other end of the spectrum, their algorithms can actively push content to you that research [pdf] suggests doesn't just shape opinion but can fully radicalise large groups of people.
So what's the answer?
Surely it has to start with recognising that once a "service" reaches a large enough scale, its obligations have to at least partially expand beyond its shareholders to society as a whole. Looked at in this context, you could certainly argue that this might include obligations around not censoring content and users except where content is explicitly illegal - as doing so infringes users' right to free speech (remember, we're thinking about societal obligations here).
Secondly, as with every other facet of life, surely it should be solely up to Governments to deem what is illegal when it comes to speech and content.
Facebook especially has been calling for Governments to step up and devise rules around harmful content, a different model for platforms' legal liability and a "new type of regulator" to oversee enforcement. But so far, many Governments have shirked this responsibility in a way they never would in the offline world.
New Zealand, on the other hand, has been fairly proactive in this area and already has specific laws regarding harmful content online - primarily the Harmful Digital Communications Act 2015 - complete with 10 communications principles and an Approved Agency (Netsafe) which has a role in resolving matters before they get to court.
This law strikes a careful balance, focusing on intent to cause harm, and even includes a process for services to resolve complaints made to the agency about content - although many seem to ignore it.
Again, jurisdiction* aside, surely this - the law of the land - is what large social media networks should be following when it comes to policing content, rather than their own views?
Should that not be an obligation, especially once a certain scale is reached and their editorial decisions are affecting a significant portion of society?
This problem isn't going to go away, especially as more and more of "the internet" falls into private hands. It's up to all of us to push for a solution that protects victims while ensuring fundamental rights are maintained.
* And yes, I appreciate jurisdiction is one of the biggest issues in this debate and I've conveniently skirted over it above. Personally, I think this is resolvable technically but I'll address that in a future piece.
- Paul Matthews is chief executive of the Institute of IT Professionals. He posts at TechBlog.