Opinion: How does a government control information flows and maintain consistent messaging? This is not a new problem, but trying to control information flows in the digital paradigm poses a whole new set of challenges.
Some countries take a blunt-force approach. China has its Great Firewall and carefully monitors all internet traffic. In 2012, after the “Arab Spring” demonstrated how social media could be used to direct protests, France’s Nicolas Sarkozy proposed tighter controls on internet-based content.
But messaging that doesn’t conform to the orthodox view is often characterised as misinformation or disinformation and seen as a threat. How should this phenomenon be dealt with?
Misinformation and disinformation are not new. As editor of Pravda, Joseph Stalin wanted to designate some ideas or perspectives as necessarily out of bounds, and coined the term “dezinformatsiya” to describe the concept.
Misinformation and disinformation came to the fore during the Covid pandemic as the government struggled to combat confused and contrary messaging about the disease and its treatment.
Much of the government messaging was delegated to Te Pūnaha Matatini at the University of Auckland. Part of that body included a small interdisciplinary team, the Disinformation Project, which analysed open source, publicly available data related to Covid mis/disinformation on social media, in media, and in physical and other digital forms of information.
The project represented an example of a “public relations” and “public information” response to the problems of mis/disinformation. It became the media’s go-to source for a quote on the evils of contrary points of view, yet the media never subjected the project itself to critical review.
The project closed down last month, and those involved have moved on to other things.
The Australian government has chosen to use the law to tackle the problem of mis/disinformation with the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill.
This bill seeks to combat mis/disinformation, promote online safety and trust, protect public health, safety and democracy, and hold digital platforms accountable. The bill places obligations on large digital platforms – an easy target – to actively prevent the spread of harmful content.
The types of serious harm that mis/disinformation are reasonably likely to cause or contribute to include harm to electoral processes, public health, vilification of groups, intentional physical injury, damage to critical infrastructure and harm to the economy. These are all areas of importance to the state.
The serious harms stated in the bill are all related to aspects of state security, and the theme of the bill is the protection of the state and its messaging.
But rather than targeting dissidents or contrarians directly, the state can shut down the dissemination of their message by the platforms.
Regulating actions is more effective in preventing harm than regulating information or ideas. Regulating information and ideas will ultimately lead to censorship, which is detrimental to democracy and society.
Removing information that contradicts the government or official narrative is most concerning, not least because it prevents course correction where the position taken by an authority is wrong. It undermines the quality of decision-making and diminishes public trust.
Similar legislation is not on the horizon here, although the Department of Internal Affairs’ proposals for Safer Online Services and Media Platforms, which thankfully are no longer a work in progress, had a wider scope.
Care is necessary to ensure the state does not become the sole source of truth. That would be a significant erosion of the democratic process and the free exchange of ideas.