An expert sees the need for a deeper think about how technology – and issues like fake news – is shaping our society. Photo / 123RF
From "digitised lies" that spread across social media, to the increasing use of artificial intelligence in policy-making, tech is playing a bigger role in shaping our society than we might realise. A new book, Shouting Zeros and Ones: Digital Technology, Ethics and Policy in New Zealand, explores many of these weighty and fast-emerging issues. Its editor, Dr Andrew Chen, a researcher at University of Auckland-based Koi Tū: The Centre for Informed Futures, talks to Herald science reporter Jamie Morton.
What made you want to write this book? What was the story you really set out to tell here, and why?
Digital technologies are now pervasive – they are everywhere in our lives. Their reach goes far beyond the devices we use or the websites we visit. I was interested in both the very visible, loud impacts (such as challenges around disinformation or debates over censorship) and the invisible, quiet impacts (such as the environmental cost of globalised cloud computing or the government's use of algorithmic decision-making). The book aims to shine a spotlight on some of these hidden issues, while also adding nuance to the more visible ones.
A lot has been written about technology issues overseas, but not much of it has looked at the Aotearoa New Zealand context. The Christchurch mosque shootings forced us to look at ourselves and who we are as a society, both online and offline. No one else has an Integrated Data Infrastructure (IDI) quite like ours. And we are world leaders in indigenous data sovereignty, although we still have a long way to go. So this book offers some local perspectives on the technology issues that face us as Kiwis. It's been great to feature a wide range of contributing authors, and we hope to raise awareness of some of these challenges.
Why do you think we need to divorce ourselves from the science-based concept that tech development is "values-neutral" – or that we're merely innovating our way to a smarter future?
It's no longer good enough for technologists to just "build the thing" and let someone else worry about the consequences. We all want the next shiny thing that makes our lives easier, but people are increasingly aware that the last two decades of technological innovation have come at some cost – it's just hard to pin down exactly what has gone wrong, and who should have been responsible.
Take social media, for example: most people would accept that it has helped people stay more connected, but also that it accelerates the spread of false information. To say that social media is "values-neutral" would suggest that these platforms are merely conduits for information, and that they are not responsible for what people transmit. But these systems are designed to maximise engagement and keep people on the platform for as long as possible, so they algorithmically promote content that provokes strong feelings, whether happy or angry.
That goal is grounded in values – for example, that keeping people on the platform matters more than their long-term wellbeing. The point isn't that we should stop building these systems – it's that values are inherent in their design whether we acknowledge them or not, so it's better to think about them and understand how they influence our decisions. And it would be good for all of us to understand whether the values underlying these systems align or clash with our own.
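To make that concrete, here is a deliberately simplified sketch of an engagement-driven ranking rule, written in Python. It is purely illustrative – the features, weights and scoring function are invented for this example and do not reflect any real platform's algorithm – but it shows how optimising purely for predicted engagement can systematically favour emotionally charged content.

```python
from dataclasses import dataclass

# Illustrative only: a toy ranking rule, not any platform's real algorithm.

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float  # hypothetical model output
    emotional_intensity: float      # hypothetical score in [0, 1]

def engagement_score(post: Post) -> float:
    # Strong feelings (happy or angry alike) keep people on the platform,
    # so emotional intensity multiplies the predicted dwell time.
    return post.predicted_dwell_seconds * (1.0 + post.emotional_intensity)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; user wellbeing never enters
    # the objective at all.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm local news update", 20.0, 0.1),
    Post("Outrage-bait headline", 15.0, 0.9),
])
print([p.text for p in feed])  # the emotionally charged post ranks first
```

The value judgment lives in the objective function: nothing in `engagement_score` is malicious, but choosing dwell time as the thing to maximise is itself a choice about what matters.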
What are some of the most troubling consequences we've seen from the digital age so far? I note you've dedicated chapters to online harm and "digitised lies"?
These challenges are problematic in different ways. There are challenges around how technology has changed the way we interact with each other, such as the rise of abusive discourse on online platforms. There are challenges around how data is used against people rather than for people, or without the knowledge or consent of the person involved. There are challenges around how the increased use of some technologies leads to greater environmental damage and carbon emissions. And there are challenges around how technology can democratise access to information yet also exacerbate existing societal inequalities. These issues are interrelated and difficult to solve individually; they exist in the complex, messy context of a society that has become inseparably intertwined with technology.
The book also spends some time on predictive analytics and data-harvesting. What are the big dangers inherent in machine learning and AI – as we've already seen with people's private data and government ministries?
We may have some vague sense that algorithms are making decisions that affect our lives, from the online advertising we see to the processing of tax refunds. But plenty of algorithms are being used in ways many of us may not be aware of. For example, the Department of Corrections in New Zealand uses an algorithm that calculates the risk of recidivism to help inform parole decisions. Many of the algorithms in use today are relatively simple, mathematically speaking, but if we move towards machine learning and AI it may become harder and harder to explain how these algorithms work, and to understand when and why something is going wrong. This is why it's important that people stay in the loop for now – that decisions with significant consequences (like whether someone is granted a visa to enter the country, or whether someone is eligible for a home loan) remain with a person.
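For a sense of what "relatively simple, mathematically speaking" can mean, here is a hypothetical sketch of an actuarial risk score in Python. The features, weights and example inputs are invented for illustration – this is not the Department of Corrections' actual model – but it shows why such scores are easy to compute and to explain, in contrast to an opaque machine-learned model.

```python
import math

# Hypothetical actuarial risk score. The features and weights below are
# invented for illustration only; this is NOT the Department of
# Corrections' actual model.
WEIGHTS = {"prior_convictions": 0.30, "age_at_first_offence": -0.05}
BIAS = -1.0

def recidivism_risk(features: dict[str, float]) -> float:
    # A weighted sum passed through a logistic function: simple to compute,
    # and each feature's contribution to the score is easy to explain.
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

risk = recidivism_risk({"prior_convictions": 4, "age_at_first_offence": 19})
print(f"estimated risk: {risk:.2f}")  # ~0.32 with these invented weights
```

With a model like this, you can point to exactly which feature pushed a score up or down; with a deep learning model trained on thousands of features, that kind of account becomes much harder to give.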
Stepping back to the tech sector generally: despite efforts from actors like the European Union, we still have a regulatory "wild west", where these major companies' reach often extends beyond countries' abilities to police them. Why is this such an uphill battle for lawmakers – and do you see any turning point here? Is Australia's bid to impose new laws on Facebook and Google, for instance, a welcome sign?
It can be hard for a small country like New Zealand. Our smallness makes us an attractive place for tech companies to run trials – tech giants like Facebook and Google often test their latest features in New Zealand before launching in bigger markets. But it also makes it difficult for our government to exert much influence, because if it pushes too hard, companies can simply exit the New Zealand market. The Christchurch Call has been a really positive development in this sense, because it has shown that governments around the world can work with the tech giants to address the externalities produced by tech products – but we will have to wait and see whether it leads to real change.
Another difficulty is that in many governments no agency has a clear mandate to handle these issues. For example, which New Zealand agency or Minister should be responsible for policy on the online spread of misinformation? When no agency is responsible, it's hard to get policy advice on those issues, and doing something about them remains a low priority for lawmakers until something really bad happens.
So what's ultimately the take-away of this book? And where would you like to see this digital age arrive at?
What we advocate in the book is broader consideration of the impacts of technology, especially those that may be invisible to most users, and a better understanding of how we might mitigate the negative ones. Digital inclusion isn't just about everyone having digital devices – we need to ensure that everyone has digital literacy, skills, and trust in these systems too. There's a role for everyone to play, from individuals and communities to platforms and governments. We hope to see a mix of technology policies that help make Aotearoa and the world better for all of us.