Before former US vice-president Dick Cheney received a heart transplant in 2012, his weakened organ had a defibrillator with a wireless feature attached. In 2007, fearing a terrorist might assassinate him by discharging a fatal signal at the device, Cheney’s handlers had the wireless function disabled. A few years later, the New Zealand hacker Barnaby Jack, who devoted his short career to identifying vulnerabilities in banking and medical devices in order to make them safer, demonstrated just how easy it would be to blow up (disembodied) pacemakers. By pressing a button, Jack also provoked ATMs into coughing up money and insulin pumps into injecting what would have been lethal doses.
What seemed remote threats to the few have become feared and real possibilities for the many. In 2022, the cyber-security firm Sophos surveyed health organisations in 31 countries and found that 66% had been attacked by ransomware. In Düsseldorf, Germany, in 2020, hackers froze the systems in an intensive care unit; a woman with an aortic aneurysm, diverted to another hospital, died en route.
At the time Cheney’s heart was reverting to analogue, digital ethics was not a subfield of moral philosophy. We now live in an era where data, the internet and, increasingly, AI impinge on most aspects of our lives, from work and play to love, sex, voting and governance.
AI Morality, a collection of 20 fresh essays by various authors, most of whom specialise in philosophy, technology or practical ethics, explores the ethics of the digital world. The authors range widely over AI and the consequences of mortgaging our lives to technology. Although Oxford University Press published this volume, its editor, David Edmonds, the charming co-host of the eminently accessible podcast Philosophy Bites, ensures you won’t bang your head against the occasional lunacy of certain strains of academic prose, with their mystery-cult jargon and vaporous abstractions.
In his introduction, Edmonds acknowledges the book is not comprehensive, but rather a “snapshot of the types of concerns that AI is forcing us to confront”. AI gives us an opportunity to ask what, if anything, makes us quintessentially human. Every new advance of AI re-focuses such areas of inquiry as moral responsibility, fairness, creativity and authenticity, even as it conjures the spectre of a “post-human” future.
And what is AI? What makes it different from, say, the brain-replacing abilities of a calculator is its capacity to learn “so that performance on tasks can improve” over time. For example, the programme AlphaZero was given nothing but the rules of chess. It played itself millions of times over a few hours and then duly beat Stockfish, the strongest chess engine at the time.
As with most new technologies, AI’s promise is matched by its peril. Take work. If Sigmund Freud is right that “love and work are the cornerstones of being human”, we should start worrying. It is true that AI can eliminate repetitive drudgery. However, as philosopher John Tasioulas explains in the section “Work and Play”: “47% of all occupations in the US are capable of being computerised in the next 10-20 years.”
He then poses the question that will gain more urgency as AI matures: what does it mean to live in a world where AI takes over the tasks that “have characteristically given human life its point”? Tasioulas argues that either we will have too few jobs because of AI, or the jobs we artificially preserve will seem pointless, since those performing them will know AI could do them just as well and a lot faster.
What might take the place of work? The author suggests family, spiritual or artistic pursuits. Or games. Tasioulas reports that some philosophers predict jobless humans will play games that allow them to realise “ever higher degrees of achievement”. He then rightly questions whether the sense of achievement at scoring an ace in golf is the same as a lawyer’s in proving her client innocent.
As to who will pay for this increased leisure time, there is surprisingly little discussion. Two models are suggested: universal basic income (UBI), in which “everyone receives a cash grant with no strings attached”, and a job guarantee scheme, whereby “everyone is provided work funded by the state”. We might assume that AI will generate income for the state, which will then redistribute it, but one of the core concerns with UBI is its cost, something like $41 billion a year in New Zealand.
When it comes to the policymakers who might make decisions about work, could AI replace them with the “perfect politician”, as an essay of that name explores? A 2021 study showed more than half of Europeans supported replacing at least some politicians with AI. In China, 75% of those surveyed liked the idea of AI governance. The essay’s author, philosophy academic Theodore Lechterman, imagines an “algocracy”, or governance by algorithm, in which a digital philosopher-king that does not live among us achieves many of the “outcomes often associated with democratic systems”. As the author notes, AI may one day produce better policies than humans, and those who value democracy need to do a better job of explaining why it deserves to be prized. One reason might be that AI, at present, tends to reproduce sexist and racist biases. Another is that democracy creates imagined communities that still make decisions together, no matter how bad those decisions might be.
The essays in AI Morality cover many topics, and all are worth reading for the balanced view they offer over the horizon into a world enmeshed with, or dominated by, AI. Depending on who you are, the bright and easeful landscape of a UBI, reduced decision-making and games may seem full of possibility. Or you may say, like the English satirist Max Beerbohm upon reading Thomas More’s Utopia, “So this is utopia, is it? Well, I beg your pardon, I thought it was Hell.”