Ben Shneiderman believes robots should collaborate with humans rather than replace them. Photo / Pete Sharp, The New York Times
A computer scientist argues that the quest for fully automated robots is misguided, perhaps even dangerous. His decades of warnings are gaining more attention.
Tesla chief Elon Musk and other big-name Silicon Valley executives have long promised a car that can do all the driving without human assistance.
But Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers, thinks fully automated cars and the tech industry's vision of a robotic future are misguided, even dangerous. Robots should collaborate with humans, he believes, rather than replace them.
Late last year, Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organisers of an industry conference on "Assured Autonomy" in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra, designing computerised machines that are "reliable, safe and trustworthy."
There should be the equivalent of a flight data recorder for every robot, Shneiderman argued.
It is a warning that's likely to gain more urgency when the world's economies eventually emerge from the devastation of the coronavirus pandemic and millions who have lost their jobs try to return to work. A growing number of them will find they are competing with or working side by side with machines.
Shneiderman, 72, began spreading his message decades ago. A pioneer in the field of human-computer interaction, he co-founded in 1982 what is now the Conference on Human Factors in Computing Systems and coined the term "direct manipulation" to describe the way objects are moved on a computer screen either with a mouse or, more recently, with a finger.
In 1997, Shneiderman engaged in a prescient debate with Pattie Maes, a computer scientist at the Massachusetts Institute of Technology's Media Lab, over the then-fashionable idea of intelligent software agents designed to perform autonomous tasks for computer users — anything from reordering groceries to making a restaurant reservation.
"Designers believe they are creating something lifelike and smart — however, users feel anxious and unable to control these systems," he argued.
Since then, Shneiderman has argued that designers run the risk not just of creating unsafe machines but of absolving humans of ethical responsibility for the actions taken by autonomous systems, ranging from cars to weapons.
The conflict between human and computer control is at least as old as interactive computing itself.
The distinction first appeared in two computer science laboratories that were created in 1962 near Stanford University. John McCarthy, a computer scientist who had coined the term "artificial intelligence," established the Stanford Artificial Intelligence Laboratory with the goal of creating a "thinking machine" within a decade. And Douglas Engelbart, who invented the computer mouse, created the Augmentation Research Center at the Stanford Research Institute and coined the term "intelligence augmentation," or IA.
In recent years, the computer industry and academic researchers have tried to bring the two fields back together, describing the resulting discipline as "humanistic" or "human-centered" artificial intelligence.
Shneiderman has challenged the engineering community to rethink the way it approaches artificial intelligence-based automation. Until now, machine autonomy has been described as a one-dimensional scale ranging from machines that are manually controlled to systems that run without human intervention.
The best known of these one-dimensional models is a set of definitions related to self-driving vehicles established by the Society of Automotive Engineers. It describes six levels of vehicle autonomy ranging from Level 0, requiring complete human control, to Level 5, which is full driving automation.
In contrast, Shneiderman has sketched out a two-dimensional alternative that allows for both high levels of machine automation and human control. With certain exceptions such as automobile air bags and nuclear power plant control rods, he asserts that the goal of computing designers should be systems in which computing is used to extend the abilities of human users.
This approach has already been popularised by both roboticists and Pentagon officials. Gill Pratt, the head of the Toyota Research Institute, is a longtime advocate of keeping humans "in the loop." His institute has been working to develop Guardian, a system that the researchers have described as "super advanced driver assistance."
"There is so much that automation can do to help people that is not about replacing them," Pratt said. He has focused the laboratory not just on car safety but also on the challenge of developing robotic technology designed to support older drivers as well.
Similarly, Robert O. Work, a deputy secretary of defense under Presidents Donald Trump and Barack Obama, backed the idea of so-called centaur weapons systems, which would require human control, instead of AI-based robot killers, now called lethal autonomous weapons.
The term "centaur" was originally popularised in the chess world, where partnerships of humans and computer programs consistently defeated unassisted software.
At the Phoenix conference on autonomous systems this year, Shneiderman said Boeing's MCAS flight-control system, which was blamed in two fatal 737 Max crashes, was an extreme example of high automation and low human control.
"The designers believed that their autonomous system could not fail," he wrote in an unpublished article that has been widely circulated. "Therefore, its existence was not described in the user manual and the pilots were not trained in how to switch to manual override."
Shneiderman said that he had attended the conference with the intent of persuading the organisers to change its name from a focus on autonomy to a focus on human control.
"I've come to see that names and metaphors are very important," he said.
He also cited examples where the Air Force, NASA and the Defense Science Board, a committee of civilian experts that advises the Defense Department on science and technology, had backed away from a reliance on autonomous systems.
Robin Murphy, a computer scientist and robotics specialist at Texas A&M University, said she had spoken to Shneiderman and broadly agreed with his argument.
"I think there's some imperfections, and I have talked to Ben about this, but I don't know anything better," she said. "We've got to think of ways to better represent how humans and computers are engaged together."
There are also skeptics.
"Ben's notion that his two-dimensional model is a fresh perspective simply is not true," said Missy Cummings, director of Duke University's Humans and Autonomy Laboratory, who said she relied on his human-interface ideas in her design classes.
"The degree of collaboration should be driven by the amount of uncertainty in the system and the criticality of outcomes," she said. "Nuclear reactors are highly automated for a reason: Humans often do not have fast enough reaction times to push the rods in if the reactor goes critical."