Most people would like others to buy cars programmed to save the lives of pedestrians, but would themselves prefer to ride in a driverless car that protected its own passengers at all costs, the researchers found.
The results were drawn from six online surveys of 1,928 people in the US between June and November 2015.
"Over the six studies the results were the same: people always had a strong moral opinion," said researcher Jean-François Bonnefon. He said neither he nor his research team colleagues, Iyad Rahwan and Azzim Shariff, expected people would have such a strong and "utilitarian" approval of self-sacrifice.
Survey participants were asked to imagine they were passengers in an autonomous vehicle (AV) programmed to minimise the number of casualties in an accident, and to rank how moral it was for the car to sacrifice them in order to save pedestrians. They were then asked to rate their preference on a scale ranging from self-protection to saving the lives of others.
"Before, these kinds of moral dilemmas had no urgency but now we find ourselves having to decide," said Bonnefon.
The study showed that the more lives an AV could save, the more moral people judged it to be and, in turn, the more confident they were in their answer.
But people were at best lukewarm when it came to actually buying a car programmed to save others, and their interest decreased further if a family member was also a passenger.
The researchers also asked respondents how likely they were to buy a driverless car if the government enforced programming aimed at saving the lives of others. Interest in buying an autonomous vehicle fell by two-thirds compared with a scenario where there was no such regulation at all.
The study noted that AVs have the potential to increase traffic efficiency, reduce pollution, and reduce accidents, but the survey results suggest that regulation enforcing programming aimed at saving pedestrians' lives would be unpopular.
"Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether," the researchers warned.
David Tuffley, Senior Lecturer in Applied Ethics and Socio-Technical Studies at Griffith University, said there was no practical reason why driverless cars could not be programmed to do the least harm possible.
"It is likely that the car makers will deal with the apparent moral dilemma by programming the autonomous vehicle to find, if possible, that middle ground and act in the best interests of all stakeholders," he said, but added that it's always difficult to predict how a crash may play out.
"Anyone who has been in a serious accident knows how chaotic and unpredictable outcomes can be. I was in a serious accident once and I thought I would be killed for sure, but was not - through sheer good fortune."
- The Conversation
Sophie Moore is the editor of The Conversation.