Automated Cars: Will yours kill you to save the lives of others?

Survey reveals the moral dilemma of programming autonomous vehicles: should they hit pedestrians, or swerve and risk the lives of their occupants?

There’s a chance it could bring the mood down. Once you have chosen your shiny new driverless car, only one question remains on the order form: should your spangly, futuristic vehicle be willing to kill you?

To buyers more accustomed to talking models and colours, the query might sound untoward. But for manufacturers of autonomous vehicles (AVs), the dilemma it poses is real. If a driverless car is about to hit a pedestrian, should it swerve and risk killing its occupants?

The scenario, a variant of the trolley problem posed by philosophers in the 1960s, is clearly an extreme one. The odds of an AV facing such a black-and-white situation, and being aware of the fact in time to act, are extremely low. Yet when millions of driverless cars take to the roads, even small odds can add up to a daily occurrence.

In a raft of surveys published in the journal Science on Thursday, researchers in the US and France set out to canvass opinion on how driverless cars should behave in no-win situations. Rather than clarifying the moral code that vehicles should be programmed with, the surveys highlight bumps in the road for the coming AV revolution.

In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plough into and kill 10 pedestrians. They agreed, too, that it was moral for AVs to be programmed in this way: it minimised the number of deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.

But then came the first sign of trouble. When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather buy a car programmed to protect its occupants than pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.
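To make the two approaches concrete, here is a minimal, purely illustrative sketch of the choice the surveys describe. Everything in it is hypothetical: the Maneuver class, the casualty estimates and both decision rules are invented for illustration, and no real autonomous vehicle exposes anything like this interface.

```python
# Illustrative only: a toy version of the dilemma from the Science surveys.
# All names and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_pedestrian_deaths: float
    expected_occupant_deaths: float

    @property
    def expected_total_deaths(self) -> float:
        return self.expected_pedestrian_deaths + self.expected_occupant_deaths


def choose_utilitarian(options: list[Maneuver]) -> Maneuver:
    """The 'moral algorithm' respondents endorsed in principle:
    minimise expected deaths overall, counting occupants and
    pedestrians equally."""
    return min(options, key=lambda m: m.expected_total_deaths)


def choose_self_protective(options: list[Maneuver]) -> Maneuver:
    """The rule buyers said they would actually pay for: minimise
    expected occupant deaths first, breaking ties by total deaths."""
    return min(options, key=lambda m: (m.expected_occupant_deaths,
                                       m.expected_total_deaths))


if __name__ == "__main__":
    options = [
        Maneuver("stay on course", expected_pedestrian_deaths=10.0,
                 expected_occupant_deaths=0.0),
        Maneuver("swerve into barrier", expected_pedestrian_deaths=0.0,
                 expected_occupant_deaths=1.0),
    ]
    print(choose_utilitarian(options).name)      # swerve into barrier
    print(choose_self_protective(options).name)  # stay on course
```

Fed the same two options, the two rules diverge exactly as the surveys found: the utilitarian rule sacrifices the occupant to spare the 10 pedestrians, while the self-protective rule stays on course.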

The conflict, if real, could undermine the safety benefits of driverless cars, the authors say. “Would you really want to be among the minority shouldering the duties of safety, when everyone else is free-riding, so to speak, on your equitability and acting selfishly? The consequence here is that everyone believes that AVs should operate one way, but because of their decisions, they operate in a less moral, less safe way,” said Azim Shariff at the University of Oregon, who conducted the surveys with Iyad Rahwan at MIT, and Jean-Francois Bonnefon at the Institute for Advanced Study in Toulouse.

Further surveys pointed to even more trouble ahead. Before the study, the researchers had suspected that the best way to make driverless cars as safe as possible would be government regulation. But the surveys found most people objected to the idea. If regulations forced manufacturers to install moral algorithms that minimised deaths on the road, the majority of people said they would buy unregulated cars instead.
