The ethics of self-driving cars: MIT Moral Machine study results released

No easy task: Programming ethics into AI

MIT recently released the results of its Moral Machine study.  The experiment, which gathered 40 million decisions from participants in 233 countries and territories, presented hypothetical scenarios in which users were asked to judge whom a self-driving car should run over.  Choices included old vs. young people, people of higher vs. lower social status, people vs. animals, illegal jaywalkers vs. law-abiding pedestrians, etc.

Problems with the Moral Machine study

Overall, the authors of the study found that respondents preferred to save young people over old, humans over animals, and larger groups of people over smaller ones.  But critics argue the study is by no means comprehensive.

Respondents from countries with weaker rule of law were more likely to be tolerant of jaywalkers, while respondents from countries with more individualistic cultures were more likely to favour sparing as many people as possible and sacrificing the elderly before the young.  And of course, the very notion of basing this kind of decision on victim demographics goes against the ethical recommendations for self-driving cars set down by Germany last year, which state that, “In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.”

Another problem is that the situations presented in the Moral Machine border on the absurd: how often does a driver really face a clean, binary choice between swerving one way or the other and hitting different demographics of people?  Nor is it usually possible to know in advance whether a collision will kill someone at all.  Most of us would simply try to brake, and would hardly have time to weigh up demographic factors in the moment between noticing the brakes have failed and the impact.  Many commentators argue that approaching the ethics of self-driving cars through the lens of the trolley problem is not the most productive route.

Human drivers are perhaps more likely than AI to make the “wrong” choice, especially as most human drivers would probably choose to save their own lives before weighing the pros and cons of killing pedestrians of different demographics.  However, the implications of a self-driving car being preprogrammed to make the same decision many times over are seen as far more serious than those of a human making a snap decision they experience as a one-off, says Edmond Awad, a co-author of the study.
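To make that contrast concrete, here is a deliberately simplified, hypothetical sketch in Python.  The policies, scoring heuristic and numbers are invented for illustration and come from neither the study nor any real vehicle software; the point is only to show how a "preprogrammed" choice might look, with one toy rule weighing victim demographics roughly the way the study's aggregate preferences lean, and another ignoring personal features entirely, as the German guidelines demand.

```python
# Hypothetical illustration only: these policies and numbers are invented and
# are not taken from the Moral Machine study or any real self-driving stack.

from dataclasses import dataclass


@dataclass
class Group:
    size: int          # number of people on this path
    mean_age: float    # average age, used only by the demographic-aware policy


def demographic_policy(option_a: Group, option_b: Group) -> str:
    """Toy rule loosely mirroring the study's aggregate preferences:
    spare larger and younger groups.  Returns the path the car hits."""
    def harm_score(g: Group) -> float:
        # Crude 'value' heuristic: bigger and younger groups score higher.
        return g.size * (1.0 - g.mean_age / 100.0)
    return "A" if harm_score(option_a) < harm_score(option_b) else "B"


def blind_policy(option_a: Group, option_b: Group) -> str:
    """Demographic-blind rule: minimise the number of people harmed,
    ignoring all personal features.  Returns the path the car hits."""
    return "A" if option_a.size < option_b.size else "B"


if __name__ == "__main__":
    elderly_pair = Group(size=2, mean_age=75)   # path A
    young_single = Group(size=1, mean_age=8)    # path B

    print(demographic_policy(elderly_pair, young_single))  # "A": hits the elderly pair
    print(blind_policy(elderly_pair, young_single))         # "B": hits the single child
```

The two rules disagree on the same scenario, and whichever one is chosen would be applied identically to every comparable situation an entire fleet ever encounters, which is exactly why a preprogrammed choice is seen as weightier than a one-off human reflex.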

Cross-border ethics: The perennial question

Arguably the main problem stems from the question of applying blanket AI ethics across borders.  Firstly, the study could only be completed by participants with an internet connection, and a quick glance at the map shows that the overwhelming majority of responses came from Europe and North America, with much of Africa left out of the study altogether.  This also excludes many potential participants from developing countries such as India, where killing a cow is out of the question – a factor which would undoubtedly influence respondents’ choices when faced with a choice between killing a person and an animal.  Unsurprisingly, responses also differed within cultures based on participants’ socioeconomic status and other factors.

Who gets a say in determining ethical guidelines for AI?

It might be harder than it looks to establish a worldwide set of guidelines for ethics in self-driving cars.  The question is, do we even want to?  Admittedly, the nature of technology is to bleed across borders, so it would seem necessary to incorporate various cultural values into a technology that’s ultimately destined for many different cultures and demographics.  But on the other hand, it might be unrealistic to hope to lay down an ethical code that will be universally accepted.  And, let’s face it, humans are so fallible that even an imperfect self-driving car would probably make better decisions.

“What we are trying to show here is descriptive ethics: people’s preferences in ethical decisions.  But when it comes to normative ethics, which is how things should be done, that should be left to experts,” Awad remarked to The Verge.  How do we program AI to make moral choices, and who gets to decide?  Self-driving cars are just the tip of the iceberg when it comes to programming ethical AI, and the difficulty we’re having with this comparatively narrow issue hints at the ethical conundrums we’ll face in the near future.

Curious how your opinions stack up against everyone else’s?  Try the Moral Machine for yourself!