Autonomous vehicles (AVs) are widely expected to improve road safety, since computers can avoid collisions humans cannot. But when AVs face an unavoidable crash, how do they decide whether to stay in the lane – and hit a person walking in the crosswalk – or swerve, hitting someone on the sidewalk?
This question – how to incorporate morality into AVs’ algorithms – was at the heart of a new study from Germany, where researchers tried to determine which course of action AVs should take in unavoidable crashes.
The answer? Stay in the lane, even if more people will die.
Unlike human drivers, AVs can predict in real time the probable risk of most decisions. But some situations are completely uncertain.
In this study, researchers compared human and autonomous drivers’ reactions to an impending crash with a pedestrian in the crosswalk or a bystander on the sidewalk. They found that the decision to stay in the lane (hitting the pedestrian) or swerve (hitting the bystander) varied with the level of risk.
When the likelihood of hitting the bystander was unknown, 70 percent of both human and autonomous drivers opted to stay in the lane. However, when the likelihood of hitting the pedestrian was higher than that of hitting the bystander, only 66 percent of AVs chose to swerve, compared with 75 percent of human drivers.
Overall, both human and autonomous drivers preferred staying in the lane, even when the likelihood of either crash was 50 percent.
Policy implications: When the risk is unknown, AV manufacturers should program AVs to stay in the lane. But when the risk of staying in the lane is higher than swerving, AVs should swerve. Regardless of outcomes, AVs should make decisions most in line with humans’ moral codes, the researchers argue.
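The policy rule described above can be sketched as a simple decision function. This is an illustrative sketch only – the `choose_action` helper and its probability inputs are hypothetical, not part of the study or any real AV system:

```python
def choose_action(p_stay_crash, p_swerve_crash):
    """Sketch of the recommended policy: stay in the lane by default.

    p_stay_crash:   estimated probability of hitting the pedestrian if the AV stays.
    p_swerve_crash: estimated probability of hitting the bystander if the AV swerves,
                    or None when that risk is unknown.
    """
    # When the swerving risk is unknown, default to staying in the lane.
    if p_swerve_crash is None:
        return "stay"
    # Swerve only when staying is strictly riskier than swerving.
    if p_stay_crash > p_swerve_crash:
        return "swerve"
    # Equal risk (e.g. 50/50) also defaults to staying in the lane.
    return "stay"
```

For example, under this sketch `choose_action(0.5, None)` returns `"stay"`, while `choose_action(0.8, 0.3)` returns `"swerve"`, mirroring the two policy cases above.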
Photo by Sam Kittner for Mobility Lab.