Self-driving cars are in the news again after the release of several surveys showing how people want a computer-controlled car to react when it must choose between saving the driver and saving pedestrians.
Most respondents said the car should minimize the number of casualties, though in practice humans do not always make that choice, sometimes killing more people to save themselves.
Nevertheless, the surveys appear to be a mandate for autonomous car developers to program their systems to minimize the number of casualties. But is it possible we’re overthinking this function of autonomous cars?
Fatal accidents in which a human has time to decide what to do are incredibly rare, and they will likely become rarer still once self-driving eliminates drunk and tired drivers and shuts down malfunctioning vehicles.
On top of that, Google has already patented an adhesive car hood that pedestrians stick to on impact, which could prevent serious injuries when someone steps in front of the car at the wrong time. We are bound to see even more innovative solutions in the next four years as automakers prepare to launch their first autonomous cars.
Can cars just be coded to avoid harming humans?
Given the additional safety that self-driving brings, we should be asking whether programming life-and-death decisions is necessary at all. Why not instead work toward a world where cars don’t harm humans, or at least where the car protects both the driver and pedestrians from fatal injuries at all times?
That sounds like a dream world given the 32,000 car deaths in the U.S. in 2014, but McKinsey & Company expects that figure to fall to around 3,200 as self-driving cars become commonplace. And that 90 percent decrease does not account for the improvements to traffic, roads, and car safety coming in the next 10 years, which together may push annual deaths into the hundreds for the first time since 1908.
We also need to consider the massive reduction in non-fatal accidents, estimated at 5.4 million in 2010, most of which self-driving could prevent.
That may not be enough for some, but if self-driving can prevent nearly 29,000 deaths a year, I think it’s worth the potential “moral” risks that come with it.