Is the Public Ready to Accept Deaths From Self-Driving Car Accidents?

Self-driving cars are expected to dramatically reduce accident rates, along with the injuries and deaths those accidents cause. But as they roll out and become more commonplace, they will almost certainly kill some people along the way. In fact, fatalities involving self-driving technology have already been reported: a pedestrian was killed during a test of one of Uber's fully autonomous vehicles, and semi-autonomous Tesla vehicles have been involved in a number of crashes, some of them fatal.

Is the public ready to accept deaths from self-driving car accidents? And if not, what would it take to get us there?

The Safety Dilemma

First, we need to understand the safety dilemma that self-driving cars introduce. Currently, around 35,000 people die in motor vehicle crashes in the United States each year. That's close to 3,000 people every month, or roughly 100 people per day. Hypothetically, if a new technology could reduce that fatality rate by just 1 percent (roughly one person a day), it would save about 350 lives annually. Since the majority of car accidents are attributable to human error, and autonomous vehicles can reduce that error rate to close to zero, we can expect autonomous vehicles to sharply reduce the overall fatality rate of motor vehicle accidents (and reduce total motor vehicle accidents as well).
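
To put these figures in perspective, here is a quick back-of-the-envelope calculation (a minimal Python sketch; the 35,000 annual fatality figure is the approximate U.S. total cited above, and the 1 percent reduction is purely hypothetical):

```python
# Back-of-the-envelope math on U.S. motor vehicle fatalities.
ANNUAL_FATALITIES = 35_000  # approximate U.S. deaths per year, as cited above

deaths_per_month = ANNUAL_FATALITIES / 12    # ~2,917, i.e. close to 3,000
deaths_per_day = ANNUAL_FATALITIES / 365     # ~96, i.e. close to 100

# A hypothetical technology that cuts the fatality rate by just 1 percent:
reduction = 0.01
lives_saved_per_year = ANNUAL_FATALITIES * reduction  # 350
lives_saved_per_day = deaths_per_day * reduction      # roughly 1

print(f"Per month: {deaths_per_month:,.0f} deaths; per day: {deaths_per_day:,.0f} deaths")
print(f"A {reduction:.0%} reduction saves roughly {lives_saved_per_year:,.0f} lives a year "
      f"(about {lives_saved_per_day:.1f} per day).")
```

The 1 percent figure is deliberately conservative; if autonomous vehicles come anywhere close to eliminating human error, the number of lives saved would be far larger.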

However, even a single car accident can cause major damage and multiple deaths, and a single incident of negative publicity can cast doubt on the safety and efficacy of self-driving cars in general. If a handful of highly publicized cases leads the public to believe that self-driving cars are dangerous, the rollout of autonomous cars could be delayed for years, ultimately resulting in more lives lost.

The Sources of Public Pushback

There are several reasons why the general public reacts negatively to the prospect of deaths from autonomous cars:

  • Fear of change. People generally don’t like change. Because of status quo bias, we tend to prefer things exactly as they are rather than risk a change that could make things worse. Our current motor vehicle fatality rate may be exceptionally high, but it’s what we’re used to. Putting autonomous vehicles on the road would require a massive overhaul of many societal constructs, forcing us to change how we think about driving, how we pay for insurance, and more. If you’re already wary of an evolving society and reluctant to adopt new technologies, every reported death from an autonomous vehicle is going to seem bigger and more impactful than an equivalent death attributable to human error.
  • Fear of the unknown. People also fear the unknown. Right now, autonomous vehicles occupy a kind of abstract space in our minds; they’re a construct of imagination, rather than something tangible and practical. Semi-autonomous vehicles are already on the road, but most of us haven’t yet ridden in a fully autonomous vehicle, so we don’t know what it’s like. If we have no existing framework for how to consider or work with a new technology, it’s going to seem especially scary – and even more so when it does, inevitably, result in the loss of human life.
  • Disproportionate reporting. Deaths and injuries from fully autonomous vehicles are often highly visible to the public, being reported on by every major news outlet in the country, while deaths from “normal” accidents are so commonplace and so readily accepted that they’re rarely acknowledged. How often do you hear about traffic fatalities in national news outlets? By contrast, any time even a semi-autonomous vehicle is involved in a collision, the news is practically impossible to escape. For members of the public unfamiliar with the hard, high-level statistics, this can make it seem like autonomous vehicles are killing people left and right – while manually driven cars are completely safe.
  • Agency and control. The “trolley problem” is a well-known philosophical thought experiment in which a person can redirect a runaway trolley from one track to another: on its current course it will kill five people, but if it is diverted it will kill only one. The utilitarian answer is that one death is preferable to five, but many people struggle with exercising agency in choosing that one person’s death. People feel in charge of their decisions, and they don’t want to directly cause someone to die, even if refusing means passively allowing several other people to die instead. On the road, drivers often feel in total control of their vehicle, capable of making their own ethical decisions and directing their own destiny. Putting them in a vehicle that drives itself fills many of them with dread, because it means surrendering that agency entirely. Can you make a compelling argument to this type of person that an algorithm can make better decisions than they can? Doing so would require both total faith in the algorithm’s developers and a willingness to give up personal control.
  • Responsibility and justice. Legitimately, some people are worried about how responsibility and justice will be served in fatal collisions involving autonomous vehicles. If someone is killed and an autonomous car is found to be the root cause of the accident, who goes to jail? Who pays the fine? Will this responsibility fall on the driver, even though they didn’t do anything to cause this accident? What about the software developer? The vehicle manufacturer? This is murky territory no matter what, but it’s especially difficult to digest if you already have apprehensions about the safety of autonomous vehicles.
  • Other reasons to hate self-driving cars. Self-driving cars are going to force some complicated infrastructural changes and present new dilemmas (including some we haven’t even considered yet). For many people, these changes range from scary to detestable. For example, some people hate the idea that they may someday be barred from owning or driving a manual vehicle. Some people don’t trust the autonomous vehicle industry and feel there are ulterior motives pushing it in this direction. Some people hate the idea of police officers being able to remotely control their vehicle if they’re caught committing a crime, or of hackers accessing a vehicle and using it for their own ends. If you hate self-driving cars for these or other reasons, it’s easy to latch onto a fatality as justification for your beliefs.
  • Lack of firm thresholds. It’s also worth noting that most people don’t have firm thresholds for evaluating the efficacy of autonomous vehicles. It’s somewhat ridiculous to demand perfection; vehicles are fast, heavy machines operating in a complex world, so fatalities are inevitable no matter how safe the system is. But how safe is “safe enough”? Would a 1 percent drop in lives lost be enough to satisfy you? What about a 5 percent drop? Are there other metrics that need to be achieved before you consider autonomous vehicles safe? The rough comparison sketched after this list shows what such thresholds would mean in absolute terms.
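
To make the threshold question concrete, here is a rough comparison of what different hypothetical reductions would mean in absolute numbers (a minimal Python sketch; the thresholds and the 35,000 baseline are assumptions carried over from the figures above, not official targets):

```python
# Rough comparison of hypothetical "safe enough" thresholds against the
# approximate U.S. baseline of 35,000 motor vehicle deaths per year.
BASELINE_ANNUAL_DEATHS = 35_000

# Hypothetical reduction thresholds someone might pick as "safe enough".
for reduction in (0.01, 0.05, 0.50, 0.90):
    lives_saved = BASELINE_ANNUAL_DEATHS * reduction
    remaining = BASELINE_ANNUAL_DEATHS - lives_saved
    print(f"A {reduction:.0%} drop saves ~{lives_saved:,.0f} lives a year, "
          f"leaving ~{remaining:,.0f} deaths that would still have to be accepted.")
```

Even a 90 percent drop leaves thousands of deaths a year, which is exactly why the question of what counts as acceptable has to be answered explicitly rather than assumed.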

Changing the Narrative

Right now, it seems the general public is not ready to accept the deaths that will inevitably be caused by autonomous vehicles, even as autonomous vehicle manufacturers set out to reduce deaths and damage as much as possible. Because of this public sentiment and ongoing pushback, policymakers and manufacturers have an uphill battle ahead of them.

If we’re going to facilitate a world in which vehicular fatalities occur at a much lower rate (while simultaneously making the world more conveniently accessible to the entire population), we need to find a way to change the narrative. We need to proactively identify the root causes of anti-autonomous vehicle perspectives and work to change them from the ground up – or at least attempt to quantify and objectively evaluate those perspectives.


Frank Landman
Editor

Frank is a freelance journalist who has worked in various editorial capacities for over 10 years. He covers trends in technology as they relate to business.
