First self-driving auto fatality falls on Tesla

The U.S. National Highway Traffic Safety Administration (NHTSA) today said it opened a preliminary investigation into all of Tesla’s Model S cars after a death involving one vehicle while in “Autopilot” mode.

According to CNBC, NHTSA noted the accident, involving a 2015 Model S that was operating with autonomous driving features engaged, “calls for an examination of the design and performance of any driving aids in use at the time of the crash.”

The accident happened in Florida in May of this year, as a big rig truck moved into the Tesla’s lane. The car collided with the side of the truck and went underneath it. Its Autopilot system may have had difficulty seeing the truck’s white sides against an overcast sky.

In a blog post, Tesla said the incident was “the first known fatality in just over 130 million miles where Autopilot was activated,” and the company pointed out that the average U.S. auto fatality rate is one per 94 million miles driven, while the global average is one per 60 million miles.
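As a quick sanity check on the mileage figures Tesla cites, the three rates can be put on a common footing as fatalities per 100 million miles (a sketch only; it assumes each figure is a simple miles-per-fatality average):

```python
# Figures cited in the article, expressed as miles driven per fatality
autopilot_miles = 130_000_000   # Tesla: miles with Autopilot engaged per known fatality
us_average_miles = 94_000_000   # average U.S. miles driven per auto fatality
global_average_miles = 60_000_000  # average worldwide miles driven per auto fatality

def per_100m(miles_per_fatality):
    """Convert miles-per-fatality into fatalities per 100 million miles."""
    return 100_000_000 / miles_per_fatality

print(f"Autopilot:      {per_100m(autopilot_miles):.2f} fatalities per 100M miles")
print(f"U.S. average:   {per_100m(us_average_miles):.2f} fatalities per 100M miles")
print(f"Global average: {per_100m(global_average_miles):.2f} fatalities per 100M miles")
```

On these numbers, Autopilot's rate (about 0.77 per 100 million miles) is lower than both the U.S. average (about 1.06) and the global average (about 1.67), which is the comparison Tesla's post is drawing.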

The carmaker added that it “informed NHTSA about the incident immediately after it occurred.”

While this is not a recall of the Tesla fleet, it could be the first step toward one. And since Autopilot is effectively a software feature, Tesla could remove it quickly via an over-the-air update, inconveniencing Tesla owners but not forcing them back to the dealership or leaving their cars parked in their garages.

As autonomous vehicle technologies mature, it is worth noting that some of them already ship in the newest model years of many vehicles as driver-assist options, and features such as lane-departure warnings, collision-avoidance systems and self-parking have no doubt prevented accidents and probably saved lives to date.

Ethical and legal issues just being realized

But the ethical and legal ramifications of a driverless vehicle killing someone, whether inside or outside the car, are a storm that all sides of this technological frontier can see coming.

Recently, Ford, Google, Uber and other firms in the self-driving vehicle space announced plans to begin lobbying Washington to keep regulations in line with these rapidly changing technologies, allowing them to roll out faster.

But last month, the NHTSA opted to let states take the lead in deciding how driverless cars may interact with the current human-driven fleet on American roads.

In other jurisdictions, such as Europe, there is more unity among regulatory bodies, and countries like China and the U.K. are moving aggressively to tailor their regulations to self-driving cars.

And given that software is doing the driving, the fear of a vehicle being hijacked on the fly to cause mayhem also gives some experts pause about self-driving features.

Of course, there is no evidence or suggestion that such a hack happened here, and the investigation should be allowed to play out. But this tragic accident underlines the reality that this technology, like all technology, may help eliminate old driving risks while also introducing new ones.

“It’ll take until someone actually dies in an autonomous vehicle due to a hack,” said David Miller, chief security officer at Covisint, earlier this year in an interview with ReadWrite. “That doesn’t mean people aren’t thinking about security, but usually, there needs to be an actual event before you solve them.”
