One of the more common themes of this year’s SXSW festival, and indeed 2015 so far, has been self-driving cars. In a very short space of time the technology has gone from a far-off futuristic vision to something we now expect to find integrated into our daily lives within the next five to 10 years.
That rapid rate of progress has many drivers shifting uncomfortably in their seats. Outside of the industry itself, few people seem ready to hand over the controls to a software program. But those at the cutting edge of autonomous vehicles are brazenly blunt about the direction we’re heading.
“In the distant future, I think people may outlaw driving cars because it’s too dangerous,” suggested Elon Musk on stage at an Nvidia conference this week. He later took to Twitter to emphasize that he hopes this doesn’t happen—and that Tesla cars will always have a human mode—but it’s an interesting long-term look at how this autonomous technology might eventually play out.
Google’s Astro Teller, who heads the X research lab that houses the tech giant’s self-driving car project, expressed similar sentiments in an SXSW talk a couple of days ago. He said a decision was taken to remove the steering wheel from Google’s vehicles because humans weren’t “reliable” enough to act as a backup. People do “really stupid stuff” when driving, he said.
How does he know? From Google’s own experiments. When employees were invited to road-test autonomous vehicles for themselves, they made so many mistakes it became counter-productive. Teller didn’t go into details—in his own words it “wasn’t pretty”—but Google is now trying to remove the human part of the equation altogether.
The year is 2025 and you walk to your driveway, knowing that if you let your car drive itself you’re going to be in for a safer, faster, more relaxing journey. Are you really going to choose to take the wheel yourself? (Assuming you’re legally able to do so, that is.)
Shifting Gears
Once you get over the rather uncomfortable thought of letting a robot drive you, a deeper truth emerges: We’re just not very good at driving. We get distracted, we get angry, we get tired. We forget the rules of the road—if we ever knew them to begin with—and often take more risks than is sensible.
All-seeing, ultra-reliable robot drivers would mark a seismic shift in road safety, not just in terms of someone cutting you up on the highway but in terms of actual lives being saved. Can you see a cyclist hidden by a hedge? A radar sensor can. Rush hour traffic jams, a recent study in Stockholm suggests, could be all but eradicated—autonomous cars know exactly when to accelerate, when to brake, and when to merge.
Most drivers are likely to think of some kind of worst-case scenario when it comes to self-driving cars, such as being driven off a bridge or into a lake by a robot that’s suddenly developed a mind of its own. In reality, though, machines can react much more quickly to sudden problems than we can. They can absorb more information simultaneously, make decisions in milliseconds, and put a car through maneuvers that our two arms and two legs just aren’t capable of carrying out.
I’m a Brit, and last summer I spent some time driving in the U.S. Staying on the “wrong” side of the road is one thing, but learning a whole new set of rules and regulations is quite another—and those rules change from state to state, too. An autonomous car could, of course, just switch to another software mode depending on its location.
It’s far from a perfect analogy, but the autopilot technology now standard in passenger jets offers another thought-provoking angle on the idea of self-driving cars. Several recent tragedies, like the one that saw Air France Flight 447 plunge into the Atlantic Ocean, have been blamed on pilot error once autopilot was disengaged. The technology has undoubtedly made air travel safer, but there is a theory that pilots have become less effective in reacting to unexpected emergency situations precisely because the automated systems are so reliable and comprehensive.
Perhaps, as Astro Teller suggests, we need to cut out people altogether.
The Self-Driving Future
A robot driver, though, is only as good as its data and sensors allow it to be. Flawed as we are, humans are still much better at judging when a cyclist is going to swerve out or seeing a stop light in a downpour. On the freeway, in good weather, along fixed routes, autonomous vehicles have now reached a very high standard—it’s everywhere else that they still struggle.
And until they can prove their ability in every scenario, we can’t hand over the keys. Think about how many different variables you have to deal with in a ten-minute drive to the store and back: actions that we consider commonplace, like giving way to an elderly pedestrian ambling across a car park, would cause all kinds of problems for an automated system.
We haven’t even covered the benefits self-driving technology brings to the elderly or disabled, who would get a new lease of life by being able to take to the road, or the ways autonomous vehicles could explore areas human beings can’t reach. With so many advantages to consider, it’s likely that self-driving cars will arrive sooner rather than later, though the shift will be gradual at first. Nissan wants its semi-autonomous cars on the road by 2020.
Of course, there are a whole host of regulatory issues to tackle, which would take up an article in themselves. But the next time you see a headline about self-driving cars, don’t panic—they’re here to help.
Images courtesy of Daimler, Google and NASA