When Self-Driving Cars Crash…

by Matt Klassen on March 3, 2016

It recently came to light that several weeks ago, during a testing phase on public roads, one of Google’s self-driving cars struck a municipal bus, resulting in the first crash on record where an autonomous vehicle bears at least some responsibility. As expected, the relatively minor incident has immediately given rise to serious questions about our safety in such self-driving vehicles, and opened new avenues of inquiry, such as who is ultimately responsible when a robotic vehicle gets in an accident.

According to reports, the crash occurred on February 14th, as the autonomous vehicle was being tested on the public streets of Google’s home in Mountain View, California. As Google explained in a statement, the self-driving car intended to turn right off of a major street when it detected sandbags surrounding a storm drain in its path. The car adjusted its trajectory, veering slightly to the left to avoid the obstacle, and struck the front side of a municipal bus. It was an extremely low-speed collision and no injuries were reported.

While such minor fender benders happen thousands of times a day, this one certainly stands out, given that one of the drivers involved was not a human, but a machine. But how worried should we be that robots can make mistakes? Is this serious enough to warrant pausing the entire self-driving industry in an attempt to avoid more serious incidents in the future, or is it simply a minor blip along the road to technological progress?

For its part, Google has explained that it wasn’t a failure of the technology, per se, but simply a case of the robot attempting to calculate how the human bus driver would respond to the situation. The car assumed the bus would yield given the obstruction in the right lane; the bus didn’t yield, and the car struck the bus.

Chris Urmson, the head of Google’s self-driving car project, said in a brief statement that there might be fault on both sides, indicating that Google’s autonomous car was already moving when the bus failed to yield.

“We saw the bus, we tracked the bus, we thought the bus was going to slow down, we started to pull out, there was some momentum involved,” Urmson told The Associated Press.

In fact, the test driver (the person who sits in the autonomous vehicle during real-world tests) “believed the bus would slow or allow the Google (autonomous vehicle) to continue,” a company statement said. Google went on to say that it has refined its software in light of the incident to prevent similar collisions from recurring.

Of course, Google is not avoiding all responsibility in this collision, admitting that it bears at least “some responsibility” for the crash, but it is clear the company is attempting to mitigate the firestorm already brewing over questions of safety and responsibility when it comes to robots performing what are currently exclusively human tasks.

In regard to responsibility, given that this is the first crash on record where the autonomous vehicle bears at least some liability, it will be interesting to see how both transportation regulators and the insurance industry respond, since there is no “driver” in the traditional sense, and the company that owns the proprietary software is not actually “driving” the car either. There is also the question of passenger intervention: should the tester have taken control of the Google car before the accident occurred? While human oversight may stand as a back-up system, the whole point of autonomous vehicles is to remove humans from the equation, given our long history of being fairly undependable drivers.

But as for the question of safety, let me say a few things: first, this was a minor collision involving no injuries; and second, juxtaposed against the news that a human-operated SUV crashed into a Boston-area pizza restaurant yesterday, killing two and injuring seven others, I for one can’t wait for robots to relieve us of the very unnatural task of operating such a dangerous projectile.

I guess the real question will be, what will happen when an autonomous vehicle causes a fatality, and when it happens (not if), will that be the end of the self-driving project?
