Driverless Cars Get Into Accidents Because They're Too Good at Driving
Driverless cars are model vehicle operators, but that doesn't mean the human drivers around them are. The California DMV published its autonomous car accident reports from the past year, and they do not reflect well on human drivers.
All nine of the reported incidents were low-speed, minor accidents, and all were caused by a human's reckless driving. Not that the robot cars were entirely off the hook: most of the accidents happened because the automated cars were being a little too cautious. Human drivers tend to be more aggressive (speeding through yellow lights, accelerating to cut in front of other cars, etc.), and simply aren't used to navigating roads populated by their more timid autonomous counterparts. Computer drivers will generally stop short when they sense a threat, resulting in a lot of rear-end collisions.
More often than not, the self-driving cars weren't even moving when the accidents occurred, and some of the human errors collected in the DMV's reports are especially egregious. For example, on June 18, a car collided with a Google AV that "had been stopped for about 11 seconds at the impact."
It seems silly, but Google is actually working to correct this cautiousness and make its cars drive more like humans in order to reduce the number of accidents.