Where are all the driverless cars the US promised?
The first driverless cars were supposed to be deployed on the roads of American cities in 2019, but just a few days before the end of the year, the lofty promises of car manufacturers and Silicon Valley remain far from becoming reality.
Recent accidents, such as those involving Tesla cars equipped with Autopilot, a driver assistance software, have shown that “the technology is not ready,” said Dan Albert, critic and author of the book “Are We There Yet?” on the history of the American automobile.
He questioned the optimistic sales pitch that autonomous cars would help reduce road deaths — 40,000 every year in the United States, mostly due to human error — because these vehicles themselves have caused deaths.
As a result, the self-driving maneuvers permitted in these technology-laden vehicles remain limited: parking, braking, starting, or driving within a parking lot.
“When you’re working on the large scale deployment of mission-critical safety systems, the mindset of ‘move fast and break things’ certainly doesn’t cut it,” said Dan Ammann, CEO of self-driving car company Cruise.
General Motors, Cruise’s parent company, had promised a fleet of autonomous vehicles would be on the roads in 2019.
There are driverless shuttles running on fixed routes on university campuses, and Waymo, Google's autonomous car division, has been offering its "Waymo One" robotaxi service around Phoenix, Arizona, for about a year. However, a trained driver rides along in the cars to take control in case of emergency.
Waymo is expanding that program: since the summer, it has offered a truly driverless service, free of charge, in some Phoenix suburbs in the afternoon and sometimes in the evening. The company is also teaming up with ride-hailing app Lyft to expand to more areas.
“Automation may be used in areas such as closed campuses, where speeds are low and there is little or no interaction with other vehicles, pedestrians or cyclists, and no inclement weather,” said Sam Abuelsamid, engineer and expert at Navigant Research.
The big problem is “perception”: the software’s ability to process the data sent by the vehicle’s sensors to detect other vehicles, pedestrians, animals, cyclists or other objects, then predict their likely actions and adapt accordingly, he said.
And that part is key, said Avideh Zakhor, engineering and computer science professor at the University of California-Berkeley.
“The perception part is not solved yet.”