A week after the vote, Cruise agreed to a request from the California Department of Motor Vehicles to cut the number of autonomous vehicles it operates in the city after a crash involving one of Cruise’s driverless taxis and a fire truck.
Waymo and Cruise are at the forefront of the technology, but several other companies, including Apple Inc. (AAPL), Nvidia Corp. (NVDA) and Tesla Inc. (TSLA), hold permits that allow them to do AV testing with a driver in the car.
MarketWatch talked with Philip Koopman, a professor at Carnegie Mellon University who has researched driverless-car safety for more than 20 years, about safety and liability concerns and what’s next for robocars.
MarketWatch: It feels like we’ve been stuck for a while now on the verge of having cars capable of driving themselves everywhere. Can you hazard a guess when truly autonomous cars will finally be here?
Koopman: No one knows. In 1995, researchers from Carnegie Mellon University drove their NavLab 5 vehicle across the U.S., and 98% of that was automated. I became involved a year later.
I can tell you that there’s still a lot of work left, especially to get both reliability and safety at the same time. As they say in the computer world: The first 90% of the work takes 90% of the time. The remaining 10% takes the other 90% of the time.
I think the more important question is where this technology can be deployed in the near term that is useful and economically viable.
Low-speed shuttles in uncomplicated environments, cargo hauling on quiet routes and possibly middle-mile trucking, if the routes are picked carefully, are all a lot closer to being able to scale up than a car that can drive anywhere in any conditions.
Robotaxis in urban settings are particularly challenging, and I think that is showing in the teething pains we are seeing in San Francisco.
MarketWatch: Do you have any tips for people in San Francisco, Austin and other cities where robotaxi testing is happening in earnest?
Koopman: I recommend leaving some extra space around vehicles with this still-immature automation technology. In particular, they are prone to stopping suddenly.
While they are pretty good at seeing pedestrians, I would recommend not walking out in front of a moving robotaxi expecting it to stop every single time. Also, expect them to still get confused sometimes by things a human driver would find obvious, and occasionally do things that seem stupid, because common sense is not part of the technology they use.
For local governments, I recommend fighting hard at the state level to get the right to limit operations in response to specific local conditions and situations. San Francisco lost that battle at the state level, and they are suffering as a result.
MarketWatch: Speaking of sudden braking, where are we in terms of liability and driverless cars?
Koopman: Liability is a real mess right now. Automakers tell customers it’s OK to play videogames while the car is driving itself in a new type of car coming this fall [the DrivePilot from Mercedes-Benz]. But if there is a crash, nothing requires a district attorney to change the interpretation of a law that says the crash is the driver’s fault.
The owner’s manual doesn’t override the law. Drivers should expect that if there’s a crash and they are behind the wheel, they are the ones on the hook, regardless of what the manufacturer tells them.
Even for robotaxis, there are questions about whether the owner, the fleet operator or the manufacturer is responsible for a crash. Right now each state has a different rule, and some of the rules might not stand up to scrutiny by the court system. That can be a problem if we see rental-car fleets or personally owned robotaxis deployed in the coming years.
It is not uncommon for legislation to lag technology. When e-signatures started to become more common, they had to fix the laws and agree that an e-signature is a stand-in for a real signature. We’ll need something similar for computer drivers as a stand-in for human drivers in vehicles that share responsibilities between human and computer drivers.
Attempts to fix regulations have stalled. The National Highway Traffic Safety Administration has a detailed automated-vehicle regulatory framework proposal with a ton of public comments, but the process has been frozen in its tracks for years now.
MarketWatch: What happens when the cellular network that driverless cars operate on goes down? California is earthquake country, and as we saw on Maui, disasters can strike suddenly.
Koopman: Any autonomous vehicle needs to ensure safety when a cellular network goes down. As you say, this can be expected to happen at the worst times. The current robotaxi technology clearly needs a lot more work before it is reliable enough to be counted on in a natural disaster, because it relies on remote operators anytime it gets confused.
A big concern is that if there are a couple thousand robotaxis in San Francisco and an earthquake disrupts the cellular network, you could have those vehicles unable to move aside to make room for emergency vehicles trying to get to fire and rescue scenes. Automakers need to make sure their vehicles can move out of the way of emergency responders no matter what, and especially during a network outage. Given all the reports of frozen vehicles on San Francisco streets, it is clear there is a lot of work to be done in this area.
MarketWatch: How about claims that driverless cars will eventually drive around more safely than human drivers?
Koopman: Companies are claiming victory on safety, and that’s premature. It’s the kind of thing where nuance matters.
Waymo has some reasonable work on a model that predicts they are on track for acceptable safety. However, their marketing team is claiming they are already saving lives, long before there is enough data to know whether the prediction will turn out to be accurate.
That’s a bit like running a marathon and declaring victory because you feel great after the first half-mile. That leg cramp in mile 6 can still get you, and we won’t know how it will turn out until the race is done.
Cruise has not released enough details of their data to evaluate their claims about safety. Maybe they would be credible if they were more transparent, but they could just as easily be marketing puffery.
In this case, the race is at least 100 million miles, arguably a lot more, without any fatalities. Neither company has more than a few million miles, so we are still a long way off from knowing how this turns out. However, the incidents we are seeing, especially involving Cruise, are a troubling sign that things might not turn out as well as everyone hopes.
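Koopman’s "at least 100 million miles" threshold can be made concrete with a back-of-the-envelope calculation. U.S. human drivers average very roughly one fatality per 100 million miles (that baseline figure is an assumption for illustration, not from the article), and the statistical "rule of three" says that after zero fatalities in n miles, the 95% upper confidence bound on the fatality rate is about 3/n. A minimal sketch, under those assumptions:

```python
# Back-of-the-envelope: how many fatality-free miles must a robotaxi fleet
# drive before we can statistically claim it beats the human baseline?
# Rule of three: with 0 events observed in n trials, the 95% upper
# confidence bound on the event rate is approximately 3 / n.

# Assumed human baseline: ~1 fatality per 100 million miles (illustrative).
HUMAN_FATALITY_RATE = 1 / 100_000_000

def miles_needed(baseline_rate: float) -> float:
    """Fatality-free miles needed so the 95% upper bound (3/n) falls
    below the baseline rate."""
    return 3 / baseline_rate

def upper_bound_rate(fatality_free_miles: float) -> float:
    """95% upper confidence bound on the fatality rate after observing
    zero fatalities over the given mileage."""
    return 3 / fatality_free_miles

if __name__ == "__main__":
    # Roughly 300 million fatality-free miles are needed -- consistent
    # with "at least 100 million miles, arguably a lot more."
    print(f"Miles needed vs. human baseline: {miles_needed(HUMAN_FATALITY_RATE):,.0f}")

    # With only "a few million miles" (say 5 million), the best we can
    # rule out is a rate far worse than the human baseline.
    bound = upper_bound_rate(5_000_000)
    print(f"Upper bound after 5M miles: 1 fatality per {1 / bound:,.0f} miles")
```

After five million fatality-free miles, the data can only rule out a rate worse than about one fatality per 1.7 million miles, nowhere near the ~100-million-mile human baseline, which is why the marathon is nowhere near over.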
Editor’s note:
A Waymo spokesperson said that Waymo’s safety claims are based on data that has been shared in several papers and reports over the last few years.
A Cruise spokesperson said that “building trust in the communities we serve hinges on transparency — which is why we published our safety record in our first million driverless miles, and report detailed safety data to our regulators.” Cruise said it is “committed to continually monitoring and reporting our safety performance so that customers, regulators, and any member of the general public can access it.”