Afraid Of Self-Driving Car Testing On Public Roads? Compared To What?

A front-page article in last Sunday’s Washington Post beat an old drum: “Some Silicon Valley Residents Anxious Over Self-Driving Cars.” The usual concerns were expressed regarding how these vehicles can’t be trusted to be safe, plus objections to corporations like Lyft, Waymo, and Cruise developing robo-taxis that could put drivers out of work. And one cannot deny that a fatality occurred in Uber’s case, when a safety driver wasn't doing their job. However, the new wrinkle in this article was a focus on Silicon Valley residents with some understanding of computer science who nevertheless don't feel safe around Self-Driving Cars (SDCs).

The content in these types of articles is usually based on a skewed perspective. People who are happy with life where they live typically do not go to Town Halls and City Council meetings. The ones who do show up have a concern they want to express, which has happened frequently in the Valley regarding SDCs. Good for them, but this results in more quotable content from the unhappy than from the happy. I’ll give credit to the Post writer for noting that “Some residents are proponents – or at least indifferent – to the autonomous cars on their streets.”

In this article, longtime robocar specialist Brad Templeton bolstered the supportive view of current SDC testing, proffering the excellent point that society already accepts risk on the road with teenage drivers, who need time to become better drivers. Do we expect teen drivers to zip around for months on a test track? No, the needed learning can only happen with on-road driving. The teenage driver’s learning curve benefits only one driver, whereas the learning of a few hundred SDCs under development can be transferred to millions of SDCs for long-term safety benefit. Well done, Brad.

To illustrate my point, envision a series of cars driving down a residential street in Mountain View during the busy morning commute/school hours. One has a driver who is distracted by work issues, urgently trying to read a text that just came in. Another driver is exhausted and just barely “with it.” Another is looking at their left side mirror to safely merge into a left-turn lane, not viewing the road ahead for a couple of seconds. Another is a teenage driver, paying close attention given that they only started driving last week. And then there’s the SDC, seeing 360 degrees with full attention on relevant objects in every direction. For each car, imagine a kid on a skateboard zipping out into the vehicle’s path from between parked cars on the right. Which will respond best? The current SDCs’ perception of and response to this event could be flawed, but all the human processes in this example have “flaws” too. The difference: the SDC is getting better at its job every day, until the point at which it will far surpass the skill of the human driver.

But I left out the most dangerous player: a Tesla driving that same road, occupied by an alert and rested driver who chooses to dwell on his or her personal screen, never looking up. This is sheer recklessness, endangering both themselves and everyone on the road, because these vehicle systems aren’t designed to handle all obstacles and events. Tesla is very clear about this in customer communications. What about when any of us take our eyes off the road for a few seconds to, for instance, select new music? In that case, a large slice of our brain capacity is still devoted to driving, and our peripheral vision supports driving as well. However, a Tesla driver looking down at their screen for an extended period is almost completely out of the driving game. We have yet to see harm come to someone who is the victim of a negligent Tesla driver; so far they’ve managed to kill only themselves.

I don't condemn all Tesla drivers, and I’m confident (hopeful) that the vast majority act at least as responsibly as the rest of us.

The cars cruising down our imaginary street look alike from the outside except for the SDC, sprouting sensors and sporting logos. You also know when a Tesla goes by, but from a distance you don't know whether the driver is operating it responsibly. For the rest, you have no information about the relative safety of that particular vehicle with that particular driver.

We do not live in a zero-risk society, especially when it comes to mobility.

One example in the Post article comes from a tech-savvy local resident driving the journalist around Silicon Valley. As they sat at a red light, the resident saw that the driver in the next lane was about to make a “dangerous illegal turn” and exclaimed, “How does an autonomous vehicle sense that?”

The answer is that they do “sense that,” because they’re designed to sense that! They’re designed that way through the rigorous process self-driving developers follow to define system requirements and validate against them. This feeds into their safety case. Using on-road testing combined with simulation, a huge variety of possibilities can be thrown at the software, covering more driving situations than any one of us would experience in an entire lifetime. For instance, based on ingesting data from its 425,000 Hardware 2.0 vehicles on the road, Tesla employs predictive algorithms to train its neural network, which enhances its SAE Level 2 driver-assist system’s understanding of the scene.
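To make that concrete, here is a minimal sketch of how a scenario library can exercise driving software against far more situations than any one driver will ever meet. The scenario names, parameters, and pass criterion below are my own illustrative assumptions, not any developer’s actual validation suite.

```python
import itertools
import random

# Purely illustrative scenario library -- names and parameters are assumptions
# made up for this sketch, not a real company's test catalog.
MANEUVERS = ["illegal_turn_from_adjacent_lane", "skateboarder_darts_out",
             "hard_brake_ahead", "cyclist_cut_in"]
SPEEDS_MPH = [15, 25, 35, 45]
WEATHER = ["clear", "rain", "fog"]

def simulate(maneuver, speed_mph, weather, seed):
    """Stand-in for a physics/sensor simulator; returns True if the planner
    kept a safe distance in this randomized run."""
    random.seed(hash((maneuver, speed_mph, weather, seed)))
    return random.random() > 0.001  # placeholder outcome

def run_validation(runs_per_combo=100):
    """Sweep every maneuver/speed/weather combination many times and collect failures."""
    failures = []
    for maneuver, speed, weather in itertools.product(MANEUVERS, SPEEDS_MPH, WEATHER):
        for seed in range(runs_per_combo):
            if not simulate(maneuver, speed, weather, seed):
                failures.append((maneuver, speed, weather, seed))
    total = len(MANEUVERS) * len(SPEEDS_MPH) * len(WEATHER) * runs_per_combo
    print(f"{total} simulated encounters, {len(failures)} flagged for engineering review")
    return failures

if __name__ == "__main__":
    run_validation()
```

Every failure becomes a case for engineers to study and, where needed, a new requirement to design against, which is the essence of the safety-case loop described above.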

Still, events will occur in the real world which the software hasn’t encountered before. As a result, a key requirement of autonomous systems is the ability to generalize. Maybe you have never encountered a turkey running across the road, but we all know to brake for creatures-in-general running across the road. That’s generalization. And turkeys really are an obstacle! See Voyage’s video of turkey-avoidance.
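Here’s a toy sketch of that generalization idea: an object the perception stack has never specifically labeled (a turkey, say) still falls into a generic moving-obstacle category that triggers braking. The labels and thresholds are invented for this example, not Voyage’s or anyone else’s actual logic.

```python
# Toy illustration of generalization: unknown moving objects in the vehicle's
# path are treated like any other dynamic obstacle. Labels and thresholds are
# illustrative assumptions.
KNOWN_DYNAMIC = {"pedestrian", "cyclist", "dog", "deer"}

def categorize(label: str) -> str:
    """Map a specific detection label to a generic planning category."""
    if label in KNOWN_DYNAMIC or label.startswith("unknown_moving"):
        return "dynamic_obstacle"
    return "static_object"

def plan_response(category: str, distance_m: float, speed_mps: float) -> str:
    """Brake for any dynamic obstacle we would reach within a few seconds."""
    time_to_reach = distance_m / max(speed_mps, 0.1)
    return "brake" if category == "dynamic_obstacle" and time_to_reach < 3.0 else "continue"

# A turkey the classifier has never seen still gets braked for.
print(plan_response(categorize("unknown_moving_turkey"), distance_m=12.0, speed_mps=8.0))  # brake
```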

The BMW development team validating the BMW iNEXT Level 3 system launching in 2021 uses an approach which is another great example of extensive SDC testing. They are collecting over 3 million miles of public road driving data from 80 BMW 7 Series cars operating in the U.S., Germany, Israel, and China. From this data, 1.25 million miles of highly relevant and challenging driving scenarios are extracted which can be altered in software so these scenarios can be replayed across a wide range of speed, weather, and traffic conditions.  This process provides an additional 150 million miles in simulation to maximize diversity of the road conditions against which the automated driving system is tested.
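The arithmetic is worth spelling out. One way the article’s numbers work out, assuming illustrative variation axes of my own choosing rather than BMW’s actual tooling, is roughly 120 variants per extracted scenario:

```python
# Back-of-envelope sketch of how 1.25 million miles of extracted scenarios can
# become ~150 million simulated miles. The variation axes and counts below are
# illustrative assumptions chosen only to reproduce the article's ratio.
real_scenario_miles = 1_250_000

speed_variants   = 5   # e.g. nominal speed plus four offsets
weather_variants = 4   # e.g. clear, rain, snow, fog
traffic_variants = 6   # e.g. light through stop-and-go, day and night

variants_per_scenario = speed_variants * weather_variants * traffic_variants  # 120
simulated_miles = real_scenario_miles * variants_per_scenario

print(f"{variants_per_scenario} variants per scenario -> {simulated_miles:,} simulated miles")
# 120 variants per scenario -> 150,000,000 simulated miles
```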

Lest I start to sound blindly confident, let me stress that what I’ve said here applies to highly professional, responsible companies with deep expertise in functional safety. In my many interactions with dozens of companies in the SDC ecosystem, this is exactly what I see. Certainly the professionals in these companies are not infallible, but they are as good as it gets. They challenge each other internally to address the fears and inconveniences noted in the Post article, plus what they’ve heard in their own community outreach.

I do have a concern: new startups with unknown credentials continue to pop up, and the sharing of automated driving software among hobbyists is increasing rapidly. The potential for irresponsible development, just as with irresponsible drivers, does indeed exist.

It’s true that society is faced with new risk via self-driving cars on public roads, but this is simply a small addition to the massive risk that’s already out there. Unlike those other risks, the maturation of self-driving tech will make the roads safer for us all.
