By 2020, Waymo plans to make at least one million self-driving trips per day in the U.S.
Waymo

Are self-driving cars safe for our cities?

Autonomous vehicles could save thousands of lives per year. Should the U.S. let them be tested on public streets?

From ushering in an era of decreased car ownership to narrowing streets and eliminating parking lots, autonomous vehicles promise to dramatically reshape our cities.

But after an Uber-operated self-driving vehicle struck and killed 49-year-old Elaine Herzberg, who was crossing the street with her bike in Tempe, Arizona, on March 18, 2018, there are more questions than ever about the safety of this technology, especially as these vehicles are tested more frequently on public streets.

Some argue that the safety record for self-driving cars isn’t proven, and that it’s unclear whether enough testing miles have been driven in real-life conditions. Other safety advocates go further, saying that driverless cars introduce a new problem at a time when cities should be focusing on improving transit and encouraging walking and biking instead.

Contentions aside, the autonomous revolution is already here, although some cities will see its impacts sooner than others. From Las Vegas, where a Navya self-driving minibus scoots slowly along a downtown street, to General Motors’ Cruise ride-hailing service in San Francisco with backup humans in the driver’s seat, to Waymo’s family-focused Chandler, Arizona–based pilot program that uses no human operators in its Chrysler Pacifica minivans at all, the country is accelerating towards a driverless future.

While the U.S. government has historically expressed confidence in autonomous vehicles’ ability to end the epidemic of traffic deaths, opponents of self-driving cars have raised plenty of concerns that are making cities think twice before welcoming the technology to their streets.

Self-driving vehicles may be poised to deliver a future of safer, greener streets for all, but testing the vehicles on today’s streets is a concern.
Farrells and WSP | Parsons Brinckerhoff

Are autonomous vehicles safe?

In 2009, Google launched its self-driving project, focused on saving lives and serving people with disabilities. In a 2014 video, Google showed blind and elderly riders climbing into its custom-designed autonomous vehicles, part of the company’s plan to “improve road safety and help lots of people who can’t drive.”

Although there were several self-driving projects in the country at the time, many being developed by government agencies or university labs, Google’s project differentiated itself by being public-facing. The goal was not to build cars—although Google did build its own testing prototypes—but to create a self-driving service that would help regular people get around.

Google began testing its vehicles on public streets the very same year the project launched. With the 2015 reorganization of Google under its new parent company, Alphabet, the self-driving program became its own entity, Waymo. Almost a decade later, Waymo remains the clear leader for safe self-driving miles on U.S. streets.

As of November 2019, Waymo had logged 10 million self-driven miles, double the number it had driven as of February 2018. In July 2018, Waymo’s new electric Jaguar I-Pace vehicles hit the streets, joining its Chrysler Pacifica hybrid minivans.

According to Waymo’s monthly reports, its vehicles have been in dozens of crashes, but caused no serious injuries. In 2016, a Waymo vehicle bumped a bus while going 2 miles per hour. On May 4, 2018, one of Waymo’s minivans was involved in a crash with minor injuries in Chandler, Arizona, while in autonomous mode, but police said Waymo’s van was not the “violator vehicle.” In October 2018, a Waymo vehicle was involved in a crash that sent a motorcyclist to the hospital with minor injuries, but the vehicle’s human backup driver, who had taken manual control, was at fault.

Now there are dozens of autonomous vehicle companies testing on U.S. streets, but the next most experienced companies, Uber and GM Cruise, are still several million miles behind Waymo. Those totals don’t include miles driven in the semi-autonomous modes that many cars now offer, like Tesla’s Autopilot, which are driver-assistance systems rather than true self-driving technology.

In the last few years, the greatest strides in the self-driving industry have been made by ride-hailing companies, which are devoting exceptional amounts of time and money to developing their own proprietary technologies and, in many cases, giving members of the public rides in their vehicles. In 2017, Lyft’s CEO predicted that within five years, all of its vehicles would be autonomous. At a March 2018 press conference announcing Waymo’s ride-hailing program, CEO John Krafcik claimed that the company would be making at least one million trips per day by 2020.


Can autonomous cars drive better than humans?

The biggest safety advantage of an autonomous vehicle is that a robot is not a human: it is programmed to obey all the rules of the road, won’t speed, and can’t be distracted by a text message flickering onto a phone. And, hypothetically at least, AVs can detect what humans can’t, especially at night or in low-light conditions, and react more quickly to avoid a collision.

AVs are laden with sensors and software that work together to build a complete picture of the road. One key technology is LIDAR, short for “light detection and ranging.” By bouncing millions of laser pulses off the surroundings and timing their return, a LIDAR unit draws a real-time, 3D image of the environment around the vehicle. In addition to LIDAR, radar sensors measure the size and speed of moving objects, and high-definition cameras can actually read signs and signals. As the car travels, it cross-references all of this data with GPS readings and detailed maps that situate the vehicle within a city and help plan its route.
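
To make the division of labor concrete, here is a minimal, hypothetical sketch of how readings from the three sensor types might be cross-referenced into a single tracked object. Every class and field name below is invented for illustration; real perception stacks are vastly more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class LidarReturn:
    """One 3D point from the laser sweep (meters, in the vehicle's frame)."""
    x: float
    y: float
    z: float

@dataclass
class RadarTrack:
    """Radar excels at measuring range and closing speed."""
    distance_m: float
    closing_speed_mps: float

@dataclass
class CameraDetection:
    """Cameras classify what they see: 'cyclist', 'stop sign', and so on."""
    label: str
    confidence: float

def fuse(points: list[LidarReturn], radar: RadarTrack,
         camera: CameraDetection) -> dict:
    """Cross-reference the three sensors into one description of an object."""
    nearest_m = min((p.x ** 2 + p.y ** 2) ** 0.5 for p in points)
    return {
        "label": camera.label if camera.confidence > 0.5 else "unknown",
        "distance_m": min(nearest_m, radar.distance_m),  # favor the closer estimate
        "closing_speed_mps": radar.closing_speed_mps,
    }

# Example: lidar points, a radar track, and a camera label that together
# describe a cyclist about 18 meters ahead, closing at 4 meters per second.
obj = fuse([LidarReturn(17.8, 1.2, 0.4), LidarReturn(18.1, 1.0, 0.9)],
           RadarTrack(18.0, 4.0),
           CameraDetection("cyclist", 0.92))
print(obj)
```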

In addition to the sensors and maps, AVs run software that makes real-time decisions about how the car will navigate relative to other vehicles, humans, or objects in the road. Engineers can run the cars through simulations, but the software also needs to learn from actual driving situations, which is why real-world testing on public roads is so important.
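
A toy example of the kind of rule this software must evaluate many times per second, using the fused object from the sketch above: decide whether to brake based on time-to-collision. The thresholds are invented for illustration and bear no relation to any company’s actual tuning.

```python
# A toy planning rule: compare time-to-collision against invented thresholds.
def plan(distance_m: float, closing_speed_mps: float) -> str:
    if closing_speed_mps <= 0:
        return "proceed"                  # object is holding steady or pulling away
    time_to_collision_s = distance_m / closing_speed_mps
    if time_to_collision_s < 2.0:         # under 2 seconds: brake hard
        return "emergency_brake"
    if time_to_collision_s < 5.0:         # 2 to 5 seconds: ease off
        return "decelerate"
    return "proceed"

print(plan(18.0, 4.0))  # 4.5 seconds to collision -> "decelerate"
```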

But how AV software handles that information has raised concerns about whether autonomous vehicles can reliably detect and avoid vulnerable road users like cyclists, as well as people who move more slowly and erratically through streets, like seniors and children. Waymo, for example, claims its software has been explicitly programmed to recognize cyclists. A video that Waymo released in 2016 (back when it was still part of Google) shows one of its vehicles detecting and stopping for a wrong-way cyclist coming around a corner at night.

According to a May 2018 report from The Information, Uber’s vehicle did detect Herzberg before the fatal Tempe crash, but the system decided not to swerve: “The car’s sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber’s software decided it didn’t need to react right away.” A preliminary report by the National Transportation Safety Board (NTSB) confirmed that Uber had disabled the Volvo SUV’s built-in collision-avoidance feature, and that Uber’s own system detected Herzberg six seconds before the crash yet determined only 1.3 seconds before impact that emergency braking was needed.

However, Arizona prosecutors did not charge Uber, writing in a letter that “there is no basis for criminal liability for the Uber corporation arising from this matter.”

Uber announced on May 23, 2018, that it was shutting down its self-driving operations in Arizona. In July, it eliminated 100 self-driving positions in Pittsburgh and San Francisco. Uber’s self-driving program has since returned to Pittsburgh, where testing is limited to daylight hours.

The role of human “backup drivers” in AV testing has also come into question after Tempe police documents obtained by Gizmodo showed that driver Rafaela Vasquez was streaming a video on her phone at the time of Uber’s fatal crash. “The driver in this case could have reacted and brought the vehicle to a stop 42.61 feet prior to the pedestrian,” reads the report, which calls the crash “entirely avoidable.”
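
Some back-of-the-envelope physics illustrates why those fractions of a second matter. In the sketch below, the travel speed and braking rate are assumptions chosen for illustration, not figures from the investigation; only the 1.3-second window comes from the NTSB’s preliminary report.

```python
# Rough stopping-distance math: total = reaction distance + braking distance.
# The speed and deceleration below are illustrative assumptions.
MPH_TO_FPS = 5280 / 3600         # 1 mph is about 1.47 feet per second

speed_mph = 40.0                 # assumed travel speed
reaction_s = 1.3                 # the window flagged in the NTSB preliminary report
decel_fps2 = 21.0                # roughly 0.65 g, firm braking on dry pavement

v = speed_mph * MPH_TO_FPS                # speed in feet per second
reaction_dist = v * reaction_s            # ground covered before braking begins
braking_dist = v ** 2 / (2 * decel_fps2)  # ground covered while braking

print(f"traveled during reaction: {reaction_dist:.0f} ft")  # ~76 ft
print(f"traveled while braking:   {braking_dist:.0f} ft")   # ~82 ft
```

At those assumed numbers, the car covers more than 75 feet before the brakes even engage, which is why the police report’s conclusion that a stop was possible well short of the pedestrian hinged on the driver watching the road.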

In November 2019, the NTSB’s final report attributed the crash to human error, placing much of the blame on Vasquez. At the hearing, board members also criticized the federal government for overly lax safety regulations.

Self-driving companies also put their vehicles through endless tests using simulated city streets. Many traditional automakers use a facility named M City in Ann Arbor, Michigan, but the larger self-driving companies have built their own fake cities specifically to test interactions with humans who are not in vehicles. Waymo’s fake city, named Castle, even has a shed full of props—like tricycles—that might be used by people on streets so that Waymo’s engineers can learn how to identify them.

After Uber’s fatal crash, Toyota built a new facility to test its vehicles’ responses to “edge cases”—extreme situations too dangerous to test on public streets.

USDOT has been testing autonomous technology at the M City facility for many years.
M City

Will eliminating human drivers reduce traffic deaths?

The U.S. rate of traffic deaths was once far higher than it is now: in 1980, generally considered the deadliest year on U.S. streets, more than 50,000 people were killed. With safety features like airbags added to vehicles, stricter seat belt laws, and campaigns that stigmatized drunk driving, deaths declined significantly.

But over the last few years, the U.S. has seen a slight increase in traffic deaths again. Pedestrian fatalities in particular increased by 27 percent over the last decade, even as all other traffic fatalities decreased by 14 percent. There isn’t agreement on why these deaths are increasing, but some experts believe it’s because Americans are driving more: overall vehicle-miles traveled (VMT) reached an all-time high in 2017.

Given USDOT’s claim that 94 percent of crashes are caused by human error, reducing the number of humans behind the wheel seems like a fairly obvious way to reduce crashes. But it’s not just the number of human drivers that matters: the U.S. could also reduce the number of cars on roads to prevent fatalities, and autonomous vehicles might be able to help do that, too.

The real safety promise of autonomous vehicles is that they can be summoned on demand, routed more efficiently, and easily shared, so that not only does the overall number of single-passenger cars on streets decline, but so does the number of single-passenger trips, reducing overall miles traveled.

In addition, cities can use automated vehicles to tackle ambitious on-demand transit projects, like a proposed initiative to integrate shared self-driving vehicles into the public transit fleet. If cities can launch these kinds of “microtransit” systems as a first-mile/last-mile solution that helps more people reach fixed-route public transportation, that will also mean fewer people in cars and more people on safer modes of transit. According to a 2016 American Public Transportation Association study, traveling by public transportation is ten times safer per mile than traveling by car.

Without having to make room for so many cars, city streets can be narrowed, making even more room for pedestrians and bikes to safely navigate cities. In this way, autonomous vehicles have a great role to play as part of a Vision Zero strategy, which most major U.S. cities have implemented in order to eliminate traffic deaths.

A typical U.S. roadway remade as a safe, accessible street filled with autonomous technology, from shared taxibots to self-driving buses, from NACTO’s Blueprint for Autonomous Urbanism.
NACTO

But aren’t human-driven cars safer now, too?

While residents of only a few cities can summon an AV on demand right now, the truth is that much of the safety tech powering self-driving cars is making its way into today’s cars. Sophisticated collision-avoidance systems, for example, which can stop a vehicle if an object or person is detected in its path, are already being incorporated into new cars and buses.

This is why the way the National Highway Traffic Safety Administration (NHTSA) tests those kinds of safety innovations is also changing. Until recently, all safety standards were based on historical crash data, meaning the government had to track years and years of roadway incidents (and, in many cases, deaths) before making an official recommendation.

Now, technology is advancing so quickly that there’s not enough time to test every new idea for a decade. The government knows it needs to be more nimble.

In fact, that’s what happened with a recent USDOT recommendation that all cars be equipped with vehicle-to-vehicle communication (V2V), technology that allows cars to “talk” to each other. The recommendation was fast-tracked in 2015 by U.S. transportation secretary Anthony Foxx after detailed simulations and modeling showed that the benefits were obvious; there was no need to spend years collecting historical data.
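
What a V2V-equipped car transmits can be pictured as a compact status report broadcast to nearby vehicles several times per second, loosely modeled on the “basic safety message” in the SAE J2735 standard. The sketch below simplifies the fields and uses JSON purely for readability; real deployments use dedicated short-range radio, not JSON.

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class BasicSafetyMessage:
    """Simplified stand-in for the J2735 basic safety message."""
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float      # 0 = north, increasing clockwise
    brake_applied: bool
    timestamp: float

def broadcast(msg: BasicSafetyMessage) -> bytes:
    """Serialize the message for transmission (illustrative only)."""
    return json.dumps(asdict(msg)).encode()

msg = BasicSafetyMessage("veh-001", 33.4255, -111.9400, speed_mps=13.4,
                         heading_deg=90.0, brake_applied=True,
                         timestamp=time.time())
packet = broadcast(msg)
# A following car that receives this packet learns the car ahead is braking
# before its own cameras could even register the brake lights.
print(len(packet), "bytes")
```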

The same type of recommendation might be made for an aspect of autonomous tech. Once a clear safety benefit has been proven across the self-driving industry, a specific feature might become standard on all vehicles.

An 8-person autonomous shuttle by Navya travels a route at a speed of 15 mph in Downtown Las Vegas.
Keolis

Where are self-driving cars being tested?

About half of U.S. states allow testing of autonomous vehicles on public roads, but regulations for each state vary widely. The majority of testing is focused in a handful of states: Arizona, California, Georgia, Michigan, Nevada, Texas, Pennsylvania, and Washington.

California remains the busiest hub for the AV industry: There are currently 52 companies testing self-driving technology on the state’s streets. It’s also one of the most heavily regulated markets: California’s Department of Motor Vehicles requires companies to file for a permit and submit annual reports that include the number of miles driven and any crashes.

One performance measure that helps illustrate how the technology is improving, though it isn’t necessarily a safety metric, is the number of times a human driver has to take over per self-driven mile, known as a “disengagement.” California DMV records show that as self-driving programs log more on-road experience, they see fewer and fewer disengagements.
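
The arithmetic behind the metric is simple: divide miles driven by the number of disengagements reported. The sketch below uses made-up numbers, not figures from any company’s actual filing, to show how the improving trend in the DMV records would appear.

```python
# Hypothetical annual report data: miles driven and disengagements reported.
reports = [
    {"year": 2016, "miles": 100_000, "disengagements": 80},
    {"year": 2017, "miles": 250_000, "disengagements": 60},
    {"year": 2018, "miles": 500_000, "disengagements": 45},
]

for r in reports:
    miles_per_disengagement = r["miles"] / r["disengagements"]
    print(f'{r["year"]}: one disengagement every {miles_per_disengagement:,.0f} miles')
```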

Other states don’t require as much documentation as California, and companies aren’t necessarily required to make any information public. Arizona, for example, approved AV testing on public roads in 2016 without notifying its residents and didn’t require any reports from companies, although that has changed since Uber’s fatal crash.

Hills, snow, quirky local driving customs, and loose state regulations are some of the reasons Uber started testing its self-driving program in Pittsburgh.
AP Photo/Jared Wickerham

Does the federal government regulate autonomous vehicles?

In 2016, the U.S. government released its long-awaited guidelines on self-driving vehicles. The Department of Transportation’s 116-page document lists many benefits of bringing the technology to market, among them improved sustainability, productivity, and accessibility. But the report’s central promise is that autonomy will pave the way for policies that dramatically improve road safety.

Even President Obama made the case for safety in an op-ed that heralded the dawn of the new driverless age:

Right now, too many people die on our roads—35,200 last year alone—with 94 percent of those the result of human error or choice. Automated vehicles have the potential to save tens of thousands of lives each year. And right now, for too many senior citizens and Americans with disabilities, driving isn’t an option. Automated vehicles could change their lives.

In order to get cities across the country to start thinking about using autonomy to solve transportation problems, USDOT hosted the Smart City Challenge in 2016, which awarded $40 million to Columbus, Ohio, to develop a fleet of autonomous transit vehicles. As a result of the challenge, the 70 cities that competed now have blueprints for how to introduce AV tech to their transportation planning.

Under the Trump administration, much of the proposed legislation has centered on exemptions for automakers and on increasing the number of AVs allowed to operate on U.S. streets. In fact, in September 2017, USDOT and NHTSA issued updated AV guidelines with an even lighter regulatory touch, after industry leaders expressed concerns that federal regulation would stifle innovation.

In addition to the 2017 policy statement, Transportation Secretary Elaine Chao held preliminary hearings about autonomous vehicles where she affirmed the government would not play a heavy-handed role. “The market will decide what is the most effective solution,” she said. However, the aggressive development of V2V—which experts agree can work to make human-driven cars much safer as autonomous technology comes to market—has not been made a priority during her leadership.

On October 4, 2018, USDOT announced new guidance for the development and deployment of AVs, including the possibility that the department would change its safety standards to allow new types of autonomous vehicles to operate on U.S. roads. That could mean exemptions allowing automakers to produce vehicles without human-centered operating features like steering wheels.

Chao also addressed the fact that public acceptance was a key element for autonomous vehicle adoption. “Companies need to step up and address the public’s concerns about safety,” she said. “Because without public acceptance, the full potential of these technologies may never be realized.”

USDOT’s lack of regulation was called out as “laughable” by NTSB board member Jennifer Homendy during the November 2019 hearings about Uber’s fatal crash. “I actually think that there is a major failing on the federal government’s part and the state of Arizona, because they also didn’t have any standards in place and still don’t, for failing to regulate these operations,” she said.

In a speech at the Consumer Electronics Show on January 8, 2020, Chao announced AV 4.0, a new set of voluntary guidelines which reconfirms the federal government’s hands-off approach to AV regulation.

The plan was quickly panned by safety advocates, including Cathy Chase, president of Advocates for Highway and Auto Safety, who cited Uber’s fatal crash, among others, in a statement denouncing AV 4.0. “Despite these disturbing incidents, the agency tasked with ensuring public safety on our roadways has abrogated their responsibility to issue rules requiring minimum performance standards for AVs.”

Tesla’s Autopilot feature, one of many driver-assist features which allow control of the vehicle to switch from human to computer, can distract drivers or give them a false sense of security.
The Verge

What’s the difference between semi-autonomous and fully autonomous?

There’s one safety debate that continues to divide the self-driving industry: some automakers are still pushing for vehicles that pass control back and forth between human and computer, offering drivers the ability to toggle between manual and self-driving modes.

Two fatal Tesla crashes, one in 2016 and one in 2018, that occurred while the drivers were using the vehicle’s Autopilot feature illustrated the dangers of a semi-autonomous mode. As the NTSB noted in its report on the 2016 crash, semi-autonomous systems give “far more leeway to the driver to divert his attention to something other than driving.”

Full autonomy is the official policy recommendation of the Self-Driving Coalition for Safer Streets, a lobbying group that wants cars to eventually phase out steering wheels and let the software take over 100 percent of the time. This would completely eliminate the potential for human driving error.

In 2018, Waymo began conducting fully autonomous testing in Arizona without a human safety driver, and in 2019, began transporting passengers in fully autonomous vehicles. California now allows fully autonomous testing as well, but without passengers for now.

Especially after the Uber crash, San Francisco bike advocates worry that the tech isn’t powerful enough to see cyclists. The California Bicycle Coalition started a petition to stop fully autonomous vehicles from being tested on California streets.

At least for the near future, even fully autonomous vehicles will still have to contend with the mistakes of human drivers. To truly make self-driving technology the safest it can be, all the vehicles on the road should be fully autonomous—not just programmed to obey the rules of the road, but also to communicate with each other.

In 2017, the National Association of City Transportation Officials (NACTO) created a Blueprint for Autonomous Urbanism, which encourages cities to deploy fully autonomous vehicles that travel no faster than 25 mph as a tool for making streets safer, “with mandatory yielding to people outside of vehicles.” A dozen U.S. cities, including Austin, Detroit, and Columbus, Ohio, are currently testing slow-moving autonomous shuttles like this on city streets.

From new street designs to accessibility guidelines to a focus on data sharing, NACTO’s policy document provides the most detailed AV recommendations for U.S. urban transportation planners. To plot the safest path forward for self-driving vehicles—and for cities to reap the many other environmental and social benefits of the technology—AVs should provide shared rides in regulated fleets, integrate with existing transit, and operate in a way that prioritizes a city’s most vulnerable humans above all users of the streets.
