The Lose-Lose Ethics of Testing Self-Driving Cars in Public

March 23, 2018

Here’s the strange thing about where I live: When I walk outside my office, down to the busy, honk-filled four-lane road that runs past it, I’m immediately part of a large-scale science experiment. A lot of us are, here in San Francisco, in metro Phoenix, Arizona, in Pittsburgh, Pennsylvania. We didn’t sign any forms or cast any votes, but here we are, in a living lab for self-driving tech.

A lot of the time, that’s exciting. One day, maybe crossing the street on foot at night won’t feel like taking your life into your hands. This really could be the way to stop the deaths of 40,000 people on US roads every year. The blind, the old, those who can’t operate vehicles—their lives might be wholly changed in a world where vehicles drive themselves.

Other times, it’s terrifying. This week, the living lab claimed its first life: a self-driving Uber hit and killed 49-year-old Elaine Herzberg late Sunday evening as she crossed a wide road in Tempe, Arizona.

So now is the time to ask, even if it feels a bit late: How, exactly, do we feel about unleashing this tech on public streets, inflicting it on people without their consent or even their knowledge? What responsibilities does a government have to protect me and you and everyone we know as we live inside this science experiment? Sure, autonomous vehicles need to test and collect data in the real world, where they’ll encounter the strange and very human things lab simulators couldn’t imagine. But how did I get roped into this?

I don’t imagine I’m the only one feeling this way. In a paper published last fall, a group of behavioral researchers predicted reactions to robocar crashes would be “outsized”—that is, much greater than the reaction to an “everyday” pedestrian road death. The kind we saw 6,000 of in the US last year.

“People are inconsistent about what principles they want autonomous vehicles to follow,” the researchers wrote. “Individuals recognize the utilitarian approach to be the more ethical, and as citizens, want the cars to save the greater number. But as consumers, they want self-protective cars.” No, I don’t want to die.

So what are government regulators, the folks charged with making me both feel and be safe, to do?

Thus far, the US federal government has taken a hands-off approach, using a series of voluntary guidelines to gently shepherd the companies developing autonomous tech. “We are not in the business—we don’t know how—to pick the best technology or to pick the winners,” Transportation Secretary Elaine Chao said earlier this month. “The market will decide what is the most effective solution.”

The DOT’s newest version of those guidelines asks autonomous vehicle developers to hand in a “voluntary safety self-assessment,” with details like how their vehicles are designed, how they react in emergencies, and how the companies approach cybersecurity. So far, only Waymo and General Motors have submitted such documents. Meanwhile, 52 companies have permits to test autonomous vehicles in California.

The DOT has also hosted conversations on how companies might share data, in the case of one stumbling onto a glaring technical flaw that others could use to improve their own software. But those are still just conversations. An industry predicted to be worth trillions of dollars in just a few decades is wary of the competition.

With federal legislation addressing AVs stuck in limbo, the states have stepped in. Amidst promises of techified economic booms, some have raced to provide tech developers with exactly the conditions they’d like: access to public roads, free cooperation with local law enforcement, and no demands that companies reveal what they’re up to, or why. States like Florida, Pennsylvania, and Arizona are down with the living lab.

Others, a bit less so. California’s Department of Motor Vehicles began handing out autonomous vehicle permits in 2014, and has only just approved the testing of vehicles with no one inside (provided there’s a remote operator monitoring the thing from afar). The department also requires developers to publicly report all crashes and to submit annual reports on “disengagements”—the moments when a human has to retake control of the car. But even those numbers aren’t reported consistently across companies, and they leave much to be desired.

“It makes it very hard to collect the information and actually make some insightful recommendations out of the data,” says Francesca Favaro, an engineer who studies autonomous systems safety at San Jose State University. “These failures are precursors to more dire accidents. We want to take a careful look at what’s going on in these testing stages.”

Eventually, the federal government could issue performance standards for autonomous vehicles, which would demand they perform specific tasks before hitting the road. But no one has figured out what those standards will look like yet.

Regulation, or at least settling on a philosophical approach to regulation, always takes time. But the government’s pokiness here—with technology that feels and looks really scary, even if it could actually make us so much safer—seems particularly damaging.

The unfortunate truth is that there will always be tradeoffs. A functioning society should probably create space—even beyond the metaphorical sense—to research and then develop potentially life-saving technology. If you’re interested in humanity’s long-term health and survival, this is a good thing. (Even failure can be instructive here. What didn’t work, and why?) But a functioning society should also strive to guarantee that its citizens aren’t killed in the midst of beta testing. We’ve made this work for experimental drugs, finding an agreeable balance between risking lives today and saving them tomorrow.

An analysis from the Rand Corporation published last year suggests the living lab will be worth it, finding that more than half a million lives might be saved by putting imperfect tech on the road rather than waiting for it to be flawless before deployment.

Still, it's uncomfortable. “Society tolerates a significant amount of human error on our roads,” Gill Pratt, who heads up the Toyota Research Institute, said last year. “We are, after all, only human. On the other hand, we expect machines to perform much better.”

Maybe that’s a fair thing to expect—but only if we’re willing to let the things learn, alongside and among us.

