The self-driving car revolution reached a momentous milestone with the U.S. Department of Transportation’s release in September 2016 of its first federal policy guidelines for autonomous vehicles. Discussions about how the world will change with driverless cars on the roads, and how to make that future as ethical and responsible as possible, are intensifying. Some of these conversations are taking place at Stanford.
The topic of ethics and autonomous cars will be discussed during a free live taping of an episode of Philosophy Talk, a nationally syndicated radio show co-hosted by professors Ken Taylor and John Perry, at Cubberley Auditorium.
Stanford News Service talked to several Stanford scholars for their insights on the most significant ethical questions and concerns when it comes to letting algorithms take the wheel.
Trolley problem debated
A common argument on behalf of autonomous cars is that they will decrease traffic accidents and thereby increase human welfare. Even if true, deep questions remain about how car companies or public policy will engineer for safety.
“Everyone is saying how driverless cars will take the problematic human out of the equation,” said Taylor, a professor of philosophy. “But we think of humans as moral decision-makers. Can artificial intelligence actually replace our capacities as moral agents?”
That question leads to the “trolley problem,” a popular thought experiment ethicists have mulled over for about 50 years, which can be applied to driverless cars and morality.
In the experiment, one imagines a runaway trolley speeding down a track toward five people tied to it. You can pull a lever to divert the trolley onto another track, where only one person is tied. Would you sacrifice the one person to save the other five, or would you do nothing and let the trolley kill the five?
Engineers of autonomous cars will now have to tackle this question and other, more complicated scenarios, said Taylor and Rob Reich, the director of Stanford’s McCoy Family Center for Ethics in Society.
“It won’t be just the choice between killing one or killing five,” said Reich, who is also a professor of political science. “Will these cars optimize for overall human welfare, or will the algorithms prioritize passenger safety over the safety of others on the road? Or imagine if automakers decide to put this decision into consumers’ hands, and have them choose whose safety to prioritize. Things get a lot trickier.”
But Stephen Zoepf, executive director of the Center for Automotive Research at Stanford (CARS), along with several other Stanford scholars, including mechanical engineering Professor Chris Gerdes, argues that agonizing over the trolley problem isn’t helpful.
“It’s not productive,” Zoepf said. “People make all sorts of bad decisions. If there is a way to improve on that with driverless cars, why wouldn’t we?”
Zoepf said the more important ethical question is what level of risk society is willing to incur with self-driving cars on the road. For the past several months, Zoepf and his CARS colleagues have been working on a project on the ethical programming of automated vehicles.
“We say, ‘let’s look at the tradeoffs inherent in safety and mobility,’” Zoepf said. “Should there be a designated right of way for automated vehicles, for example, or how fast should we permit automated vehicles to travel?”
Loss of jobs
Another ethical concern is the number of jobs that will be lost if self-driving vehicles become the norm, Taylor and Reich said. More than 3.5 million truck drivers haul cargo on U.S. roads, according to the latest statistics from the American Trucking Associations, a trade association for the U.S. trucking industry.
“You can’t outsource driving,” Taylor said. “Technology has always destroyed jobs but created other jobs. But with the current technology revolution, things may look different.”
Technological developments can cause the loss of jobs. But tech companies and governments can and must take steps to prepare for those losses, said Margaret Levi, professor of political science and the director of the Center for Advanced Study in the Behavioral Sciences.
“We have to be prepared for this job loss and know how to deal with it,” Levi said. “That’s part of the ethical responsibility of society. What do we do with people who are displaced? But it is not only the transformation in labor. It is also the transformation in transport, private and public. We must plan for that, too.”
Transparency and collaboration
Some scholars have also pointed out the need for greater transparency in the design of driverless cars. “Should it be transparent how the algorithms of these cars are made?” Reich said. “The public interest is at stake, and transparency is an important consideration to inform public debate.”
But no matter their stance on a particular issue with self-driving cars, the scholars agree that there needs to be greater collaboration among disciplines in the development stage of this and other revolutionary technology.
“We need social scientists and ethicists on the design teams from the get-go,” Levi said. “That won’t resolve all the questions, but it would at least be a start to dealing with some of them.” At Stanford, some of these collaborations are already taking place.
Jason Millar, an engineer and postdoctoral research fellow with the Center for Ethics in Society, is also working on the CARS ethical programming project. He is tackling how to translate knowledge developed in academic and philosophical circles into the daily design work of technology and artificial intelligence products.
“The idea is to address the concerns upfront, designing good technology that fits into people’s social worlds,” Millar said.