Author: Sam Chase, The Connected Car

The potential for self-driving cars to make roads safer is immense. Some estimates suggest autonomous vehicles could reduce traffic deaths by as much as 90%. Given the chance to save that many lives, it's hard to argue that self-driving cars shouldn't be developed and deployed as quickly as possible. A deeper level of thinking, however, complicates this logic.
For example, the lives lost in self-driving car trials may pale in comparison to the lives they ultimately save, but that doesn't make it acceptable for deaths to occur during trials. Plus, there are those who are skeptical of self-driving cars' ability to bring about the safety benefits that have been promised. It's a challenging ethical dilemma that currently lacks a dominant framework to address it.
But if you find Carnegie Mellon philosophy professors Alex John London and David Danks persuasive, that framework may already exist, albeit in a different industry. In an opinion piece published in IEEE Intelligent Systems titled "Regulating Autonomous Systems: Beyond Standards," the pair presented a compelling argument that autonomous vehicles ought to be regulated much the way the US Food and Drug Administration regulates drugs: tested rigorously before approval.
The need for such regulation, according to Danks, stems from concerns about both safety and practicality. If early self-driving car technology performs poorly and endangers human drivers, it could hamper automakers' long-term ability to get their autonomous vehicles on the road.
"In the case of drugs, we have pretty well established procedures for looking at, for example, when a drug has more negative side effects than anticipated," Danks told Jalopnik. "There's a great risk if [driverless cars] are put on the road before we understand better how they're going to behave, there will be a number of accidents that will set back the possibility of these things being regulated in the right way."
The process Danks and London describe mirrors that of drug approval. "Preclinical trials" for self-driving cars would place them in various environments, record how they respond, and adjust their technology accordingly. The next step would be human-assisted voyages, in which autonomous vehicles navigate roadways with a person acting as co-pilot. As each stage was completed successfully, regulations would allow progressively deeper integration of AVs into normal, everyday operations.
The argument can be made, however, that the horse(power) is out of the barn. Self-driving cars are already being tested on public roads by the thousands, and legislation currently making its way through the House would put tens of thousands more into traffic. But Danks and London do suggest an approach and philosophy toward new technology that can guide our actions going forward.
"Autonomous vehicles have the potential to save lives and increase economic productivity," said London. "But these benefits won't be realised unless the public has credible assurance that such systems are safe and reliable."