Caution: This Car Is a Consequentialist

The technology expo Mobile World Congress was held this week, and among the shiny new phones and gadgets, a number of reveals pertained to the future of motoring. Across the board, firms are leading with the idea of interconnected devices and the Internet of Things, and the automobile industry is no different.

AT&T showed how to control a “smart home” from your “connected car,” while Visa wants us to use our wheels to order pizza. Tech and automobile companies are converging and partnering, with both Apple and Google racing ahead to provide dashboard operating systems. Of course, Google’s most distinctive contribution to motoring so far is still its cartoon-like driverless cars, part of a potentially transformative sector that other companies are piling into.

This week Renault Nissan announced plans to bring an autonomous, connected car to market by 2016. This wouldn’t be as far down the line as a fully driverless car: pending regulatory approval, it would be capable of driving autonomously only in traffic jams. While Renault Nissan’s head predicted that autonomous roadway travel is not far off, moving beyond that point is far more difficult, because we simply can’t currently ensure that autonomous vehicles make rational decisions in emergencies.

Questions of which emergency choices would be rational, and indeed which would be the most ethical to take, are some of the most interesting in the pursuit of automated transportation. Such cases bear a strong resemblance to the classic Trolley Problem thought experiment:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: 1) Do nothing, and the trolley kills the five people on the main track. 2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

In philosophy this problem (along with permutations involving fat men being shoved off bridges and looping tracks) is used to tease out consequentialist versus deontological intuitions—should you always act to save the many, or is it wrong to treat people as a means to an end?—as well as moral distinctions between foreseeing and intending a death, and between killing and letting die.

The ethics of driverless cars in this sense are particularly juicy. Unlucky drivers may experience their own trolley-like problems: should they swerve into the next lane or aim instead for the pavement? Generally, though, we’re pretty forgiving of a bad call made in a split second and under extreme duress, and wouldn’t really consider a particular choice an ‘ethical’ one.

However, what driverless cars do in a dangerous situation will be thought far more significant. Even if cars ‘learn’ with road time and experience, their instinctive behavior will already be premeditated and decided: written into the car’s program and consciously chosen by a coder.
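
To see what that premeditation looks like, here is a minimal, purely hypothetical sketch in Python (the function, names, and numbers are invented for illustration, not drawn from any manufacturer’s actual software):

```python
# Hypothetical sketch: a car's "split-second instinct" is really a rule
# a coder wrote down long before any emergency occurs.

def emergency_maneuver(casualties_if_straight, casualties_if_swerve):
    """Decide between 'straight' and 'swerve' in an unavoidable crash.

    The comparison below *is* the car's moral philosophy: here the
    coder has premeditated a simple casualty-minimizing rule.
    """
    if casualties_if_swerve < casualties_if_straight:
        return "swerve"
    return "straight"

# The trolley problem as the car would meet it: five people ahead, one
# in the next lane. The "decision" was made the day this file was written.
print(emergency_maneuver(casualties_if_straight=5, casualties_if_swerve=1))
# -> swerve
```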

This predetermination of the car’s behavior (and by extension, its moral philosophy) makes trolley-related problems both an ethical and a legal minefield. Regardless (or perhaps because) of how effective driverless cars may be in reducing overall accidents, lawsuits are likely to be significant, and expensive. Whether a car is programmed to minimize total casualties or to never hit an innocent bystander, there will likely come a time when someone asks, “Why was my loved one already marked out to die in this situation?” Applying different moral codes will inevitably lead to very different ideas of what it is acceptable for a driverless car to do.

Picking the “right” ethical choice in a range of hypothetical situations will therefore be important, especially if the deployment of driverless cars relies on political and populist goodwill. There’s no end of considerations—is a foreseen death acceptable? Is a child’s life more valuable than an adult’s? How do the number and ages of individuals weigh against one another? Should a driverless car prioritize the safety of its passengers over pedestrians, or should it try to minimize third party harm? Should public and private vehicles act differently? Ultimately it may be governments who insist on taking these decisions, but that doesn’t make them any easier.
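
As a purely illustrative sketch, assuming invented names and weights rather than anyone’s actual policy, the questions above can be read as literal parameters in the car’s code; changing a single number changes the choice the car makes:

```python
# Illustrative only: the ethical questions above become numeric weights,
# and different moral codes are just different parameter settings.

def harm(outcome, passenger_weight=1.0):
    """Weighted casualty count; lower is 'better' under this moral code."""
    return sum(passenger_weight if person == "passenger" else 1.0
               for person in outcome)

def choose(outcomes, passenger_weight=1.0):
    """Pick the maneuver whose outcome has the lowest weighted harm."""
    return min(outcomes, key=lambda m: harm(outcomes[m], passenger_weight))

# An unavoidable crash: stay in lane and hit two pedestrians, or swerve
# and sacrifice the single passenger.
outcomes = {
    "straight": ["pedestrian", "pedestrian"],
    "swerve": ["passenger"],
}

print(choose(outcomes))                        # equal weights -> 'swerve'
print(choose(outcomes, passenger_weight=3.0))  # passengers first -> 'straight'
```

Whoever sets those weights, whether a coder, a regulator, or the buyer, is effectively doing moral philosophy by parameter.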

There are a number of articles on this sort of thing, and it will be interesting to hear further development and discussion of these issues as driverless technology evolves. However, one solution to “choosing” the right outcomes, posited by Owen Barder, which I found interesting, would be to let the market decide, with each person purchasing a car programmed with their own moral philosophy. This may not work out in practice, but I do like the idea of a “Caution: This Car Is a Consequentialist” bumper sticker.

Charlotte Bowyer is the head of digital policy at the Adam Smith Institute.

This article was reprinted with permission from adamsmith.org
