Ethics of Self-Driving Cars

Autonomous Vehicles Are Here

No doubt, the hoopla over Google developing a self-driving car has not escaped you. And unless you have been living in a cave, you also read about the Google car that smashed into a bus: the car assumed the bus would stop, while the bus driver assumed the car would stop. We humans think logically, but we also have a bit of self-preservation built into our decision-making process.

Computers do not.

A very interesting article in the October 22, 2015 MIT Technology Review, “Why Self-Driving Cars Must Be Programmed to Kill,” examined the morality of the choices self-driving cars must make.

The article examines a central ethical dilemma: in the event of an unavoidable crash, should the vehicle be programmed to minimize loss of life, even if that means sacrificing its own occupants, or should it protect the vehicle’s occupants at all costs?

Self-Driving Cars Make Decisions

An ethical study by Jean-Francois Bonnefon at the Toulouse School of Economics presented the quandary of a self-driving vehicle suddenly and unavoidably careening out of control toward 10 pedestrians.

The vehicle cannot stop in time, but it can avoid killing the 10 pedestrians by steering into a wall, which would kill the driver (you!). Starting from the premise that the best result is the one that minimizes loss of life, the “best” outcome is to kill 1 person rather than 10. This may not be what the person in the driver’s seat wants to hear!
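To make the logic concrete, here is a minimal, deliberately oversimplified sketch of what a “minimize loss of life” rule could look like in code. The function name and casualty estimates are invented for illustration; no manufacturer has published its actual decision logic.

```python
# A hypothetical sketch of a "minimize loss of life" crash rule.
# The maneuver names and casualty estimates are invented for
# illustration only; real systems would be vastly more complex.

def choose_maneuver(options):
    """Return the maneuver with the fewest expected deaths."""
    return min(options, key=options.get)

# Bonnefon's quandary: stay the course and kill 10 pedestrians,
# or steer into the wall and kill the 1 occupant (you!).
options = {
    "stay_course": 10,      # expected pedestrian deaths
    "steer_into_wall": 1,   # expected occupant deaths
}

print(choose_maneuver(options))  # -> steer_into_wall
```

Notice what the sketch makes plain: the car is not “deciding” anything in a moral sense. It is mechanically executing an objective that some human wrote down in advance.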

Bonnefon varied the details, such as whether the driver or the computer made the decision (hit the wall or hit the pedestrians), to gauge how real people thought. Guess what people said? They wanted the car to avoid hitting the pedestrians, as long as they themselves were not the car’s occupants!

Patrick Lin gave a TED-Ed talk that examined the issue. He acknowledged that self-driving cars should, in theory, drastically reduce crashes by removing human error from the equation. He then presented a scenario with two motorcyclists riding next to the car, one wearing a helmet and one without. Should the car’s computer strike the helmeted rider in the expectation that she might survive, or should it strike the non-helmeted rider for acting irresponsibly? And who decides this?
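Lin’s dilemma can also be restated in code. The sketch below is purely illustrative, with invented survival probabilities; its point is that the “answer” flips depending on which objective a human chose to program.

```python
# A hypothetical sketch of Lin's motorcyclist dilemma. The survival
# probabilities are invented; the point is that the chosen target
# depends entirely on which objective a human programmed in.

riders = {
    "helmeted": {"survival_prob": 0.6, "wore_helmet": True},
    "unhelmeted": {"survival_prob": 0.2, "wore_helmet": False},
}

def target_minimizing_deaths(riders):
    # Objective 1: strike whoever is most likely to survive impact.
    return max(riders, key=lambda r: riders[r]["survival_prob"])

def target_penalizing_risk(riders):
    # Objective 2: strike whoever "acted irresponsibly" (no helmet).
    return min(riders, key=lambda r: riders[r]["wore_helmet"])

print(target_minimizing_deaths(riders))  # -> helmeted
print(target_penalizing_risk(riders))    # -> unhelmeted
```

Two defensible-sounding objectives, two opposite outcomes. Someone has to pick one, which brings us to the next point.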

Decision-Making for Autonomous Vehicles Comes From Humans Who Program Them

Let’s take this one step further, as Cory Doctorow did in a Guardian article entitled “The Problem With Self-Driving Cars: Who Controls The Code?” Assuming the ethical decision is made to program the vehicle of the future so that it kills the driver/occupants to save others, what prevents the driver from overriding the computer? This is something Apple has certainly considered, with its phone designed to be tamper-resistant. There are even laws against trying to “jailbreak” computer software and iPhones. But then again, there are laws against many things that people still ignore. Where there is a will, there is a way.

Autonomous cars are computers, and computers can be tampered with.

This brings us to an even more frightening and futuristic thought. Should the owner/driver be allowed to know how the car is programmed? Should this information be publicly known? Should government agencies (the National Highway Traffic Safety Administration? The Department of Transportation?) be involved in the process?

The cars of the future will be designed and marketed by private, for-profit companies, just as cars are now. If they don’t work, no one will buy them. However, with “regular cars,” only severe malfunctions can be blamed on the manufacturers. In most products liability cases involving motor vehicles, it takes huge amounts of money, and often epic failures (steering wheels that break off, stuck accelerators), to make pursuing such a case worthwhile.

What about with self-driving cars? If you are the driver and you are killed because the computer made the car swerve into the wall to save those 10 pedestrians, can your surviving relatives sue the manufacturer for wrongful death? Can the injured pedestrians (or their surviving family, if they are killed) sue Google because it programmed the vehicle the other way? What role does private liability insurance play in all this?

Yes, there are lots of unanswered questions!

Who Oversees This?

We have seen with ride shares that just because the public likes something does not make it safe or well regulated. In fact, it is often the unregulated nature of cutting-edge economic ideas that makes them inexpensive and appealing, and also dangerous.

Where self-driving cars enter this equation will be interesting. What will push legislators to regulate the industry, especially given the ethical components that are part and parcel of the very concept of autonomous vehicles? Will the federal government push for uniform regulation, or will regulation vary from state to state? Will some states take different positions on what the computer programs must do in certain situations (kill the occupant versus the pedestrians)?

What It All Means

We fully expect that self-driving vehicles will become the norm in the future, which should (eventually) make motor vehicle crashes, and the injuries and damages they cause, a thing of the past. In the meantime, how do we integrate the technology we currently possess with what is forecast? Who decides what is right and what is required? Who is responsible, and to what extent? The future will provide answers to these questions. For now, if you are involved in a motor vehicle crash, please call a personal injury lawyer right away.