The Ethics of Self-Driving Cars


Bill Ford, great-grandson of Henry Ford, is asking stakeholders to get together to decide on an ethical system for driverless cars. The problem isn’t whether to allow passengers to eat in the cars or how to respond to drivers’ road rage.

It’s who should live and who should die.

If a self-driving car is about to be hit by another vehicle, it has a decision to make. It could veer onto a sidewalk with people on it, risking those people's deaths. Or it could allow the collision to take place, possibly killing the occupants of the car. There are plenty of similar scenarios. Imagine that a group of kids runs after a ball (or a virtual Pokémon) right into the path of a self-driving car with one occupant. If the car can save the children by steering into a wall, should it do so at the risk of the occupant's life? If an accident is unavoidable, should the car choose to injure or kill the smallest number of people, or to save itself and its occupants at all costs?

These questions probably remind you of freshman philosophy class. They’re all versions of a famous ethical dilemma known as the Trolley Problem. It boils down to whether you would kill one person to save the lives of several others. The problem can be jazzed up by questioning whether you would kill yourself to save others, whether you would kill multiple people to save the life of someone you love, whether you would take action toward saving some by killing one or merely allow it to happen, and so forth. Utilitarianism says you should always choose the option that leads to the least loss of life, but most people don’t make it that simple for themselves.

MIT Technology Review reported on an experiment by Jean-François Bonnefon at the Toulouse School of Economics, in which a test of this kind was given to a large number of people, specifying that the self-driving car had to make these life-or-death decisions. Bonnefon found that most people were fine with utilitarianism in self-driving cars: they felt that the smallest loss of life would be the way to go, rather than protecting the car's occupants at all costs.

But those who agreed that this was the right decision were also less likely to want to buy a self-driving car. They wanted other people’s self-driving cars to spare as many lives as possible, but they didn’t want to be in one of those cars themselves.

Slate posited another quandary for self-driving cars: given a choice among rear-ending a truck, hitting a motorcyclist who is wearing a helmet, and hitting one who is not, what should the car do? This situation might not come up that often, but it illustrates the point that any ethical algorithm for driverless cars will have to be more complicated than the Trolley Problem.
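To see why, here is a deliberately over-simple sketch of a "minimize expected harm" rule applied to a scenario like Slate's. Everything in it is invented for illustration: the option names, the injury probabilities, and the counts of people at risk are hypothetical assumptions, not a description of how any real vehicle is programmed.

```python
# A hypothetical sketch of a naive "minimize expected harm" rule.
# All option names, probabilities, and counts below are invented for
# illustration; they do not describe any real vehicle's logic.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    injury_probability: float  # assumed chance the maneuver causes serious harm
    people_at_risk: int        # assumed number of people exposed to that harm


def expected_harm(option: Option) -> float:
    """Naive utilitarian score: chance of serious harm times people exposed."""
    return option.injury_probability * option.people_at_risk


options = [
    Option("rear-end the truck", injury_probability=0.7, people_at_risk=2),
    Option("hit the helmeted motorcyclist", injury_probability=0.5, people_at_risk=1),
    Option("hit the unhelmeted motorcyclist", injury_probability=0.9, people_at_risk=1),
]

# Under these made-up numbers the rule picks the helmeted rider, because the
# helmet lowers the expected harm. It targets the person who took precautions,
# which is exactly the objection Slate's quandary is meant to raise.
best = min(options, key=expected_harm)
print(f"Chosen maneuver: {best.name} (expected harm {expected_harm(best):.2f})")
```

Even this toy version shows that the hard part isn't the arithmetic; it's deciding which factors, such as helmets, occupancy, or fault, may legitimately enter the cost function at all.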

Industrial machinery doesn't have to make life-and-death decisions. Yet. If collaborative robots continue to develop at their current rate, they may have to at some point. Ford wants society to come up with an agreement on the ethics that should govern machines making life-and-death decisions before manufacturers have to start programming the machines to make those decisions.
