Creating an algorithm for the Trolley Problem
On the Playing Field we have
- You, the decision-maker
- The five people on Track 1
- The one person on Track 2
- A variant of the Trolley Problem takes away the switch and the one person on Track 2, and adds a fat person whom you can push in front of the trolley to stop it
We could add
- The person who caused the trolley to run away
- The operator of the trolley
- The owner of the railroad track
- The trolley itself, if a play results in its destruction or damage
- Likewise the contents of the trolley car
- Anyone who has an interest in the contents of the trolley car
- Parties such as family or insurance companies that have an interest in the other players
- The switchgear
- The tracks
- The fact that you, the decision-maker, are remote from the playing field and your life is never in danger
- Your degree of knowledge about the other players
The plays open to you are
- Turn around and walk away
- Send the trolley to the track with the five people
- Send the trolley to the track with the one person
- Sacrifice the fat person
When a human is faced with a dilemma, it is easiest to do nothing. That’s why the Trolley Problem specifies that at the start of the game, the switchgear is set to the track with the five people. If it were set to the track with the one person, it would be easy to do nothing and pretend that you had not made a decision. The consequences to your mental state would not be as great.
Some philosophers have tried to split hairs by saying that it makes a difference if you operate the switchgear. If you change the switchgear from Track 1, the default setting, to Track 2, you are deliberately causing the one person on that track to die. But whatever you do, whether by commission or by omission, you are deciding who should live and who should die.
People from different societies and in different times will value the players very differently. Your own valuations may change substantially over the course of your life and even from day to day.
All human lives are equally valuable, in theory. In fact, your valuations are biased. Think about your valuations if:
- You are a typical hetero male; the one person on Track 2 is a beautiful, sexy young woman and the five people on Track 1 are fat slobs;
- One of the people is someone very dear to you;
- You have devoted your life to the preservation of orang-utans and the five are five orang-utans that you have rescued;
- In your society, men are valued higher or lower than women and children.
Now change the playing field so that you are one of the people on the tracks, but still able to control the switchgear. If you were the one person on your own, you would probably sacrifice yourself. But if you were one of the five people?
An AI algorithm for the Trolley Problem will avoid these subjective factors as far as it can. But it may still have to place relative values on human and non-human life.
One of the rules of a philosophical discussion is that as soon as you approach a common-sense practical solution, someone will change the playing field, like a child inventing reasons why it should not stop playing and go to bed.
In handling the Trolley Problem, an ethical AI would simply count the number of lives and choose the play that does the least harm. AI would let the one person die.
But then somebody will inevitably ask, “What if the one person is a saint and the five people are sinners?”
An AI decision-maker, which I am going to call an Auton (though some would opt for Borgia), will count the number of lives associated with each play and choose to let the one person die. The values which humans attach to individuals would be Privileged Information to an Auton and not be taken into account at all. Using my terminology, they would not be on the playing field.
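The Auton's decision rule can be sketched in a few lines of Python. The play names and death counts below are assumptions drawn from the scenario described above, not part of any real system; the point of the sketch is that human valuations never appear as inputs, so they cannot influence the result.

```python
# A minimal sketch of an Auton's decision rule for the Trolley Problem.
# Each play maps to the number of lives lost if that play is chosen.
# These counts are assumed from the scenario above.
PLAYS = {
    "walk away": 5,                # switch is set to Track 1 by default
    "send trolley to Track 1": 5,
    "send trolley to Track 2": 1,
}

def auton_decide(plays):
    """Choose the play that costs the fewest lives.

    Names, saintliness, beauty, and other human valuations are
    Privileged Information: they are never passed to this function,
    so they are not on the playing field at all.
    """
    return min(plays, key=plays.get)

print(auton_decide(PLAYS))  # -> send trolley to Track 2
```

Doing nothing ("walk away") is counted as a play like any other, which reflects the point made earlier: omission is still a decision, and the Auton scores it by its body count rather than by the comfort it offers the decision-maker.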