A Real-life Example
I’m pretty sure that no ethics have been programmed into driverless cars so far. The driverless car is simply programmed to avoid collisions; if it does its best and human lives are still lost, too bad.
Already, driverless cars may very well be programmed to select the play that yields the softest collision. But let’s look at a realistic scenario where the choice of plays has a strong ethical character. A hard collision with another car is unavoidable and these are the plays available to the Auton:
- Protect the occupants of this car even if it means the death of occupants of the other car.
- Accept a serious risk of injury to the occupants of this car if doing so avoids the death of occupants of the other car.
And both of these plays are complicated if any of this scenery is present on the playing field:
- The other car is driven by a human who caused the collision.
- A mechanical failure of either car caused the collision. Presumably future driverless cars will alert each other when they are out of control.
- One car holds more occupants than the other. In a further refinement, driverless cars might announce how many occupants they carry.
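To make the dilemma concrete, the plays and complications above can be sketched as a tiny decision function. Everything here is an illustrative assumption — the `Play` fields, the harm weights, and the fault discount are hypothetical, not a real driverless-car API, and the discount rule itself is exactly the kind of debatable ethical choice the scenario raises.

```python
# Hypothetical sketch of an Auton ranking its available "plays".
# All names, weights, and rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Play:
    name: str
    own_deaths: float      # expected deaths among this car's occupants
    other_deaths: float    # expected deaths among the other car's occupants
    own_injuries: float    # expected serious injuries among this car's occupants

def choose_play(plays, other_at_fault=False):
    """Pick the play with the lowest weighted expected harm.

    Deaths weigh far more than injuries. If the other car's human driver
    caused the collision, harm to that car is discounted slightly — one
    possible (and debatable) ethical rule.
    """
    discount = 0.8 if other_at_fault else 1.0
    def harm(p):
        return (10.0 * p.own_deaths
                + 10.0 * discount * p.other_deaths
                + 1.0 * p.own_injuries)
    return min(plays, key=harm)

plays = [
    Play("protect occupants", own_deaths=0.0, other_deaths=0.9, own_injuries=0.1),
    Play("accept injury",     own_deaths=0.0, other_deaths=0.0, own_injuries=0.8),
]
print(choose_play(plays).name)  # prints "accept injury" under these weights
```

Note that every number in this sketch — the 10:1 death-to-injury ratio, the 0.8 fault discount — encodes an ethical judgment someone had to make explicitly. That is precisely the point: the programmer cannot avoid taking a position.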
Now you see why the ethics of Artificial Intelligence are so important!
In the case of driverless cars, Autons will certainly be held to far higher ethical standards than human drivers are, such are our prejudices.