Developing Moral Self-Driving Vehicles

The "trolley problem," a famous thought experiment, poses the question: Should you pull a lever to reroute a runaway trolley so that it kills one person instead of five? What if, as an alternative, you had to push someone into the trolley's path in order to stop it? Which option is moral in each of these situations?
Philosophers have argued for decades about whether we should choose a utilitarian solution (whatever is best for society, in this case fewer deaths) or one that prioritizes individual rights (such as the right not to be purposefully placed in danger).
In recent years, designers of automated vehicles (AVs) have confronted similar problems when thinking about how AVs should handle unforeseen driving scenarios. What should the AV do, for instance, if a bicycle abruptly enters its lane? Should it hit the cyclist or swerve into oncoming traffic?
The answer is right in front of us, says Chris Gerdes, co-director of the Center for Automotive Research at Stanford (CARS) and professor emeritus of mechanical engineering: the social agreement we already have with other drivers, as set out in our traffic laws.
How might current traffic regulations influence automated vehicles’ moral behavior?
Ford's company policy is to always observe the law. Our research grew out of a simple question: Does that policy apply to automated driving? And under what circumstances, if ever, is it moral for an AV to break the law?
As we investigated these issues, we found that, in addition to the traffic code, appellate rulings and jury instructions also contribute to the social contract that has evolved over the more than a century that we have been operating automobiles. At the heart of that social contract is the obligation to drive safely and with consideration for other road users, departing from the law only when it's absolutely necessary to do so. Basically: an AV may break the law in the same circumstances where a careful human driver reasonably would.