Moral Machine: When, Exactly, Should A Driverless Car Kill People?
Moral Machine is a site that presents you with various scenarios involving a driverless car… and allows you to determine the best outcome. “Best” is kind of a tough word here, because in several instances, death is unavoidable. Your job, as the arbiter, is to determine which deaths (and which circumstances) make the most sense to you.
You proceed from scenario to scenario by selecting the outcome you find most acceptable.
This has long been one of my favorite topics to discuss at parties. With technology hurtling us toward a world filled with driverless cars, the trolley problem stops being a thought experiment and becomes something we actually have to contend with.
Given the kind of data a driverless car can process, should humans then code in behavior or processes that we deem ethical? How do we come to a consensus? If we choose not to code in behavior that uses available information, is that in itself unethical?
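To make the question concrete, here is a minimal, purely hypothetical sketch of what "coding in" such a rule could look like. The scenario encoding and the decision rule (minimize fatalities, break ties against people crossing illegally) are my own assumptions for illustration; no real vehicle is known to work this way, and the whole point of the site is that reasonable people disagree on what the rule should even be.

```python
# Hypothetical sketch only: one way a Moral Machine-style scenario could be
# encoded and scored. The rule below (fewest deaths first, then disfavor
# pedestrians crossing against the light) is an illustrative assumption,
# not how any actual driverless car is programmed.

from dataclasses import dataclass

@dataclass
class Outcome:
    fatalities: int   # lives lost if the car takes this action
    jaywalking: bool  # were the people at risk crossing illegally?

def choose(outcomes: list[Outcome]) -> Outcome:
    # Sort key: death toll first; on a tie, prefer the outcome where the
    # affected pedestrians were crossing against the signal.
    return min(outcomes, key=lambda o: (o.fatalities, not o.jaywalking))

swerve = Outcome(fatalities=1, jaywalking=True)
stay = Outcome(fatalities=2, jaywalking=False)
print(choose([swerve, stay]).fatalities)  # -> 1
```

Even this toy version shows where the discomfort lives: every tie-breaker in that `key` function is a moral judgment someone had to write down.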
I have to admit – going through these scenarios made me feel really terrible, and really uncomfortable. I found myself cursing at the screen more than once.
I would highly recommend reading through the instructions first. When I initially encountered the test, I skipped them and reacted primarily to the visuals rather than the details of each scenario (which meant I didn't catch the fact that some people were walking against a red light).
An interesting, albeit unpleasant, exercise. As much as I am in awe of Tesla cars, I do wonder a lot about how we handle issues of speed, safety, and the preservation of life.
And whether that life should favor a passenger, or a pedestrian.
[via The Next Web]