The Mathematics of Murder: Autonomous Robots and Choosing Who to Kill
The Mathematics Of Murder: Should A Robot Sacrifice Your Life To Save Two? is an article by Erik Sofge that raises all sorts of interesting questions related to autonomous robots and cars.
As we move closer to seeing driverless cars on the road, what happens when a crash becomes inevitable? What calculations are built into the vehicle? What ethical decisions were made, months or years ago, in a meeting room, to determine what counts as the “least harm”?
If you own the vehicle, should your autonomous car do everything in its power to save your life in the event of a collision, even at the cost of others?
Perhaps it should veer away from an oncoming car, avoiding that collision but hitting pedestrians on the sidewalk instead. If colliding with the car would likely mean fatalities and colliding with pedestrians would likely mean only injuries, wouldn’t that be preferable? If your car can detect height or age, should it aim for adults over children? Senior citizens over the young?
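To make the question concrete, here is a purely hypothetical sketch of what a “least harm” calculation might look like. Everything in it is invented for illustration: the maneuvers, the probabilities, and especially the weights. The single FATALITY_WEIGHT constant is where the meeting-room ethics decision would actually live.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_fatality: float  # estimated probability of a fatality (made up)
    p_injury: float    # estimated probability of a non-fatal injury (made up)

# The value judgment: how much worse is a fatality than an injury?
# Someone has to pick these numbers long before the crash happens.
FATALITY_WEIGHT = 100.0
INJURY_WEIGHT = 1.0

def expected_harm(m: Maneuver) -> float:
    """Expected harm score for a maneuver under the chosen weights."""
    return m.p_fatality * FATALITY_WEIGHT + m.p_injury * INJURY_WEIGHT

options = [
    Maneuver("stay in lane, hit oncoming car", p_fatality=0.4, p_injury=0.9),
    Maneuver("veer onto sidewalk, hit pedestrians", p_fatality=0.05, p_injury=0.8),
]

# The car "chooses" whichever option minimizes expected harm.
best = min(options, key=expected_harm)
print(best.name, expected_harm(best))
```

With these made-up numbers the car swerves onto the sidewalk, and changing one weight flips the answer, which is exactly the point: the ethics are baked into constants nobody riding in the car ever sees.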
The article raises a lot of interesting questions, and it has me wondering what the right or best answer is, or, harder still, how we could ever come to a consensus on what that answer should be.
[via MetaFilter, CC photo via Ben Husmann]
This is creepy. It’s not directly related to this topic, but have you read “The End of Eternity” by Asimov? It has to do with social engineering a bit, and while it’s not robots doing the engineering, the concepts remind me of this: how do we determine how to do the least harm or the most good?
Ben (May 14, 2014 at 9:55 am)