Licensed to kill

I've written before about the coming revolution of self-driving cars. I'm fascinated by the moral questions that arise as the liability for accidents transfers from individual, fallible drivers to software written by fallible programmers. 

In "Why Self-Driving Cars Must Be Programmed to Kill," Technology Review considers the thorny question of how vehicles should be programmed to behave when a crash is unavoidable. Should the car sacrifice its driver and passengers, for example, to save a larger group of people who are suddenly in harm's way?