The first case study for the Ethics of Acceptable Safety event: automated cars. Any thoughts on this?
Can you programme ethics into a self-driving car?
How should the car be programmed to act in the event of an unavoidable accident? Should it minimise the loss of life, even if that means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? Who makes these ethical and moral decisions, or will there always have to be a middle ground, with drivers ultimately responsible for their car's decisions?
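Just to make the question concrete: here's a toy sketch (purely illustrative; every name and number in it is hypothetical, and real systems work nothing like this) of what those three policies might look like if someone actually had to write them down as code. What it shows is that the hard part isn't the code at all:

```python
import random
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre in a hypothetical unavoidable crash."""
    description: str
    occupant_deaths: int
    other_deaths: int

def choose_manoeuvre(outcomes: list[Outcome], policy: str) -> Outcome:
    """Pick an outcome under one of the three policies from the discussion.

    'utilitarian'  - minimise total loss of life, even at the occupants' expense
    'self_protect' - protect the occupants at all costs
    'random'       - choose between the extremes at random
    """
    if policy == "utilitarian":
        # Fewest deaths overall, whoever they are.
        return min(outcomes, key=lambda o: o.occupant_deaths + o.other_deaths)
    if policy == "self_protect":
        # Occupant safety first; other deaths only break ties.
        return min(outcomes, key=lambda o: (o.occupant_deaths, o.other_deaths))
    if policy == "random":
        return random.choice(outcomes)
    raise ValueError(f"unknown policy: {policy}")

# A hypothetical dilemma: swerve (killing the occupant) or stay on
# course (killing two pedestrians).
outcomes = [
    Outcome("swerve into barrier", occupant_deaths=1, other_deaths=0),
    Outcome("stay on course", occupant_deaths=0, other_deaths=2),
]
print(choose_manoeuvre(outcomes, "utilitarian").description)   # swerve into barrier
print(choose_manoeuvre(outcomes, "self_protect").description)  # stay on course
```

The point isn't the code: it's that the `policy` argument has to come from somewhere, and choosing it is exactly the ethical decision the questions above are asking about.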