Via the New Yorker, Gary Marcus on how we will soon need our machines to be ethical, but have no idea how to make them so:
Google’s driverless cars are already street-legal in California, Florida, and Nevada, and some day similar devices may not just be possible but mandatory. Eventually automated vehicles will be able to drive better, and more safely, than you can; within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car.
That moment will signal the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge when an errant school bus carrying forty children crosses its path. Should your car swerve, risking the life of its owner (you), in order to save the children, or keep going?
Many approaches to machine ethics are fraught [with problems]. An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip; an automated car that aimed to minimize harm would never leave the driveway. Almost any easy solution that one might imagine leads to some variation or other on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire.
The thought that haunts me the most is that human ethics themselves are only a work in progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation). What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first-century idea of morality.
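Marcus's "never leave the driveway" worry is really a point about objective misspecification, and it can be sketched in a few lines. The sketch below is my own toy illustration, not anything from the article: if a car simply picks the action with the lowest expected harm, and every action that involves moving carries some nonzero risk, the "optimal" policy is to never move at all. All the numbers here are made-up assumptions for illustration.

```python
# Toy sketch of a naively harm-minimizing car (illustrative numbers only).
# Each action maps to an assumed expected harm per trip; any motion at all
# carries nonzero risk, while staying parked carries none.
ACTIONS = {
    "stay_parked": 0.0,       # assumption: no motion, no chance of harming anyone
    "drive_normally": 0.001,  # assumption: small but nonzero expected harm
    "drive_fast": 0.01,       # assumption: higher risk at higher speed
}

def choose_action(expected_harm):
    """Return the action with the minimum expected harm."""
    return min(expected_harm, key=expected_harm.get)

print(choose_action(ACTIONS))  # -> stay_parked: the car never leaves the driveway
```

The failure is not in the optimization but in the objective: "minimize harm" with no term for the value of actually getting anywhere makes paralysis the rational choice, which is exactly the Sorcerer's Apprentice pattern Marcus describes.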