The Pressing Conundrum Of Creating Moral Machines

Via the New Yorker, Gary Marcus on how we will soon need our machines to be ethical, but have no idea how to do this:

Google’s driver-less cars are already street-legal in California, Florida, and Nevada, and some day similar devices may not just be possible but mandatory. Eventually automated vehicles will be able to drive better, and more safely, than you can; within two or three decades the difference between automated driving and human driving will be so great that you may not be legally allowed to drive your own car.

That moment will signal the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge when an errant school bus carrying forty children crosses its path. Should your car swerve, risking the life of its owner (you), in order to save the children, or keep going?

Many approaches to machine ethics are fraught [with problems]. An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip; an automated car that aimed to minimize harm would never leave the driveway. Almost any easy solution that one might imagine leads to some variation or another on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire.
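Marcus's "never leaves the driveway" failure mode is easy to make concrete. The sketch below (all actions, probabilities, and harm values are invented for illustration) scores each candidate action by expected harm; because any motion carries nonzero risk, a naive minimizer always chooses to stay parked:

```python
# Naive expected-harm minimizer for an automated car.
# All numbers here are hypothetical, chosen only to illustrate the failure mode.

actions = {
    "stay_parked":  {"p_accident": 0.0,  "harm": 0.0},   # zero risk, zero utility
    "drive_slowly": {"p_accident": 1e-6, "harm": 10.0},
    "drive_normal": {"p_accident": 1e-5, "harm": 10.0},
}

def expected_harm(spec):
    """Expected harm = probability of an accident times its severity."""
    return spec["p_accident"] * spec["harm"]

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # -> "stay_parked": the car never moves
```

Any realistic objective would have to weigh harm against the value of actually getting somewhere, which is exactly where the hard ethical trade-offs enter.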

The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation). What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.

4 Comments on "The Pressing Conundrum Of Creating Moral Machines"

  1. BuzzCoastin | Dec 5, 2012 at 7:28 pm |

    the automobile created the police state
    the driver-less automobile will actually reduce the number of pigs on the highways
    and reduce court revenue & the whole driver fleecing biz by the cop/court racket

    it’s hard to see Big Homelander going for this
    and even less likely that the FEMA Camp interns want that to happen either

  2. alizardx | Dec 6, 2012 at 1:52 am |

    Based on IBM’s recent paper and Moore’s Law, we’re about 20 years away from sufficiently powerful DATA CENTER sized platforms capable of simulating human intelligence well enough to make the concept of “a machine with ethics” credible and a decade-plus of Moore’s Law iterations before those can be made as devices small enough to fit in a motor vehicle.

    “Don’t hit objects identified as things or people” via identification algorithms and software to prioritize choices is the best the hardware state of the art can support. And that is probably good enough, though it’ll probably take a number of lawsuits, triggered when we don’t like the results of the choices the algorithms made, before these vehicles are considered safe enough for mass production for use anywhere other than controlled-access highways.

    People are confusing Artificial Intelligences, i.e. software packages designed to emulate human expertise in certain very limited domains, with sentient beings.
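The "identify obstacles, then prioritize choices" logic alizardx describes can be sketched roughly as follows. The categories, priority weights, and maneuver names are invented for illustration; a real system would rank paths using sensor-fusion classifiers and far richer cost models:

```python
# Minimal sketch of priority-based obstacle avoidance: given the worst
# obstacle class detected on each candidate path, pick the path whose
# obstacle we are least obligated to avoid. Hypothetical categories/weights.

AVOID_PRIORITY = {"person": 3, "vehicle": 2, "object": 1, "clear": 0}

def choose_maneuver(paths):
    """paths: maneuver name -> worst obstacle class detected on that path.
    Returns the maneuver whose path carries the lowest avoidance priority."""
    return min(paths, key=lambda m: AVOID_PRIORITY[paths[m]])

choice = choose_maneuver({
    "continue":    "person",   # pedestrian ahead
    "swerve_left": "object",   # traffic cone
    "brake_hard":  "vehicle",  # car close behind
})
print(choice)  # -> "swerve_left"
```

This is expertise-emulation in a narrow domain, which is the commenter's point: nothing here requires, or resembles, sentience.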

  3. Haystack | Dec 6, 2012 at 8:48 am |

    I think there should be a slider on your dashboard, right by the climate control, with “Better them than me” on one end and “Life’s a bugger anyway” on the other.

Comments are closed.