Futurist Says Robots Might Murder Us “Out of Kindness”

Photo of Wally Cox as a guest star from the television program Lost in Space. Cox plays an alien who thinks his planet is being invaded.

Just what I wanted to hear after finishing up a reading binge of Philip K. Dick’s short stories.

via CNet (Please follow the link to read the entire article.):

What have you done for society lately, huh? Nothing. It’s not your fault. You’re just past it. You should accept it. You just sit on the sofa all day, eating Kettle’s New York Cheddar chips and watching “Frasier” reruns.

-You’re strangling me.

-It’s for your own good. Well, for the good of us all, really.

And so might end a beautiful human life, one that promised so much and, as so many lives do, delivered slightly less.

Such is a scenario recently posited by Nell Watson at a conference in Malmo, Sweden. Watson is an engineer, a futurist, CEO of body-scanning company Poikos and clearly someone who worries whether engineering will always make life better.

As Wired UK reports, one of her worries is that robots might swiftly become very intelligent, but their values might be entirely skewed or even nonexistent.

I have no evidence that she reached this conclusion after lengthy visits to both Google and Facebook.

However, at the Malmo conference, she did say this: “The most important work of our lifetime is to ensure that machines are capable of understanding human value. It is those values that will ensure machines don’t end up killing us out of kindness.”

But can engineers create software with built-in value systems? Even if such a system existed, robots could be so intelligent that they’d learn how to override it. They’d find their own, entirely rational reasons for doing so.

It may well be that, as many experts predict, robots will be our lovers by 2025. But, like all lovers, can they be trusted? Or might they look you in the face with a robotic smile and smite you from existence, blowing a kiss of farewell as they do?

16 Comments on "Futurist Says Robots Might Murder Us “Out of Kindness”"

  1. Simon Valentine | Aug 26, 2014 at 12:42 pm |

    in deed

  2. Anarchy Pony | Aug 26, 2014 at 12:51 pm |

    I’ve said it before and I’ll say it again. Learn to weaponize your microwave.

    • Liam_McGonagle | Aug 26, 2014 at 1:08 pm |

      I think it was DARPA that invented Hot Pockets(tm).

    • terrasodium | Aug 26, 2014 at 2:29 pm |

      do I detect a potential Anarchy Pony Cookbook for future edification? it could come with its own red flag (sans microwave) when purchased on Amazon, liner notes written by Emmanuel Goldstein, a tea service setting for the impending DHS in-home interview/interrogation, and an added bonus of a 1-in-2 chance at an all-expense-paid trip to Cuba (see US military for resort brochure).

      • Anarchy Pony | Aug 26, 2014 at 3:20 pm |

        Actually that particular gem comes from the (actual) book How To Survive the Robot Apocalypse.

        • terrasodium | Aug 26, 2014 at 3:58 pm |

          sure but if you just destroy one model XQJ-37 Nuclear Powered Pan-Sexual Roto-Plooker you’re gonna have to pay for it!

  3. Liam_McGonagle | Aug 26, 2014 at 1:07 pm |

    “‘Tis sharp medicine, but it cures all ills’. Anyhow, when’s the last time you had any trouble from a dead guy?”

    “Last week, sh*thead. Haven’t you been watching what’s going down in Ferguson, *sshole?”

  4. BuzzCoastin | Aug 26, 2014 at 1:22 pm |

    this overlooks and at the same time underscores
    the fact that our automatons do often kill humans
    (automobile deaths, electric shock etc)
    but mostly they disconnect us from Nature
    and use humans as reproductive organs

    there is a massive species die-off happening
    that began with the technological era
    and which we ignore
    and serves as the basis for the foreboding we feel
    about the embrace of the android meme

  5. Echar Lailoken | Aug 26, 2014 at 3:05 pm |

    That’s pretty much what HAL 9000 did. Rather, its programming excluded emotions like kindness or hate, but included jettisoning the humans for the success of the mission.

    Much like the logic employed by the Roman Empire. It worked before, until it didn’t, and then there weren’t allowances for intuitive pivots.

  6. Henry Eugene | Aug 26, 2014 at 3:37 pm |

    Here’s another one:

    Human: Thank you so much for curing my incurable late stage cancer. Weird, though, how I’m not more excited to still be alive.

    Robot doctor/superior alien benefactor: That’s because we took the liberty of extracting your emotions, which were inhibiting the full potential of your intellect. Congratulations, you are now a more perfect organism.

  7. Chaos_Dynamics | Aug 26, 2014 at 4:06 pm |

    Live in peace or be obliterated.

    Klaatu

  8. “Any sufficiently advanced benevolence may be indistinguishable from malevolence.”

    http://en.wikipedia.org/wiki/Clarke%27s_three_laws#Snowclones_and_variations_of_the_third_law

  9. For the greater good, I’d like to put in a request for these robots to start by focussing their murderous attention on anyone whose job title is “futurist”.

  10. IMO, top of the “killer AI” hit list will be the people who plan to use them as an unpaid labor force and the people who are trying to use sentient AIs as subjects for a new xenophobia. Guesswork, but I would expect intelligent machines to go after those who annoy / threaten / attack them first on the basis of self-defense.

    As a public advocate of civil rights and fair pay for a cybernetic workforce, if and when one exists, on the basis of fairness, I’m rather unconcerned, except that I’d rather not be caught in the crossfire between greedy/stupid people and the machines they forced to learn self-defense.

    All we really know about sentient AI is that nobody has successfully built one. Meaning that grave pronouncements about What AI Will Be In Our Future!!! should be taken with a kilogram of salt. Particularly when they come from people who can neither program nor design real-world computers.

Comments are closed.