Service With a Personal Touch: Pentagon Says Humans Will Always Decide When a Drone Strikes

Picture: US DOD (PD)

Looks like autonomous killing machines are off the table…at least until SkyNet goes live.

Via Wired:

The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage. Translated from the bureaucrat, the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.

This, I’m sure, will be of great comfort to those caught in the crosshairs of a drone piloted by a desk jockey thousands of miles away.

Read more.

  • Gort

    Three Laws of Robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Live in peace or be obliterated.

  • InfvoCuernos

    Well, that’s a relief. I’d hate to think of robots getting more of our jobs!

  • Adam Goodwin

    “it’s the result of a decision made by an accountable human being in a lawful chain of command”–oh good, because the military always holds their soldiers accountable when they kill people, just like the cops.

  • IokSotot

    Lies. Modern warships already have automated killing. Things happen so fast in naval battles that the fire control operator just pushes a button labeled “authorize” and the computer picks the most urgent targets, picks a weapon to attack them with and fires it – all by itself. Today. Now.

    • Eric Fischer

      That’s still just automated targeting.
      It may not seem like much, but that one guy pushing the “authorize” button vs. a computer making the decision to fire all on its own is a really big deal.

  • MAT

    We’re not the only ones with drones anymore. You can’t make promises for other nations, but you can make treaties. DO IT! DO IT NOW!

  • Apathesis

    Humanity is fucked. I want to time travel just to witness the chaos caused by machines should we not obliterate ourselves with nuclear devices first.

  • BuzzCoastin

    are we to infer from this
    that the drones running the military are human?

  • inner_growth

    Right because once it’s legal it’s no longer immoral!

  • alizardx

    I thought they were going to wait until an AI drone strike took out a bunch of US generals by mistake before making sure a human is in the loop. (Remember, current AI = complex software, not something capable of taking a personal dislike to generals.)

  • Matt Staggs

    When I was a kid I couldn’t wait for robots to be a common sight. Now? Not so much.
