Service With a Personal Touch: Pentagon Says Humans Will Always Decide When a Drone Strikes

Picture: US DOD (PD)

Looks like autonomous killing machines are off the table…at least until SkyNet goes live.

Via Wired:

The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage. Translated from the bureaucrat, the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.

This, I’m sure, will be of great comfort to those caught in the crosshairs of a drone piloted by a desk jockey thousands of miles away.

Read more.

11 Comments on "Service With a Personal Touch: Pentagon Says Humans Will Always Decide When a Drone Strikes"

  1. Three Laws of Robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Live in peace or be obliterated.

  2. InfvoCuernos | Nov 27, 2012 at 4:40 pm |

    Well, that’s a relief. I’d hate to think of robots getting more of our jobs!

  3. “it’s the result of a decision made by an accountable human being in a lawful chain of command”–oh good, because the military always holds their soldiers accountable when they kill people, just like the cops.

  4. Lies. Modern warships already have automated killing. Things happen so fast in navy battles that the fire control operator just pushes a button labeled “authorize” and the computer picks the most urgent targets, picks a weapon to attack them with and fires it, all by itself. Today. Now.

    • That’s still just automated targeting.
      It may not seem like much, but that one guy pushing the “authorize” button vs. a computer making the decision to fire all on its own is a really big deal.

  5. We’re not the only ones with drones anymore. You can’t make promises for other nations, but you can make treaties. DO IT! DO IT NOW!

  6. Apathesis | Nov 27, 2012 at 8:08 pm |

    Humanity is fucked. I want to time travel just to witness the chaos caused by machines should we not obliterate ourselves with nuclear devices first.

  7. BuzzCoastin | Nov 27, 2012 at 8:20 pm |

    are we to infer from this
    that the drones running the military are human?

  8. inner_growth | Nov 27, 2012 at 9:06 pm |

    Right, because once it’s legal it’s no longer immoral!

  9. I thought they were going to wait until an AI drone strike took out a bunch of US generals by mistake before making sure a human is in the loop. (Remember, current AI = complex software, not something capable of taking a personal dislike to generals.)

  10. Matt Staggs | Nov 28, 2012 at 9:37 am |

    When I was a kid I couldn’t wait for robots to be a common sight. Now? Not so much.
