“If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here — we’ll leave the lights on’? Probably not — but this is more or less what is happening with AI,” warn Stephen Hawking and several other scientists in this article at the Huffington Post. As a quick aside, did anyone see Transcendence? Is it as crappy as I hear?
Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, for example, world militaries are considering autonomous weapon systems that can choose and eliminate their own targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasized by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.