For those who don't know the premise of the 1987–88 series, in which every episode begins with the tagline "twenty minutes into the future," here's a quick recap. Investigative reporter Edison Carter works for Network 23 in an undefined cyberpunk future where all media is ad-supported and ratings rule everything. Reporters carry "rifle cameras," gun-shaped video cameras wirelessly linked back to a "controller" in the newsroom. Edison's controller is Theora, who accesses information online — everything from apartment layouts to secret security footage — to help him with investigations. They're aided by a sarcastic AI named Max Headroom, built by geek character Bryce and based on Edison's memories. Sometimes producer Murray (Jeffrey Tambor) helps out, as does Reg, a pirate TV broadcaster known as a "blank" because he's erased his identity from corporate databases. In the world of Max Headroom, it's illegal for televisions to have an off switch. Terrorists are reality TV stars. And super-fast subliminal advertisements called blipverts have started to blow people up by overstimulating the nervous systems of viewers who are sedentary and eat too much fat...
Tag Archives | Artificial Intelligence
From Surfdaddy Orca on h+ magazine:
The military is funding a project to create neural computing using memristors, a sophisticated circuit component that HP Labs describes as a stepping stone to “computers that can make decisions” and “appliances that learn from experience.”
In a video, HP researcher R. Stanley Williams explains how his team created the first memristor in 2008, while the article also explains how U.C. researchers made an even more startling discovery: the memristor “already existed in nature.”
Its behavior matches the electrical activity controlling the flux of potassium and sodium ions across a cell membrane, suggesting memristors could ultimately function like a human synapse, providing the “missing link” of memory technology.
… Read the rest
HP believes memristors “could one day lead to computer systems that can remember and associate patterns in a way similar to how people do.” But DARPA’s SyNAPSE project already appears committed to scaling memristor technology to perform like a human synapse.
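The "memory" in a memristor can be sketched with the linear ion-drift model HP's researchers published alongside the 2008 device — a textbook simplification, not code from the article. The resistance depends on an internal state that integrates all the current that has ever flowed through the device, which is exactly the synapse-like property described above. The parameter values below are illustrative, not from the article:

```python
import math

# Linear ion-drift memristor model (after Strukov et al., HP Labs, 2008).
# Memristance M depends on the state x = w/D (doped-region width over device
# thickness), which drifts with the charge that has flowed through the device.
R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / fully undoped resistance
D = 10e-9                        # device thickness (m) -- illustrative value
MU_V = 1e-14                     # dopant mobility (m^2 / (V*s)) -- illustrative

def simulate(voltage, t_end=1.0, dt=1e-5, x0=0.1):
    """Integrate the state x under a driving voltage v(t); return M(t) samples."""
    x = x0
    resistances = []
    n = int(round(t_end / dt))
    for k in range(n):
        t = k * dt
        m = R_ON * x + R_OFF * (1.0 - x)       # instantaneous memristance
        i = voltage(t) / m                     # Ohm's law at this instant
        x += (MU_V * R_ON / D**2) * i * dt     # linear ion drift moves the state
        x = min(max(x, 0.0), 1.0)              # state is bounded in [0, 1]
        resistances.append(m)
    return resistances

# A sine drive sweeps the resistance down and back up: the device "remembers"
# how much charge has passed through it, like a synapse strengthening.
rs = simulate(lambda t: math.sin(2 * math.pi * t))
```

Because the resistance change persists when the power is removed (the state x stays where it was), an array of such devices can store and associate patterns without separate memory, which is the property DARPA's SyNAPSE project is trying to scale.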
“You can ask your cell phone what it’s thinking about now, and the answer is that it isn’t. But in 50 years it will be. And it won’t be a companion of yours, you might be a companion of it.”
He’s serious. The CTO of D-Wave Systems says “massive amounts of money” are now going into artificial intelligence research, because “Microsoft, Google, Apple and other companies all want to dominate the mobile space, and to do that you need compelling applications… All of that requires better AI.”
D-Wave Systems worked with Google on the “Google Goggles” mobile phone app for augmented reality, using their systems to “teach” a neural network how to recognize objects like automobiles, and then transferring those algorithms to the mobile app. And to do it they used subatomic “quantum computing” – a crucial stepping stone to human-level artificial intelligence.
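The train-on-big-iron, run-on-phone workflow described above can be illustrated with a deliberately tiny stand-in: a perceptron is "taught" on labeled feature vectors, then only its learned weights are serialized for a lightweight client to load. Everything here (the data, the model, the JSON payload) is synthetic — it is not D-Wave's or Google's actual method, just the general shape of the pipeline:

```python
import json
import random

def train(samples, labels, epochs=20, lr=0.1):
    """Perceptron training: nudge weights whenever a sample is misclassified."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):      # y is +1 or -1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return {"w": w, "b": b}

def predict(model, x):
    """Lightweight inference: a dot product plus a bias, cheap enough for a phone."""
    s = sum(wi * xi for wi, xi in zip(model["w"], x)) + model["b"]
    return 1 if s > 0 else -1

# "Training server" side: learn to separate two synthetic feature clusters.
random.seed(0)
pos = [[random.gauss(2, 0.3), random.gauss(2, 0.3)] for _ in range(50)]
neg = [[random.gauss(-2, 0.3), random.gauss(-2, 0.3)] for _ in range(50)]
model = train(pos + neg, [1] * 50 + [-1] * 50)

payload = json.dumps(model)          # what would ship to the mobile app
client_model = json.loads(payload)   # the "on-device" side loads only weights
```

The point of the split is that the expensive part (training) happens once, offline, and the deployed artifact is just a small bundle of learned parameters — the same division of labor the article attributes to the Goggles work.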
“I’m very excited by the possibility of building very effective unsupervised learning systems and contributing in a meaningful way to the creation of better-than-human level intelligence in machines,” says D-Wave’s CTO, adding “The existence of vast machine sentience is almost guaranteed to occur.”
21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence.
One also predicted that in 30 years, “virtually all the intellectual work that is done by trained human beings…can be done by computers for pennies an hour,” adding that AI “is likely to eliminate almost all of today’s decently paying jobs.”
The other milestones are passing a 3rd grade-level test, and passing a Turing test – and the experts estimate the probability that an AI passing a Turing test would result in an outcome that's bad for humanity…with four estimating that probability was greater than 60%! (Regardless of whether the developer was private, military, or even open source…) Yet interestingly, the experts were skeptical of increased funding, citing the risks of misuse and closed-mindedness.
“Since these experts are precisely those who would benefit most from increased funding, their skeptical views of the impact of hypothetical massive funding are very likely sincere.”
Artificial Intelligence already handles stock trading, fraud detection, and video games. … Read the rest
John Pavlus writes on io9.com:
Science fiction has long played with the idea of projecting unified personalities/minds/"souls" into different bodies. The premise is baked into the plots of stories like Avatar and Caprica. But how would it work in the real world?
That's what the science of "embodied cognition" is all about. The basic idea in this new(ish) research area (which overlaps with cognitive psychology, neuroscience, artificial intelligence, robotics, and others) is this: your mind is defined by your physical form. Not just in the sense that "the mind is what the brain does" (we're all pretty down with that already). This takes it further, to encompass the whole enchilada: your mind, your "I," is a function of a cephalized, bipedal, plantigrade, bilaterally symmetrical body between 1.5 and 2 meters tall with two arms terminating in five-fingered hands with opposable thumbs, two lungs, a warm-blooded vascular system, mostly hairless skin, two front-focused eyes, etc.
… Read the rest