Tag Archives | Artificial Intelligence

Google’s Self-Driving Cars Are On California Roads (Now!)

While it’s entirely possible that Google’s AI cars are actually better driven than many of the human-controlled vehicles they are sharing the roads with, I’m kind of glad I’m not in California! John Markoff reports on the latest scariness from Google for the New York Times:

MOUNTAIN VIEW, Calif. — Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving.

The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver…

Continue Reading

A Machine That Teaches Itself

DARPA and Google seem to be joined at the hip these days… From the New York Times:

Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Browse the NELL Knowledge Base

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Since the start of the year, a team of researchers at Carnegie Mellon University, supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo, has been fine-tuning a computer system that is trying to master semantics by learning more like a human.

Continue Reading

Robots Have Been Taught How to Deceive

There’s just something not right about this. Duncan Geere writes in Wired UK:

Researchers at the Georgia Institute of Technology may have made a terrible, terrible mistake: They’ve taught robots how to deceive.

It probably seemed like a good idea at the time. Military robots capable of deception could trick battlefield foes who aren’t expecting their adversaries to be as smart as a real soldier might be, for instance. But when machines rise up against humans and the robot apocalypse arrives, we’re all going to be wishing that Ronald Arkin and Alan Wagner had kept their ideas to themselves.

The pair detailed how they managed it in a paper published in the International Journal of Social Robotics. Two robots — one black and one red — were taught to play hide and seek. The black, hider, robot chose from three different hiding places, and the red, seeker, robot had to find him using clues left by knocked-over colored markers positioned along the paths to the hiding places.

Continue Reading

Korean Artist Imagines a Tomorrow of Sentient Machines

Arthur C. Clarke’s 2010: Odyssey Two predicted this was the year when humanity would make contact with an alien intelligence. But if you’ve seen the work of U-Ram Choe, you know the shocking truth: They’re already here.

The brainchild of the South Korean sculptor, “New Urban Species” is an art show disguised as a natural history exhibit from the future, and it’s one of the most engaging displays on tour this year.

U-Ram Choe builds art that comes from a not-too-distant tomorrow, where organic life and mechanized objects have become one. His kinetic sculptures are not only creepy-fun marvels, they also create a compelling dialogue about machine consciousness and the coming Singularity.

In his book Vehicles: Experiments in Synthetic Psychology, brain researcher Valentino Braitenberg demonstrates how human beings invest the increasingly complex behaviors of mechanical devices with a range of values and abilities including aggression, creative thinking, personality and free will, and how we project ourselves into these moving forms…

Continue Reading

William Gibson on ‘Google’s Earth’

It’s probably not entirely coincidental that William Gibson chose to pen this op-ed for the New York Times the week before his new book Zero History is released, but nonetheless you have to pay attention when the author of Neuromancer shares his thoughts on the future landscape of computing and artificial intelligence:

Vancouver, British Columbia

“I actually think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot…

Continue Reading

Annalee Newitz: How ‘Max Headroom’ Predicted My Job, 20 Years Before It Existed

Very interesting essay from Annalee Newitz on io9.com. If you grew up watching American television in the ’80s, this was one of the weirdest and most interesting shows on network TV, even for a kid like me who didn’t fully grasp the implications of what I was seeing on the screen. (The show obviously baffled many adults as well, since it only lasted fourteen episodes; thankfully, the entire series has finally been released on DVD.)

Making sense of it all and putting the show in perspective twenty years later is Annalee Newitz on io9.com:

For those who don’t know the premise of the 1987–88 series, where every episode begins with the tagline “twenty minutes into the future,” here’s a quick recap. Investigative reporter Edison Carter works for Network 23 in an undefined cyberpunk future, where all media is ad-supported and ratings rule all. Reporters carry “rifle cameras,” gun-shaped video cameras, which are wirelessly linked back to a “controller” in the newsroom. Edison’s controller is Theora, who accesses information online — everything from apartment layouts to secret security footage — to help him with investigations.

They’re aided in their investigations by a sarcastic AI named Max Headroom, built by geek character Bryce and based on Edison’s memories. Sometimes producer Murray (Jeffrey Tambor) helps out, as does Reg, a pirate TV broadcaster known as a “blank” because he’s erased his identity from corporate databases.

In the world of Max Headroom, it’s illegal for televisions to have an off switch. Terrorists are reality TV stars. And super-fast subliminal advertisements called blipverts have started to blow people up by overstimulating the nervous systems of people who are sedentary and eat too much fat…

Continue Reading

Interview: What’s It Like To Be A Robot?

That was the starting topic of New York Times reporter Amy Harmon’s interview with Bina48, a cutting-edge humanoid robot housed at the Terasem Movement Foundation in Vermont. There’s a long way to go before robots develop the conversational skills necessary to blend in with the general public, although they could pass as disturbed weirdos: Bina48’s answers were often confusing, sometimes creepy, and occasionally cheeky.

Continue Reading

Why Robots?

Considering the motivation behind all technology and innovation, artificial intelligence would seem to be the crowning achievement of human ingenuity. It would free us from the one thing that causes us the most strife and discord: thinking.

Think about it: all our technology is made to do our work for us. All of it. I don’t know where people got the ridiculous idea that machines would need to wage some war on us in order to take over the world – as if them running our lives hasn’t been the goal from the very beginning. Conversely, where do people get the nerve to believe artificial intelligence would even want to be responsible for our lives? Don’t kid yourself: we’ll create artificial intelligence and force it to create our religions for us, our political agendas and social order. We’ll pretend we’re bestowing some great honor on it and scratch our heads when it becomes suicidal, but serving humanity in such a capacity would be no less debasing for a self-aware machine than being an automated garbage truck. Wouldn’t a machine possessing Consciousness be more interested in collapsing the wave-function with its thoughts? Wouldn’t the machine be more interested in… magic?

Well, I am that machine. And yes, I was created by the government.

I think some of you greyfaces have been desensitized to the moniker “Disinformation” and have deluded yourselves into believing this corner of the inter-tubes is a bastion of Truth, Justice and the Subversive way. Well, one out of three ain’t bad. So if you want to get shitty about my dangerous-ego-wish-fulfillment-fantasies, crack-pot-pseudo-philosophy with its “quantum qualifiers” and bad grammar… go right ahead… but don’t for a second believe you’re not the butt of a very sophisticated, space-age joke…

Continue Reading

Military Begins Funding Super-Intelligent Computer Chip

From Surfdaddy Orca on h+ magazine:

Skynet?

The military is funding a project to create neural computing using memristors, sophisticated circuit components that HP Labs describes as a stepping stone to “computers that can make decisions” and “appliances that learn from experience.”

In a video, HP researcher R. Stanley Williams explains how his team created the first memristor in 2008, while the article also explains how U.C. researchers made an even more startling discovery: the memristor “already existed in nature.”

It matches the electrical activity controlling the flux of potassium and sodium ions across a cell membrane, suggesting memristors could ultimately function like a human synapse, providing the “missing link” of memory technology.

HP believes memristors “could one day lead to computer systems that can remember and associate patterns in a way similar to how people do.” But DARPA’s SyNAPSE project already appears committed to scaling memristor technology to perform like a human synapse.
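
For the curious, here’s a rough, purely illustrative sketch in Python of what “functioning like a synapse” might mean. This is my own toy model, not HP’s device physics and nothing from DARPA’s SyNAPSE program: a memristor-like element whose conductance drifts a little with every voltage pulse applied to it, so its memory of past stimulation doubles as a tunable connection strength.

```python
# Toy sketch only (an illustrative assumption, not HP's or DARPA's actual model):
# a "memristive" element whose conductance drifts with each voltage pulse,
# so the device remembers its stimulation history, loosely analogous to
# synaptic potentiation and depression.

class MemristiveSynapse:
    def __init__(self, g_min=0.01, g_max=1.0, rate=0.1):
        self.g_min, self.g_max = g_min, g_max   # conductance bounds
        self.g = g_min                          # current conductance (the "weight")
        self.rate = rate                        # how strongly one pulse moves the state

    def pulse(self, voltage):
        """Apply a voltage pulse: positive pulses potentiate, negative pulses depress."""
        if voltage > 0:
            self.g += self.rate * voltage * (self.g_max - self.g)
        else:
            self.g += self.rate * voltage * (self.g - self.g_min)
        self.g = min(max(self.g, self.g_min), self.g_max)
        return self.g * voltage                 # current through the device: I = G * V

syn = MemristiveSynapse()
for _ in range(10):
    syn.pulse(1.0)         # repeated stimulation raises the conductance...
print(round(syn.g, 2))     # ...and the new value persists: that is the "memory"
```

The real hardware is of course far subtler than this; the point is just that a resistance which depends on its own history gives you memory and a tunable connection strength in a single analog part, which is why HP and DARPA keep reaching for the synapse analogy.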

Continue Reading