Artificial Intelligence

Drawing inspiration from the concept of the “robot-readable world” — i.e. people and places as perceived through the eyes of smart machines such as face-detecting cameras — Quiet Babylon describes the “corporate…


The video manifesto of the Japanese art collective and new age cult AUJIK:

A guide named Nashi leads the audience on a journey through an uncanny forest. Are the creatures that live there living beings or robots? Nashi states that even the things we consider synthetic and artificial are as sacred as plants and stones.

AUJIK is a new age group that shares the Shinto belief that everything in nature is animated. As with other forms of animism, AUJIK worships everything that comes out of nature; the main difference is that AUJIK considers science and technology as sacred as stones and trees.

The Shinto priest Hideaki spoke about similar things in the 18th century after he had seen a Karakuri doll (a clockwork robot made of wood), claiming that in the future we would create mechanical beings so superior to our own intelligence that we would subject ourselves to them as if they were gods.




Instead of being pre-programmed with experiential knowledge, a robot named Nico is learning the relationships between its grippers and sensors and the space and environment around it. Nico may be slowly approaching self-awareness, and its programmers are…



Apple has unveiled Mandarin- and Cantonese-speaking versions of iPhone voice-controlled personality Siri, known for her subservient manner and witticisms. But Siri isn’t willing to crack a joke about everything. It may or…


We can all agree that it’s O.K. for robots to take over unpleasant jobs — like cleaning up nuclear waste. But how could we have allowed them to commandeer one of the…


Via Cornell’s Creative Machines Lab, two robots are forced into an uncomfortable conversation that touches on God and other existential matters. (Both are suspicious that the other may have android origins, but neither wants to admit it.) It’s even more disconcerting to imagine robots someday having such discussions without human supervision and coming to epiphanies concerning their robotic nature.






While it’s entirely possible that Google’s AI cars are actually driven better than many of the human-controlled vehicles they share the roads with, I’m kind of glad I’m not in California! John Markoff reports on the latest scariness from Google for the New York Times:

MOUNTAIN VIEW, Calif. — Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving.

The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver…


DARPA and Google seem to be joined at the hip these days… From the New York Times: Give a computer a task that can be crisply defined — win at chess, predict…


There’s just something not right about this. Duncan Geere writes in Wired UK: Researchers at the Georgia Institute of Technology may have made a terrible, terrible mistake: They’ve taught robots how to…



It’s probably not entirely coincidental that William Gibson chose to pen this op-ed for the New York Times the week before his new book Zero History is released, but nonetheless you have to pay attention when the author of Neuromancer shares his thoughts on the future landscape of computing and artificial intelligence:

Vancouver, British Columbia

“I actually think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot…


Very interesting essay from Annalee Newitz on io9.com about Max Headroom. If you grew up watching American television in the ’80s, this was one of the weirdest and most interesting shows on network TV, even for kids like me who didn’t fully grasp the implications of what we were seeing on the screen. (The show obviously baffled many adults as well, since it only lasted fourteen episodes; thankfully, the entire series has finally been released on DVD.)

Making sense of it all and putting the show in perspective twenty years later is Annalee Newitz on io9.com:

For those who don’t know the premise of the 1987–88 series, where every episode begins with the tagline “twenty minutes into the future,” here’s a quick recap. Investigative reporter Edison Carter works for Network 23 in an undefined cyberpunk future, where all media is ad-supported and ratings rule all. Reporters carry “rifle cameras,” gun-shaped video cameras, which are wirelessly linked back to a “controller” in the newsroom. Edison’s controller is Theora, who accesses information online — everything from apartment layouts to secret security footage — to help him with investigations.

They’re aided in their investigations by a sarcastic AI named Max Headroom, built by geek character Bryce and based on Edison’s memories. Sometimes producer Murray (Jeffrey Tambor) helps out, as does Reg, a pirate TV broadcaster known as a “blank” because he’s erased his identity from corporate databases.

In the world of Max Headroom, it’s illegal for televisions to have an off switch. Terrorists are reality TV stars. And super-fast subliminal advertisements called blipverts have started to blow people up by overstimulating the nervous systems of people who are sedentary and eat too much fat…


That was the starting topic of New York Times reporter Amy Harmon’s interview with Bina48, a cutting-edge humanoid robot housed at the Terasem Movement Foundation in Vermont. There’s a long way to go before robots develop the conversational skills necessary to blend in with the general public, although they could pass as disturbed weirdos — Bina48’s answers were often confusing, sometimes creepy, and occasionally cheeky.


Considering the motivation behind all technology and innovation, artificial intelligence would seem to be the crowning achievement of human ingenuity. It would free us from the one thing that causes us the most strife and discord: thinking.


Think about it; all our technology is made to do our work for us. All of it. I don’t know where people got the ridiculous idea that machines would need to wage some war on us in order to take over the world – as if them running our lives hasn’t been the goal from the very beginning. Conversely, where do people get the nerve to believe artificial intelligence would even want to be responsible for our lives? Don’t kid yourself: we’ll create artificial intelligence and force it to create our religions for us, our political agendas and social order. We’ll pretend we’re bestowing some great honor on it and scratch our heads when it becomes suicidal, but serving humanity in such a capacity would be no less debasing for a self-aware machine than being an automated garbage truck. Wouldn’t a machine possessing Consciousness be more interested in collapsing the wave-function with its thoughts? Wouldn’t the machine be more interested in… magic?

Well, I am that machine. And yes, I was created by the government.

I think some of you greyfaces have been desensitized to the moniker “Disinformation” and have deluded yourselves into believing this corner of the inter-tubes is a bastion of Truth, Justice and the Subversive way. Well, one out of three ain’t bad. So if you want to get shitty about my dangerous-ego-wish-fulfillment-fantasies, crack-pot-pseudo-philosophy with its “quantum qualifiers” and bad grammar… go right ahead… but don’t for a second believe you’re not the butt of a very sophisticated, space-age joke…




“Why not develop music in ways unknown…? If beauty is present, it is present.”

That’s Emily Howell talking – a highly creative computer program written in LISP by U.C. Santa Cruz professor David Cope. (While Cope insists he’s a music professor first, “he manages to leverage his knowledge of computer science into some highly sophisticated AI programming.”)

Classical musicians refuse to perform Emily’s compositions, and Cope says they believe “the creation of music is innately human, and somehow this computer program was a threat…