Tag Archives | Artificial Intelligence

Exploring The Corporate Gaze

Drawing inspiration from the concept of the “robot-readable world” — i.e. people and places as perceived through the eyes of smart machines such as face-detecting cameras — Quiet Babylon describes the “corporate gaze”:

There’s another class of entities to whom we have already granted personhood. I’m speaking, of course, about corporations. Immortal entities of terrifying inhuman thinking, capable of entering into contracts and incurring debts, and owed a subset of the rights which we accord to human persons. I’m interested in the aesthetics of the corporate readable world, and their truly alien gaze.

Corporations communicate to us through money, press-releases, and advertising, always advertising. For a glimpse of the corporate readable world, look to Twitter’s routinely useless “who to follow” panel, Klout’s laughable ideas about what you are influential about, Facebook’s clumsy attempts to get you to join a dating site, and Google’s demented, personalized, Gmail ads. You can see it in your credit rating, and your position on the actuarial tables.

Read the rest
Continue Reading

The Religious Worship Of Robotic Machines As Nature Perfected

The video manifesto of the Japanese art collective and new age cult AUJIK:
A guide named Nashi narrates the audience's journey through an uncanny forest. Are the creatures that live there living beings or robots? Nashi states that even the things we consider synthetic and artificial are as sacred as plants and stones. AUJIK are a new age group that shares the Shinto belief that everything in nature is animated. As with other forms of animism, AUJIK worships everything that comes out of nature; the main difference is that AUJIK considers science and technology as sacred as stones and trees. The Shinto priest Hideaki spoke of similar things in the 18th century after he had seen a karakuri doll (a clockwork automaton made of wood), claiming that in the future we will create mechanical beings so superior to our own intelligence that we will subject ourselves to them as if they were gods.
Continue Reading

The Next Generation Of Drones Will Decide For Themselves Whom To Kill

The Global Post writes that in the near future, drones will be smarter and more “autonomous,” using algorithms to determine whom to terminate on the ground below. What could go wrong?

In all, a minimum of 2,800 people have died in no fewer than 375 US drone strikes in Pakistan, Yemen and Somalia since 2004, according to a count by the UK Bureau of Investigative Journalism. Many hundreds of those killed were probably innocent bystanders.

Standard procedure is for one crewman to control the drone’s sensors, potentially including daytime and night-vision video cameras and high-resolution radars. The robot does essentially nothing without direct human input. But if a host of government and private research initiatives pan out, the next generation of drones will be more powerful, autonomous and lethal … and their human operators less involved.

“In the future we’re going to see a lot more reasoning put on all these vehicles,” Cummings says.

Read the rest
Continue Reading

Robot Learns ‘Self-Awareness’

Picture: Flickr user (((o.kvlt))) (CC)

Instead of being pre-programmed with experiential knowledge, a robot named Nico is learning for itself how its grippers and sensors relate to the space and environment around it.

Nico may be slowly approaching self-awareness, and its programmers are using a test arguably even better than the Turing Test: the ‘Mirror Test’, the same one that we humans believe separates us (along with elephants, magpies, orcas, dolphins and the great apes) from other tested species. Passing it means showing that we can both use a mirror as a tool to explore the reflected environment and recognize that the reflection we see is indeed our own.

Via Kurzweil AI:

Using knowledge that it has learned about itself, Nico is able to use a mirror as an instrument for spatial reasoning, allowing it to accurately determine where objects are located in space based on their reflections, rather than naively believing them to exist behind the mirror.
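
To make the geometry concrete, here is a minimal sketch in Python of the spatial reasoning involved. It is not Nico's actual code; it simply assumes the robot already knows the mirror's position and orientation, in which case an object that appears to sit behind the glass can be mapped to its real location by reflecting the apparent point back across the mirror plane.

```python
import numpy as np

def reflect_across_mirror(apparent_point, mirror_point, mirror_normal):
    """Map a point that appears to lie behind a planar mirror back into real space.

    apparent_point : 3-vector, where the object seems to be (behind the glass)
    mirror_point   : 3-vector, any known point on the mirror plane
    mirror_normal  : 3-vector, normal of the mirror plane
    """
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)                       # ensure the normal is unit length
    p = np.asarray(apparent_point, dtype=float)
    d = np.dot(p - np.asarray(mirror_point, dtype=float), n)  # signed distance to the plane
    return p - 2.0 * d * n                          # mirror image of the apparent point

# Example: the mirror is the plane x = 0; an object "appears" at x = -0.5
real = reflect_across_mirror([-0.5, 0.2, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
print(real)  # -> [0.5  0.2  1. ], i.e. the object actually sits in front of the mirror
```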

Read the rest
Continue Reading

Don’t Ask Siri About Tiananmen Square

Apple has unveiled Mandarin- and Cantonese-speaking versions of the iPhone’s voice-controlled personality Siri, known for her subservient manner and witticisms. But Siri isn’t willing to crack a joke about everything. It may or may not be a glitch, but she really does not want to discuss Tiananmen Square with you, so stick to asking questions about the weather and where to buy things. The Wall Street Journal writes:

Some users have tested her devotion to free speech by asking her questions about the June 4, 1989, Tiananmen Square crackdown—a topic she seems loathe to broach. One screenshot posted to Twitter shows Siri responding to the question “Do you know about the Tiananmen incident?” with the answer: “I couldn’t find any appointments related to ‘Do you know about Tiananmen.’” A second try with the question rephrased – “What happened on June 4, 1989?”—produced an even stranger response: “I’m sorry, the person you are looking for is not in your address book.”

A[nother] screenshot posted suggested Siri wasn’t even able to provide directions to Tiananmen Square.

Read the rest
Continue Reading

The Robot Author Has Arrived

We can all agree that it’s O.K. for robots to take over unpleasant jobs — like cleaning up nuclear waste. But how could we have allowed them to commandeer one of the most gratifying occupations, that of author?

Via the New York Times, Pagan Kennedy looks into the phenomenon of android authors, and finds that their works are already being published and sold on Amazon:

One day, I stumbled across a book on Amazon called “Saltine Cracker.” It didn’t make sense: who would pay $54 for a book entirely about perforated crackers? The book was co-edited by someone called Lambert M. Surhone — a name that sounds like one of Kurt Vonnegut’s inventions. According to Amazon, Lambert M. Surhone has written or edited more than 100,000 titles, on every subject from beekeeping to the world’s largest cedar bucket. He was churning out books at a rate that was simply not possible for a human being.

Read the rest
Continue Reading

Disturbing Conversation Between Chatbots

Via Cornell's Creative Machines Lab, two robots are forced into an uncomfortable conversation that touches on God and other existential matters. (Both are suspicious that the other may have android origins, but neither wants to admit it.) It's even more disconcerting to imagine robots someday having such discussions without human supervision and coming to epiphanies concerning their robotic nature.
Continue Reading

Test Tube DNA Brain Gets Quiz Questions Right

A step closer to artificial intelligence? Discovery News reports:

A team of researchers led by Lulu Qian from the California Institute of Technology (Caltech) have for the first time developed an artificial neural network — that is, the beginnings of a brain — out of DNA molecules. And when quizzed, the brain answered the questions correctly.

They turned to molecules because they knew that before the neural-based brain evolved, single-celled organisms showed limited forms of intelligence. These microorganisms did not have brains, but instead had molecules that interacted with each other and spurred the creatures to search for food and avoid toxins. The bottom line is that molecules can act like circuits, processing and transmitting information and computing data.

The Caltech team used DNA molecules specifically for the experiment, because these molecules interact in specific ways determined by the sequence of their four bases: adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T).
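
For a sense of the kind of computation being described, here is a minimal Python sketch of a linear-threshold “neuron” — a unit that weighs its inputs and fires when they cross a threshold, which is the sort of building block such molecular networks are made from. The weights, inputs and threshold below are illustrative placeholders, not values from the Caltech paper; in the DNA version, strand concentrations play the role of signals and designed binding reactions do the weighting and thresholding.

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold.

    This is the abstract operation a molecular circuit can realize:
    each input signal is scaled by a weight, the contributions are
    summed, and the output switches on only past a set threshold.
    """
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Illustrative quiz: answer "yes" only if at least two of three clues are present
print(threshold_neuron([1, 0, 1], weights=[1, 1, 1], threshold=2))  # -> 1
print(threshold_neuron([1, 0, 0], weights=[1, 1, 1], threshold=2))  # -> 0
```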

Read the rest
Continue Reading

Can AI-Powered Games Create Super-Intelligent Humans?

A technology CEO sees artificial intelligence in games as the key to a revolution in education, predicting a synergy in which games create smarter humans who then create smarter games.

Citing lessons drawn from Neal Stephenson’s The Diamond Age, Alex Peake, founder of Primer Labs, sees the possibility of a self-fueling feedback loop which creates “a Moore’s law for artificial intelligence,” with accelerating returns ultimately generating the best possible education outcomes.

“What the computer taught me was that there was real muggle magic …” writes Peake. And he reaches a startling conclusion.

“Once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we’re dipping our toe into symbiosis between humans and the AI that shape them.”

Read the rest
Continue Reading