Tag Archives | Artificial Intelligence

Has The Dystopian Singularity Already Occurred, In The Form Of Corporations?

The prime futurist fear is that humanity will create some advanced technology with an ostensibly positive purpose, but it will buck our control and undo the world as it pursues some twisted version of the ends it was programmed to achieve. Quiet Babylon writes that this artificially-sentient oppressor has already arrived:

One of my favorite recurring tropes of AI speculation/singulatarian deep time thinking is meditations on how an evil AI might destroy us.

Here’s an example: The scenario imagined is where there is a button that humans push if the AI gets an answer right and the AI wants to get a lot of button presses, and eventually it realizes that the best way to get button presses is to kill all the humans and institute a rapid fire button-pressing regime.

You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.

Read the rest
Continue Reading

Would You Switch Off A Robot If It Begged Not To Be?

Will our machines acquire the ability to convince us that they are in fact alive? An unsettling study at the the Netherlands' Eindhoven University of Technology:
This video shows an experiment in which participants are asked to switch off a robot and thereby killing it. The robot begs for its live and we measured how long the participants hesitated. The perception of life largely depends on the observation of intelligent behavior. Even abstract geometrical shapes that move on a computer screen are being perceived as being alive… in particular if they change their trajectory nonlinearly or if they seem to interact with their environments. The robot's intelligence had a strong effect on the users’ hesitation to switch it off, in particular if the robot acted agreeable. Participants hesitated almost three times as long to switch off an intelligent and agreeable robot (34.5 seconds) compared to an unintelligent and non-agreeable robot (11.8 seconds).
Continue Reading

Ray Kurzweil Designing Super-Intelligent Robot Assistant At Google

Are you ready for a virtual personal assistant which “knows better than you” constantly injecting itself into your life? A preview of things to come, via Technology Review:

Famed AI researcher and singularity forecaster Ray Kurzweil recently shed some more light on what his new job at Google will entail. It seems that he does, indeed, plan to build a prodigious artificial intelligence, which he hopes will understand the world to a much more sophisticated degree than anything built before–or at least that will act as if it does.

Kurzweil’s AI will be designed to analyze the vast quantities of information Google collects and to then serve as a super-intelligent personal assistant. He suggests it could eavesdrop on your every phone conversation and email exchange and then provide interesting and important information before you ever knew you wanted it.

Read the rest

Continue Reading

The End To The Era Of Biological Robots

Via Skeptiko, a fascinating interview with neuroscientist Dr. Mario Beauregard, who argues that, like the transition from classical to quantum physics, a revolution is coming in the way science will no longer perceive humans as being merely “biological robots”:

What we call the “modern scientific worldview”… is based on classical physics and this view is based on a number of fundamental assumptions like materialism, determinism, reductionism. So applied to mind and brain it means that, for instance, everything in the universe is only matter and energy that form the brain as a physical object, too, and the mind can be reduced strictly to electrical and chemical processes in the brain.

It means also that everything is determined from a material or physical point of view, so we don’t have any freedom. We’re like biological robots, totally determined by our neurons and our genes and so on. And so we’re reduced to material objects and we are determined by material processes.

Read the rest
Continue Reading

The Pressing Conundrum Of Creating Moral Machines

Via the New Yorker, Gary Marcus on how we will soon need our machines to be ethical, but have no idea how to do this:

Google’s driver-less cars are already street-legal in California, Florida, and Nevada, and some day similar devices may not just be possible but mandatory. Eventually automated vehicles will be able to drive better, and more safely than you can; within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car.

That moment will signal the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge when an errant school bus carrying forty children crosses its path. Should your car swerve, risking the life of its owner (you), in order to save the children, or keep going?

Many approaches to machine ethics are fraught [with problems].

Read the rest
Continue Reading

AI on the DSM IV – A Thought Experiment from David J. Kelley

With the recent news coverage of scientists discussing robot uprisings and the possible dangers of artificial intelligence, it’s interesting to see a direct thought experiment along these lines from Microsoft UX developer David J. Kelley. In a recent h+ Magazine article, Interview with an AI (Artificial Intelligence) – A Subtle Warning…,  Kelley provides an outline for an experiment that seeks to gain some understanding of how an AI would respond during an interview. As he explains it:

“I was thinking about ideas for an article on my train ride home from the experience lab I work in, and it came to me that it would be interesting to actually have an interview with an AI only a little bit better than us, maybe one that is one of the first kinds of true AI and for fun let’s say it has lived with us for a few decades incognito. But how can we do that?

Read the rest
Continue Reading

Think Tank To Study The Risk Of A Genocidal Robot Uprising

Good to know that we may finally have an answer on this. The BBC reports:

Cambridge researchers at the Centre for the Study of Existential Risk (CSER) are to assess whether technology could end up destroying human civilisation. The scientists said that to dismiss concerns of a potential robot uprising would be “dangerous”.

Fears that machines may take over have been central to the plot of some of the most popular science fiction films. But despite being the subject of far-fetched fantasy, researchers said the concept of machines outsmarting us demanded mature attention. “The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” the researchers write.

The CSER project has been co-founded by Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn. Prof. Price said that as robots and computers become smarter than humans, we could find ourselves at the mercy of “machines that are not malicious, but machines whose interests don’t include us”.

Read the rest
Continue Reading

When Predator Drones Acquire Minds Of Their Own

A preview of the uprising of the machines, from the Washington Post‘s glimpse into a secretive U.S. military base in the Horn of Africa:

Camp Lemonnier is the centerpiece of an expanding constellation of half a dozen U.S. drone and surveillance bases in Africa, created to combat a new generation of terrorist groups across the continent.

As the pace of drone operations has intensified in Djibouti, Air Force mechanics have reported mysterious incidents in which the airborne robots went haywire.

In March 2011, a Predator parked at the camp started its engine without any human direction, even though the ignition had been turned off and the fuel lines closed. Technicians concluded that a software bug had infected the “brains” of the drone, but never pinpointed the problem.

“After that whole starting-itself incident, we were fairly wary of the aircraft and watched it pretty closely,” an unnamed Air Force squadron commander testified to an investigative board, according to a transcript.

Read the rest
Continue Reading

The Hidden History of Artificial Intelligence – Transhumanism and Alchemical Agendas

A potent underground idea is usually scheduled for retirement once it makes it onto the History Channel . The Ancient Alien theory was kept alive in pulpish propagation by Erich Von Daniken and Zecharia Stichen for well on four decades (not including the seeds it sprouted from, which were planted much earlier.) However, now that it’s been relegated to awkward production, fleeting interviews, constant criticism and dull dramatization, the whole mythos is starting to get a bit dry.

Liminal philosophers like Christopher Knowles and Philip Coppens , whose theories have often tread parallel the ancient runways, keep their investigations fresh by swimming in a more cosmopolitan realm of shadows and contemporary myth. So suffice to say some vestige of the Ancient Alien mythos will continue to evolve in their able and imaginative hands.  In fact they’ve already spawned some precursory predictions on the fatted cognitive calves of speculation that are about to be offered up by Feral House Press to all the hungry heresy hunters looking for a new fix of fringe history.… Read the rest

Continue Reading