Tag Archives | Artificial Intelligence

The Threat And Promise Of Humans With Technologically-Boosted Superintelligence

radically amplified human intelligence

Via Sentient Developments, futurist and Singularity Summit co-organizer Michael Anissimov on radically amplified human intelligence (IA) as potentially even more powerful, and dangerous, than artificially intelligent machines:

The real objective of IA is to create “super-Einsteins”, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

The first step will be to create a direct neural link to information. Think of it as a “telepathic Google.”

The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to imagine a complex blueprint in high detail, or to learn new blueprints quickly.

The third step involves the genuine augmentation of pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts.

Read the rest
Continue Reading

Program Uses Algorithms To Tweet As You After Your Death

If social media is what you did while alive, does this mean you are living forever? CNET News on the app Liveson, which continues to generate tweets based on your personality and syntax, in a sense preserving you into eternity:

You might think your online fans will lose interest when you kick the bucket, but an upcoming app says it will let you keep tweeting from beyond the grave.

LivesOn will host Twitter accounts that continue to post updates when users [die]. Developers claim the app’s artificial-intelligence engine will analyze your Twitter feed, learn your likes and syntax, and then post tweets in a similar vein when you’re gone. You’ll become an AI construct, a proverbial ghost in the machine.

The app will launch in March. People who sign up will be asked to appoint an executor who will have control of the account.

Read the rest

Continue Reading

Has The Dystopian Singularity Already Occurred, In The Form Of Corporations?

The prime futurist fear is that humanity will create some advanced technology with an ostensibly positive purpose, but it will buck our control and undo the world as it pursues some twisted version of the ends it was programmed to achieve. Quiet Babylon writes that this artificially-sentient oppressor has already arrived:

One of my favorite recurring tropes of AI speculation/singulatarian deep time thinking is meditations on how an evil AI might destroy us.

Here’s an example: The scenario imagined is where there is a button that humans push if the AI gets an answer right and the AI wants to get a lot of button presses, and eventually it realizes that the best way to get button presses is to kill all the humans and institute a rapid fire button-pressing regime.

You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.

Read the rest
Continue Reading

Would You Switch Off A Robot If It Begged Not To Be?

Will our machines acquire the ability to convince us that they are in fact alive? An unsettling study at the the Netherlands' Eindhoven University of Technology:
This video shows an experiment in which participants are asked to switch off a robot and thereby killing it. The robot begs for its live and we measured how long the participants hesitated. The perception of life largely depends on the observation of intelligent behavior. Even abstract geometrical shapes that move on a computer screen are being perceived as being alive… in particular if they change their trajectory nonlinearly or if they seem to interact with their environments. The robot's intelligence had a strong effect on the users’ hesitation to switch it off, in particular if the robot acted agreeable. Participants hesitated almost three times as long to switch off an intelligent and agreeable robot (34.5 seconds) compared to an unintelligent and non-agreeable robot (11.8 seconds).
Continue Reading

Ray Kurzweil Designing Super-Intelligent Robot Assistant At Google

Are you ready for a virtual personal assistant which “knows better than you” constantly injecting itself into your life? A preview of things to come, via Technology Review:

Famed AI researcher and singularity forecaster Ray Kurzweil recently shed some more light on what his new job at Google will entail. It seems that he does, indeed, plan to build a prodigious artificial intelligence, which he hopes will understand the world to a much more sophisticated degree than anything built before–or at least that will act as if it does.

Kurzweil’s AI will be designed to analyze the vast quantities of information Google collects and to then serve as a super-intelligent personal assistant. He suggests it could eavesdrop on your every phone conversation and email exchange and then provide interesting and important information before you ever knew you wanted it.

Read the rest

Continue Reading

The End To The Era Of Biological Robots

Via Skeptiko, a fascinating interview with neuroscientist Dr. Mario Beauregard, who argues that, like the transition from classical to quantum physics, a revolution is coming in the way science will no longer perceive humans as being merely “biological robots”:

What we call the “modern scientific worldview”… is based on classical physics and this view is based on a number of fundamental assumptions like materialism, determinism, reductionism. So applied to mind and brain it means that, for instance, everything in the universe is only matter and energy that form the brain as a physical object, too, and the mind can be reduced strictly to electrical and chemical processes in the brain.

It means also that everything is determined from a material or physical point of view, so we don’t have any freedom. We’re like biological robots, totally determined by our neurons and our genes and so on. And so we’re reduced to material objects and we are determined by material processes.

Read the rest
Continue Reading

The Pressing Conundrum Of Creating Moral Machines

Via the New Yorker, Gary Marcus on how we will soon need our machines to be ethical, but have no idea how to do this:

Google’s driver-less cars are already street-legal in California, Florida, and Nevada, and some day similar devices may not just be possible but mandatory. Eventually automated vehicles will be able to drive better, and more safely than you can; within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car.

That moment will signal the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge when an errant school bus carrying forty children crosses its path. Should your car swerve, risking the life of its owner (you), in order to save the children, or keep going?

Many approaches to machine ethics are fraught [with problems].

Read the rest
Continue Reading

AI on the DSM IV – A Thought Experiment from David J. Kelley

With the recent news coverage of scientists discussing robot uprisings and the possible dangers of artificial intelligence, it’s interesting to see a direct thought experiment along these lines from Microsoft UX developer David J. Kelley. In a recent h+ Magazine article, Interview with an AI (Artificial Intelligence) – A Subtle Warning…,  Kelley provides an outline for an experiment that seeks to gain some understanding of how an AI would respond during an interview. As he explains it:

“I was thinking about ideas for an article on my train ride home from the experience lab I work in, and it came to me that it would be interesting to actually have an interview with an AI only a little bit better than us, maybe one that is one of the first kinds of true AI and for fun let’s say it has lived with us for a few decades incognito. But how can we do that?

Read the rest
Continue Reading

Think Tank To Study The Risk Of A Genocidal Robot Uprising

Good to know that we may finally have an answer on this. The BBC reports:

Cambridge researchers at the Centre for the Study of Existential Risk (CSER) are to assess whether technology could end up destroying human civilisation. The scientists said that to dismiss concerns of a potential robot uprising would be “dangerous”.

Fears that machines may take over have been central to the plot of some of the most popular science fiction films. But despite being the subject of far-fetched fantasy, researchers said the concept of machines outsmarting us demanded mature attention. “The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” the researchers write.

The CSER project has been co-founded by Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn. Prof. Price said that as robots and computers become smarter than humans, we could find ourselves at the mercy of “machines that are not malicious, but machines whose interests don’t include us”.

Read the rest
Continue Reading