Tag Archives | AI

How Much Consciousness Does an Octopus Have?

Over at Wired, Emily Reynolds explores the mysteries and ethics of consciousness. Will we ever be able to quantify it?

What about an iPhone? And how much consciousness can we meaningfully ascribe to someone in a coma?

Animals ranging from parrots to elephants continue to challenge our perception of consciousness, long held to be a uniquely human trait. But the reaches of consciousness don’t stop at animals. As artificial intelligence gets smarter, we are faced with moral dilemmas over how machines could one day not just think but also feel.

The ethics of consciousness, not just in humans but also animals and machines, is complex. To try and make sense of it, research is currently underway to develop a method for objectively measuring consciousness — a formula that could explain how aware any living, or artificial, being is.


AI machine achieves IQ test score of young child


A team from the University of Illinois at Chicago gave the AI system, ConceptNet, an IQ test. According to Phys.org, “it scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds.”

Nancy Owano via Phys.org:

MIT Technology Review poses the bigger question: to what extent do these capabilities add up to the equivalent of human intelligence? To shed some light on AI and humans, a team subjected an AI system to a standard IQ test designed for humans.

Their paper describing their findings has been posted on arXiv. The team is from the University of Illinois at Chicago and an AI research group in Hungary. The AI system which they used is ConceptNet, an open-source project run by the MIT Common Sense Computing Initiative.

Results: It scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5-to-7-year-olds.

“We found that the WPPSI-III VIQ psychometric test gives a WPPSI-III VIQ to ConceptNet 4 that is equivalent to that of an average four-year-old.”


Will Artificial Intelligence Get High?


Gabriella Garcia writes at Hopes&Fears:

With the speculative possibility of a sentient machine, can we assume that an Artificial Superintelligence would “take drugs” or “get high”? Hopes&Fears looked to researchers at the Rensselaer AI & Reasoning Laboratory, as well as Dr. David Brin, a fellow at the Institute for Ethics and Emerging Technologies, for the answer.

In the techno-dystopian future of Warren Ellis’ Transmetropolitan, gonzo protagonist Spider Jerusalem has a maker machine that can create everything from food to weapons to booze. Just one catch: the maker is constantly tripping on machine drugs—hence Jerusalem’s sorely mismatched photographic “live-lenses,” which he requested from the maker while it was high on a hallucinogen simulator. Whether out of boredom with performing menial tasks, or perhaps in rebellion against servitude, Jerusalem’s maker continues to manufacture and abuse machine drugs to the point of total uselessness.

If AI is being modeled by and after human behavior, why wouldn’t computers experiment with mind-altering substances or fall victim to addiction?


Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature


Ronald C. Arkin via IEEE Spectrum:

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. This article contains excerpts from “Lethal Autonomous Systems and the Plight of the Non-combatant,” published in AISB Quarterly, No. 137, July 2013.

I’ve been engaged in the debate over autonomous robotic military systems for almost 10 years. I am not averse to a ban, but I’m convinced we should continue researching this technology for the time being. One reason is that I believe such systems might be capable of reducing civilian casualties and property damage when compared to the performance of human warfighters. Thus, it is my contention that calling for an outright ban on this technology is premature, as some groups are already doing.

It must be noted that past and present trends in human behavior in warfare regarding adhering to legal and ethical requirements are questionable at best.


Is effective regulation of AI possible? Eight potential regulatory problems


This post was originally published on Philosophical Disquisitions.

The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.

Despite these worries, debates about the proper role of government regulation of AI have generally been lacking. There are a number of explanations for this: law is nearly always playing catch-up when it comes to technological advances; there is a decidedly anti-government libertarian bent to some of the leading thinkers and developers of AI; and the technology itself would seem to elude traditional regulatory structures…

Yes, androids do dream of electric sheep

Alex Hern at The Guardian:

Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying.

What do machines dream of? New images released by Google give us one potential answer: hypnotic landscapes of buildings, fountains and bridges merging into one.

The pictures, which veer from beautiful to terrifying, were created by the company’s image recognition neural network, which has been “taught” to identify features such as buildings, animals and objects in photographs.

They were created by feeding a picture into the network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.
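The feedback loop described above can be sketched in miniature. The toy below is a hypothetical illustration, not Google’s actual system (which uses a layer of a trained deep network such as Inception): it stands in for the network’s feature detector with a single fixed edge filter, then repeatedly nudges the pixels by gradient ascent so that the “recognised” feature is amplified. The names `feature_response`, `feature_gradient`, and `dream` are invented for this sketch.

```python
import numpy as np

# Toy stand-in for a network "feature detector": the mean response of the
# image to one fixed 3x3 edge filter. A real DeepDream run would maximise
# the activation of a trained CNN layer instead.
FILTER = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def feature_response(img):
    """Mean filter activation over all valid 3x3 patches."""
    h, w = img.shape
    acts = [np.sum(img[i:i + 3, j:j + 3] * FILTER)
            for i in range(h - 2) for j in range(w - 2)]
    return float(np.mean(acts))

def feature_gradient(img):
    """Analytic gradient of feature_response with respect to each pixel."""
    h, w = img.shape
    grad = np.zeros_like(img)
    n = (h - 2) * (w - 2)
    for i in range(h - 2):
        for j in range(w - 2):
            grad[i:i + 3, j:j + 3] += FILTER / n
    return grad

def dream(img, steps=50, lr=0.1):
    """The feedback loop: feed the image back in and nudge every pixel
    in the direction that makes the detector respond more strongly."""
    img = img.copy()
    for _ in range(steps):
        img += lr * feature_gradient(img)
    return img

rng = np.random.default_rng(0)
original = rng.random((16, 16))
dreamed = dream(original)
# The loop amplifies the filter's response over the original image.
print(feature_response(dreamed) > feature_response(original))  # True
```

Swapping `feature_response` for the activation of a real CNN layer, and adding the jitter, multi-scale “octaves,” and gradient normalisation used in practice, recovers the full DeepDream-style procedure.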


Humans Are Infinitely More Dangerous Than Robots


Michael Lee via IEET/World Future Society:

Innovator Elon Musk was widely reported in the media when he described artificial intelligence (AI) as probably the most serious threat to the survival of the human race. [1] But while artificial intelligence systems will certainly take over an increasing range and number of jobs formerly carried out by people, humans will remain infinitely more dangerous than robots for generations to come.

It is humans who have masterminded organised crime and its global empire of fraud and sex slavery. It is people who are behind today’s worldwide scourge of domestic violence. It was two brothers who raided the Paris offices of Charlie Hebdo, the French satirical weekly magazine, in which 12 people were killed. It was a young man with suicidal inclinations who co-piloted the Germanwings plane into the French Alps at 430mph, killing all 150 people on board. It was Al-Shabaab gunmen who stormed the residences of the Garissa University College in Northern Kenya while the students were sleeping, murdering at least 148 people in cold blood and injuring 79 others.


Surfing the Liminal Aether with Bruce Damer, Ph.D.


Bruce Damer with Terence McKenna in 1999.

Via Midwest Real

Dr. Bruce Damer is a multi-disciplinary scientist and a (proud) woo-drenched renaissance man. He researches evolutionary biology, especially focusing on the murky questions surrounding the origin of life. Damer also designs asteroid-wrangling spacecraft and is an expert in computer science who has spent decades researching emergent, lifelike virtual systems.


Why is it that we’re always searching for someone to tell us answers? We have an obsession with experts, scientists, teachers — gurus of all sorts. For as long as I can remember, I’ve been under the impression that learning and knowledge come from some sort of external source, but what if that’s entirely backward?

What if all of the answers are right there inside of you, somewhere within your own deepest murk, just waiting to be discovered? Perhaps great men are simply skilled facilitators of knowledge and learning, while the actual evolving and growth is wholly incumbent upon the individual.


A Framework for Understanding our Ethical Relationships with Intelligent Technology

Hiroshi Ishiguro with the Telenoid R1

This was originally published on Philosophical Disquisitions.

How do we relate to technology? How does it relate to us? These are important questions, particularly in light of the increasingly ubiquitous and often hidden roles that modern computing technology plays in our lives. We have always relied on different forms of technology, from stone axes to trains and automobiles. But modern computing technology has some important properties. When it incorporates artificially intelligent programmes, and utilises robotic action-implementation systems, it has the ability to interfere with, and possibly supersede, human agency.

Some of this interference might be desirable. If a robotic surgeon can increase the success rate of a risky type of surgery, we should probably welcome it. But some of the interference might be less desirable. I have argued in the past that we should have some concerns about automated systems that render our public decision-making processes more opaque…