Tag Archives | AI

Will Artificial Intelligence Get High?

Gabriella Garcia writes at Hopes&Fears:

With the speculative possibility of a sentient machine, can we assume that an Artificial Superintelligence would “take drugs” or “get high”? Hopes&Fears turned to researchers at the Rensselaer AI & Reasoning Laboratory, as well as Dr. David Brin, a fellow at the Institute for Ethics and Emerging Technologies, for the answer.

In the techno-dystopian future of Warren Ellis’ Transmetropolitan, gonzo protagonist Spider Jerusalem has a maker machine that can create everything from food to weapons to booze. Just one catch: the maker is constantly tripping on machine drugs—hence Jerusalem’s sorely mismatched photographic “live-lenses,” which he requested from the maker while it was high on a hallucinogen simulator. Whether out of boredom with performing menial tasks, or perhaps in rebellion against servitude, Jerusalem’s maker continues to manufacture and abuse machine drugs to the point of total uselessness.

If AI is being modeled by and after human behavior, why wouldn’t computers experiment with mind-altering substances or fall victim to addiction?

Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature

Ronald C. Arkin via IEEE Spectrum:

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. This article contains excerpts from “Lethal Autonomous Systems and the Plight of the Non-combatant,” published in AISB Quarterly, No. 137, July 2013.

I’ve been engaged in the debate over autonomous robotic military systems for almost 10 years. I am not averse to a ban, but I’m convinced we should continue researching this technology for the time being. One reason is that I believe such systems might be capable of reducing civilian casualties and property damage when compared to the performance of human warfighters. Thus, I contend that calling for an outright ban on this technology is premature, as some groups are already doing.

It must be noted that human adherence to legal and ethical requirements in warfare, both past and present, has been questionable at best.

Is effective regulation of AI possible? Eight potential regulatory problems

This post was originally published on Philosophical Disquisitions.

The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.

Despite these worries, debates about the proper role of government regulation of AI have generally been lacking. There are a number of explanations for this: law is nearly always playing catch-up when it comes to technological advances; there is a decidedly anti-government libertarian bent to some of the leading thinkers and developers of AI; and the technology itself would seem to elude traditional regulatory structures…

Yes, androids do dream of electric sheep

Alex Hern at The Guardian:

Google sets up feedback loop in its image recognition neural network – which looks for patterns in pictures – creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying.

What do machines dream of? New images released by Google give us one potential answer: hypnotic landscapes of buildings, fountains and bridges merging into one.

The pictures, which veer from beautiful to terrifying, were created by the company’s image recognition neural network, which has been “taught” to identify features such as buildings, animals and objects in photographs.

They were created by feeding a picture into the network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.
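
The loop the Guardian describes can be sketched in miniature. The real system does gradient ascent through a deep convolutional network; the toy version below is an illustrative assumption, not Google’s code: a single linear “feature detector” (a dot product with a fixed filter) stands in for the network, so the recognise-then-emphasise feedback loop is visible in a few lines.

```python
import random

# Toy sketch of the feedback loop: recognise a feature, nudge the
# image to emphasise it, feed the result back in, and repeat.
random.seed(0)
SIZE = 64
# A hypothetical "learned" feature the network looks for.
feature_filter = [random.gauss(0, 1) for _ in range(SIZE)]

def activation(image):
    """How strongly the 'network' sees its feature in the image."""
    return sum(p * f for p, f in zip(image, feature_filter))

def amplify(image, step=0.1):
    """One gradient-ascent step: for a linear detector,
    d(activation)/d(pixel) is just the filter itself."""
    return [p + step * f for p, f in zip(image, feature_filter)]

image = [random.gauss(0, 1) for _ in range(SIZE)]  # random starting picture
history = [activation(image)]
for _ in range(20):                # the feedback loop
    image = amplify(image)
    history.append(activation(image))

# Each pass makes the feature strictly more prominent; iterate long
# enough and the picture is modified "beyond all recognition".
assert all(later > earlier for earlier, later in zip(history, history[1:]))
```

In the real DeepDream procedure, the same logic runs on every pixel via backpropagation, which is why arbitrary photos sprout dogs, eyes, and pagodas wherever the network faintly “recognises” them.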

Humans Are Infinitely More Dangerous Than Robots

Michael Lee Via IEET/World Future Society:

Innovator Elon Musk was widely reported in the media when he described artificial intelligence (AI) as probably the most serious threat to the survival of the human race. [1] But while artificial intelligence systems will certainly take over an increasing range and number of jobs formerly carried out by people, humans will remain infinitely more dangerous than robots for generations to come.

It is humans who have masterminded organised crime and its global empire of fraud and sex slavery. It is people who are behind today’s worldwide scourge of domestic violence. It was two brothers who raided the Paris offices of Charlie Hebdo, the French satirical weekly magazine, killing 12 people. It was a suicidal young co-pilot who flew the Germanwings plane into the French Alps at 430mph, killing all 150 people on board. It was Al-Shabaab gunmen who stormed the residences of Garissa University College in Northern Kenya while the students were sleeping, murdering at least 148 people in cold blood and injuring 79 others.

Surfing the Liminal Aether with Bruce Damer, Ph.D.

Bruce Damer with Terence McKenna in 1999.

Via Midwest Real

Dr. Bruce Damer is a multi-disciplinary scientist and a (proud) woo-drenched renaissance man. He researches evolutionary biology, especially focusing on the murky questions surrounding the origin of life. Damer also designs asteroid-wrangling spacecraft and is an expert in computer science who has spent decades researching emergent, lifelike virtual systems.

Why is it that we’re always searching for someone to tell us answers? We have an obsession with experts, scientists, teachers — gurus of all sorts. For as long as I can remember, I’ve been under the impression that learning and knowledge come from some sort of external source, but what if that’s entirely backward?

What if all of the answers are right there inside of you, somewhere within your own deepest murk, just waiting to be discovered? Perhaps great men are simply skilled facilitators of knowledge and learning, while the actual evolving and growth is wholly incumbent upon the individual.

A Framework for Understanding our Ethical Relationships with Intelligent Technology

Hiroshi Ishiguro with the Telenoid R1

This was originally published on Philosophical Disquisitions.

How do we relate to technology? How does it relate to us? These are important questions, particularly in light of the increasingly ubiquitous and often hidden roles that modern computing technology plays in our lives. We have always relied on different forms of technology, from stone axes to trains and automobiles. But modern computing technology has some important properties. When it incorporates artificially intelligent programmes, and utilises robotic action-implementation systems, it has the ability to interfere with, and possibly supersede, human agency.

Some of this interference might be desirable. If a robotic surgeon can increase the success rate of a risky type of surgery, we should probably welcome it. But some of the interference might be less desirable. I have argued in the past that we should have some concerns about automated systems that render our public decision-making processes more opaque…

Scientist Creates Drones That Fly Autonomously and Learn New Routes

Drone manufactured by Blue Bear Systems Research Ltd.
Credit: Image courtesy of Investigación y Desarrollo

Skynet is born.

Investigación y Desarrollo via ScienceDaily:

Drones say goodbye to pilots. Aiming to achieve fully autonomous flight for these aerial vehicles, researcher José Martínez Carranza of the National Institute of Astrophysics, Optics and Electronics (INAOE) in Mexico developed a vision and learning system that controls and navigates them without relying on a GPS signal or trained personnel.

Martínez Carranza devised an innovative method for estimating the position and orientation of the vehicle, allowing it to recognize its environment and replacing the GPS location system with low-cost sensors such as accelerometers, gyroscopes and video cameras.

The main idea was to avoid GPS altogether: video cameras on board the vehicle supply visual information, and an algorithm uses that information to locate and orient the drone during its flight.
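
The excerpt describes estimating pose from low-cost inertial sensors and cameras rather than GPS. As a minimal, self-contained sketch of one ingredient of that idea (not the INAOE system itself), the toy complementary filter below fuses a drifting gyroscope with a noisy accelerometer to estimate a single orientation angle; all sensor values are made-up sample data.

```python
# Sketch of gyro/accelerometer fusion with a complementary filter,
# one building block of GPS-free attitude estimation (hypothetical
# numbers, not the INAOE system).

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifts over time)
    with the accelerometer's gravity-derived angle (noisy but
    drift-free). alpha weights the gyro path."""
    angle = accel_angles[0]
    estimates = [angle]
    for rate, accel_angle in zip(gyro_rates, accel_angles[1:]):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Simulated hover: the true angle is 0 degrees. The gyro reads a small
# constant bias (so pure integration drifts away), while the
# accelerometer is noisy but centred on the true angle.
gyro = [0.5] * 200                                # deg/s bias
accel = [((-1) ** i) * 2.0 for i in range(201)]   # noisy, mean ~0

est = complementary_filter(gyro, accel)
drift_only = sum(r * 0.01 for r in gyro)          # 1.0 deg of pure-gyro drift

# The fused estimate stays far closer to the true angle than
# integrating the biased gyro alone would.
assert abs(est[-1]) < drift_only
```

The same blend-the-smooth-sensor-with-the-drift-free-one logic underlies the more elaborate estimators (Kalman filters, camera-based visual odometry) that make flight without a GPS fix practical.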

Are AI-Doomsayers like Skeptical Theists? A Precis of the Argument

This was originally published on Philosophical Disquisitions

Some of you may have noticed my recently published paper on existential risk and artificial intelligence. The paper offers a somewhat critical perspective on the recent trend for AI-doomsaying among people like Elon Musk, Stephen Hawking and Bill Gates. Of course, it doesn’t focus on their opinions; rather, it focuses on the work of the philosopher Nick Bostrom, who has written the most impressive analysis to date of the potential risks posed by superintelligent machines.

I want to try and summarise the main points of that paper in this blog post. This summary comes with the usual caveat that the full version contains more detail and nuance. If you want that detail and nuance, you should read that paper. That said, writing this summary after the paper was published does give me the opportunity to reflect on its details and offer some modifications to the argument in light of feedback/criticisms…

The Automation Loop and its Negative Consequences

I’m currently reading Nicholas Carr’s book The Glass Cage: Where Automation Is Taking Us. I think it is an important contribution to the ongoing debate about the growth of AI and robotics, and the future of humanity. Carr is something of a techno-pessimist (though he may prefer ‘realist’) and the book continues the pessimistic theme set down in his previous book The Shallows (which was a critique of the internet and its impact on human cognition). That said, I think The Glass Cage is a superior work. I certainly found it more engaging and persuasive than his previous effort.

Anyway, because I think it raises some important issues, many of which intersect with my own research, I want to try to engage with its core arguments on this blog. I’ll do so over a series of posts. I start today with what I take to be Carr’s central critique of the rise of automation…
