Slime mold finds the quickest path between food sources and has even shown signs of memory, despite not having a brain. A human-like robot face has been hooked up so that its expressions are controlled by the electrical signals produced when yellow slime mold shies away from light or moves eagerly towards food. Physarum polycephalum is a common yellow slime mold that ranges in size from several hundred micrometres to more than one metre. It is an aggregation of hundreds or thousands of identical unicellular organisms that merge into one huge "cell" containing all their nuclei.
Tag Archives | Robotics
This video demonstrates the hexapod router cutting a 3D face in high-density foam.
Venue interviews one of the more interesting professors you’ll run into at any university, Ken Goldberg:
… Read the rest
The Hayward Fault runs through the center of the UC Berkeley campus, famously splitting the university’s football stadium in half from end to end. It has, according to the 2008 Uniform California Earthquake Rupture Forecast, a thirty-one percent probability of rupturing in a magnitude 6.7 or greater earthquake within the next thirty years, making it the likeliest site for the next big California quake.
Nonetheless, for the majority of East Bay residents, the fault is out of sight and out of mind—for example, five out of six Californian homeowners have no earthquake insurance.
Meanwhile, three-quarters of a mile north of Memorial Stadium, and just a few hundred yards west of the fault trace, is the office of Ken Goldberg, Professor of Industrial Engineering and Operations Research at Berkeley.
Humans will evolve and adapt to enhanced science and technology, just as humans and animals in the past evolved to adapt to their natural circumstances. The artist sees this as our destiny, not as a negative, gloomy dystopia. He considers it important to escape from human bondage in order to achieve harmony between humans and machines, a harmony he thinks can be reached through religious practice and spiritual enlightenment. The machine man was based on the artist himself, but this "I" is no longer a past "I". His own existence vanishes, and a new being, the machine man, emerges. Z is thus a process of becoming the perfect "I".
This video shows an experiment in which participants are asked to switch off a robot, thereby killing it. The robot begs for its life, and we measured how long the participants hesitated. The perception of life largely depends on the observation of intelligent behavior: even abstract geometrical shapes moving on a computer screen are perceived as alive, in particular if they change their trajectory nonlinearly or seem to interact with their environment. The robot's intelligence had a strong effect on the users' hesitation to switch it off, in particular if the robot acted agreeably. Participants hesitated almost three times as long to switch off an intelligent and agreeable robot (34.5 seconds) as to switch off an unintelligent and non-agreeable one (11.8 seconds).
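The "almost three times" figure follows directly from the two reported mean hesitation times; a quick check (variable names here are illustrative, not from the study):

```python
# Mean hesitation times reported in the experiment (seconds)
intelligent_agreeable = 34.5
unintelligent_nonagreeable = 11.8

# Ratio of hesitation times: ~2.9, i.e. almost three times as long
ratio = intelligent_agreeable / unintelligent_nonagreeable
print(f"Hesitation ratio: {ratio:.2f}x")
```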
Via the New Yorker, Gary Marcus on how we will soon need our machines to be ethical, but have no idea how to do this:
… Read the rest
Google’s driver-less cars are already street-legal in California, Florida, and Nevada, and some day similar devices may not just be possible but mandatory. Eventually automated vehicles will be able to drive better, and more safely, than you can; within two or three decades the difference between automated driving and human driving will be so great that you may not be legally allowed to drive your own car.
That moment will signal the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge when an errant school bus carrying forty children crosses its path. Should your car swerve, risking the life of its owner (you), in order to save the children, or keep going?
Many approaches to machine ethics are fraught [with problems].
Good to know that we may finally have an answer on this. The BBC reports:
… Read the rest
Cambridge researchers at the Centre for the Study of Existential Risk (CSER) are to assess whether technology could end up destroying human civilisation. The scientists said that to dismiss concerns of a potential robot uprising would be “dangerous”.
Fears that machines may take over have been central to the plot of some of the most popular science fiction films. But despite being the subject of far-fetched fantasy, researchers said the concept of machines outsmarting us demanded mature attention. “The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” the researchers write.
The CSER project has been co-founded by Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn. Prof. Price said that as robots and computers become smarter than humans, we could find ourselves at the mercy of “machines that are not malicious, but machines whose interests don’t include us”.
It’s easy to understand why presidents, politicians and the military love robots. They don’t talk back. They follow orders. You press a button and they do what they are told. They are considered supremely efficient, and supremely lethal.
These modern killing machines represent science fiction reborn as science “faction.” Robots and drones don’t burn Korans or pose with the heads of their captives on the battlefield. (Robots also don’t protest wars.) Lose the human factor and you get silent but deadly total destruction.
And that’s why drone warfare has become such a weapon of choice. You have video game jockeys sitting on their asses in front of consoles of digital displays at an Air Force base outside Las Vegas, targeting suspected terrorists in Afghanistan. After a couple of quick kills, they take the rest of the day off.
It’s only later that we get the reports of civilians decimated as collateral damage.… Read the rest