R.U. Sirius and Jay Cornell are the authors of Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity. Transhumanism has been a hot but divisive topic on Disinformation, so we felt there was a need to foster greater understanding of just what transhumanism is, and is not. Hence the book's format: an A-Z encyclopedia.
We asked Jay and R.U. to answer a few questions about the book and the topic in general:
RU, you have long been associated with the transhumanism movement; can you tell us how you got hooked and what your personal interest in transhumanism is?
RU: In a sense, I go way back to the 1970s, although I wasn’t familiar with the term transhumanism then. I think the only person using it at that time was a guy named F.M. Esfandiary. I was, if you will, turned on and tuned in by Timothy Leary and his cohort in conscious evolution Robert Anton Wilson. Some of the ideas began to appear in the mid-‘70s, specifically the idea of Space Migration Intelligence Increase Life Extension (SMI2LE) for an expansive human future. And there was Leary’s Future History series from the late ‘70s, which I still think is some kind of slightly off-kilter key to a potential positively mutated humanity. I mean, it’s not the future that has happened and probably won’t be the future that will happen but it’s the future that should happen.
In the ‘90s, MONDO 2000 dealt with all the transhumanist issues and even ran one of the earliest interviews in which Vernor Vinge, the SF author who really defined the idea of a technological singularity, expanded on that notion. MONDO wasn’t precisely transhumanist or extropian. MONDO was ambivalent about it all. I’m still ambivalent, and I hope that comes across in the book. I like to have one foot out and one foot in with any movement or ism that seems like it might be useful.
In 2008, I was hired by h+, the transhumanist organization, to edit a magazine, also called h+. That was a great experience and it got me and kept me in touch with a lot of the technologies and sciences that are starting to come into play. But I never joined. I generally feel like a transhumanist when someone says something simple or stupid about transhumanism. At which point I think, no, you don’t get it and I rise to the defense. But I can also rise to satirize it and some of that is also in the book. Human behavior is always pretty funny.
I have to ask, do you ever have nightmares about possible adverse outcomes?
RU: I had a psychedelic experience where I saw what it would be like to be stuck in empty space fully conscious with no chance of it ever changing or ending. I imagine this would be what it would be like to get uploaded into a virtual world and then… there’s a glitch!
More generally, we have segments in the book on Warbots and there’s plenty in there that acknowledges potential dystopias — a posthuman/human class divide, total surveillance (we’re there!), biological horror whether planned or as unintended consequence. I wouldn’t want to characterize this as a utopian book.
Actually, in the literal sense, my nightmares are usually about being on a bus that starts going in the wrong direction and then realizing I’m in my underwear.
Jay: My nightmare involves the downside of genetic engineering getting cheaper and easier. While I expect it to lead to many positive breakthroughs, I worry that some terrorist or apocalyptic nut is going to try to create super-smallpox or super-Ebola.
What would either of you say to Elon Musk, who recently told Walter Isaacson that a “super-intelligent” machine might decide to destroy human life and held up AI as our greatest existential threat? Or for that matter Stephen Hawking, who told the BBC: “The development of full artificial intelligence could spell the end of the human race.”
R.U.: I’ve never been able to work up a fear of the robot apocalypse. I don’t know why, but that one seems wrong to me. Now, you could have military or weaponized AI in the hands of states or non-state actors, with human beings being the ones actually responsible for apocalyptic results.
I really don’t see why we can’t ensure that our self-made gods take specicide against humans off the table. I mean, I’ve heard the reasons why people fear that, but they just don’t resonate with me.
I think this may be a fear for comfortable and privileged nerds. If you’re suffering from deprivation and the potentials are expressed to you clearly, then an intelligence brilliant enough to resolve our resource, health and social conflict problems fairly quickly would seem to be a good thing.
On the other hand, the possibility that we could be brainwashed or herded into a limited and limiting quasi-utopia in which nothing that isn’t approved by, for instance, the government of China, or that doesn’t have a deal with Apple, can enter our implant/neocortex… that seems palpably plausible. You’ll be just falling into a blissful sleep or about to have an enhanced orgasm when you’ll suddenly be forced to pay attention to an ad beamed directly into your brain that makes you just have to buy some delicious kibble immediately.
Jay: It’s not a totally unreasonable fear, but like R.U. I’m not losing sleep over the Skynet scenario. There’s a big difference between an artificial super-intelligence and one that can actually do things in the physical world, or even survive without humans. So let’s not give it the ability to physically maintain itself, and definitely not the ability to launch nukes.
Another issue is that we tend to imagine artificial intelligence as somehow human, with feelings and motivations. But our feelings and motivations are rooted in our biology, which long predates our intelligence. We’re animals with intelligence added on top. So would an artificial intelligence “want” anything? I assume we’d create it to want more knowledge, but would it want anything else that we hadn’t intended? If it did have desires, why would it want to destroy the humans who created it, and are needed to maintain it?
On the other hand, we all know that “sanity” can be a fuzzy concept when it applies to humans. I suspect that we’ll discover that creating an artificial intelligence is one thing, but that creating a sane one is a different issue. There’s a common prejudice among smart people that more intelligence automatically solves problems, and I think this colors our thinking about artificial super-intelligences. We should not assume that they will be “wise” in the human sense of the term.
We’re seeing actual examples of AI manifesting right now. What other transhumanist trends are happening in the present as opposed to speculatively in the future?
R.U.: We’re probably one iteration from replacement parts being better than the originals. I’m not sure that people will want to have the old ones removed and replaced. Maybe if they looked hot. Nanobots that go inside the body are coming online. There’s an apparent breakthrough in cryonics, which surprises the hell out of me. We covered it in our addendum. Doctors are actually placing patients in suspended animation to perform operations on them. We’re 3D printing organs. Actually, 2014 was a year that was full of astonishing breakthroughs. There are improved cloaking devices, bionic eyes, an electronic patch that delivers drugs through the skin….
William Gibson said the future is already here, it’s just not evenly distributed. It was a bit glib when he said it in the early ‘90s, but it’s starting to ring true. Getting this stuff to start having a real and positive impact on ordinary people, or even rich people in most cases, is slow going.
Jay: I’m quite hopeful about Perforene, a graphene membrane being developed by Lockheed Martin. It promises to make desalination much cheaper. Given the fresh water shortage in many parts of the world, that could be an incredible boon. And if you want to make a big dent in global warming, turn some deserts green.
Do you both believe that those with the means to take advantage of transhumanist technology will become almost a separate species from plain old homo sapiens? What are the ethics and pitfalls of unequal access?
R.U.: I think that’s a legitimate concern. I’ve proposed Steal This Singularity, a play on the spirit of Abbie Hoffman’s 1971 Steal This Book. I described it as the notion that the current and future extreme technological society should not be dominated by Big Capital, Authoritarian States or the combination thereof.
It’s a playful concept, but I think everything about how we will define the future is still in play. People are restless and rebellious. There’s a tremendous potential for engagement and people power and individual autonomy, and I think a popular awareness of the positive aspects of what some of these technologies can do for us, or with us, only makes all of that more critical and exciting.
Jay: It is a concern. Nobody wants a mentally enhanced, immortal transhumanist overclass, except the people who imagine themselves in it. But the social inequities created by technological advances seem to work themselves out. The rich get early access to new technology, but at a very high price. Forty years ago, the rich were paying hundreds or thousands of dollars for the first LED digital watches. That helped fund the technology, and all they got was a few years of looking hip, and they had to press a button to see the time. The rich have done the same with automobiles, air travel, and in hundreds of other ways.
But being an early adopter isn’t risk-free, as thalidomide, metal-on-metal hip implants, and many other examples have shown. I’m OK with rich people getting the first brain enhancements and immortality treatments. Let’s see how well they work before we worry about inequality.
What would your recommendations be to readers who want to be prepared for (and perhaps partake in) the transhumanism revolution?
R.U.: Well, read the book, of course. I mean, you can follow the cues in there. It’s really a place to start. There are dozens of ideas, technologies, people, organizations you can look into from there.
Jay: Don’t just read the book, buy the book. The buying part is absolutely crucial.
Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity is available now at Amazon and other good bookstores. More information on the book is available at the official site.
R. U. Sirius (Ken Goffman) is a writer, editor and well-known digital iconoclast. He was co-publisher of the first popular digital culture magazine, MONDO 2000, from 1989 to 1993, and co-editor of the popular book MONDO 2000: A User’s Guide to the New Edge. He has written about technology and culture for Wired, The Village Voice, Salon, BoingBoing, Time, the S.F. Chronicle, Rolling Stone, and Esquire, among other publications. Sirius/Goffman also lectures widely, having appeared as part of the Reality Hacking series at Trinity University in San Antonio, Texas, at the TEDx conference in Brussels, and at San Francisco’s popular Dorkbot event. Visit him at StealThisSingularity.com.
Photo by Bart Nagel
Jay Cornell is a writer, editor, web developer, and little-known semi-iconoclast. He is the former managing editor of h+ magazine and the former associate publisher of Gnosis magazine. He is currently senior web developer at Landkamer Partners, and a member of the Board of Advisors of the Lifeboat Foundation, a nonprofit organization dedicated to defending humanity from existential risks. Email him, but note that spammers and scammers will be found and consumed by swarms of nanobots.
Photo by Bart Nagel