Ray Kurzweil Wants to Make Google Sentient


Man’s best friend?

What is smarter than hoovering up the personal information and innermost thoughts of every person on the planet and then stuffing it into a single database?  Creating an artificial intelligence system capable of understanding it.  From The Guardian:

Google has bought almost every machine-learning and robotics company it can find, or at least, rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.

And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who’s probably the world’s leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was “a Manhattan project of AI”. If artificial intelligence was really possible, and if anybody could do it, he said, “this will be the team”. The future, in ways we can’t even begin to imagine, will be Google’s.


Kurzweil has worked with Google’s co-founder Larry Page on special projects over several years. “And I’d been having ongoing conversations with him about artificial intelligence and what Google is doing and what I was trying to do. And basically he said, ‘Do it here. We’ll give you the independence you’ve had with your own company, but you’ll have these Google-scale resources.'”

And it’s the Google-scale resources that are beyond anything the world has seen before. Such as the huge data sets that result from 1 billion people using Google every single day. And the Google knowledge graph, which consists of 800m concepts and the billions of relationships between them. This is already a neural network, a massive, distributed global “brain”. Can it learn? Can it think? It’s what some of the smartest people on the planet are working on next.


And once the computers can read their own instructions, well… gaining domination over the rest of the universe will surely be easy pickings. Though Kurzweil, being a techno-optimist, doesn’t worry about the prospect of being enslaved by a master race of newly liberated iPhones with ideas above their station. He believes technology will augment us. Make us better, smarter, fitter. That just as we’ve already outsourced our ability to remember telephone numbers to their electronic embrace, so we will welcome nanotechnologies that thin our blood and boost our brain cells. His mind-reading search engine will be a “cybernetic friend”. He is unimpressed by Google Glass because he doesn’t want any technological filter between us and reality. He just wants reality to be that much better.

Read the rest of the article here

Joe Allen

Joe Allen is a writer and fellow primate who wonders why we came down from the trees. A lifelong student of religion and science, he's also kept his hands dirty as a land surveyor, communal farm hand, kitchen servant, and for over a decade, by climbing steel as an entertainment rigger. His work appears in various outlets from left to right because he prefers liberty to security.

Daily interjections: @EvoPsychosis


12 Comments on "Ray Kurzweil Wants to Make Google Sentient"

  1. Anarchy Pony | Feb 23, 2014 at 12:00 pm |

    Seems legit.

  2. The Well Dressed Man | Feb 23, 2014 at 12:21 pm |

    Why else would Google have hired him to take over their entire engineering operation? I for one welcome our new skynet overlords – may we live in interesting times.

  4. Ray Kurzweil wants a lot of things, very few of which are realistic. But hey, let’s toast to my brain merging with my Playstation, or whatever the hell his masturbatory fantasy is.

  4. I think the author in that artificial “idiocy” article touched on it well:

    “Every single byte of data on earth was made, not found. And each was manufactured according to methods whose biases are baked into their very being.”

    In other words, all computational processes depend upon the prior reality of intentional consciousness (i.e. ours). Not only in what’s considered important enough to collect and quantify, but how it is interpreted. On this factor alone, it should go without saying, but the data-map is not the territory.

    Ascribing the very phrase “artificial intelligence” to computational processes has spellbound us as a society. It would be one thing if it were a harmless figure of speech. But images/metaphors shape not only the way we conceive the world, but the way we perceive it (and one should not mistake our ways of seeing the world for the world as it truly is…George Lakoff’s work on cognitive psychology is greatly informative here). We have imposed the metaphor of an artificial mind on computers and then reimported the image of a thinking machine and imposed it upon our minds.

    I recently mentioned David Bentley Hart’s latest book “The Experience of God: Being, Consciousness, Bliss” in these here comments (on an unrelated article), and this is actually a subject that he covers at some length. And, to my mind, he makes a pretty convincing case as to its faulty philosophical premises and ultimate shortcomings. It would be impossible for me to properly represent it (since it exists within a much greater philosophical/phenomenological context), and to do it justice at that, but here’s one such excerpt:

    “Neither brains nor computers, considered purely as physical systems, contain algorithms or symbols; it is only as represented to consciousness that the physical behaviors of those systems yield any intentional content. It is in the consciousness of the person who programs or uses a computer, and in the consciousness that operates through the physical apparatus of the brain, that symbols reside.

    We speak of computer memory, for instance, but of course computers recall nothing. They do not even store any ‘remembered’ information – in the sense of symbols with real semantic content, real meaning – but only preserve the binary patterns of certain electronic notations. And I do not mean simply that the computers are not aware of the information they contain; I mean that, in themselves, they do not contain any semantic information at all. They are merely silicon parchment and electrical ink on which we record symbols that possess semantic content only in respect to our intentional representations of their meanings. A computer no more remembers the files stored in it than the paper and print of this book remember my argument to this point.”

    Kurzweil is on the verge of admitting as much in the article, but asserts its feasibility in a mighty logical leap. Even if an “artificial intelligence” were to ostensibly perform actions resembling semantic understanding, it will still depend on the programmers who have, through their intelligence and creativity, written programs that allow external forces to affect and alter the functioning of the automata that run those programs. And that still says nothing of an inner “felt”/private experience of these events. A computer may identify a “cat,” but that says nothing of what it is like to experience that sensation. Knowledge – regarded as rote recall, pure imagery identification, and cross-referencing – is not intelligence (Jiddu Krishnamurti offers great insight on this topic). Rational thought – understanding, intention, will, consciousness – is not pure computation.

    Again, David Bentley Hart makes a much better (and more thorough) case for this in his book (can’t recommend it enough…it’s a great book on philosophy and consciousness/mind in general). Worth quoting at length:

    “Computational models of the mind would make sense if what a computer actually does could be characterized as an elementary version of what the mind does, or at least as something remotely like thinking. In fact, though, there is not even a useful analogy to be drawn here. A computer does not even really compute. We compute, using it as a tool. We can set a program in motion to calculate the square root of pi, but the stream of digits that will appear on screen will have mathematical content only because of our intentions, and because we – not the computer – are running algorithms. The computer, in itself, as an object or a series of physical events, does not contain or produce any symbols at all; its operations are not determined by any semantic content but only by binary sequences that mean nothing in themselves. The visible figures that appear on the computer’s screen are only the electronic traces of sets of binary correlates, and they serve as symbols only when we represent them as such, and assign them intelligible significances. The computer could just as well be programmed so that it would respond to the request for the square root of pi with the result ‘Rupert Bear’; nor would it be wrong to do so, because an ensemble of merely material components and purely physical events can be neither wrong nor right about anything – in fact, it cannot be about anything at all. Software no more ‘thinks’ than a minute hand knows the time or the printed word ‘pelican’ knows what a pelican is. We might just as well liken the mind to an abacus, a typewriter, or a library. No computer has ever used language, or responded to a question, or assigned a meaning to anything. No computer has ever so much as added two numbers together, let alone entertained a thought, and none ever will. 
    The only intelligence or consciousness or even illusion of consciousness in the whole computational process is situated, quite incommutably, in us; everything seemingly analogous to our minds in our machines is reducible, when analyzed correctly, only back to our own minds once again, and we end where we began, immersed in the same mystery as ever. We believe otherwise only when, like Narcissus bent above the waters, we look down at our creations and, captivated by what we see reflected in them, imagine that another gaze has met our own.
    …when a believer in artificial intelligence claims that the electrochemical operations of a brain are a kind of computation, and that consciousness arises from that computation, he or she is saying something utterly without meaning. All computation is ontologically dependent on consciousness, simply said, and so computation cannot provide the foundation upon which consciousness rests. One might just as well attempt to explain the existence of the sun as the result of the warmth and brightness of summer days.”

    Also, it’s worth mentioning that Garry Kasparov (mentioned in the article) played Deep Junior (the successor to Deep Blue) to a draw in 2003. And Kasparov never faced an intelligent entity called “Deep Blue,” but, to again quote Dr. Hart, “the conscious intentions of its programmers, who used its circuitry to run algorithms that were largely the distillation of a vast archive of past chess matches, some of them Kasparov’s own; the computer was merely the alembic through which the distillate flowed.”

    • mannyfurious | Feb 23, 2014 at 4:24 pm |

      Such a great reply. It’s too bad that so much of what you (and DBH) are trying to get across is so foreign to the normal way most of us think about and make sense of the world, that most people won’t actually understand what is being said. They won’t understand how computers are simply a tool that needs our consciousness to be functional to any extent, and are not “minds,” or any functional simulacrum thereof, in and of themselves. It’s the confusion of metaphor (as you wrote) for the real thing, which people, unless they pay attention to their language/grammar and how it influences how we all experience our world, will never see.

      • Well stated, yourself! I really have Robert Anton Wilson (and, by extension, Alfred Korzybski) to thank for initially cluing me in to the power of metaphor/framing devices (especially when unrecognized as such) and how they go into structuring our very thoughts and perceptions (and influence our decisions, behaviors, experiences, etc…). That’s everything from our guiding/governing cultural narratives to individual words/frames/rhetorical flourishes (and, like you said, language/grammar as such). The word “frame” even implies something deeper, because it reveals that anything not fitting within the frame of that picture is invisible to us. There’s plenty of shit that still escapes me, and if I happen to consider it or if some of its deeper assumptions/implications are spelled out for me (which is more often the case), it catches me by surprise.

    • Anarchy Pony | Feb 23, 2014 at 4:47 pm |

      Slow clap.

  5. Thurlow Weed | Feb 23, 2014 at 5:47 pm |

    HAL: Dave, this conversation can serve no purpose anymore.

  6. Skynet anyone?

  7. WTFMFWOMG | Feb 24, 2014 at 2:31 am |

    The “singularity” will really be a sudden rise in human intelligence, with access to information once filtered through the old media paradigm. This should be the first sign, a sudden “democratization” of data resulting in proverbial light bulbs going off in people’s brains everywhere, raising the collective level of consciousness. I hope this happens before we let the machines take over. I like to think that this has affected me a little, broadened my worldview, and I see it in others.
