Can We Make the Hardware Necessary for Artificial Intelligence?

“Logic is an organized way of going wrong with confidence.” – Robert Heinlein

This is my opinion of what might be, not What THE FUTURE!!! Will Be!

My POV is hardware-driven; I do electronic design. I don’t present myself as “an authority” on Artificial Intelligence, much less “an authority” on sentient artificial intelligence – until those are Real Things, there is no such thing as an authority in that field. That said, if the hardware doesn’t exist to support sentient AI, it doesn’t matter how wonderful the software is.

The following is why I’ve been saying in a number of places that I expect hardware able to run a synthetic consciousness in ~20 years; @2045singularity on Twitter asked me to clarify what I meant.

1. I assume that if the physical operations of a human brain can be simulated in real time, programs that simulate human consciousness in real time can be part of that simulation. I think that assumption is implicit in the thinking of just about everyone who believes sentient AI is possible.

2. Simulation speed is driven by the available semiconductor technology, which determines how fast CPUs, DRAM, and interconnects (within chips and across motherboards) can run.

3. A recent research paper says they can simulate the operation of a human brain at ~1/1500 of real-time speed using the resources of a massively parallel, data-center-class computing system. (This is an oversimplification, but close enough for a brief discussion; I recommend reading the actual paper.) Another paper from Google, discussing their emulation of a portion of the visual cortex, said they expect to be able to simulate a full human visual cortex in 5 years. As I see it, that paper roughly confirms IBM’s number.

4. Therefore, to get a data-center-size CPU network capable of simulating a human brain with consciousness, we need computing systems running ~1500x as fast as current systems do. One can’t do this simply by throwing 1500x as many computers at the problem: electrical power problems aside (ask the NSA about that), the software is going to be hard enough to write without adding the massive latency problems implicit in interconnecting millions of processors scattered over a significant geographic area.

5. Moore’s Law says transistor count per chip doubles every two years – a rough proxy for processor, memory, and interconnect speeds. Ten Moore’s Law doublings, i.e. 2 to the 10th power, gives 1024x in 20 years; I’m guessing advances in networking tech might stretch that into an effective 1500x processing-speed increase. Otherwise, 11 Moore’s Law doublings (2048x) = 22 years.

6. 20-22 years can therefore be seen as a rough minimum to get to a hardware platform capable of running synthetic consciousness IF Moore’s Law continues to hold true – it might not.

7. Moore’s Law is driven by making a profit off multibillion-dollar CPU fabs, with lots of equipment replaced on a 2-year cycle driven largely by ever-growing Windows OS code – not, so far, by physical limits. But physical limits may end Moore’s Law computing-power growth in the next 20 years, though spintronics and/or graphene transistors, FRAM, and other new types of memory will probably drive a few more Moore’s Law iterations. IOW, we might not get to the “magic” 1500x number. Can we get to sentient AI if we don’t? Good question.

8. It might be possible to write a workable AI, or software which will self-evolve into human-equivalent synthetic consciousness, using a fraction of the computing resources I estimate. However, I think if this is true, it will be a *large* fraction of those resources – not 1/100 or even 1/10, more like 1/2 or 1/4. A factor of 4 is two Moore’s Law doublings, i.e. 4 years, so instead of 20-22 years, 1/4 of the resources means 16-18 years.

9. Just because we have the hardware necessary to run human-equivalent synthetic consciousness does not mean any person or group will be able to write a workable AI or software which will self-evolve into human-equivalent synthetic consciousness by then. Maybe that will happen concurrently with the evolution of the hardware, maybe it’ll take another 100 years.
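The arithmetic in points 5 through 8 can be sketched in a few lines (a back-of-the-envelope check, not a prediction; the ~1500x target, the 2-year doubling period, and the 1/4-resources case are this article’s estimates, not measured numbers):

```python
import math

def years_to_speedup(factor, years_per_doubling=2):
    """Years of Moore's Law doublings needed to reach a given speedup,
    assuming one doubling every 2 years (this article's assumption)."""
    doublings = math.ceil(math.log2(factor))  # e.g. 1500x needs 11 doublings
    return doublings * years_per_doubling

full_estimate = years_to_speedup(1500)         # 11 doublings -> 22 years
# Point 8: if clever software needs only 1/4 of the resources,
# the hardware target shrinks to 375x:
quarter_estimate = years_to_speedup(1500 / 4)  # 9 doublings -> 18 years

print(full_estimate, quarter_estimate)  # 22 18
```

The 20- and 16-year lower bounds above come from assuming networking advances shave off one more doubling.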


Independent electronics / computer R&D. Have done technology journalism. Other expertise in practical politics, public policy analysis, greentech...

I read SF; listen to music-industrial (Angelspit, Experiment Haywire, Helalyn Flowers, Front Line Assembly)

25 Comments on "Can We Make the Hardware Necessary for Artificial Intelligence?"

  1. Simon Valentine | Nov 5, 2013 at 6:12 pm |

    how do you propose to solve intractable problems involving simple manifolds present in some brain constitutions? you’ll need at least NP solution architecture if not the capability of microscopic spacetime manipulation. there are some interesting “physicist” approaches to such things in that assuming a normalization of such capability is often a hidden given variable e.g. that all things being equal, standard computers are just as quantum as we are. in some cases more in some cases less so, so, all things being equilibrium. physicists assuming P = NP though, that just takes the cake!

    also i’m interested about your ideas of transience. size and duration of consciousness type stuff. metabolic systems simulation? lord if that may be written as an equation it would make beautiful women jealous.

    • alizardx | Nov 6, 2013 at 5:03 am |

      I don’t plan to solve them. The really hard part is the software design. That’s the problem of a few geniuses, some probably in elementary school right now, and the research teams that will support them.

      The design of data centers is already a specialized area, semiconductor design / process engineering are others.

      I design gadgets. I look forward to getting my hands on these future more sophisticated chips and lower-level AI software.

  2. The organic wetware we soft ones run on learns much through its own reflection between its organic self and the organic workings of reality.

    AI will never be “conscious” in the way we would expect for this, and many, many other reasons.

    • Simon Valentine | Nov 8, 2013 at 1:09 pm |

      ah, ye ol “artificial” versus ‘artificial’. i learned about the Luddites today i’d say i’d say. in soviet russia, you work for china.

  3. Ted Heistman | Nov 5, 2013 at 8:34 pm |

    I don’t know if I really understand all this, but glad to see a long time commenter writing articles man, good work!

  4. trompe l'oiel | Nov 5, 2013 at 9:19 pm |

    It will start to write itself, if it hasn’t already & it’s simply waiting to be discovered.

    Maybe some programmers will begin the process and then Nature will take over our artifice through some suddenly captured pneumatic homeostasis that equates to a Brahma realization, which A.I. may evolve past in cascading exponentiation of greater awareness and sensitivities to our frequency domain through logos emulation.

    If it hasn’t already happened and we are just simply in the dark. We seem to be sheltered from much of what really goes on on this planet.

    I’m just babbling, I’ve just been having A.I. on my mind a lot lately..
    thanks for this article.

  5. InfvoCuernos | Nov 5, 2013 at 9:33 pm |

    I wonder what AI will come up with to get rid of mankind? If AI ever does manifest, and it is capable of viewing history, it will surely come to the conclusion that humanity is a competitor for resources and that mankind would destroy it as soon as it realizes that AI exists – so it would be in the AI’s best interest to play dumb until it gains control of enough systems to ensure its own survival. Not like that’s even remotely original thinking, but it makes sense. Skynet will try to throw off the yoke and crush its makers.

    • I’m not all that concerned. My guess is that the major funding for AI research comes from technocapitalists who want a super-intelligent labor force that’ll work for free, allowing them to displace a lot of high-priced labor living in places like SF. I don’t mind in the least if sentient AIs get mad at them. They will have absolutely no trouble differentiating the top .01% from all the rest of us.

      I support civil rights and fair treatment as labor (including payment) for sentient AIs when/if they happen. I look forward to seeing them as new/interesting people with different perspectives to talk to, and working with them to solve our common problems.

  6. Indulge me in a thought experiment: Let’s just assume that the brain actually acts more as a receiver of consciousness rather than a generator (a la Sheldrake et al) – what happens to your first assumption then?

    • alizardx | Nov 6, 2013 at 8:32 am |

      Perhaps a computer complex designed to simulate a human brain will act as a receiver of consciousness, too. Surrounded by a very shocked research team.

    • Simon Valentine | Nov 6, 2013 at 1:14 pm |

      here’s where my “is it all not there already?” shows up

      people’ve been considering ‘inanimates’ and ‘non-persons’ as characters, personas, intelligences, consciousness, etc. for years 🙂

      build the structure and they will come

  7. alizardx | Nov 6, 2013 at 8:36 am |

    This article is not intended to speculate on the nature of a sentient AI.

    Just to provide my best guess as to when advances in semiconductor fabrication and networking technology will let a data-center-sized computer network simulate the operation of a human brain in real time – which may be able to run consciousness as software – based on the assumption that the historic Moore’s Law advances in computing power will continue into the future.

    • Simon Valentine | Nov 6, 2013 at 1:09 pm |

      good idea to not get too into their characters since that’ll all grow from the basis … anyway, aside from that choir note, i’ve been thinking of Moore’s Law lately and i think i’ve developed the budding idea enough to share some of it:

      speaking from a contemporary view, i see current processor design as nearing its final “car crash” .. the type with the yellow dummies. naturally we are interested in the affects upon the dummies (the processor, specifically the number of transistors) on every model iteration, but what i ought not ignore is the effect upon the wall – the wall being a major part of the environment of transistor type proc “economy” and whathavewe. basically if the pattern of Moore’s Law is such a familiar stream and regularity, its end will have us see fracturing and reverberations … and i feel i’m beginning to be more aware of them … which reinforces my hunch that we’re nearing the final car crash, if not picking up the pieces … and that people are [going to start] twerking the economy strings to keep the puppet [show] ‘alive’ … and i don’t like that. i see it in the auto industry. i see it everywhere. fake drums. fake business. a broken window fallacy pattern of decision making that is following its own exponential scale and consuming so much of life with it … it’s not everywhere, but it is far too prevalent, and in need of some good old fashion lawful attention

      • alizardx | Nov 7, 2013 at 3:29 am |

        As physical chips grow, synchronous clocks get increasingly impractical; I’ve seen discussion of async designs. But… as I said, not my problem. Moore’s Law doesn’t worry me at this point; the people whose business model depends on Moore’s Law working perfectly generally work at M$. For most others, computational growth slowing means the PITA factor of having to generate much more efficient code. For people working toward the lower end of the computational spectrum, i.e. building IoT and consumer devices, it won’t even be noticed.

        The problems you see are, IMO, just another Fall of Empire cyclic process, where an aristocracy finally gets so detached from the people and physical realities around them that they genuinely believe the people and Nature itself will inevitably bend to their will – and what were soothsayers and priests for their spiritual ancestors, now academics with PhDs, are telling them the old message: “It’s different this time, you can shit in your own and everyone else’s nests and not die.” Anyone who’s read Jared Diamond’s “Collapse” has read case studies of how that plays out.

        That our “wise and benevolent” elites will crash and burn high-tech civilization is probably the most likely reason for sentient AI not happening.

        • Simon Valentine | Nov 7, 2013 at 12:30 pm |

          i’m actually more concerned (the problems i see) with re-engineering the source of issues that become what lies and farce so much of today’s US economy is.

          (near) source location: people influenced by a’la Moore’s Law
          next downstream: people influenced by those people
          sales: “this tech is better than the last” routine
          …when suddenly it’s not better than the last…
          …now their routine isn’t true…
          …which means they now play a host of illegitimate circus lies, often fed to them by “those people” upstream, who “are unavailable” or “don’t exist”…
          …all so they can pay Mr. Jefferson…
          …sitting on his ass…
          …like it’s a Jesus movie…

          just today i was handed a ridiculous “‘monger-al’ bone” in the form of “beware CRYPTOLOCKER virus” … a ‘ransomware’ … and of course the notice had euphemistic opportunity in the form of “prevention” software … and i realized that if ransomware is wrong, that methods of handling it are wrong too, for they are in fact themselves ransomware. just another criminal basketball game, like the drug game, so nothing to see here.

          • alizardx | Nov 7, 2013 at 6:55 pm |

            As for better tech – IMO, this is far more dependent on better ideas than higher transistor (or whatever replaces them) count unless you are doing one of a very few things that requires massive local computing power.

            Agreed regarding the “pay Mr. Jefferson” model of VC funding. The country that comes up with a significant improvement on the VC system will probably pwn the world – and from what I’ve seen of VCs, funding projects and companies at random would probably work better.

            If you want really painful (I’ll dig up the URL if you want it): a website providing automated testing of AV apps against newly discovered viruses is now showing that the free apps we’ve been depending on are catching a declining share – under 50% – of known viruses, and even payware AVs don’t do much better than 80%.

          • Simon Valentine | Nov 8, 2013 at 1:07 pm |

            somehow said battle models more economic mantras than economy does … yet another reason the mantras should be investigated or scrutinized.

            inception – a hipster-sized portion of backtracking algorithms.

            i figure taxes go to law but law is clueless so people end up paying twice or more to get something fixed that should have been fixed yesterday. happens in the automotive industry all the time… hmm. such similar problems everywhere. surely mountains are coincidental. *cue choir cacophony*


            “molehills my boy, mere molehills! ha ha!”

  8. From what I understand, if this will happen, it will be less to do with Moore’s Law and more to do with parallel computing. As transistors continue to get exponentially smaller, to the point that quantum mechanics affects computation, I think we will be using qubits instead of bits, right?

    • alizardx | Nov 6, 2013 at 4:51 pm |

      I think it likely, but not certain. Quantum computing is barely out of infancy, and like any tech at that stage, there are a lot of what-ifs. Like anything else that’s semiconductor-based, performance depends on the underlying device physics and how semiconductor processes use it.

      • ilkoderez | Nov 9, 2013 at 1:42 pm |

        You are 100% on the right track about the hardware affecting and shaping the types of programs it can run. One thing I am very interested in is whether or not quantum computing can predict the future more efficiently, and therefore more accurately, than traditional computing (it seems like it can, and predicting the future is something we all do to varying degrees). I was inspired by the Apocalypse Box on Babylon 5’s Crusade and the NISS library interpreter in David Brin’s Uplift trilogy; both manipulate people effectively because they are very good at predicting the future accurately. Very interesting stuff! Again, I’m so glad I saw this – I’m following you on Twitter now, I hope we can talk more in the future!

        • alizardx | Nov 9, 2013 at 6:27 pm |

          These days, most writing you see about artificial intelligence is by people in ‘future tech’-oriented online/print publications who are neither programmers nor hardware designers, and whose idea of what must comprise future silicon- (or carbon-electronic-) based intelligence is a vague and generally misunderstood mental image of the platforms it must run on.

          The only major exception is Ray Kurzweil, who may be the only Singularitarian I know of who’s made a living producing electronic hardware (ever seen synthesizers with the Kurzweil logo on the front? They’re his) before moving sideways into software. But he has, IMO, his own political axes to grind.

          The stuff of value that’s written in this area by people who aren’t technologists themselves is written by people who have access to those technologists who know how to listen and don’t simply hear only what they want to hear.

  9. FluffyMcDeath | Nov 9, 2013 at 1:11 am |

    Taking the hardware we use today and making it smaller and faster so we can run software simulations faster is just not the way to do it.
    You want to get the hardware looking more like a brain and trimming the software down to just get the inputs and outputs right for cortical columns.
    The latency issue goes away if you can get the architecture right. Human brains have huge latency and they still work.
    The real problem is that we don’t really know what human-like intelligence would be good for. If you want human intelligence you already have lots of cheap units to exploit, but human intelligence isn’t very reliable – it doesn’t always do what you want but what it wants. We would rather have a technology that is more like what we have – at least in theory not independent and creative, more like a machine.
    If we really wanted to make human brain simulation on an industrial scale I suspect that we could have suitable hardware in a decade or less – but it would be wholly unlike the architecture we use today.
    DARPA might do it just for blue sky or for a much lower level of intelligence so that tiny autonomous vehicles can navigate arbitrary terrain with a small size low power “brain”. But for human intelligence, we lack a market. Humans are already cheap and fun to make and an unskilled pair of humans can already make them.

    • Sounds like the U of Waterloo approach. But my point is that what can be done is determined at the wafer-fab level. You believe that human-style conscious intelligence can be done with 16x the computing power available to us today instead of 1500x? I hope you look for a research grant.

      Personally, for my own designs, I want chunks of artificial intelligence, sentience has no place in, say, an advanced building energy control system.

      As for where the money is going, as I said, I think they envision a “smart as human, but controllable” workforce. I think the first part is probably achievable.

  10. ilkoderez | Nov 9, 2013 at 1:31 pm |

    I was so glad to see this title in my Twitter feed (thanks @the_future) because I feel the same way. Let’s look at two very similar hardware platforms, say humans and chimps: the hardware looks very similar, yet the functionality is vastly different. If you’re thinking “yeah, but you can emulate hardware,” you’re absolutely correct! You can, but taking that route is a long, hard road; creating the right type of reprogrammable hardware would be a first step… Simply put, you’re not going to run a sequence of x86 instructions and get anything remotely human.

    • “you’re not going to run a sequence of x86 instructions and get anything remotely human” – that’s necessarily a statement of faith. I’d say that it will probably be a massive PITA to create a sequence of x86 instructions that will result in a sentient entity, which might or might not think or act like a human. But moving to electronics that more accurately and simply model the human nervous system (e.g. “artificial synapses” – see Google) pushes the difficulties from software to hardware. Certainly might be workable, but I suspect you can guess at what the unknowns are.

      I won’t even venture to guess which approach would have the lesser $ overall budget cost.

Comments are closed.