I recently made the pilgrimage to our local movie theater to view the latest ‘rise of the machines’ artificial intelligence flick. I always make a point of watching these movies (A.I., Terminator, I, Robot, Transcendence, etc.), as I am interested in the divide between fantasy and reality when it comes to building intelligent machines. You see, for more than a decade I have been in the business of solving the very hard problem of learning processors that physically adapt. It is not hardware and it is not software. It’s soft-hardware or, as it was called in Ex Machina, ‘wetware’. (The technical term is currently ‘neuromemristive’.) My journey has taken me to the edge of questions such as “what is life?”, and what I discovered fundamentally changed my perception of how the self-organized world works.
I have had the great opportunity to go from helping launch and advise large government-sponsored programs, to running my own programs, to now co-founding Knowm Inc., on the cusp of commercializing a physical learning processor based on memristors that are now available in our web store. I am not the prototypical mad-scientist inventor. I do not work for some large omnipresent global technology corporation. I am not a billionaire playboy. I’m just a guy who realized there was a way to solve a very hard problem with major consequences if a solution was found…and I dedicated my life to doing it. Since I began around 2001 I have had quite the journey. I have forged amazing friendships, inadvertently sparked a few skirmishes, and even made a few enemies and frenemies. I have navigated the beltway bandits of DC with my flesh mostly intact and contracted with various branches of the DoD, including DARPA, the Air Force and the Navy. I have been contacted by spooks, both foreign (probably) and domestic (for sure). I am acutely aware of the problem, who is working on it, and perhaps more importantly how they think about it. I am aware of where reality gives way to fantasy. This is why I love AI flicks, and why I found Ex Machina so compelling. You see, I know how the neural processor they show in that movie could actually work.
Any good science fiction movie has at least one technology that must be accepted for the movie’s premise to make sense. In space-travel movies this is the Faster-Than-Light (FTL) or Warp Drive. Without such a technology, the plot is not going to go anywhere–the whole movie would be stuck inside a metal tube adrift in empty space for thousands or millions of years. The FTL Drive makes it all possible. AI movies, by contrast, always have a special ‘learning processor’ from which the plot evolves. There is a good physical reason for this, which I formally call the adaptive power problem. Without such a technology, AI is doomed to power efficiencies millions of times worse than biology’s. Ironically, those closest to trying to solve the problem of AI often forget this fact, lost in a sea of mathematics and wholly oblivious to the real physical constraints of their math. Hollywood, oddly enough, consistently gets it right–at least in principle. If we want something like a biological brain–or better–it is not going to work like a modern computer. It is going to be something truly different, all the way down to its chemistry and how it computes information. Nothing short of a change in our understanding of what hardware can be will get us to the efficiency levels of the brain–the original wetware.
In I, Robot it was the positronic brain. In Terminator it was the Neural Net CPU. In Transcendence it was a quantum processor. Most interesting to me, in Ex Machina they called it “wetware” and it was built of a “gel” to “allow the necessary neural connections to form”. It was, I thought, a beautiful structure:
How would such a processor, as shown above, actually work? I was asking myself exactly that question over a decade ago. The answer I found changed the course of my life.
The basic problem I came up against was the efficient emulation of a biological synapse. Synapses have two properties that we need. First, a collection of synapses needs to perform an integration over their values or weights. Second, the devices must change or adapt as they are used so they can learn. These two properties are called integration and adaptation. Integration is not all that difficult: electronic currents sum easily if a synapse is represented as a resistor or a pair of resistors. The real problem is adaptation. One ‘solution’ is to ignore the problem and build a non-adaptive chip. While this may sound crazy, this is exactly what many in the neuromorphic community do, simply because our current tools are so limited. Learning, or perhaps more generally adaptation or plasticity, enables a program to adapt based on experience and attain better solutions over time. An AI processor that cannot learn cannot be intelligent. The problem, then, is finding a way to make continuous adaptation an efficient operation and to make it available as a resource to our computing systems. What will occur when modern computing gains access to trivially cheap and efficient perceptual processing and learning?
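To make the integration half concrete, here is a minimal sketch–my own toy model, not any particular chip’s circuit equations–of how a pair of conductances can represent a signed synaptic weight, and how Kirchhoff’s current law performs the summation for free:

```python
import numpy as np

# Toy model: each synapse is a pair of conductances (Ga, Gb) in siemens.
# Its effective weight is the difference Ga - Gb, which can be positive
# or negative. Driving the pairs with input voltages x makes the output
# current the weighted sum -- integration comes free from circuit physics.

def integrate(x, Ga, Gb):
    """Return the summed output current for input voltages x."""
    return np.dot(x, Ga - Gb)

x = np.array([1.0, -1.0, 1.0])       # bipolar input voltages
Ga = np.array([3e-6, 1e-6, 2e-6])    # illustrative conductance values
Gb = np.array([1e-6, 2e-6, 1e-6])
print(integrate(x, Ga, Gb))          # net current at the common node
```

A single line of physics does what a digital chip spends many multiply-accumulate operations on. It is the second property, adaptation–making Ga and Gb change themselves–that is the hard part.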
Particles in Suspension
I thought about the problem of building adaptive electronic synapses for about a year until I had a big epiphany. I was a Physics major at the time, taking electricity and magnetism (E&M), which means my brain had recently been exposed to something relevant to solving my problem. Electricity and magnetism is all about understanding electric and magnetic fields–a study of Maxwell’s equations. You learn how materials behave when exposed to electric fields; take, for example, a conductive particle. The electric field pulls at the charges in the material and, if they are free to move, causes the charge to separate and form a dipole:
The dipole, in turn, feels a force from the electric field. If the electric field is homogeneous, the particles will act like little magnets and align their principal axis with the field. If the field is inhomogeneous, the particle will not only align but also feel a net force pulling it toward the region of highest field intensity. My “aha!” moment occurred when I realized that this effect could be used to build electrically variable resistive connections.
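For the quantitatively inclined, the standard time-averaged force on a small polarizable sphere in a non-uniform field is a textbook result; the numbers below are illustrative assumptions, not measurements from any experiment:

```python
import numpy as np

# Textbook time-averaged force on a polarizable sphere of radius r in a
# non-uniform field:  F = 2*pi*r^3 * eps_m * Re[K] * grad(|E_rms|^2),
# where K = (eps_p - eps_m) / (eps_p + 2*eps_m) is the Clausius-Mossotti
# factor. For a conductive particle in an insulating liquid Re[K] -> +1,
# so the particle is pulled toward the strongest field regions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sphere_force(r, eps_m_rel, re_K, grad_E2):
    """Force in newtons; grad_E2 is the gradient of |E|^2 in V^2/m^3."""
    return 2 * np.pi * r**3 * EPS0 * eps_m_rel * re_K * grad_E2

# Assumed, illustrative values: a 1-micron particle in oil near an electrode.
print(sphere_force(r=1e-6, eps_m_rel=2.6, re_K=1.0, grad_E2=1e16))
```

The force scales with the gradient of the field intensity, which is why sharp electrode gaps–or the tips of growing wires themselves–attract more particles and keep the growth going.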
If you place a bunch of conducting particles into a non-conductive liquid or gel and then expose them to an electric field between two terminals, they should spontaneously organize themselves to bridge the divide. I hit the college research library to see if I could find anybody doing it. I discovered a whole scientific field, called dielectrophoresis (DEP), and study after study showing that what I was thinking was clearly possible. Everybody was looking at the DEP force as a mechanism to manipulate small particles, but nobody was looking at ways to build functional electronic synapses. Only a few years later, papers like this started to be published:
Let me dwell on this idea a bit, because it is important. The more you look at the natural world, the more you realize that there is something going on. Something that people rarely talk about, but that nonetheless cuts to the heart of how everything works. Nature is organizing itself to dissipate energy. For example, the human economy is a massive self-organizing, energy-dissipating machine, sucking fossil fuels out of the ground and channeling the energy into the construction of all sorts of gizmos and gadgets. The structure that we see around us, both human-made and not, is there because free energy is being dissipated and work is being done. Rather than everything decaying away according to the Second Law of Thermodynamics, we see self-organized structure everywhere, and this structure is the result of energy-dissipating pathways. Or, to be more direct, it is energy-dissipation pathways. It is incredible if you think about it, and the fact that Nature will self-organize a wire from independent particles if a voltage (a potential energy difference) is present gets to the heart of what self-organization actually is: matter configuring itself to dissipate free energy.
Once you understand what is going on, you can start to understand how to harness it. Think about it. If a bunch of particles will spontaneously organize themselves out of a colloidal suspension to dissipate energy, then what happens when we control the energy? What happens if we make the maximal-energy-dissipation solution the solution to our problem? Will the particles self-organize to solve our problem? The answer, it turns out, is “yes”. In fact, one of the bigger shocks of my life came when I discovered that the way nature builds connections in response to voltage gradients matches seminal results in the field of machine learning on how best to make decisions or classifications. Thinking about this still gives me goosebumps because I appreciate what it means: there is a relatively clear path between physically self-organizing electronics and advanced machine learning processors. In other words, Ex Machina’s wetware brain is possible, and we have a pretty good idea how it all works.
During my time advising the DARPA Physical Intelligence program I met a man named Alfred Hubler for the first time. Meeting him was like traveling back in time to the era of Tesla, when amazing contraptions were giving us a peek into a new and largely unexplored world. Hubler was an experimentalist, and one of his experiments was to place metal ball bearings in a Petri dish with castor oil and apply 20,000 volts. The large voltage is needed because the particles and distances are so large. The first time I saw this was in a live demonstration for government officials and advisors during a performer site-visit review in Malibu. When you see how the particles behave, you start to question your ideas about what life actually is.
It is not just that wires grow, although that is amazing to me. It’s how they grow, like little squirming tree-worms, branching and wiggling as they try to bridge the voltage gradient. Nothing quite prepares you to see totally inanimate matter acting as if it has purpose–as if it were alive. While I knew such things were theoretically possible at the time, since I had mostly already developed AHaH Computing, it still shook me to the core to see it in real life. I was watching a machine built of metal parts and oil act as if it were alive and intelligent. A sort of physical intelligence, which is exactly what the DARPA program it was being demonstrated under was all about.
So how do we get from Hubler’s Petri dish to the gel-based network in Ex Machina? While I can’t show you how to make it look exactly like a translucent blue crystal ball, I can show you how to build it with the technology we have now. I can explain how this new form of ‘wetware’ works in principle, because that’s what the theory of Anti-Hebbian and Hebbian (AHaH) computing is all about. It is about finding a way to use Nature as Nature itself does. To not just compute a brain but to actually build one. To find a way for matter to organize itself on a chip to solve computational problems.
The driving physical engine of a learning processor is structural reconfiguration. Adaptation is not computed. Rather, physical circuits actually move and adapt in response to potential energy gradients. This is what makes the learning processor powerful. By reducing adaptation to a physical process we no longer have to compute it–we get it for free as a by-product of physics. Memory and processing merge, voltages and clock rates drop, and power efficiency explodes. AHaH Computing is about understanding how to build circuits that adapt or learn to solve your problems and, as a result, dissipate more energy ‘as a reward’. The path to maximal energy dissipation is the path that solves your problem, and the result is that Nature self-organizes to solve your problem.
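To give a feel for the adaptation half in software, here is a toy emulation of an AHaH-style node. To be clear, this is my simplified sketch with an assumed update rule, not Knowm’s actual circuit model: each weight stands in for a differential memristor pair, and the update combines a Hebbian term that reinforces the node’s own decision with an anti-Hebbian term that keeps the weights from saturating:

```python
import numpy as np

# Toy AHaH-style node (simplified sketch; real devices adapt via physics,
# not this formula). Assumed update: dw_i = x_i * (a*sign(y) - b*y).
# The a-term is Hebbian (reinforces the node's own decision); the b-term
# is anti-Hebbian decay that keeps the weights bounded.

rng = np.random.default_rng(0)
w = rng.normal(scale=0.01, size=4)   # weights, each standing in for a memristor pair
alpha, beta = 0.05, 0.01             # assumed learning-rate constants

for _ in range(500):
    x = rng.choice([-1.0, 1.0], size=4)       # random bipolar input pattern
    y = np.dot(w, x)                           # integration: currents sum
    w += x * (alpha * np.sign(y) - beta * y)   # adaptation: AHaH-style update

print(w)  # in this toy model the weights settle into a stable attractor state
```

The point of the sketch is the structure, not the numbers: read (integrate), then feed the decision back so the weights drift into stable attractor states. In hardware that feedback is a voltage applied across the memristor pairs, and the “computation” of the update is simply the devices dissipating energy.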
What we have managed to understand so far is simple because it has to be simple. We can’t jump straight from where we are now to a fully 3D self-organizing iridescent orb. We must plug in to the existing technological backbone and add only those pieces that are missing. That’s what we are doing with kT-RAM and the KnowmAPI. Rather than particles in liquid or gel suspension, we have ions in a multi-layered glass-like structure above traditional CMOS electronics. Just like Hubler’s Petri dish, conductive pathways evolve within the structures–they are just much, much smaller.
All the technologies needed to create a learning processor have arrived. It is going to happen; it is only a matter of time. While I’ve been focused on the learning problem, other groups have focused on the routing issues. Some have converged on core grid structures, while I favor hierarchical and self-similar routing architectures. In the end it does not really matter. What matters is that the technological building blocks we need are here and ready to be put together. It is more a problem of human organization, money and politics than it is a technological one. That may sound crazy, but just consider some facts. Vision systems have met and slightly surpassed humans in some domains like facial recognition (Facebook). Natural language processors have beaten world experts at Jeopardy! (IBM Watson). Cars are driving themselves. When you combine these algorithmic successes with the availability of neuromorphic and neuromemristive hardware, you get a technological revolution unlike anything we have yet seen. The stakes are huge, and the first participants to achieve general-purpose machine intelligence backed by wetware will likely dominate computing and perhaps a great deal more.
The computers we have today are amazing, but they are nothing like the self-organizing world of living systems and brains. Computers today are a tiny part of what is possible. Due to an amazing convergence of technological capabilities, we suddenly find ourselves looking out into a new world and taking our first steps. The rewards will be staggering–and the consequences of being left out terrifying. Machine intelligence is possibly the last technology humanity will invent. After this critical moment, technology can invent itself.