Knowm Inc. was founded to commercialize neuromemristive processors we call Thermodynamic RAM, which is an implementation of a new type of brain-inspired computing we call AHaH Computing, where, unlike digital computers, the processor and memory are combined. Modern digital computing architectures are based on the separation of memory and processing. This places a restrictive limit on the data bandwidth between CPU and RAM and introduces very large inefficiencies as energy is expended in memory-processing communication. By reducing synaptic integration and adaptation to analog operations on memristor circuits and minimizing chip operating voltages, AHaH Computing radically improves the efficiency of machine learning operations.
Knowm Inc. exists to fill a niche in the rapidly evolving technological landscape and lead the computing industry toward neuromemristive processors. The roots of Knowm Inc. were planted in 2002, when lead inventor Alex Nugent began patenting his ideas around adaptive computing architectures and founded an intellectual property holding company called KnowmTech. Initial seed funding for the endeavor was made possible by businesswoman and entrepreneur Hillary Riggs. The portfolio now includes over 40 patents spanning everything from memristive components and circuits to large-scale neuromorphic architectures. Alex Nugent co-created and advised the DARPA SyNAPSE program and has more recently been awarded government SBIR and STTR contracts to further develop the technology. In 2012, physicist, electrical engineer, and software developer Tim Molter joined the effort to lead software development and design chip architectures alongside Alex. This collaborative effort led to the formal introduction of AHaH Computing in early 2014 with the publication of AHaH Computing–From Metastable Switches to Attractors to Machine Learning. More recently, Knowm has partnered with key experts in the field to ramp up efforts on a path to commercialization. Collaborators include memristor fabrication pioneer Kris Campbell, Ph.D., of Boise State University and memristor circuit design pioneer Dhireesha Kudithipudi, Ph.D., of Rochester Institute of Technology. Most recently, investor and consultant Sam Barakat has joined the team to help launch Knowm Inc.
Alex’s original inspiration was to “reevaluate our preconceptions of how computing works and build a new type of processor that physically adapts and learns on its own.” By observing Nature he noticed one major difference between modern computers and Nature’s brains: brains don’t separate processor from memory, as we do with a CPU and RAM. In a nervous system (and all other natural systems), the processor and memory are the same machinery. The distance between processor and memory is zero. Another observation was that whereas modern chips must maintain absolute control over internal states (ones and zeros), Nature’s computers are volatile – the components are analog, their states decay, and they heal and build themselves continuously. Pursuing this observation, Alex discovered something that was in plain sight but seldom recognized as significant: Nature’s transistor – two energy-dissipating pathways competing for conduction resources. This simple adaptive building block leads to the formation of energy-dissipating fractals found at all scales of Nature and life, from rivers to trees to lightning and brains. Driven by the second law of thermodynamics, matter spontaneously configures itself to dissipate the flow of energy! To dive into the exciting ideas and concepts behind this phenomenon, check out What is Knowm?.
The challenge remaining was to figure out how to recreate this phenomenon on a chip and understand it sufficiently to interface with existing hardware and solve real-world machine learning problems. Around the same time, independent researcher Dr. Kris Campbell was perfecting ‘variable resistor’ devices, and HP made the connection to Leon Chua’s theoretical prediction of a missing circuit element he called the memristor. In part due to the influence of the DARPA SyNAPSE program, memristors began to appear in the literature. However, a unified theory on how to use these new devices in learning systems was not available. Thanks in part to funding from the Air Force Research Labs, Alex and Tim were able to publish the theory of AHaH Computing. Years of work designing various chip architectures and validating capabilities led to the specification of Thermodynamic RAM, or kT-RAM for short, a co-processor ‘core’ that can be plugged into existing hardware platforms to accelerate machine learning tasks such as unsupervised clustering, supervised and unsupervised classification, complex signal prediction, anomaly detection, unsupervised robotic actuation and combinatorial optimization of procedures – all key capabilities of biological nervous systems and modern machine learning algorithms with real-world application.
In addition to our original Tungsten Knowm memristor we’re happy to announce the availability of three new variations of probe-able raw die – Chromium, Tin and Tungsten – each containing 180 individual memristors in 9 different device sizes. To round out our memristor offerings, we are now making raw device data available for purchase, mainly intended for researchers who may not have the necessary equipment for characterizing memristors. While these memristors were designed for our own neuromemristive processor, Thermodynamic RAM, they are also excellent candidates for other memristor applications such as non-volatile RAM, oscillators, analog computing, ternary arithmetic and logic.
We are the first to develop and make commercially available memristors with bi-directional incremental learning capability. The device was developed through research by Dr. Kris Campbell of Boise State University, and this new data unequivocally confirms that Knowm’s memristors are capable of bi-directional incremental learning. This had previously been deemed impossible in filamentary devices by Knowm’s competitors, including IBM, despite their significant investment in materials research and development. With this advancement, Knowm delivers the first commercial memristors that can adjust resistance in incremental steps in both directions, rather than in only one direction followed by an all-or-nothing ‘erase’. This advancement opens the gateway to extremely efficient and powerful machine learning and artificial intelligence applications.
The different doping materials (W, Cr, Sn) in the active layer of the memristors lead to differences in physical and electrical characteristics such as switching speed, endurance, data retention, on and off resistance states, and incremental sensitivity. For a full description of the memristors, including a device model, please download the data sheet at: knowm.org/downloads/Knowm_Memristors.pdf. Raw research die are available for laboratory study of device operation via a probe station over a wide range of device sizes. Each die measures 7860 μm by 5760 μm and consists of 9 columns of 20 devices in sizes of 1 μm, 2 μm, 3 μm, 4 μm, 5 μm, 6 μm, 10 μm, 20 μm, and 30 μm. The data sheet also includes a memristor model we developed that accurately models not only our devices but many other memristors as well. Full source code and explanation are included.
The memristor material stack is based on mobile metal ion conduction through a chalcogenide material. The devices are fabricated with a layer of metal that is easily oxidizable, located near one electrode. When a voltage is applied across the device with the more positive potential on the electrode near this metal layer, the metal is oxidized to form ions. Once formed, the ions move through the device towards the lower potential electrode. The ions move through a layer of amorphous chalcogenide material (the active layer) to reach the lower potential electrode where they are reduced to their metallic form and eventually form a conductive pathway between both electrodes that spans the active material layer, lowering the device resistance. Reversing the direction of the applied potential causes the conductive channel to dissolve and the device resistance to increase. The device is bipolar, cycling between high and low resistance values by switching the polarity of the applied potential. The resistance is related at any time to the amount of metal located within the active layer.
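The behavior described above – a channel that grows under one polarity, dissolves under the other, and sets the resistance by how far it has formed – can be captured in a few lines of state-variable code. The sketch below is a toy illustration of that bipolar, incremental behavior only; it is not Knowm’s published device model (see the data sheet for that), and all parameter values are assumed for illustration.

```python
# Toy bipolar memristor model: a single state variable x in [0, 1] tracks
# how far the conductive metal channel has grown across the active layer.
# Illustrative sketch only -- NOT the model from the Knowm data sheet.

R_ON = 50e3      # fully-formed channel resistance (ohms, assumed)
R_OFF = 5e6      # dissolved-channel resistance (ohms, assumed)
V_T = 0.2        # growth/dissolution threshold voltage (volts, assumed)
RATE = 0.05      # channel growth rate per time step (assumed)

def step(x, v):
    """Advance the channel state x by one time step under applied voltage v.

    A positive voltage (higher potential on the electrode near the
    oxidizable metal layer) grows the channel; a negative voltage
    dissolves it. Below the threshold, |v| < V_T, nothing moves.
    """
    if v > V_T:
        x += RATE * (v - V_T) * (1.0 - x)   # growth saturates as the channel completes
    elif v < -V_T:
        x += RATE * (v + V_T) * x           # dissolution slows as the channel vanishes
    return min(max(x, 0.0), 1.0)

def resistance(x):
    """Parallel conduction through the channel and the residual film (assumed form)."""
    g = x / R_ON + (1.0 - x) / R_OFF
    return 1.0 / g

# Apply a positive pulse train, then a negative one:
# resistance drops incrementally, then recovers incrementally.
x = 0.0
for _ in range(100):
    x = step(x, 0.5)
low = resistance(x)
for _ in range(100):
    x = step(x, -0.5)
high = resistance(x)
```

Because each sub-threshold-exceeding pulse moves the state only a little, resistance changes in small steps in both directions – the bi-directional incremental property discussed above.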
Memristors are available for sale and can be shipped worldwide. http://knowm.org/product-category/memristor/
Via our collaboration with Boise State University, Knowm Inc. is proud to offer the world’s first CMOS Back End Of Line (BEOL) memristor service. Over the last decade, Dr. Kris Campbell has developed and perfected CMOS-compatible memristor technology. We are offering this service to lower the barriers to memristive technology and help jump-start the neuromemristive computing era. Multiple memristor types are possible, covering a range of threshold voltages, resistance ranges, switching speeds, data retention, and cycling durability. Services include layout design, all microfabrication steps for device fabrication, BEOL processing on CMOS die or wafers, wire bonding, and packaging.
How does nature compute? A brain, like all living systems, is a far-from-equilibrium energy dissipating structure that constantly builds and repairs itself. We can shift the standard question from “how do brains compute?” or “what is the algorithm of the brain?” to a more fundamental question of “how do brains build and repair themselves as dissipative attractor-based structures?” Just as a ball will roll into a depression, an attractor-based system will fall into its attractor states. Perturbations (damage) will be fixed as the system reconverges to its attractor state. As an example, if we cut ourselves we heal. To bestow this property on our computing technology we must find a way to represent our computing structures as attractors.
In our PLoS One paper we detailed how the attractor points of a plasticity rule we call Anti-Hebbian and Hebbian (AHaH) plasticity are computationally complete logic functions as well as building blocks for machine learning functions. We further showed that AHaH plasticity can be attained from simple memristive circuitry attempting to maximize circuit power dissipation in accordance with ideas in non-equilibrium Thermodynamics. Our goal was to lay a foundation for a new type of practical computing based on the configuration and repair of volatile switching elements. We have traversed the large gap from volatile memristive devices to demonstrations of computational universality and machine learning. In short, it has been demonstrated that:
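The attractor idea can be made concrete with a toy numerical example. Below, a Hebbian term pushes a linear node’s output toward its current decision, sign(y), while an anti-Hebbian term pulls back in proportion to y, so the output falls into one of two attractor states (y = +1 or y = −1) and returns there after a perturbation. The update form and names are a pedagogical stand-in, not the circuit-level AHaH rule from the PLoS One paper.

```python
# Toy illustration of attractor-based adaptation in the spirit of AHaH
# plasticity. The update rule below is a pedagogical stand-in, NOT the
# exact rule from the PLoS One paper.

def ahah_like_update(w, x, lr=0.1):
    """One adaptation step for a linear node y = w . x."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    sign_y = 1.0 if y >= 0 else -1.0
    # Hebbian: reinforce the current decision; anti-Hebbian: decay toward it.
    return [wi + lr * xi * (sign_y - y) for wi, xi in zip(w, x)], y

# A small initial weight vector falls into one of two attractor states
# (y = +1 or y = -1) for a fixed input pattern.
w = [0.05, -0.02, 0.01]
x = [1.0, 0.0, 1.0]
for _ in range(200):
    w, y = ahah_like_update(w, x)

# Perturb the weights ("damage") and watch the node re-converge to the
# same attractor -- the self-repair property described above.
w = [wi + 0.3 for wi in w]
for _ in range(200):
    w, y = ahah_like_update(w, x)
```

The re-convergence after the perturbation is the ball rolling back into its depression: the state is not stored as a frozen bit but as the stable point of an ongoing dynamic.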
Watch the following video, recorded in April 2014, to see Knowm Inc. CEO Alex Nugent present the concepts behind the theory of AHaH Computing as a guest lecturer for Dr. Dhireesha Kudithipudi’s Brain Inspired Computing class at Rochester Institute of Technology. Alex introduces AHaH Computing, motivating its foundations in self-organization and its links to margin maximization, universal logic, and independent component analysis. He also reviews the kT-RAM instruction set and shows how to use it to solve a benchmark classification problem and interpret the results.
We’ve also recently created a 9-part video series called “AHaH Computing in a Nutshell” in order to clearly explain the motivations behind AHaH Computing and how it works. The series starts out with an introduction to memristors, followed by an explanation of AHaH Computing and what computational problems it is designed to solve. The benefits are illustrated in a clear and graphical manner so that anybody can understand. Watch the video series on our Vimeo or Youtube channels!
If you’re hungry for more after watching all the videos and reading the PLoS One paper on AHaH Computing, make sure to check out Alex’s write-up on AHaH computing at http://knowm.org/ahah-computing and his blog article How to Build the Ex-Machina Wetware Brain.
Because the AHaH rule describes a physical process, we can create efficient and dense analog AHaH synaptic circuits with memristive components. One version of these mixed-signal (digital and analog) circuits forms a generic adaptive computing resource we call kT-RAM. kT-RAM is a fundamentally new type of computing substrate that resolves a serious problem in machine learning, or any other computational program where lots of memory must be ‘adapted’ and ‘integrated’ constantly. Every modern computing system currently separates memory and processing. This works well for many tasks, but it fails for large-scale adaptive systems like brains or large machine learning models like neural networks. kT-RAM provides a universal synaptic integration and adaptation substrate and solves, in analog memristive hardware, the learning problems that we would otherwise have to compute by shuttling information back and forth between memory and processing. The kT-RAM architecture also allows for drastic reductions of operating voltages. When you combine these two features, you end up with a co-processor that can approach the power consumption of biology while performing the synaptic operations at the heart of artificial intelligence.
In neural systems, the algorithm is specified by two things: the network topology and the plasticity of the interconnections or synapses. Any general-purpose neural processor must contend with the problem that hard-wired neural topology will restrict the available neural algorithms that can be run on the processor. It is also crucial that the NPU interface merge easily with modern methods of computing. A ‘Random Access Synapse’ structure satisfies these constraints. Thermodynamic-RAM is the first attempt at realizing a neuromemristive processor implementing the theory of AHaH Computing. While many alternative AHaH architectures are feasible that each offer specific advantages over others, kT-RAM aims to be a general computing substrate.
Where did the name “Thermodynamic RAM” come from? Thermodynamics is the branch of physics that describes the temporal evolution of matter as it flows from ordered to disordered states, and the Knowm Synapse (2 serially connected memristors) is an energy-dissipation flow structure, hence “thermodynamic”. Thermodynamic RAM is actually a very simple circuit architecture and its design bootstraps itself from regular ol’ computer memory or RAM. In essence, to build kT-RAM, you take standard RAM, strip the sensing circuitry and the bit-storing cells, add memristor pairs at each cell location and connect them all via a triple H-Tree electrode network.
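The “RAM plus memristor pairs” construction above can be sketched as plain data structures: each cell address holds a differential pair of memristor conductances (the two serially connected memristors of a Knowm Synapse), a synaptic read sums the differences of the addressed pairs, and adaptation nudges those pairs up or down. The class and method names below are illustrative assumptions, not Knowm’s actual emulator API, and the H-Tree routing is abstracted away entirely.

```python
# Sketch of a kT-RAM-style core: each RAM cell address holds a
# differential pair of memristor conductances (Ga, Gb), and a synaptic
# read is the sum of differences Ga - Gb over the addressed cells.
# Names are illustrative, NOT Knowm's actual emulator API.

class KTRamCoreSketch:
    def __init__(self, n_cells, g_init=0.5):
        # One (Ga, Gb) memristor pair per RAM cell address, initially balanced.
        self.ga = [g_init] * n_cells
        self.gb = [g_init] * n_cells

    def read(self, addresses):
        """Forward read: sum the differential conductances of the
        addressed synapses, like sensing a RAM word line."""
        return sum(self.ga[a] - self.gb[a] for a in addresses)

    def adapt(self, addresses, direction, step=0.01):
        """Nudge the addressed pairs up (+1) or down (-1), clamped to a
        conductance range -- standing in for a feedback voltage pulse."""
        for a in addresses:
            self.ga[a] = min(1.0, max(0.0, self.ga[a] + direction * step))
            self.gb[a] = min(1.0, max(0.0, self.gb[a] - direction * step))

core = KTRamCoreSketch(n_cells=16)
active = [2, 5, 11]          # the addresses driven this cycle
y = core.read(active)        # zero at first: all pairs start balanced
core.adapt(active, direction=+1)   # feedback shifts the addressed synapses
```

The point of the sketch is the addressing scheme: because synapses are reached like RAM cells, arbitrary network topologies reduce to choosing which addresses participate in each read/adapt cycle.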
The Knowm API is a special Machine Learning (ML) library. It is the result of over a decade of research and development. It can be used to solve problems across many domains of machine learning, from classification, prediction, and anomaly detection to feature-learning, robotic actuation, and combinatorial optimization as shown in a series of articles titled Machine Learning Capabilities with Thermodynamic RAM and the Knowm API. It is a collection of code modules built on top of a kT-RAM emulator, the first general-purpose neuromemristive processor designed on the principles of AHaH Computing.
The Knowm API is focused on adaptive machine learning tasks that span Perception, Planning and Control. To date we have shown:
To explore at a finer detail many of the above capabilities in the form of demonstrations and machine learning benchmarks, make sure to check out Machine Learning Capabilities With Thermodynamic RAM and the Knowm API. Stay tuned to our blog’s RSS feed to get the newest demonstrations and announcements. We believe the Knowm API will eventually encompass a great many things, and we have only just started.
The kT-RAM Technology Stack is a specification that goes from memristors to machine learning applications. Thermodynamic-RAM is the first attempt at realizing a working neuromorphic processor implementing the theory of AHaH Computing. While several alternative designs are feasible and may offer specific advantages over others, the first design aims to be a general computing substrate geared towards reconfigurable network topologies and the entire spectrum of the machine learning application space. Defining the individual levels of this ‘technology stack’ helps to introduce the technology step by step and group the necessary pieces into tasks with focused objectives. This allows for separate groups to specialize at one or more levels of the stack where their strengths and interests align. Improvements at various levels can propagate throughout the whole technology ecosystem, from materials to markets, without any single participant having to bridge the whole stack. In a way, the technology stack is an industry specification. As shown in the figure below, we are now working on the final critical slice of the stack – replacing our “emulated kT-RAM” with real kT-RAM. Work has already begun on the first prototype chips, and we’re getting really close now to running our demos and benchmarks on our neuromorphic hardware!
Knowm Inc. is a bottom-up organization. We started with nothing more than a big idea, a great deal of motivation and the courage to forge a new path. Over the last decade we have bootstrapped our way from the bottom to become the leader in neuromemristive technology. We have had to cross such a large swath of the technology stack to find a solution that we are acutely aware of a simple truth: nothing big can be accomplished alone. Good ideas come from places, people and situations you would not expect. Neuromemristive technology opens the gates to a new era in computing with new rules. We have only explored a fraction of a percent of what is possible, and we need your help to explore the rest. If you are excited about Knowm and AHaH Computing, we want to work with you. Enthusiasm is a very powerful force when coupled to a team and the right tools. We are seeking technical experts, application developers, machine learning experts, students, artists, thinkers, philosophers and more. Would you like to be a part of this exciting new technology? We would love to work with you.
We’ve created a full index of all the material we’ve so far created in the form of white papers, videos, blogs, tutorials and more for learning about the Knowm technology stack all on one page. If you have any questions, feel free to contact us, post a question on our Reddit Forum and/or check out our frequently asked questions page!