
The Knowm API

To develop Machine Learning applications with the Knowm API, you'll need to understand the basics of AHaH computing and kT-RAM. The Knowm API is our programming interface to the digital emulator of Thermodynamic-RAM, a.k.a. kT-RAM. As we shall see, it allows us to create arbitrary instances of kT-RAM cores with varying configurations, types, and initializations, and then test our creations on real-world problems.

As with any tutorial, it helps to play with the technology as it is introduced, so we recommend signing up for the Knowm Developer Community (KDC) to gain access to the full source code, working examples, in-depth tutorials, and one-on-one help. Otherwise, feel free to follow along and learn.

What is kT-RAM?

kT-RAM is the first neuro-memristive processor architecture we are emulating with the Knowm API, and it's where the magic of Machine Learning happens. The following is a crash course in kT-RAM which you can read before getting to the tutorials. If you'd like a more in-depth introduction, we recommend the lesson series available to KDC members.

kT-RAM is built on top of the memristor, a circuit element that limits the flow of electrical current (resistance) as a function of the amount of charge that has previously flowed through it (memory); hence its name, a contraction of "memory" and "resistor". You can read more about the memristor in our lesson on the kT-Synapse.
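
For reference, Chua's standard formulation describes memristance M as a charge-dependent resistance relating voltage to current (this is textbook background, not specific to our lesson):

$$ v(t) = M(q(t))\,i(t), \qquad M(q) \equiv \frac{d\varphi}{dq} $$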

On kT-RAM these memristors are combined into differential pairs, each pair forming what we call a synapse.


Knowm’s Synapse can be emulated with two memristors.
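
A helpful way to think about the differential pair (our gloss on the AHaH model; see the publication under Further Reading) is that the effective synaptic weight is set by the difference of the two memristor conductances, so a synapse can represent both positive and negative values even though each conductance alone is strictly positive:

$$ w \propto G_a - G_b $$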

These synaptic connections are organized into a grid of cells, with a tree-like structure (an H-Tree) connecting every cell to the AHaH Controller circuitry.



When a voltage is applied at the leaves of this tree by an AHaH Controller, current propagates up the H-Tree, through each activated cell, and back down the trunk, where the voltage is read via sensing circuitry. As energy flows through the activated synapses, the conductivities of the individual synapses change, and hence so will the voltage outputs seen at the base of the trunk of the kT-RAM chip.

By understanding how the memristor pairs adapt as we drive them (AHaH Computing), and by changing the drive patterns, we can accomplish a number of useful functions, ranging from discrete logic to large-scale inference and pattern recognition. The goal of the KDC is to achieve performance parity with state-of-the-art machine learning methods using kT-RAM or other AHaH Computing processors.

You can instantiate a kT-RAM object with the emulator using the following code:
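
The snippet below is a minimal sketch; the class and enum names (KTRAM, CoreType.BYTE) are our placeholders and may differ from the actual Knowm API identifiers.

    // Illustrative sketch only: instantiate a kT-RAM emulator backed by a
    // digital core. KTRAM and CoreType.BYTE are assumed names, not confirmed
    // Knowm API identifiers.
    KTRAM ktram = new KTRAM(CoreType.BYTE); // one byte: 256 conductance levels per memristor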

The digital core types allow us to run efficient simulations with different degrees of accuracy; in this case, the conductivity of each memristor in the differential-pair synapse is discretized into 256 levels, or one byte of information. Via interchangeable cores, we can deploy efficient digital cores optimized for existing digital hardware, or more accurate physics-based models aimed at ensuring circuit function. You can read more about the core types here.

One of the advantages of the kT-RAM structure is that it can be easily demarcated into sub-trees of arbitrary sizes, so long as synapses belonging to different sub-trees are not activated at the same time. In the picture above, the sub-trees labeled A and B are examples of AHaH nodes of size 8 and 16. To create the AHaH nodes A and B seen above, we call the ktram.create method with a name, the number of synapses it has, and the initial conductivity of those synapses, as shown below. Other initial conductances are possible.
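
A sketch of those two calls; the (name, synapse count, initial conductivity) argument order follows the description above, while Conductivity.HALF is a placeholder, not a confirmed constant:

    // Sketch: create AHaH nodes "A" and "B" from the figure.
    // Conductivity.HALF is a placeholder for an initial conductivity value.
    ktram.create("A", 8, Conductivity.HALF);
    ktram.create("B", 16, Conductivity.HALF);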

The structure of the kT-RAM chip also allows us to easily reference the AHaH nodes' synapses and apply voltages through them. For instance, we could apply a voltage through the synapses highlighted in the picture below and then read the voltage at the root. In this case, these four synapses interact in deciding the voltage at the root, while the other synapses remain floating.

kT-RAM Synaptic Activation

This brings up a fundamental aspect of AHaH computing: the spike code. We run the chip by selectively pulsing a set of synapses at one time and seeing how they interact. The available synapses on kT-RAM are called the spike channels, the active spikes make up the spike pattern, and the way we convert information into spikes before feeding it onto the chip is called a spike-encoding. Interestingly, this is exactly how our sensory nerves interface with our brain: by spiking a set of wires that eventually fire up individual neurons.

Spike Encoding
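
As a toy illustration of the idea (purely illustrative, not a Knowm API call), a real-valued input can be binned so that each bin corresponds to one spike channel:

    // Toy spike encoder: bin a value x in [0, 1) onto one of numChannels
    // spike channels. Purely illustrative; real spike encodings are
    // problem-specific.
    static int[] encode(double x, int numChannels) {
      int channel = Math.min(numChannels - 1, (int) (x * numChannels));
      return new int[] { channel }; // the active spike pattern
    }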

With the emulator, we can reference particular synapses by creating an array of integers corresponding to the individual synapses and then setting them active on kT-RAM. For instance, we could spike all of the synapses of the "A" and "B" AHaH nodes created earlier like this:
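
The following is a sketch of what that could look like; the integer-array convention follows the text, but setSpikes is our placeholder name, not a confirmed Knowm API method:

    // Sketch: activate every synapse of node "A" (8 synapses) and
    // node "B" (16 synapses).
    int[] spikesA = {0, 1, 2, 3, 4, 5, 6, 7};
    ktram.setSpikes("A", spikesA);

    int[] spikesB = new int[16];
    for (int i = 0; i < 16; i++) {
      spikesB[i] = i;
    }
    ktram.setSpikes("B", spikesB);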

Finally, we can apply voltages through kT-RAM by selecting a previously instantiated AHaH node by name and executing one of the 14 instructions from the kT-Core Instruction Set. To do this with the emulator, we call an execute instruction. Each instruction applies a voltage in a particular direction, e.g. forward or reverse, and this affects how the conductivities of the synapses change and what outputs you read off. You can read more about the specifics here.

The following instruction pair, FF-XX, applies the forward voltages through the spiked synapses. We also read the voltage at the root by calling getY(). Note: this particular dual-instruction increases the conductivity of each spiked synapse; a reverse operation would have done the opposite.
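
A sketch of that call sequence; FF-XX and getY() are named in the text above, while the execute(...) signature and the Instruction enum are our placeholders:

    // Sketch: execute the FF-XX dual instruction on node "A", then read
    // the voltage at the root. Instruction.FF_XX and execute(...) are
    // placeholders, not confirmed Knowm API identifiers.
    ktram.execute("A", Instruction.FF_XX); // forward drive: spiked conductivities increase
    double y = ktram.getY();               // voltage read at the root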

At this point we've introduced kT-RAM, how we interface with it, how we read the voltages it gives off, and how this affects its state. The next thing to do is build our simplest Machine Learning tool: a Linear Classifier.

Further Reading

More in-depth publicly available information about kT-RAM and the full instruction set can be found at Cortical Processing with Thermodynamic-RAM.

TOC: Table of Contents
Previous: Introduction to Machine Learning
Next: Linear Classifier with the Knowm API

