What is the Knowm API?

The Knowm API is a different kind of Machine Learning (ML) library, the result of over a decade of research and development. It can be used to solve problems across many domains of machine learning, from classification, prediction, and anomaly detection to feature learning, robotic actuation, and combinatorial optimization, as shown in a series of articles titled Machine Learning Capabilities with Thermodynamic RAM and the Knowm API. It is a collection of code modules built on top of a Thermodynamic Random Access Memory (kT-RAM) emulator, the first general-purpose processor designed on the principles of AHaH Computing. kT-RAM is a fundamentally new type of computing substrate that resolves a serious problem in machine learning, or in any other computation where large amounts of memory must be constantly ‘adapted’ and ‘integrated’. Every modern computing system separates memory and processing. This works well for many tasks, but it fails for large-scale adaptive systems like brains or large ML models such as neural networks. Indeed, no system in Nature outside of modern human digital computers actually separates memory and processing, so it is a wonder we have been able to do as much as we have. kT-RAM provides a universal adaptation or learning substrate and solves, in physically adaptive hardware, the learning problems that we would otherwise have to compute by shuttling information back and forth between memory and processing.

Why should I use the Knowm API?

The Knowm API is a software ‘hook’ into hardware accelerators, especially kT-RAM. There is no known method of computing more efficient at synaptic integration and adaptation than AHaH circuits. By using the Knowm API, your application will port to physical kT-RAM hardware when it becomes available. When your application is running hundreds to thousands of times more efficiently than your competitors’, you are going to be glad you took the time to learn about Knowm and the Knowm API. Consider it a bonus that it is easy to use and state of the art.

Do you need some more reasons to invest your time and energy in the Knowm API?

  1. Use a common framework to solve a diverse set of learning problems.
  2. Take advantage of the SENSE Platform for turn-key, horizontally scalable hardware acceleration.
  3. Join a growing worldwide community of curious entrepreneurs, thinkers, and tinkerers who are learning how to exploit the secrets of natural self-organization.

What Kind of Problems Can the Knowm API Solve?

The Knowm API is focused on adaptive machine learning tasks spanning Perception, Planning, and Control. To date we have demonstrated:

  1. Analog signal to spike conversion (sparse spike encoding)
  2. Multi-label, online optimal linear supervised and semi-supervised classification
  3. Feature learning (multiple approaches)
  4. Clustering
  5. Temporal prediction
  6. Anomaly detection
  7. Combinatorial optimization or hill-climbing
  8. Robotic actuation/temporal-difference learning
  9. Universal reconfigurable logic
  10. Random non-repeating set iteration
  11. Random number generation

We believe the Knowm API will eventually encompass a great many things, and we have only just started. We hope you will join us in searching the universe of possibilities.

The bottom line for you as a programmer or technical architect is that by using the Knowm API, you commit to learning one framework that you can then use for a wide range of problems. By studying the tutorials and demo applications, you will quickly see a pattern emerge and gain confidence in your own ability to use the Knowm API to solve your particular problem.
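To give a feel for that recurring pattern, here is a deliberately hypothetical Java-flavored outline of the loop that most spike-based learning applications share: encode raw input as spikes, evaluate AHaH nodes, apply feedback. None of the names below (SpikeEncoder, AhahNode, SpikePattern) come from the actual Knowm API; they are placeholders for illustration only.

```java
/**
 * Hypothetical outline of the recurring application pattern: encode input as
 * a spike set, evaluate, learn. Every name below is an illustrative
 * placeholder, not the Knowm API.
 */
public class LearningLoopSketch {

  interface SpikePattern {}                        // a sparse set of active spike channels

  interface SpikeEncoder {                         // analog signal -> spikes
    SpikePattern encode(double[] rawInput);
  }

  interface AhahNode {                             // one adaptive unit
    double evaluate(SpikePattern spikes);          // read phase
    void feedback(SpikePattern spikes, double supervisedSignal); // write phase
  }

  /** Returns the number of evaluations whose sign disagreed with the label. */
  static int train(SpikeEncoder encoder, AhahNode node,
                   double[][] inputs, double[] labels) {
    int errors = 0;
    for (int i = 0; i < inputs.length; i++) {
      SpikePattern spikes = encoder.encode(inputs[i]);  // 1. encode raw data as spikes
      double y = node.evaluate(spikes);                 // 2. evaluate (read phase)
      if (Math.signum(y) != Math.signum(labels[i])) {
        errors++;
      }
      node.feedback(spikes, labels[i]);                 // 3. apply feedback (write phase)
    }
    return errors;
  }
}
```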

What Programming Languages are Available?

The Knowm API is available today as a Java library. Java is a statically typed, compiled language offering speed advantages over dynamically typed languages such as Python. We have integrated Trove’s off-heap collections framework for fast and efficient synaptic access and updates on ordinary CPUs, and we are also porting kT-RAM emulators to parallel processors such as Adapteva’s Epiphany, GPUs, and FPGAs. By taking advantage of every computing resource available, kT-RAM emulators offer tradeoffs in synaptic bandwidth and space, enabling optimization for end-use applications. We have also developed a horizontally scalable parallel computing framework called the SENSE Platform. Need more power? Plug more SENSE Servers into your network.
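As a rough illustration of why a primitive-keyed collection helps here, the sketch below uses Trove’s TLongFloatHashMap as a synaptic weight store, avoiding the boxing overhead of a plain java.util.HashMap. The surrounding class and method names are hypothetical; this is not the Knowm API’s actual internal representation.

```java
import gnu.trove.map.hash.TLongFloatHashMap;

/**
 * Illustrative only: a primitive long->float map used as a synaptic weight
 * store. This is NOT the Knowm API's internal representation.
 */
public class SynapseStore {

  // Keys encode a synapse address, values hold its weight.
  private final TLongFloatHashMap weights = new TLongFloatHashMap();

  /** Read a synaptic weight; an untouched synapse reads as the map's no-entry value (0.0f by default). */
  public float read(long synapseId) {
    return weights.get(synapseId);
  }

  /** Nudge a weight by delta, creating it at delta if it does not exist yet. */
  public float adapt(long synapseId, float delta) {
    return weights.adjustOrPutValue(synapseId, delta, delta);
  }
}
```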

We welcome partnerships with organizations and individuals who want to port the Knowm API to their platforms or languages of choice or build kT-RAM emulators for specific hardware platforms. In many cases, we will gladly offer code bounties.

What is AHaH Computing?

AHaH Computing is the theoretical underpinning that lets you understand how to use kT-RAM and effectively exploit the Knowm API. Continuously adaptive learning systems are extremely hard to understand if you do not have the right conceptual framework, and it has taken us a long time to make it simple. This is where AHaH Computing comes in. All adaptation on kT-RAM is based on Anti-Hebbian and Hebbian (AHaH) plasticity. AHaH plasticity is one of the simplest rules for adaptation you may come across, but don’t let its simplicity fool you: there is a rich computational universe lurking under its simple exterior. After more than a decade of studying it, we are still finding new ways to use AHaH plasticity. Fortunately, you do not have to take the decade-long, arduous, and confusing path that we took. We have created a conceptual framework to help you understand how to think about AHaH attractor states and have produced many working examples for you to learn from.
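For intuition only, here is a toy Java sketch of AHaH-style plasticity assuming a simplified functional form, Δwᵢ = xᵢ(α·sign(y) − β·y), in which the −β·y term plays the anti-Hebbian role and the +α·sign(y) term the Hebbian role. This simplified form is an assumption made for illustration; it is not the kT-RAM circuit model or the Knowm API implementation.

```java
import java.util.Random;

/**
 * Toy functional model of an AHaH node, for illustration only.
 * Assumes the simplified update dw_i = x_i * (alpha * sign(y) - beta * y):
 * the -beta*y term is the anti-Hebbian pull toward the decision boundary,
 * the +alpha*sign(y) term is the Hebbian reinforcement of the evaluation.
 * Not the kT-RAM circuit model or the Knowm API implementation.
 */
public class ToyAhahNode {

  private final double[] w;     // synaptic weights
  private final double alpha;   // Hebbian learning rate
  private final double beta;    // anti-Hebbian learning rate

  public ToyAhahNode(int numSynapses, double alpha, double beta, long seed) {
    this.w = new double[numSynapses];
    this.alpha = alpha;
    this.beta = beta;
    Random r = new Random(seed);
    for (int i = 0; i < numSynapses; i++) {
      w[i] = 1e-3 * r.nextGaussian(); // small random symmetry break
    }
  }

  /** Evaluate the node on a sparse, binary spike pattern. */
  public double evaluate(boolean[] x) {
    double y = 0;
    for (int i = 0; i < w.length; i++) {
      if (x[i]) y += w[i];
    }
    return y;
  }

  /** Apply unsupervised AHaH feedback to the synapses that were active. */
  public void adapt(boolean[] x, double y) {
    double feedback = alpha * Math.signum(y) - beta * y;
    for (int i = 0; i < w.length; i++) {
      if (x[i]) w[i] += feedback;
    }
  }
}
```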

What is kT-RAM?

Not every mathematical function has a physical analog, but for those that do, we can exploit the laws of physics to solve our problem and gain tremendous energy and space savings. The AHaH rule is a physical process that occurs whenever energy dissipates through a pliable (plastic) container. kT-RAM exploits this process and provides an adaptation resource to computer programs. You may be familiar with GPUs and how they can be used as parallel processors to speed up the execution of some types of programs. Think of kT-RAM as a co-processor that speeds up the execution of learning operations.

kT-RAM is composed of cores, and each core can be partitioned into collections of synapses of arbitrary size called AHaH Nodes. AHaH Nodes can be allocated across multiple cores, within a single core, or all the way down to individual synapses on one core.
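As a mental model of that partitioning (with entirely hypothetical names, not the Knowm API), an AHaH Node can be pictured as nothing more than a set of synapse addresses drawn from one or more cores:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of how kT-RAM cores might be partitioned into AHaH
 * Nodes: each node is simply a list of (core, synapse-index) addresses,
 * which may come from one core or span several. Names are illustrative
 * and do not reflect the Knowm API.
 */
public class CorePartitionSketch {

  /** A synapse address: which core it lives on and where within that core. */
  static final class SynapseAddress {
    final int core;
    final int index;
    SynapseAddress(int core, int index) { this.core = core; this.index = index; }
  }

  /** An AHaH Node is a collection of synapse addresses of arbitrary size. */
  static final class AhahNode {
    final List<SynapseAddress> synapses = new ArrayList<>();
  }

  public static void main(String[] args) {
    // One node confined to core 0, another spanning cores 1 and 2.
    AhahNode small = new AhahNode();
    small.synapses.add(new SynapseAddress(0, 0));
    small.synapses.add(new SynapseAddress(0, 1));

    AhahNode spanning = new AhahNode();
    for (int i = 0; i < 8; i++) {
      spanning.synapses.add(new SynapseAddress(i % 2 + 1, i));
    }
    System.out.println("small node synapses:    " + small.synapses.size());
    System.out.println("spanning node synapses: " + spanning.synapses.size());
  }
}
```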


There are a few generations of kT-RAM ahead of us:

  1. First Generation: Emulated on commodity hardware (Epiphany, CPUs, FPGAs, GPUs, etc.)
  2. Second Generation: Peripheral devices & co-processors
  3. Third Generation: Direct integration with multi-core routing and computing architectures
  4. Fourth Generation: Semi-Fixed topological multi-chip systems

As kT-RAM evolves from emulator to co-processor, processing speed will go up and power consumption will go down. Software will be written, algorithms will be evolved and applications will be deployed. As some application spaces settle in on specific topological architectures, dedicated application-specific kT-RAM chips will be produced to further increase power and space efficiency, for example in robotic control systems.

How does kT-RAM Compare to [Some Learning Algorithm]?

The short answer is that kT-RAM is not an algorithm. It is a computational primitive for adaptive computation and can be configured for use in many different types of algorithms. Asking this question is comparable to asking how GPUs or CPUs compare to [some learning algorithm]. It is not really a valid question, because they are not the same thing.

The kT-RAM specification connects the dots from low-level memristor physics, where energy flows through adaptive matter, all the way to real-world machine learning applications built on the Knowm API ML modules. Based on our current results and experience, we believe kT-RAM offers all the capability needed to achieve state-of-the-art results across most branches of machine learning. However, we have not come close to comparing our results against every other method out there, just as we have not come close to exploring the territory that kT-RAM offers. AHaH attractor states are computationally universal and consequently offer a universe of possibilities. The few possibilities that we have explored show that kT-RAM is not just a theoretical plaything but a tool with real-world practical potential.

Where and When Can I Download the Knowm API?

The Knowm API, including all benchmarks and applications as well as in-depth tutorials covering the essentials of AHaH Computing and kT-RAM, is available to all Knowm Developer Community members. Make sure to also read our article Collaborative Cognitive Computing to learn about the motivation behind the KDC and how it benefits everyone involved!

4 Comments

    • Duncan Fairbanks

      kT-RAM technology sounds like an essential first step toward learning without the von Neumann bottleneck. As a student of computer engineering and machine learning, I am interested in supporting this technology.

      • Alex Nugent

        Duncan – thanks for the support! We also see it as a first step. I believe kT-RAM is a good starting point because it is simple and it can be used as a specification. That means we can make kT-RAM out of many technologies (digital and analog CMOS as well as memristors) and build up the application space. The only thing that matters in the end is utility. Don’t forget to subscribe to the newsletter and we will let you know when the Knowm API is available for developers (soon!).

    • Joni Dambre

      Hi,
      I came across your site and your technology sounds fascinating. However, I was wondering how it relates to neural networks, both as a computational paradigm and in terms of hardware realisations thereof. As a computational paradigm, recurrent neural networks are equally biologically inspired and they also naturally integrate memory and computation. Obviously, they exist in many flavours and some are more biologically inspired than others. Secondly, several hardware realisations of (analog and spiking) neural networks exist.

      I’m interested in your thoughts on this …

      • Alex Nugent

        Joni – Recurrent neural networks are algorithms. kT-RAM is an adaptive computational substrate. A number of neuromorphic chips have been built over the years, with varying degrees of success, each limited by the hardware available. kT-RAM is not so much biologically inspired as it is “nature inspired”. We are taking a universal adaptive building block found in nature and providing it as a computational resource. Existing neural processors are either useless (they can’t learn or do not solve benchmark problems at the required levels) or they are limited in scope because they are implementations of specific algorithms. kT-RAM is a low-level resource capable of providing memory, logic, and machine learning functions at the hardware level, and it can be used to solve a number of problems within a number of algorithms. It merges memory and processing, reducing synaptic activation and adaptation to a physical process that does not have to be computed.
