The Knowm API is a special Machine Learning (ML) library, the result of over a decade of research and development. It can be used to solve problems across many domains of machine learning, from classification, prediction, and anomaly detection to feature learning, robotic actuation, and combinatorial optimization, as shown in a series of articles titled Machine Learning Capabilities with Thermodynamic RAM and the Knowm API.
It is a collection of code modules built on top of a Thermodynamic Random Access Memory (kT-RAM) emulator, the first general-purpose processor designed on the principles of AHaH Computing. kT-RAM is a fundamentally new type of computing substrate that resolves a serious problem in machine learning, or in any other computational program where large amounts of memory must be ‘adapted’ and ‘integrated’ constantly. Every modern computing system separates memory and processing. This works well for many tasks, but it fails for large-scale adaptive systems like brains or large ML models such as neural networks. Indeed, there is no system in Nature outside of modern human digital computers that actually separates memory and processing, so it is a wonder we have been able to do as much as we have. kT-RAM provides a universal adaptation or learning substrate and solves, in physically adaptive hardware, the learning problems that we would otherwise have to compute by shuttling information back and forth between memory and processing.
The Knowm API is a software ‘hook’ into hardware accelerators, especially kT-RAM. There is no known method of computing more efficient at synaptic integration and adaptation than AHaH circuits. By using the Knowm API, your application will port to physical kT-RAM hardware when it becomes available. When your application is running hundreds to thousands of times more efficiently than your competitors’, you are going to be glad you took the time to learn about Knowm and the Knowm API. Consider it a bonus that it is easy to use and state-of-the-art.
Do you need some more reasons to invest your time and energy in the Knowm API?
The Knowm API is focused on adaptive machine learning tasks that span Perception, Planning, and Control. To date we have shown:
We believe the Knowm API will eventually encompass a great many things, and we have only just started. We hope you will join us in searching the universe of possibilities.
The bottom line for you as a programmer or technical architect is that by using the Knowm API, you commit to learning one framework, but you can use it for a wide range of problems. By studying the tutorials and demo applications, you will quickly see a pattern emerge and gain confidence in your own ability to use the Knowm API to solve your particular problem.
The Knowm API is available today as a Java library. Java is a statically typed, compiled language offering speed advantages over dynamically typed languages such as Python. We have integrated Trove’s off-heap collections framework for fast and efficient synaptic access and updates on regular CPUs, and we are also porting kT-RAM emulators to parallel processors such as Adapteva’s Epiphany, GPUs, and FPGAs. By taking advantage of every computing resource available, kT-RAM emulators provide tradeoffs in synaptic bandwidth and space, enabling optimization for end-use applications. We have also developed a horizontally scalable parallel computing framework called the SENSE Platform. Need more power? Plug more SENSE Servers into your network.
We welcome partnerships with organizations and individuals who want to port the Knowm API to their platforms or languages of choice or build kT-RAM emulators for specific hardware platforms. In many cases, we will gladly offer code bounties.
AHaH Computing is the theoretical underpinning that lets you understand how to use kT-RAM and effectively exploit the Knowm API. Continuously adaptive learning systems are extremely hard to understand if you do not have the right conceptual framework, and it has taken us a long time to make it simple. This is where AHaH Computing comes in. All adaptation on kT-RAM is based on Anti-Hebbian and Hebbian (AHaH) plasticity. AHaH plasticity is one of the simplest rules for adaptation you may come across. But don’t let its simplicity fool you. There is a rich computational universe lurking under its simple exterior. After more than a decade of studying it, we are still finding new ways to use AHaH plasticity. Fortunately, you do not have to take the decade-long, arduous, and confusing path that we took. We have created a conceptual framework to help you understand how to think about AHaH attractor states and have produced many working examples for you to learn from.
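To make the idea of combined anti-Hebbian and Hebbian adaptation concrete, here is a minimal numerical sketch. It is illustrative only, not Knowm API code: the particular two-phase update (an anti-Hebbian term pushing active weights against the node output, plus a Hebbian term reinforcing the output’s sign), the learning rates `alpha` and `beta`, and the class and method names are all assumptions made for this example.

```java
// Illustrative AHaH-style plasticity sketch (NOT the Knowm API).
// A node sums the weights of its active synapses; each update combines a
// Hebbian term (+alpha toward sign(y)) with an anti-Hebbian term (-beta * y).
public class AhahSketch {

    // Node output y = sum of weights on active inputs.
    static double evaluate(double[] w, boolean[] active) {
        double y = 0.0;
        for (int i = 0; i < w.length; i++) {
            if (active[i]) y += w[i];
        }
        return y;
    }

    // alpha: Hebbian rate, beta: anti-Hebbian rate (hypothetical names).
    static void update(double[] w, boolean[] active, double y,
                       double alpha, double beta) {
        double target = Math.signum(y);
        for (int i = 0; i < w.length; i++) {
            if (active[i]) w[i] += alpha * target - beta * y;
        }
    }

    public static void main(String[] args) {
        double[] w = {0.1, -0.05, 0.2};
        boolean[] active = {true, false, true};
        // Repeated unsupervised updates drive the output toward a stable
        // fixed point (here alpha/beta = 2) without ever flipping its sign.
        for (int step = 0; step < 50; step++) {
            double y = evaluate(w, active);
            update(w, active, y, 0.01, 0.005);
        }
        System.out.println(evaluate(w, active) > 0); // prints: true
    }
}
```

The interplay of the two terms is what produces attractor behavior: the Hebbian term commits the node to a decision, while the anti-Hebbian term keeps the weights bounded.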
Not every mathematical function has a physical analog, but for those that do, we can exploit the laws of physics to solve our problem and gain tremendous energy and space savings. The AHaH rule is a physical process and occurs whenever energy dissipates through a pliable (plastic) container. kT-RAM exploits this process and provides an adaptation resource to computer programs. You may be familiar with GPUs and how they can be used as parallel processors to speed up execution of some types of programs. Think of kT-RAM as a co-processor that speeds up execution of learning operations.
kT-RAM is composed of cores, where each core can be partitioned into collections of synapses of arbitrary size (AHaH Nodes). AHaH Nodes can be allocated across multiple cores or within a single core, all the way down to individual synapses on one core.
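As a rough mental model of that resource scheme (hypothetical types and names, not the actual Knowm API), core partitioning can be pictured as claiming synapse slots out of fixed per-core pools, with each AHaH Node recorded as the list of per-core slices it occupies:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical resource-model sketch (NOT the Knowm API): a core is a fixed
// pool of synapse slots; an AHaH Node may occupy a fraction of one core or
// spill across several cores.
public class KtRamSketch {

    static class Core {
        final int capacity;
        int used = 0;
        Core(int capacity) { this.capacity = capacity; }
        int free() { return capacity - used; }
    }

    static class AhahNode {
        // Each slice is {coreIndex, synapseCount}.
        final List<int[]> slices = new ArrayList<>();
    }

    final List<Core> cores = new ArrayList<>();

    KtRamSketch(int numCores, int synapsesPerCore) {
        for (int i = 0; i < numCores; i++) cores.add(new Core(synapsesPerCore));
    }

    // Greedily claim `synapses` slots, spilling across cores as needed.
    AhahNode allocate(int synapses) {
        AhahNode node = new AhahNode();
        for (int i = 0; i < cores.size() && synapses > 0; i++) {
            Core c = cores.get(i);
            int take = Math.min(c.free(), synapses);
            if (take > 0) {
                c.used += take;
                node.slices.add(new int[]{i, take});
                synapses -= take;
            }
        }
        if (synapses > 0) throw new IllegalStateException("out of synapses");
        return node;
    }

    public static void main(String[] args) {
        KtRamSketch ram = new KtRamSketch(2, 1024);
        AhahNode small = ram.allocate(16);   // a fraction of core 0
        AhahNode large = ram.allocate(1500); // spans cores 0 and 1
        System.out.println(small.slices.size() + " " + large.slices.size()); // prints: 1 2
    }
}
```

The point of the sketch is only the allocation granularity: nodes of very different sizes share the same pool of synapses, which is why a single substrate can serve many different algorithms.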
There are a few generations of kT-RAM ahead of us:
As kT-RAM evolves from emulator to co-processor, processing speed will go up and power consumption will go down. Software will be written, algorithms will be evolved, and applications will be deployed. As some application spaces settle on specific topological architectures, dedicated application-specific kT-RAM chips will be produced to further increase power and space efficiency, for example in robotic control systems.
The short answer is that kT-RAM is not an algorithm. It is a computational primitive for adaptive computations and can be configured for use in many different types of algorithms. Asking this question is comparable to asking how GPUs or CPUs compare to [some learning algorithm]. It is not really a valid question, because they are not the same thing.
The kT-RAM specification connects the dots from low-level memristor physics, where energy flows through adaptive matter, all the way to real-world machine learning applications built on the Knowm API ML modules. Based on our current results and experience, we believe kT-RAM offers all the capability needed to achieve state-of-the-art performance across most branches of machine learning. However, we have not come close to comparing our results to every other method out there, just as we have not come close to exploring the territory that kT-RAM offers. AHaH attractor states are computationally universal and consequently offer a universe of possibilities. The few possibilities that we have explored show that kT-RAM is not just a theoretical plaything but a tool with real-world practical potential.
The Knowm API, including all benchmarks and applications as well as in-depth tutorials covering the essentials of AHaH Computing and kT-RAM, is available to all Knowm Developer Community (KDC) members. Make sure to also read our article Collaborative Cognitive Computing to learn about the motivation behind the KDC and how it benefits everyone involved!