## Signal Prediction

Consider the following scenario: a researcher at a wealth management fund has gathered data on the movement of its security holdings over the previous ten years. For each year in that period, the fundamental and technical data describing the price movement of each security has been recorded, and the researcher would like to assess the value of each instrument at some point in the future.

If these researchers decided to use this historical information to build a predictive model, they would be applying a technique from signal prediction.

Signal prediction is not limited to finance, however. Broadly, it describes any prediction that uses the prior history of a signal, or group of signals, to determine the future state of that signal. In the scenario above, the signal of interest is the price of the asset.

Signals can be any kind of order-dependent information, including price indices, global temperatures, resource supplies, or object positions. Accurate predictive models for such signals are highly sought after: they are widely used in finance and insurance, telecommunications, resource planning, healthcare diagnostics, and robotic actuation. Predictive climate models in particular have the potential to affect, on a large scale, the actions of governments and people during this century.

## Methods

Current prediction approaches and algorithms are diverse but can be broadly grouped into statistical, regression, and Machine Learning techniques.

### Regression

Regression models attempt to find a predictive equation based on the response of the signal to a set of independent predictor variables. Linear regression, for instance, finds the linear combination of predictor variables that best fits the data points. Other regression types, such as discrete or non-linear regression, apply different functions to the independent variables. In all cases, the objective of a regression method is to select the model parameters that minimize an error function between the model and the data.

Once a regression model has been fit, predictions are made by extrapolating the signal of interest to some later date.
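As a minimal sketch of this fit-then-extrapolate idea, the following fits a straight line to a few yearly observations with NumPy and extrapolates it one year past the data. The data values and the use of `numpy.polyfit` are illustrative assumptions, not part of the article:

```python
import numpy as np

# Illustrative yearly observations (assumed data, e.g. an asset price)
t = np.array([0, 1, 2, 3, 4, 5], dtype=float)       # years
y = np.array([10.0, 11.5, 12.9, 14.2, 16.1, 17.3])  # observed signal

# Fit y ~ a*t + b by minimizing the squared error between model and data
a, b = np.polyfit(t, y, deg=1)

# Predict by extrapolating the fitted line one year beyond the data
y_future = a * 6 + b
```

Any other regression family (polynomial, non-linear) follows the same pattern: choose a model form, minimize an error function over its parameters, then evaluate the fitted model at a future time.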

### Time Series Models

Time series models account for the fact that data points taken over time may have an internal structure, such as autocorrelation, trend, or seasonal variation. Statistical methods are used to extract these features, commonly decomposing the signal into trend, seasonal, and cyclical components. Future values are then assumed to follow these components.

### Machine Learning Models

Machine Learning models employ techniques that enable computers to learn a model from a set of examples. These include Artificial Neural Networks, Support Vector Machines, K-Nearest Neighbors, and Decision Trees. The learned model is then applied to unseen data for prediction.
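To make the learn-from-examples idea concrete, here is a minimal K-Nearest Neighbors classifier in plain NumPy; the tiny two-class dataset and the function name are assumptions for illustration only:

```python
import numpy as np

# Tiny labeled training set (assumed data): 2-D points in two classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, X, y, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X - x, axis=1)   # distance to every example
    nearest = np.argsort(dists)[:k]         # indices of the k closest
    return np.bincount(y[nearest]).argmax() # most common label wins

label = knn_predict(np.array([0.95, 1.05]), X, y)  # → 1
```

The "model" here is simply the stored examples; other techniques in the list (neural networks, SVMs, decision trees) compress the examples into learned parameters instead.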

If you’re interested in using one of these Machine Learning techniques, we have compiled a comprehensive list of statistical and Machine Learning libraries in this article.

## Signal Prediction with kT-RAM

Signal prediction on kT-RAM is most similar to the Machine Learning techniques above: no explicit equation is formed (as in regression), nor are signals explicitly decomposed into components (as in time series models). Instead, a model of the signal is both learned and represented by synaptic connections on kT-RAM.

In one approach, we pose the prediction of the future state of the series as a multi-label classification problem. In this way, complex signals can be learned and predicted using the AHaH classifier.

As a simplified proof of concept, a complex temporal signal prediction experiment can be designed. At each moment in time, the real-valued signal S(t) is converted into a sparse spike-encoded representation F(S(t)). The last N of these features are buffered to form a feature set:

[latex isblock=true] \{ F(S(t-N)), F(S(t-N+1)), F(S(t-N+2)), \ldots, F(S(t-1)) \} [/latex]

This feature set is then used to predict the current feature F(S(t)), with the spikes of the current feature serving as supervised labels. After learning, the output prediction may be used in lieu of the actual input, and the model run forward recursively in time. In this way, extended predictions about the future are possible.
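The buffer-and-recurse scheme can be sketched without kT-RAM hardware. Below, a linear autoregressive model trained by least squares stands in for the AHaH classifier (an assumption: the spike encoding and classifier are replaced by a plain real-valued predictor), but the recursion over the feature buffer is the same idea:

```python
import numpy as np

N = 8                                   # buffer length (assumed)
t = np.arange(200)
signal = np.sin(0.3 * t) + 0.5 * np.sin(0.7 * t)  # a "complex sine wave"

# Training set: each row is the buffer {S(t-N), ..., S(t-1)}, target is S(t)
rows = np.array([signal[i - N:i] for i in range(N, len(signal))])
targets = signal[N:]
w, *_ = np.linalg.lstsq(rows, targets, rcond=None)  # stand-in "learning" step

# Recursive prediction: feed each output back in place of the real input
buffer = list(signal[-N:])
preds = []
for _ in range(20):
    nxt = float(np.dot(w, buffer))
    preds.append(nxt)
    buffer = buffer[1:] + [nxt]         # slide the buffer forward in time
```

Because each prediction replaces a real input, errors can compound; the horizon over which recursive forecasts stay accurate depends on how well the learned model captures the signal's structure.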

In the next article, we demonstrate this machine learning method through an example. We use a complex sine wave as the signal and a kT-RAM classifier to make predictions.

## Further Reading

TOC: Table of Contents

Previous: Clustering KNN Encodings

Next: Complex Sine Wave Prediction
