## Complex Sine Wave Example

In this example, we generate a signal from the summation of five sinusoidal signals with randomly chosen amplitudes, periods, and phases. We then apply a simple supervised classifier model to make recursive and non-recursive predictions into the future.

For those who have joined the Knowm Development Community, the source code for the experiment is available under `SignalPredictionAppKtRam`:

```java
package org.knowm.knowmj.prediction;
```

We will be going over the relevant parts of this App during the following tutorial.

## Signal Generation

This example uses a number of superimposed sine waves as our signal. The `ComplexSineSignalGenerator` generates a sine wave from the superposition of a number of random sub-sine waves:

```java
package org.knowm.knowmj.syndata.scalar;
```

The wave can be generated and queried as follows.

```java
int numSubSines = 3;        // The number of sub sine waves to be superimposed
double periodScaleLow = 1;  // period lower bound
double periodScaleHigh = 2; // period upper bound
float noise = 0;            // Multiplier to randomly generated Gaussian signal noise

ComplexSineSignalGenerator signalGenerator =
    new ComplexSineSignalGenerator(numSubSines, periodScaleLow, periodScaleHigh, noise);

// Get signal at time = 0
float s1 = signalGenerator.getSignal();

// Get signal at time = 1
float s2 = signalGenerator.getSignal();

// Get signal at time = t
int t = 0;
float s3 = signalGenerator.getSignal(t);
```
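The Knowm source for `ComplexSineSignalGenerator` is not reproduced here, but as a rough sketch of how such a generator can work (the class name echoes the original; the fields, seeding scheme, and internal time counter below are all assumptions), one can sum randomly parameterized sine waves:

```java
// Sketch only: superimpose numSubSines random sine waves.
import java.util.Random;

public class ComplexSineSketch {

  private final float[] amp, period, phase;
  private int t = 0; // internal clock for the no-argument getSignal()

  public ComplexSineSketch(int numSubSines, double periodLow, double periodHigh, long seed) {
    Random r = new Random(seed);
    amp = new float[numSubSines];
    period = new float[numSubSines];
    phase = new float[numSubSines];
    for (int i = 0; i < numSubSines; i++) {
      amp[i] = r.nextFloat();                                             // random amplitude in [0, 1)
      period[i] = (float) (periodLow + r.nextFloat() * (periodHigh - periodLow)); // random period
      phase[i] = (float) (r.nextFloat() * 2 * Math.PI);                   // random phase
    }
  }

  /** Returns the signal at the next internal time step. */
  public float getSignal() {
    return getSignal(t++);
  }

  /** Returns the signal at an explicit time step. */
  public float getSignal(int time) {
    float s = 0f;
    for (int i = 0; i < amp.length; i++) {
      s += amp[i] * Math.sin(2 * Math.PI * time / period[i] + phase[i]);
    }
    return s;
  }
}
```

With a fixed seed the signal is reproducible, and its magnitude is bounded by the sum of the amplitudes.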

## Spiked Feature Vector Encoding

The `SignalPredictionAppKtRam`

uses a `Float_A2D_Buffered`

encoder to convert the real-valued signal values S(t) into spiked feature vectors.

1 |
org.knowm.knowmj.module.encoder._float |

First these signals are converted into a sparse encoding F(S(t)) using a spatially adaptive encoder `A2D_Encoder`

.

1 |
org.knowm.knowmj.module.encoder._float; |

The spiked features are then buffered to form a feature set vector.
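The buffering step can be sketched as follows. This is an illustration only, not the `Float_A2D_Buffered` implementation; the class name and the id-offset scheme are assumptions:

```java
// Sketch only: concatenate the last bufferLength spike sets, shifting each
// time slot into its own id range so the classifier can distinguish
// "feature f at t-1" from "feature f at t-2".
import java.util.ArrayDeque;
import java.util.Deque;

public class SpikeBufferSketch {

  private final int bufferLength;
  private final int idRange; // number of distinct spike ids the encoder can produce
  private final Deque<int[]> history = new ArrayDeque<>();

  public SpikeBufferSketch(int bufferLength, int idRange) {
    this.bufferLength = bufferLength;
    this.idRange = idRange;
  }

  /** Adds the newest spike set and returns the concatenated, offset feature vector. */
  public int[] buffer(int[] spikes) {
    history.addFirst(spikes);
    if (history.size() > bufferLength) {
      history.removeLast(); // drop the oldest time slot
    }
    int total = history.stream().mapToInt(a -> a.length).sum();
    int[] out = new int[total];
    int i = 0, slot = 0;
    for (int[] frame : history) {
      for (int id : frame) {
        out[i++] = id + slot * idRange; // offset each time slot's ids
      }
      slot++;
    }
    return out;
  }
}
```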

## Adaptive Encoding

The adaptive encoder is a simple recursive method for producing a spike encoding. It can be conveniently realized through strictly anti-Hebbian learning on a binary decision tree with an AHaH node at each tree node. Starting from the root node, the input S(t) is summed with a bias b, giving y = S(t) + b. Depending on the sign of y, the input is routed in one direction or the other toward the leaf nodes, and the bias is updated according to anti-Hebbian learning. This repeats at each node along the path until a leaf is reached.

If we then assign a unique integer to each node in the decision tree, the path taken from the root to the leaf becomes the spike encoding. This process is an adaptive analog-to-digital conversion. It generates an adaptive binning of the values in our signal, one with finer precision around areas of high density. You can learn more about this encoder type in the article on the A2D Encoder.
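As a minimal sketch of this idea (not the Knowm `A2D_Encoder` implementation; the class name, the heap-ordered node indexing, and the learning rate are assumptions), the routing and anti-Hebbian bias update might look like:

```java
// Sketch only: a depth-limited binary tree with one adaptive bias per node.
// The list of node ids visited from root to leaf is the spike encoding.
import java.util.ArrayList;
import java.util.List;

public class A2DSketch {

  private final float[] bias; // one bias per node, heap order (root at index 1)
  private final int depth;
  private final float alpha;  // anti-Hebbian learning rate (an assumption)

  public A2DSketch(int depth, float alpha) {
    this.depth = depth;
    this.alpha = alpha;
    this.bias = new float[1 << depth];
  }

  /** Routes s from the root to a leaf; the visited node ids form the spike code. */
  public List<Integer> encode(float s) {
    List<Integer> path = new ArrayList<>();
    int node = 1; // root
    for (int level = 0; level < depth; level++) {
      float y = s + bias[node];
      bias[node] -= alpha * y; // anti-Hebbian: drives the boundary toward the data
      path.add(node);
      node = (y >= 0) ? 2 * node + 1 : 2 * node; // route by the sign of y
    }
    return path;
  }
}
```

Because each bias adapts toward the local center of the values routed through its node, dense regions of the signal distribution end up with finer bins.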

## Classification and Learning

We use a kT-RAM based `LinearClassifier` to make predictions of future signal values:

```java
package org.knowm.knowmj.module.classifier;
```

A number of AHaH nodes compose the linear classifier, one for each bin generated by our adaptive A2D spike encoder. Each AHaH node's label is thus tied to a bin created by the A2D encoder, which shifts as the encoder adapts to our distribution. We could instead use a simple fixed discretization of the input space by swapping in a `Float_MaxMin` encoder.

Each AHaH node is trained on the spiked feature vectors, and the output is evaluated by taking the label associated with the AHaH node with the highest activation.
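The readout step described above can be sketched as a simple arg-max over per-label activations. This is an illustration, not the `ClassifierOutput` API; the weight matrix layout is an assumption:

```java
// Sketch only: each label's node sums its weights over the active spike ids;
// the label with the highest activation wins.
public class ReadoutSketch {

  /**
   * @param weights one weight row per label, one column per spike id
   * @param spikes  the active spike ids in the feature vector
   * @return the label with the highest summed activation
   */
  public static int bestLabel(float[][] weights, int[] spikes) {
    int best = 0;
    float bestActivation = Float.NEGATIVE_INFINITY;
    for (int label = 0; label < weights.length; label++) {
      float activation = 0f;
      for (int s : spikes) {
        activation += weights[label][s]; // accumulate this node's response
      }
      if (activation > bestActivation) {
        bestActivation = activation;
        best = label;
      }
    }
    return best;
  }
}
```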

During `SignalPredictionAppKtRam.nonRecursivePredict()` we use the previous true signal values to predict the next value, as follows. We make only a single pass over the data.

```java
private void nonRecursivePredict() throws IOException {

  float signalThen = 0;
  float signalNow = 0;

  for (int i = 0; i < timeSteps; i++) {

    // Get current signal
    signalNow = signalGenerator.getSignal();

    // Find the current signal's AHaH node label
    Set<Integer> truthLabels = getTrueLabels(signalNow);

    // Encode previous signal
    int[] spikes = simpleTemporalBufferSpikeEncoder.encode(signalThen);

    // Train the classifier with the old signal data to learn the current signal data
    ClassifierOutput classifierOutput = classifier.classify(spikes, truthLabels);

    // Reconstruct the predicted spike code back into a real-valued signal
    double signalPrediction = reconstruct(classifierOutput);

    // Save state information
    signalData[i] = signalNow;
    signalPredictionData[i] = signalPrediction;

    // Update position
    signalThen = signalNow;
  }
}
```
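The `reconstruct` step maps a predicted label back to a real value. One plausible sketch (an assumption, not the app's actual method) keeps a running mean of the signal values observed in each bin and returns that mean as the prediction:

```java
// Sketch only: label -> bin-center lookup, with the center maintained as a
// running mean of the signal values assigned to that bin.
public class ReconstructSketch {

  private final float[] binCenter;
  private final int[] binCount;

  public ReconstructSketch(int numBins) {
    binCenter = new float[numBins];
    binCount = new int[numBins];
  }

  /** Updates the running mean of signal values seen in a bin. */
  public void observe(int label, float signal) {
    binCount[label]++;
    binCenter[label] += (signal - binCenter[label]) / binCount[label];
  }

  /** Maps a predicted label back to a real-valued signal estimate. */
  public float reconstruct(int label) {
    return binCenter[label];
  }
}
```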

During `SignalPredictionAppKtRam.recursivePredict()` we use the predicted values from previous time steps to predict the next value. Again, we make only a single pass over the data.

```java
private void recursivePredict() throws IOException {

  float signalThen = 0.0f;
  float signalNow = 0.0f;

  for (int i = 0; i < timeSteps; i++) {

    if (i < timeSteps - testDuration) { // learn

      signalNow = signalGenerator.getSignal();
      Set<Integer> trueLabels = getTrueLabels(signalNow);
      int[] spikes = simpleTemporalBufferSpikeEncoder.encode(signalThen);
      ClassifierOutput classifierOutput = classifier.classify(spikes, trueLabels);

      signalData[i] = signalNow;
      signalPredictionData[i] = reconstruct(classifierOutput);

    } else { // recursive prediction: feed output back as input

      int[] signalSpikes = simpleTemporalBufferSpikeEncoder.encode(signalThen);
      ClassifierOutput classifierOutput = classifier.classify(signalSpikes, new HashSet<>());
      signalNow = (float) reconstruct(classifierOutput);

      signalData[i] = signalGenerator.getSignal();
      signalPredictionData[i] = signalNow;
    }

    signalThen = signalNow;
  }
}
```

## Results

The signal was generated from the summation of five sinusoidal signals with randomly chosen amplitudes, periods, and phases. Our linear classifier is simulated on kT-RAM with 8-bit `BYTE` core precision, and we set our A2D encoder to a depth of n = 6, giving a spatial resolution of 2^(n-1) = 32 bins and hence 32 unique labels.

## Recursive Prediction

The experiment ran for a total of 10,000 time steps, with recursive prediction occurring during the last 300 time steps. The following results were generated.

## Non-Recursive Prediction

We run the experiment for a total of 10,000 time steps, allowing our classifier to use the previous true time steps for prediction. Below are two separate functions and their approximations.

We record the error for both using an exponential moving average and generate the following plots.
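The exponential moving average itself is the standard recurrence; the smoothing factor used below is an assumption, not taken from the app:

```java
// Sketch only: exponentially weighted moving average of the prediction error.
public class EmaSketch {

  /**
   * One EMA update step: the new average is a blend of the latest error
   * and the previous average, weighted by the smoothing factor alpha.
   */
  public static float update(float ema, float error, float alpha) {
    return alpha * error + (1 - alpha) * ema;
  }
}
```

Smaller values of alpha produce a smoother but slower-moving error curve.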

Initially, the undertrained classifier and A2D encoder are not well suited to the problem and we see a high error. Over time, the error term decreases as the Linear Classifier learns and the A2D encoder adapts.

Increasing the length of our simulation from 10,000 steps to 100,000 steps yields a further, nonlinear decrease in error.

## Conclusion

We outline a simple model for sine wave signal prediction from randomly generated data. A spatially adaptive encoder is used to bin the real-valued signals and create sparse spike encodings. These are then buffered and used to train an AHaH based linear classifier for prediction. Two types of predictions are made: 1) recursive, where predictions are fed back to generate the next signal prediction, and 2) non-recursive, where true signal values are used as buffered features for prediction.

The results in Figures 1-5 display the accuracy of these models by graphing both the predicted and the actual signal. Large spikes away from the actual signal can be seen; these reflect an incorrect classification by an AHaH node far from the correct location of the signal. The number of these errors decreases as additional training is applied (Figures 6 and 7).

## Further Reading

TOC: Table of Contents

Prev: Signal Prediction

Next: Reinforcement Learning
