Soletta machine learning
Machine learning for IoT devices
Neural network engine

The SML neural network engine. More...

Enumerations

enum  sml_ann_activation_function {
  SML_ANN_ACTIVATION_FUNCTION_SIGMOID, SML_ANN_ACTIVATION_FUNCTION_SIGMOID_SYMMETRIC, SML_ANN_ACTIVATION_FUNCTION_GAUSSIAN, SML_ANN_ACTIVATION_FUNCTION_GAUSSIAN_SYMMETRIC,
  SML_ANN_ACTIVATION_FUNCTION_ELLIOT, SML_ANN_ACTIVATION_FUNCTION_ELLIOT_SYMMETRIC, SML_ANN_ACTIVATION_FUNCTION_COS, SML_ANN_ACTIVATION_FUNCTION_COS_SYMMETRIC,
  SML_ANN_ACTIVATION_FUNCTION_SIN, SML_ANN_ACTIVATION_FUNCTION_SIN_SYMMETRIC
}
 The functions that are used by the neurons to produce an output. More...
 
enum  sml_ann_training_algorithm { SML_ANN_TRAINING_ALGORITHM_QUICKPROP, SML_ANN_TRAINING_ALGORITHM_RPROP }
 Algorithm types used to train a neural network. More...
 

Functions

struct sml_object * sml_ann_new (void)
 Creates an SML neural network engine. More...
 
bool sml_ann_set_activation_function_candidates (struct sml_object *sml, enum sml_ann_activation_function *functions, unsigned int size)
 Set the neural networks activation function candidates. More...
 
bool sml_ann_set_cache_max_size (struct sml_object *sml, unsigned int max_size)
 Set the maximum number of neural networks in the cache. More...
 
bool sml_ann_set_candidate_groups (struct sml_object *sml, unsigned int candidate_groups)
 Set the number of neural network candidates. More...
 
bool sml_ann_set_desired_error (struct sml_object *sml, float desired_error)
 Set the neural network desired error. More...
 
bool sml_ann_set_initial_required_observations (struct sml_object *sml, unsigned int required_observations)
 Set the required number of observations to train the neural network. More...
 
bool sml_ann_set_max_neurons (struct sml_object *sml, unsigned int max_neurons)
 Set the maximum number of neurons in the network. More...
 
bool sml_ann_set_training_algorithm (struct sml_object *sml, enum sml_ann_training_algorithm algorithm)
 Set the neural network training algorithm. More...
 
bool sml_ann_set_training_epochs (struct sml_object *sml, unsigned int training_epochs)
 Set the neural network training epochs. More...
 
bool sml_ann_supported (void)
 Check if SML was built with neural networks support. More...
 
bool sml_ann_use_pseudorehearsal_strategy (struct sml_object *sml, bool use_pseudorehearsal)
 Set the pseudorehearsal strategy. More...
 
bool sml_is_ann (struct sml_object *sml)
 Check if the SML object is a neural network engine. More...
 

Detailed Description

The SML neural network engine.

A neural network consists of a set of interconnected neurons distributed in layers, usually three (input, hidden, and output layers). Every connection between two neurons has an associated weight; these weights are initialized randomly with values between -0.2 and 0.2 and adjusted during the training phase so that the neural network output predicts the right value.

The neuron is the basic unit of the neural network, and it is responsible for producing an output value based on its inputs. The output is calculated using the following formula:

$ f(\sum_{i=1}^{n} W_i * I_i) = Out $ (1)

Where:

- $I_i$ is the value of the i-th input;
- $W_i$ is the weight of the connection that carries that input;
- $f$ is the activation function;
- $Out$ is the neuron's output value.

The activation function is chosen by the user and can be any differentiable function; however, most problems can be solved using the sigmoid function.

As an example, imagine that one wants to predict whether a light should be on or off based on whether Bob and Alice are present in the room, using the following trained neural network.

(Figure ann.png: the example trained neural network.)

Consider that Bob is present (input is 1) and Alice (input is 0) is not.

The first step is to provide Bob's and Alice's presence values to the input neurons. In a neural network the input neurons are special, because they do not apply formula (1) to produce an output: the output value of an input neuron is the input value itself. In this case, the N1 and N2 neurons will output 1 and 0, respectively.

Consider that all neurons are using the sigmoid function defined as:

$\frac{1}{1 + e^{-x}}$

Using the formula (1) the N3's output will be:

\begin{eqnarray*} OutputN3 &=& \frac{1}{1 + e^{-x}}\\ &=& \frac{1}{1 + e^{-((1 * 3) + (0 * 1.2))}}\\ &=& 0.95 \end{eqnarray*}

The same thing for N4:

\begin{eqnarray*} OutputN4 &=& \frac{1}{1 + e^{-x}}\\ &=& \frac{1}{1 + e^{-((1 * 3.4) + (0 * 2.5))}}\\ &=& 0.96 \end{eqnarray*}

Finally, the light state (N5's output):

\begin{eqnarray*} OutputN5 &=& \frac{1}{1 + e^{-x}}\\ &=& \frac{1}{1 + e^{-((0.95 * 4) + (0.96 * 5))}}\\ &=& 0.99 \end{eqnarray*}

The neural network predicts that the light state is 0.99; as this is very close to 1, we can consider that the light should be on.

Remarks
The values used in the neural network above (inputs, outputs, neuron weights) were chosen randomly.

The example above uses two neurons in the hidden layer; however, for some problems two neurons are not enough. There is no silver bullet for how many neurons one should use in the hidden layer; this number is usually obtained by trial and error. SML can handle this automatically: during the training phase, SML will choose the neural network topology by itself (how many neurons the hidden layer must have), and it will also decide which activation function is best.

The SML neural network engine has two methods of operation; both try to reduce or eliminate a problem called catastrophic forgetting that affects neural networks.

Catastrophic forgetting is the tendency of a neural network to forget everything it has learnt in the past and accumulate only recent memory. This happens due to the nature of its training. To reduce this problem, the following methods were implemented.

The first method is called pseudo-rehearsal (the default). In this method only one neural network is created, and every time it needs to be retrained, random inputs are generated and fed to the network. The corresponding outputs are stored and used to train the neural network together with the newly collected data.

The second method of operation consists of creating N neural networks that are very specific to each pattern that SML encounters. Every time SML wants to make a prediction, it chooses the best neural network for the current situation. It is possible to limit how many neural networks SML keeps in memory; this limit can be set with sml_ann_set_cache_max_size. The cache implements the LRU algorithm, so neural networks that were not recently used will be deleted.

To learn more about catastrophic forgetting, see: https://en.wikipedia.org/wiki/Catastrophic_interference
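Switching to the second (cache-based) method can be sketched with the calls documented on this page. This is a sketch only: the header name "sml_ann.h" is an assumption, and the boolean return values of the setters are left unchecked for brevity:

```c
#include <stdbool.h>
#include <stdio.h>
#include <sml_ann.h> /* assumed header name for the declarations on this page */

int main(void)
{
    if (!sml_ann_supported()) {
        fprintf(stderr, "SML was built without neural network support\n");
        return 1;
    }

    struct sml_object *sml = sml_ann_new();
    if (!sml)
        return 1;

    /* Use the second method of operation: many pattern-specific
     * networks kept in an LRU cache instead of pseudo-rehearsal. */
    sml_ann_use_pseudorehearsal_strategy(sml, false);
    sml_ann_set_cache_max_size(sml, 10); /* 0 would mean "infinite" */

    sml_ann_set_training_algorithm(sml, SML_ANN_TRAINING_ALGORITHM_RPROP);

    /* ... feed observations and run predictions here ... */

    sml_free(sml); /* the engine must be freed after use */
    return 0;
}
```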

Enumeration Type Documentation

The functions that are used by the neurons to produce an output.

See Also
sml_ann_set_activation_function_candidates
Enumerator
SML_ANN_ACTIVATION_FUNCTION_SIGMOID 

Sigmoid function. Defined for 0 < y < 1

SML_ANN_ACTIVATION_FUNCTION_SIGMOID_SYMMETRIC 

Sigmoid function. Defined for -1 < y < 1

SML_ANN_ACTIVATION_FUNCTION_GAUSSIAN 

Gaussian function. Defined for 0 < y < 1

SML_ANN_ACTIVATION_FUNCTION_GAUSSIAN_SYMMETRIC 

Gaussian function. Defined for -1 < y < 1

SML_ANN_ACTIVATION_FUNCTION_ELLIOT 

Elliot function. Defined for 0 < y < 1

SML_ANN_ACTIVATION_FUNCTION_ELLIOT_SYMMETRIC 

Elliot function. Defined for -1 < y < 1

SML_ANN_ACTIVATION_FUNCTION_COS 

Cosine function. Defined for 0 <= y <= 1

SML_ANN_ACTIVATION_FUNCTION_COS_SYMMETRIC 

Cosine function. Defined for -1 <= y <= 1

SML_ANN_ACTIVATION_FUNCTION_SIN 

Sine function. Defined for 0 <= y <= 1

SML_ANN_ACTIVATION_FUNCTION_SIN_SYMMETRIC 

Sine function. Defined for -1 <= y <= 1

Algorithm types used to train a neural network.

See Also
sml_ann_set_training_algorithm
Enumerator
SML_ANN_TRAINING_ALGORITHM_QUICKPROP 

Faster than standard backpropagation. Uses the gradient information to update the weights; it is based on Newton's method.

SML_ANN_TRAINING_ALGORITHM_RPROP 

Uses the sign of the gradient to update the weights.

Function Documentation

struct sml_object* sml_ann_new ( void  )

Creates an SML neural network engine.

Remarks
It must be freed with sml_free after usage is done.
Returns
A sml_object object on success.
NULL on failure.
See Also
sml_free
sml_object
bool sml_ann_set_activation_function_candidates ( struct sml_object * sml,
enum sml_ann_activation_function * functions,
unsigned int  size 
)

Set the neural networks activation function candidates.

Activation functions reside inside the neurons, and they are responsible for producing the neuron output value. As choosing the correct activation functions may require a lot of trial and error, SML uses an algorithm that tries to find the activation functions best suited to a given problem.

Remarks
By default all sml_ann_activation_function are used as candidates.
Parameters
sml  The sml_object object.
functions  The sml_ann_activation_function functions vector.
size  The size of the functions vector.
Returns
true on success.
false on failure.
bool sml_ann_set_cache_max_size ( struct sml_object * sml,
unsigned int  max_size 
)

Set the maximum number of neural networks in the cache.

This cache is used to store the neural networks that will be used to predict output values. Setting this limit is only necessary if the pseudorehearsal strategy is set to false; otherwise it is ignored.

Remarks
The default cache size is 30.
0 means "infinite" cache size.
Parameters
sml  The sml_object object.
max_size  The max cache size.
Returns
true on success.
false on failure.
See Also
sml_ann_use_pseudorehearsal_strategy
bool sml_ann_set_candidate_groups ( struct sml_object * sml,
unsigned int  candidate_groups 
)

Set the number of neural network candidates.

During the training phase SML will choose the network topology by itself. To do so, it creates M * 4 candidate neurons (where M is the number of activation function candidates) in each of N candidate groups (where N is the number of candidate groups), ending up with M * 4 * N candidate neurons. The only difference between these candidate groups is the initial weight values.

Remarks
The default number of candidates is 6
Parameters
sml  The sml_object object.
candidate_groups  The number of candidate groups.
Returns
true on success.
false on failure.
See Also
sml_ann_set_max_neurons
sml_ann_set_activation_function_candidates
bool sml_ann_set_desired_error ( struct sml_object * sml,
float  desired_error 
)

Set the neural network desired error.

This is used as a shortcut to stop the training phase. If the neural network error is equal to or below desired_error, training will be stopped.

Remarks
The default desired error is 0.001
Parameters
sml  The sml_object object.
desired_error  The desired error.
Returns
true on success.
false on failure.
bool sml_ann_set_initial_required_observations ( struct sml_object * sml,
unsigned int  required_observations 
)

Set the required number of observations to train the neural network.

SML will only train the neural network when the observation count reaches required_observations. However, this number is only a hint to SML: if SML detects that the provided number is not enough, required_observations will grow. To control how much memory SML uses to store observations, a memory cap can be set with sml_set_max_memory_for_observations.

Remarks
The default required observations is 2500
Parameters
sml  The sml_object object.
required_observations  The number of required observations.
Returns
true on success.
false on failure.
bool sml_ann_set_max_neurons ( struct sml_object * sml,
unsigned int  max_neurons 
)

Set the maximum number of neurons in the network.

SML automatically adds neurons to the hidden layers when training the neural network; setting max neurons prevents the network from growing too big. Larger networks are more difficult to train and thus require more time; however, networks that are too small will make poor predictions.

Remarks
The default max_neurons is 5
Parameters
sml  The sml_object object.
max_neurons  The maximum number of neurons in the neural network.
Returns
true on success.
false on failure.
bool sml_ann_set_training_algorithm ( struct sml_object * sml,
enum sml_ann_training_algorithm  algorithm 
)

Set the neural network training algorithm.

The training algorithm is responsible for adjusting the neural network weights.

Remarks
The default training algorithm is SML_ANN_TRAINING_ALGORITHM_QUICKPROP
Parameters
sml  The sml_object object.
algorithm  The sml_ann_training_algorithm algorithm.
Returns
true on success.
false on failure.
bool sml_ann_set_training_epochs ( struct sml_object * sml,
unsigned int  training_epochs 
)

Set the neural network training epochs.

The training epochs value is used to decide when to stop the training. If the desired error is never reached, the training phase stops after training_epochs epochs.

Remarks
The default training epochs is 300
Parameters
sml  The sml_object object.
training_epochs  The number of training epochs.
Returns
true on success.
false on failure.
See Also
sml_ann_set_desired_error
bool sml_ann_supported ( void  )

Check if SML was built with neural networks support.

Returns
true if it has neural network support.
false if it does not.
bool sml_ann_use_pseudorehearsal_strategy ( struct sml_object * sml,
bool  use_pseudorehearsal 
)

Set the pseudorehearsal strategy.

For more information about the pseudorehearsal strategy, see the Neural_Network_Engine_Introduction.

Remarks
The default value for pseudorehearsal is true.
Parameters
sml  The sml_object object.
use_pseudorehearsal  true to enable, false to disable.
Returns
true on success.
false on failure.
bool sml_is_ann ( struct sml_object * sml )

Check if the SML object is a neural network engine.

Parameters
sml  The sml_object object.
Returns
true if it is a neural network engine.
false otherwise.