I have read a lot of papers on RBMs, and it turned out to be difficult to grasp all the implementation details.
So I want to share my experience with people facing the same problems. My tutorial is based on a variant of RBMs called the Continuous Restricted Boltzmann Machine, or CRBM for short. The CRBM implementation is very close to the original RBM with binomial neurons (0 or 1 as possible activation values). At the end of the article I provide some code in Python. No guarantee is given that this implementation is correct, so let me know of any bugs you find.
First, you may have a look at the original papers that describe the theory behind RBM neural networks:
A Restricted Boltzmann Machine is a stochastic neural network (that is, a network of neurons where each neuron has some random behavior when activated). It consists of one layer of visible units (neurons) and one layer of hidden units. Units in each layer have no connections between them but are connected to all units in the other layer (fig.1). Connections between neurons are bidirectional and symmetric. This means that information flows in both directions during training and during use of the network, and that the weights are the same in both directions.
An RBM network works in the following way:
First the network is trained on some data set, setting the neurons of the visible layer to match the data points in this data set.
After the network is trained, we can apply it to new, unknown data to classify it (this is known as unsupervised learning).
For the purposes of this tutorial I will use an artificially generated data set. It is 2D data that forms the pattern shown in fig.2. I chose to use 500 data points for training. Because each data point is formed by two numbers between 0 and 1, the neurons have to accept continuous values, and so I use a Continuous RBM. I have tried different 2D patterns, and this one seems to be relatively difficult for a CRBM to learn.
Training an RBM is performed by an algorithm known as "Contrastive Divergence Learning".
More info on Contrastive Divergence
Let W be an IxJ matrix (I - number of visible neurons, J - number of hidden neurons) that represents
the weights between neurons. Each neuron's input is provided by connections from all neurons in the other layer.
The current neuron state Sj is formed by multiplying each input by its weight, summing over all inputs, adding Gaussian noise, and applying a nonlinear sigmoidal function to this sum:
Sj = F( Sum( Si x Wij ) + N(0, sigma) ) - here the Si are all neurons in the other layer plus one bias neuron that stays constantly set at 1.0.
N(0, sigma) is a random number drawn from a normal distribution with mean 0.0 and standard deviation sigma (I use sigma = 0.2).
In my case, this nonlinear function is:
F = lo + (hi - lo)/(1.0 + exp(-A*Sj))
where lo and hi are the lower and upper bounds of the input values (in my case 0 and 1), so it
becomes: F = 1.0/(1.0 + exp(-A*Sj))
A is a parameter that is determined during the learning process.
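The activation rule above can be sketched in NumPy as follows (a sketch only: the function and argument names are my own, not taken from the final code):

```python
import numpy as np

def activate(states, W, A, sigma=0.2, lo=0.0, hi=1.0, rng=None):
    """One activation pass for a layer (a sketch; names are my own).

    states -- activations of the opposite layer with the bias unit
              (fixed at 1.0) already appended, shape (I+1,)
    W      -- weight matrix, shape (I+1, J)
    A      -- sigmoid slope parameter, one per target neuron, shape (J,)
    """
    rng = np.random.default_rng() if rng is None else rng
    # weighted sum of all inputs plus Gaussian noise N(0, sigma)
    s = states @ W + rng.normal(0.0, sigma, size=W.shape[1])
    # bounded sigmoid: F = lo + (hi - lo) / (1 + exp(-A * s))
    return lo + (hi - lo) / (1.0 + np.exp(-A * s))
```

With lo=0 and hi=1 this is exactly the reduced form of F given above.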
Contrastive divergence is a computed value (actually a matrix of values) that is used to adjust the values of the W matrix. Changing W incrementally leads to training of the W values.
Let W0 be the initial matrix of weights, set to small random values. I use N(0, 0.1) for this.
Let CD = <Si.Sj>0 - <Si.Sj>n - the contrastive divergence.
Then at each step (epoch) W is updated to a new value W':
W' = W + alpha*CD
Here alpha is some small step - the "learning rate". There exist more complex update rules for W that involve
a "momentum" term and a "cost" (weight decay) term that keep the W values from becoming very large.
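One common form of such an extended update rule is sketched below (an assumption on my part: the exact way momentum and cost are combined varies between implementations):

```python
import numpy as np

def update_W(W, CD, dW_prev, alpha=0.5, momentum=0.9, cost=1e-5):
    """W' = W + alpha*CD, extended with a momentum term (a fraction of the
    previous update) and a weight-decay 'cost' term that keeps the weights
    from growing very large.  Returns the new W and the update applied."""
    dW = momentum * dW_prev + alpha * (CD - cost * W)
    return W + dW, dW
```

With momentum=0 and cost=0 this reduces to the plain rule W' = W + alpha*CD.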
There seems to be big confusion about what exactly Contrastive Divergence means and how to implement it.
I have spent a lot of time trying to understand it.
First of all, CD as shown in the formula above is a matrix of size IxJ, so the formula has to be
computed for each combination of i and j.
<...> denotes an average over all data points in the data set.
Si.Sj is simply the product of the current activations (states) of
neuron i and neuron j (obviously :) ), where Si is the state of a
visible neuron and Sj is the state of a hidden neuron.
The index after <...> means that the average is taken after the 0-th or n-th reconstruction step has been performed.
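Computing the whole IxJ matrix of averages <Si.Sj> over a batch of data points reduces to a single matrix product (a sketch, with names of my own):

```python
import numpy as np

def mean_correlation(S_vis, S_hid):
    """<Si.Sj>: the IxJ matrix of products Si*Sj, averaged over all data
    points.  S_vis has shape (n_points, I), S_hid has shape (n_points, J)."""
    return S_vis.T @ S_hid / S_vis.shape[0]
```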
How is the reconstruction performed?
The algorithm as a whole is:
The value of the parameter A stays constant for the visible units; for the hidden units it is adjusted
by a similar CD calculation, but using the following formula instead of the one above:
CD = (<Sj.Sj>0 - <Sj.Sj>n)/(Aj.Aj)
In my code I use n=20 initially and gradually increase it to 50.
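Putting the pieces together, one CD-n training epoch might look like this (a sketch of my understanding of the algorithm; bias units are omitted for brevity and all names are my own):

```python
import numpy as np

def cd_n_epoch(data, W, A_hid, n=20, alpha=0.5, sigma=0.2, A_vis=0.1, rng=None):
    """One Contrastive Divergence update over a batch of data points.
    data: (n_points, I); W: (I, J); A_hid: per-hidden-neuron slope, shape (J,).
    Returns the updated W and A_hid."""
    rng = np.random.default_rng() if rng is None else rng

    def act(total, a):
        # noisy bounded sigmoid from above, with lo=0 and hi=1
        noise = rng.normal(0.0, sigma, total.shape)
        return 1.0 / (1.0 + np.exp(-a * (total + noise)))

    v0 = data
    h0 = act(v0 @ W, A_hid)               # first up-pass
    v, h = v0, h0
    for _ in range(n):                    # n reconstruction steps
        v = act(h @ W.T, A_vis)           # down-pass (symmetric weights)
        h = act(v @ W, A_hid)             # up-pass
    m = data.shape[0]
    CD_W = (v0.T @ h0 - v.T @ h) / m      # <Si.Sj>0 - <Si.Sj>n
    # A update for hidden units: (<Sj.Sj>0 - <Sj.Sj>n) / (Aj*Aj)
    CD_A = ((h0**2).mean(axis=0) - (h**2).mean(axis=0)) / (A_hid**2)
    return W + alpha * CD_W, A_hid + alpha * CD_A
```

Calling this repeatedly over epochs, while gradually increasing n from 20 to 50, corresponds to the schedule described above.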
Most of the steps in the algorithm can be performed with simple matrix multiplications.
After the RBM learning process finishes, we can check how well it performs on new data points.
I use a set of 500 data points drawn uniformly at random from the 2D interval (0..1, 0..1).
The visible layer neuron states are set to the values of each data point, and steps 1), 2), 3), 5) are repeated
multiple times. The number of repetitions is the maximum value of n used in the learning process.
At the end of the n-th step, the neuron states in the visible layer represent a reconstruction of the data. This is repeated for each data point. All the reconstructed points form a 2D image pattern that can be compared with the initial one.
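The reconstruction of new points can then be sketched as follows (again bias units are omitted and the names are my own):

```python
import numpy as np

def reconstruct(points, W, A_hid, A_vis=0.1, n=50, sigma=0.2, rng=None):
    """Pass test points up and down through a trained CRBM n times; the
    final visible states are the reconstructions (a sketch).
    points: (n_points, I); W: (I, J); A_hid: (J,)."""
    rng = np.random.default_rng() if rng is None else rng

    def act(total, a):
        # noisy bounded sigmoid, lo=0, hi=1
        noise = rng.normal(0.0, sigma, total.shape)
        return 1.0 / (1.0 + np.exp(-a * (total + noise)))

    v = points
    for _ in range(n):
        h = act(v @ W, A_hid)     # up-pass
        v = act(h @ W.T, A_vis)   # down-pass
    return v
```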
At the end of this article I provide an implementation of a Continuous Restricted Boltzmann Machine in Python. Most of the steps are performed with matrix operations using the NumPy library. I used PyLab and SciPy to make some nice visualizations. The images of the data shown (fig.4) are interpolated with a Gaussian kernel in order to give some "feeling" of the density of the data points and the reconstructed data points.
The next image shows the initial data set and the reconstructed data superimposed. The initial data is in blue, the reconstructed data in red, and a green line connects each data point with its reconstruction.
The main difficulty I observed is fine-tuning the set of learning parameters.
I used the following parameters; if somebody is able to find better values, let me know.
A=0.1 (on visible neurons)
Learning Rate W = 0.5
Learning Rate A = 0.5
Learning Cost = 0.00001
Learning Moment = 0.9
The RBM architecture is 2 visible neurons (for the 2D data points) and 8 hidden neurons. In some experiments with simpler patterns, 4 hidden neurons were enough to reconstruct the pattern successfully.