Restricted Boltzmann machine

A restricted Boltzmann machine (RBM) (also called a restricted Sherrington–Kirkpatrick model with external field or restricted stochastic Ising–Lenz–Little model) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.

RBMs were initially proposed under the name Harmonium by Paul Smolensky in 1986, and rose to prominence after Geoffrey Hinton and collaborators used fast learning algorithms for them in the mid-2000s. RBMs have found applications in dimensionality reduction, classification, collaborative filtering, feature learning, topic modelling, immunology, and even many-body quantum mechanics. They can be trained in either supervised or unsupervised ways, depending on the task.

As their name implies, RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph:


 * a pair of nodes from each of the two groups of units (commonly referred to as the "visible" and "hidden" units respectively) may have a symmetric connection between them; and
 * there are no connections between nodes within a group.

By contrast, "unrestricted" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm.

Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation.

Structure
The standard type of RBM has binary-valued (Boolean) hidden and visible units, and consists of a matrix of weights $$W$$ of size $$m\times n$$. Each weight element $$(w_{i,j})$$ of the matrix is associated with the connection between the visible (input) unit $$v_i$$ and the hidden unit $$h_j$$. In addition, there are bias weights (offsets) $$a_i$$ for $$v_i$$ and $$b_j$$ for $$h_j$$. Given the weights and biases, the energy of a configuration (pair of Boolean vectors) $$(v,h)$$ is defined as


 * $$E(v,h) = -\sum_i a_i v_i - \sum_j b_j h_j -\sum_i \sum_j v_i w_{i,j} h_j$$

or, in matrix notation,


 * $$E(v,h) = -a^{\mathrm{T}} v - b^{\mathrm{T}} h -v^{\mathrm{T}} W h.$$
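As a concrete illustration, the matrix-notation energy can be written in a few lines of NumPy. This is a minimal sketch; the function name and argument layout are chosen here for illustration rather than taken from any reference implementation:

```python
import numpy as np

def energy(v, h, W, a, b):
    """Energy E(v, h) of a joint configuration, following the matrix notation:
    W is the m-by-n weight matrix, a the visible biases, b the hidden biases."""
    return -a @ v - b @ h - v @ W @ h
```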

This energy function is analogous to that of a Hopfield network. As with general Boltzmann machines, the joint probability distribution for the visible and hidden vectors is defined in terms of the energy function as follows,


 * $$P(v,h) = \frac{1}{Z} e^{-E(v,h)}$$

where $$Z$$ is a partition function defined as the sum of $$e^{-E(v,h)}$$ over all possible configurations, which can be interpreted as a normalizing constant to ensure that the probabilities sum to 1. The marginal probability of a visible vector is the sum of $$P(v,h)$$ over all possible hidden layer configurations,


 * $$P(v) = \frac{1}{Z} \sum_{\{h\}} e^{-E(v,h)}$$, and vice versa.
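For toy model sizes, $$Z$$ and the marginal $$P(v)$$ can be computed by brute-force enumeration. The sketch below (helper names are our own) makes the role of the partition function explicit; it is exponential in the number of units and purely illustrative:

```python
import itertools
import numpy as np

def marginal_visible(v, W, a, b):
    """Brute-force P(v): sum e^{-E(v,h)} over all hidden vectors, divided by Z.
    Exponential in m and n, so only usable on toy models."""
    m, n = W.shape
    E = lambda u, h: -a @ u - b @ h - u @ W @ h
    hidden = [np.array(h) for h in itertools.product([0, 1], repeat=n)]
    visible = [np.array(u) for u in itertools.product([0, 1], repeat=m)]
    Z = sum(np.exp(-E(u, h)) for u in visible for h in hidden)  # partition function
    return sum(np.exp(-E(v, h)) for h in hidden) / Z
```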

Since the underlying graph structure of the RBM is bipartite (meaning there are no intra-layer connections), the hidden unit activations are mutually independent given the visible unit activations. Conversely, the visible unit activations are mutually independent given the hidden unit activations. That is, for $$m$$ visible units and $$n$$ hidden units, the conditional probability of a configuration of the visible units $$v$$, given a configuration of the hidden units $$h$$, is


 * $$P(v|h) = \prod_{i=1}^m P(v_i|h)$$.

Conversely, the conditional probability of $$h$$ given $$v$$ is


 * $$P(h|v) = \prod_{j=1}^n P(h_j|v)$$.

The individual activation probabilities are given by


 * $$P(h_j=1|v) = \sigma \left(b_j + \sum_{i=1}^m w_{i,j} v_i \right)$$ and $$\,P(v_i=1|h) = \sigma \left(a_i + \sum_{j=1}^n w_{i,j} h_j \right)$$

where $$\sigma$$ denotes the logistic sigmoid.
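Because both conditionals factorize, they can be evaluated for all units at once with a single matrix product. A minimal sketch under the same naming conventions as above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_hidden_given_visible(v, W, b):
    """Vector of P(h_j = 1 | v) for all n hidden units."""
    return sigmoid(b + v @ W)

def p_visible_given_hidden(h, W, a):
    """Vector of P(v_i = 1 | h) for all m visible units."""
    return sigmoid(a + W @ h)
```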

The visible units of a restricted Boltzmann machine can be multinomial, while the hidden units remain Bernoulli. In this case, the logistic function for visible units is replaced by the softmax function


 * $$P(v_i^k = 1|h) = \frac{\exp(a_i^k + \sum_j W_{ij}^k h_j)}{\sum_{k'=1}^K \exp(a_i^{k'} + \sum_j W_{ij}^{k'} h_j)}$$

where $$K$$ is the number of discrete values that the visible units can take. Multinomial visible units are applied in topic modelling and recommender systems.
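A sketch of the softmax conditional for one $$K$$-valued visible unit follows; the three-dimensional weight layout (one slice per value $$k$$) is an assumption made for illustration:

```python
import numpy as np

def p_visible_softmax(i, h, W, a):
    """P(v_i = k | h) for all K values of visible unit i.
    Assumed shapes: W is (m, n, K), a is (m, K), h is (n,)."""
    logits = a[i] + W[i].T @ h   # one logit per value k
    logits -= logits.max()       # subtract the max for numerical stability
    e = np.exp(logits)
    return e / e.sum()
```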

Relation to other models
Restricted Boltzmann machines are a special case of Boltzmann machines and Markov random fields.

The graphical model of RBMs corresponds to that of factor analysis.

Training algorithm
Restricted Boltzmann machines are trained to maximize the product of probabilities assigned to some training set $$V$$ (a matrix, each row of which is treated as a visible vector $$v$$),


 * $$\arg\max_W \prod_{v \in V} P(v)$$

or equivalently, to maximize the expected log probability of a training sample $$v$$ selected randomly from $$V$$:


 * $$\arg\max_W \mathbb{E} \left[ \log P(v)\right]$$
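The log-likelihood gradient clarifies why sampling-based training is needed. For each weight it is the standard identity

 * $$\frac{\partial \log P(v)}{\partial w_{i,j}} = \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}},$$

where the first expectation is taken with the visible units clamped to the data and the second under the model's own distribution. The second term is intractable to compute exactly, and it is this term that contrastive divergence approximates.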

The algorithm most often used to train RBMs, that is, to optimize the weight matrix $$W$$, is the contrastive divergence (CD) algorithm due to Hinton, originally developed to train PoE (product of experts) models. The algorithm performs Gibbs sampling and is used inside a gradient descent procedure (similar to the way backpropagation is used inside such a procedure when training feedforward neural nets) to compute weight updates.

The basic, single-step contrastive divergence (CD-1) procedure for a single sample can be summarized as follows (a code sketch follows the list):


 * 1) Take a training sample $v$, compute the probabilities of the hidden units and sample a hidden activation vector $h$ from this probability distribution.
 * 2) Compute the outer product of $v$ and $h$ and call this the positive gradient.
 * 3) From $h$, sample a reconstruction $v'$ of the visible units, then resample the hidden activations $h'$ from this. (Gibbs sampling step)
 * 4) Compute the outer product of $v'$ and $h'$ and call this the negative gradient.
 * 5) Let the update to the weight matrix $$W$$ be the positive gradient minus the negative gradient, times some learning rate: $$\Delta W = \epsilon (vh^\mathsf{T} - v'h'^\mathsf{T})$$.
 * 6) Update the biases $a$ and $b$ analogously: $$\Delta a = \epsilon (v - v')$$, $$\Delta b = \epsilon (h - h')$$.
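The six steps translate almost line-for-line into NumPy. The sketch below is one possible rendering, not a reference implementation; in practice, probabilities rather than sampled binary states are often used in the gradient estimates, a variant discussed in Hinton's practical guide:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v, W, a, b, lr=0.1):
    """One CD-1 update from a single binary training vector v.
    Returns new (W, a, b); lr is the learning rate epsilon."""
    # Step 1: hidden probabilities, then a sampled hidden vector h.
    ph = sigmoid(b + v @ W)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Step 3: reconstruct the visibles from h, then resample the hiddens
    # (one Gibbs sampling step).
    pv = sigmoid(a + W @ h)
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(b + v_neg @ W)
    h_neg = (rng.random(ph_neg.shape) < ph_neg).astype(float)
    # Steps 2 and 4-6: positive gradient minus negative gradient, scaled by lr.
    W = W + lr * (np.outer(v, h) - np.outer(v_neg, h_neg))
    a = a + lr * (v - v_neg)
    b = b + lr * (h - h_neg)
    return W, a, b
```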

A Practical Guide to Training RBMs, written by Hinton, can be found on his homepage.

Stacked restricted Boltzmann machines

 * The difference between a stacked restricted Boltzmann machine and an RBM lies in how the layers are composed. In an RBM, lateral connections within a layer are prohibited to make analysis tractable; the stacked Boltzmann machine combines an unsupervised three-layer network with symmetric weights and a supervised, fine-tuned top layer for recognizing three classes.
 * Stacked Boltzmann machines are used for natural language understanding, document retrieval, image generation, and classification. These functions are trained with unsupervised pre-training and/or supervised fine-tuning. Unlike the RBM's undirected, symmetric connections, the stacked network's connections span three layers with asymmetric weights, including a two-way asymmetric layer, and the two networks are combined into one.
 * The stacked Boltzmann machine does share similarities with an RBM: its neuron is a stochastic binary Hopfield neuron, the same as in a restricted Boltzmann machine. The energy of both models is given by the Gibbs probability measure: $$E = -\frac12\sum_{i,j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i}{s_i}$$. The training process is also similar: the stacked network is trained one layer at a time and approximates the equilibrium state with a three-segment pass, not performing backpropagation. It uses both supervised and unsupervised training on different RBMs during pre-training for classification and recognition. Training uses contrastive divergence with Gibbs sampling: $$\Delta w_{ij} = \epsilon (p_{ij} - p'_{ij})$$.
 * The stacked network's strength is that each layer performs a non-linear transformation, so it is easy to expand and can yield a hierarchy of features. Its weakness is the complicated calculations required for integer- and real-valued neurons. It does not follow the gradient of any function, so the approximation of contrastive divergence to maximum likelihood is improvised.