Maximal entropy random walk

Maximal entropy random walk (MERW) is a popular type of biased random walk on a graph, in which transition probabilities are chosen according to the principle of maximum entropy, which says that the probability distribution that best represents the current state of knowledge is the one with the largest entropy. While a standard random walk chooses for every vertex a uniform probability distribution among its outgoing edges, locally maximizing the entropy rate, MERW maximizes it globally (as average entropy production) by assuming a uniform probability distribution among all paths in a given graph.

MERW is used in various fields of science. A direct application is choosing probabilities to maximize the transmission rate through a constrained channel, analogously to Fibonacci coding. Its properties have also made it useful in the analysis of complex networks, for example in link prediction, community detection, robust transport over networks and centrality measures, as well as in image analysis, for example for detecting salient visual regions, object localization, tampering detection or the tractography problem.

Additionally, it recreates some properties of quantum mechanics, suggesting a way to repair the discrepancy between diffusion models and quantum predictions, like Anderson localization.

Basic model


Consider a graph with $$n$$ vertices, defined by an adjacency matrix $$A \in \left\{0, 1\right\}^{n \times n}$$: $$A_{ij}=1$$ if there is an edge from vertex $$i$$ to $$j$$, 0 otherwise. For simplicity assume it is an undirected graph, which corresponds to a symmetric $$A$$; however, MERW can also be generalized for directed and weighted graphs (for example Boltzmann distribution among paths instead of uniform).

We would like to choose a random walk as a Markov process on this graph: for every vertex $$i$$ and its outgoing edge to $$j$$, choose the probability $$S_{ij}$$ of the walker randomly using this edge after visiting $$i$$. Formally, find a stochastic matrix $$S$$ (containing the transition probabilities of a Markov chain) such that
 * $$0\leq S_{ij} \leq A_{ij}$$ for all $$i, j$$ and
 * $$\sum_{j=1}^n S_{ij}=1$$ for all $$i$$.
Assuming this graph is connected and aperiodic, ergodic theory says that evolution of this stochastic process leads to some stationary probability distribution $$\rho$$ such that $$\rho S = \rho$$.

Applying the Shannon entropy at every vertex and averaging it over the probability of visiting that vertex, we get the following formula for the average entropy production (entropy rate) of the stochastic process:


 * $$H(S)=\sum_{i=1}^n \rho_i \sum_{j=1}^n S_{ij} \log(1/S_{ij})$$

This definition turns out to be equivalent to the asymptotic average entropy (per length) of the probability distribution in the space of paths for this stochastic process.
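The entropy-rate formula above is straightforward to evaluate numerically. The following is a minimal sketch (the function name `entropy_rate` and the 3-cycle example are illustrative choices, not part of the original text); for the unbiased walk on a 3-cycle every vertex has two equally likely outgoing edges, so the rate is $$\log 2$$:

```python
import numpy as np

def entropy_rate(S, rho):
    """H(S) = sum_i rho_i sum_j S_ij log(1/S_ij).

    Terms with S_ij = 0 contribute nothing, matching the convention
    0 * log(1/0) = 0 for Shannon entropy.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(S > 0, S * np.log(1.0 / S), 0.0)
    return float(rho @ terms.sum(axis=1))

# Unbiased walk on a 3-cycle: uniform stationary distribution,
# two equally likely edges per vertex.
S = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
rho = np.ones(3) / 3
H = entropy_rate(S, rho)   # equals log(2)
```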

In the standard random walk, referred to here as the generic random walk (GRW), each outgoing edge is naturally chosen with equal probability:
 * $$S_{ij} = \frac{A_{ij}}{\sum\limits_{k=1}^n A_{ik}}$$.

For a symmetric $$A$$ it leads to a stationary probability distribution $$\rho$$ with
 * $$\rho_i = \frac{\sum\limits_{j=1}^n A_{ij}}{\sum\limits_{i=1}^n \sum\limits_{j=1}^n A_{ij}}$$.

It locally maximizes entropy production (uncertainty) for every vertex, but usually leads to a suboptimal global average entropy rate $$H(S)$$.
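The GRW construction can be sketched in a few lines, assuming a small hand-picked example graph (a triangle with one pendant vertex, which is not from the original text): rows of $$S$$ are the adjacency rows divided by the degree, and the stationary distribution is degree-proportional.

```python
import numpy as np

# Illustrative graph: triangle 0-1-2 plus a pendant vertex 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

deg = A.sum(axis=1)
S_grw = A / deg[:, None]      # S_ij = A_ij / sum_k A_ik
rho_grw = deg / deg.sum()     # stationary distribution proportional to degree
```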

MERW chooses the stochastic matrix which maximizes $$H(S)$$, or equivalently assumes uniform probability distribution among all paths in a given graph. Its formula is obtained by first calculating the dominant eigenvalue $$\lambda$$ and corresponding eigenvector $$\psi$$ of the adjacency matrix, i.e. the largest $$\lambda \in \mathbb{R}$$ with corresponding $$\psi \in \mathbb{R}^n$$ such that $$\psi A = \lambda \psi$$. Then stochastic matrix and stationary probability distribution are given by
 * $$S_{ij} = \frac{A_{ij}}{\lambda} \frac{\psi_j}{\psi_i}$$

for which every possible path of length $$l$$ from the $$i$$-th to $$j$$-th vertex has probability
 * $$\frac{1}{\lambda^l} \frac{\psi_j}{\psi_i}$$.

Its entropy rate is $$\log(\lambda)$$ and the stationary probability distribution $$\rho$$ is
 * $$\rho_i = \frac{\psi_i^2}{\|\psi\|_2^2}$$.
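The MERW formulas above can be checked numerically. The sketch below (the helper name `merw` and the example graph are illustrative assumptions) computes the dominant eigenpair of a symmetric adjacency matrix and builds $$S$$ and $$\rho$$ from it:

```python
import numpy as np

def merw(A):
    """MERW transition matrix and stationary distribution for a symmetric
    adjacency matrix A of a connected, aperiodic graph."""
    w, V = np.linalg.eigh(A)            # symmetric A: real spectrum, ascending
    lam, psi = w[-1], V[:, -1]          # dominant eigenvalue and eigenvector
    psi = psi * np.sign(psi.sum())      # Perron vector: fix sign so entries > 0
    S = (A / lam) * (psi[None, :] / psi[:, None])   # S_ij = A_ij psi_j / (lam psi_i)
    rho = psi**2 / (psi @ psi)          # rho_i = psi_i^2 / ||psi||^2
    return lam, S, rho

# Illustrative graph: triangle 0-1-2 plus a pendant vertex 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
lam, S, rho = merw(A)
```

The checks that $$S$$ is stochastic and $$\rho S = \rho$$ follow directly from $$A\psi = \lambda\psi$$.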

In contrast to GRW, the MERW transition probabilities generally depend on the structure of the entire graph (they are nonlocal). Hence, they should not be imagined as directly applied by the walker: if random-looking decisions are made based on the local situation, as a person would make them, the GRW approach is more appropriate. MERW is based on the principle of maximum entropy, making it the safest assumption when we have no additional knowledge about the system. For example, it would be appropriate for modelling our knowledge about an object performing some complex dynamics, not necessarily random, like a particle.

Sketch of derivation
Assume for simplicity that the considered graph is undirected, connected and aperiodic, which allows us to conclude from the Perron–Frobenius theorem that the dominant eigenvector is unique. Hence $$A^l$$ can be asymptotically ($$l\rightarrow\infty$$) approximated by $$\lambda^l \psi \psi^T$$ (or $$\lambda^l |\psi\rangle \langle \psi|$$ in bra–ket notation), taking $$\psi$$ normalized to unit length.
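This rank-1 approximation can be verified numerically; the sketch below (the small example graph with one self-loop is an illustrative assumption) compares $$A^l$$ against $$\lambda^l \psi\psi^T$$ for a moderately large $$l$$:

```python
import numpy as np

# Small undirected, connected, aperiodic example (the self-loop at
# vertex 2 breaks periodicity).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
w, V = np.linalg.eigh(A)
lam, psi = w[-1], V[:, -1]     # dominant eigenpair, ||psi||_2 = 1
psi = psi * np.sign(psi.sum()) # fix sign: Perron vector has positive entries

l = 30
Al = np.linalg.matrix_power(A, l)
approx = lam**l * np.outer(psi, psi)
rel_err = np.abs(Al / approx - 1).max()   # shrinks like (|lam_2|/lam)^l
```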

MERW requires uniform distribution along paths. The number $$m_{il}$$ of paths with length $$2l$$ and vertex $$i$$ in the center is
 * $$m_{il} = \sum_{j=1}^n \sum_{k=1}^n \left(A^l\right)_{ji} \left(A^l\right)_{ik} \approx \sum_{j=1}^n \sum_{k=1}^n \left(\lambda^l \psi \psi^\top\right)_{ji} \left(\lambda^l \psi \psi^\top\right)_{ik} = \sum_{j=1}^n \sum_{k=1}^n \lambda^{2l} \psi_j \psi_i \psi_i \psi_k = \lambda^{2l} \psi_i^2 \underbrace{\sum_{j=1}^n \psi_j \sum_{k=1}^n \psi_k}_{=: b}$$,

hence for all $$i$$,
 * $$\rho_i = \lim_{l \rightarrow \infty} \frac{m_{il}}{\sum\limits_{k=1}^n m_{kl}} = \lim_{l \rightarrow \infty} \frac{\lambda^{2l} \psi_i^2 b}{\sum\limits_{k=1}^n \lambda^{2l} \psi_k^2 b} = \lim_{l \rightarrow \infty} \frac{\psi_i^2}{\sum\limits_{k=1}^n \psi_k^2} = \frac{\psi_i^2}{\sum\limits_{k=1}^n \psi_k^2} = \frac{\psi_i^2}{\|\psi\|_2^2}$$.

Analogously calculating the probability distribution for two successive vertices, one obtains that the probability of being at the $$i$$-th vertex and next at the $$j$$-th vertex is
 * $$\frac{\psi_i A_{ij} \psi_j}{\sum\limits_{i'=1}^n \sum\limits_{j'=1}^n \psi_{i'} A_{i'j'} \psi_{j'}} = \frac{\psi_i A_{ij} \psi_j}{\psi A \psi^\top} = \frac{\psi_i A_{ij} \psi_j}{\lambda \|\psi\|_2^2}$$.

Dividing by the probability of being at the $$i$$-th vertex, i.e. $$\rho_i$$, gives for the conditional probability $$S_{ij}$$ of the $$j$$-th vertex being next after the $$i$$-th vertex
 * $$S_{ij} = \frac{A_{ij}}{\lambda} \frac{\psi_j}{\psi_i}$$.

Weighted MERW: Boltzmann path ensemble
We have assumed that $$A_{ij} \in \{0,1\}$$, for which MERW corresponds to a uniform ensemble among paths. However, the above derivation also works for any real nonnegative $$A$$. Parametrizing $$A_{ij} = \exp(-E_{ij})$$ and asking for the probability of a length-$$l$$ path $$(\gamma_0, \ldots, \gamma_l)$$, we get:
 * $$\textrm{Pr}(\gamma_0, \ldots,\gamma_l)=\rho_{\gamma_0} S_{\gamma_0 \gamma_1}\ldots S_{\gamma_{l-1}\gamma_l}= \psi_{\gamma_0} \frac{A_{\gamma_0 \gamma_1}\ldots A_{\gamma_{l-1}\gamma_l}}{\lambda^l} \psi_{\gamma_l}=\psi_{\gamma_0}\frac{\exp(-(E_{\gamma_0 \gamma_1}+\ldots +E_{\gamma_{l-1}\gamma_l}))}{\lambda^l} \psi_{\gamma_l} $$

This is the Boltzmann distribution among paths, with energy defined as the sum of $$E_{ij}$$ along a given path. For example, it allows one to calculate the probability distribution of patterns in the Ising model.
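The closed-form path probability can be verified against the Markov chain directly. In the sketch below, the random symmetric energy matrix $$E$$ and the chosen path are illustrative assumptions; with a unit-norm $$\psi$$, the chain probability $$\rho_{\gamma_0} S_{\gamma_0\gamma_1}\cdots S_{\gamma_{l-1}\gamma_l}$$ should match $$\psi_{\gamma_0}\, (\prod A)/\lambda^l\, \psi_{\gamma_l}$$:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.uniform(0, 1, (4, 4))
E = (E + E.T) / 2            # symmetric energies (illustrative)
A = np.exp(-E)               # positive weight matrix

w, V = np.linalg.eigh(A)
lam, psi = w[-1], V[:, -1]   # dominant eigenpair; eigh gives ||psi|| = 1
psi = psi * np.sign(psi.sum())
S = (A / lam) * (psi[None, :] / psi[:, None])
rho = psi**2                 # ||psi|| = 1, so no normalization needed

path = [0, 2, 1, 3]          # an arbitrary length-3 path
p_chain = rho[path[0]] * np.prod([S[a, b] for a, b in zip(path, path[1:])])
weight = np.prod([A[a, b] for a, b in zip(path, path[1:])])
p_formula = psi[path[0]] * weight / lam**3 * psi[path[-1]]
```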

Examples


Let us first look at a simple nontrivial situation: Fibonacci coding, where we want to transmit a message as a sequence of 0s and 1s, but without using two successive 1s: after a 1 there has to be a 0. To maximize the amount of information transmitted in such a sequence, we should assume a uniform probability distribution in the space of all possible sequences fulfilling this constraint. In practice, after a 1 we have to use a 0, but there remains the freedom of choosing the probability of a 0 after a 0. Denote this probability by $$q$$; entropy coding then allows encoding a message using this chosen probability distribution. The stationary probability distribution of symbols for a given $$q$$ turns out to be $$\rho=(1/(2-q),\,1-1/(2-q))$$. Hence, the entropy production is $$H(S)=\rho_0 \left(q\log(1/q)+(1-q)\log(1/(1-q))\right)$$, which is maximized for $$q=(\sqrt{5}-1)/2\approx 0.618$$, the reciprocal of the golden ratio. In contrast, a standard random walk would choose the suboptimal $$q=0.5$$. While choosing a larger $$q$$ reduces the amount of information produced after a 0, it also reduces the frequency of 1s, after which no information can be written.
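The optimal $$q$$ also drops out of the general MERW formula applied to the two-state constraint graph (state 0 may be followed by 0 or 1, state 1 only by 0), whose dominant eigenvalue is the golden ratio $$\varphi=(1+\sqrt{5})/2$$ — a minimal sketch:

```python
import numpy as np

# Constraint graph of Fibonacci coding: 0 -> {0, 1}, 1 -> {0}.
A = np.array([[1, 1],
              [1, 0]])
w, V = np.linalg.eigh(A)
lam, psi = w[-1], V[:, -1]     # lam is the golden ratio phi
psi = psi * np.sign(psi.sum())
S = (A / lam) * (psi[None, :] / psi[:, None])
rho = psi**2 / (psi @ psi)

q = S[0, 0]                    # probability of 0 after 0: 1/phi ~ 0.618
# Entropy rate; the np.where guards the S[1,1] = 0 entry.
H = -(rho[:, None] * S * np.log(np.where(S > 0, S, 1))).sum()
```

As expected, `q` matches $$(\sqrt{5}-1)/2$$, `rho[0]` matches $$1/(2-q)$$, and the entropy rate equals $$\log\lambda$$.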

A more complex example is a one-dimensional cyclic lattice with defects: say 1000 nodes connected in a ring, where every node except the defects has a self-loop (an edge to itself). In the standard random walk (GRW) the stationary probability of a defect vertex is 2/3 of that of a non-defect vertex – there is nearly no localization; the same holds for standard diffusion, which is the infinitesimal limit of GRW. For MERW we first have to find the dominant eigenvector of the adjacency matrix – maximizing $$\lambda$$ in:

$$(\lambda \psi)_x = (A\psi)_x = \psi_{x-1}+ (1-V_x) \psi_x +\psi_{x+1}$$

for all positions $$x$$, where $$V_x=1$$ for defects and 0 otherwise. Subtracting $$3\psi_x$$ from both sides and multiplying the equation by −1, we get:

$$E \psi_x =-(\psi_{x-1} -2 \psi_x +\psi_{x+1}) + V_x \psi_x$$

where $$E = 3-\lambda$$ is now minimized, becoming the analog of energy. The expression in the bracket is the discrete Laplace operator, making this equation a discrete analogue of the stationary Schrödinger equation. As in quantum mechanics, MERW predicts that the stationary probability distribution is exactly that of the quantum ground state, $$\rho_x \propto \psi_x^2$$, with its strongly localized density (in contrast to standard diffusion). Taking the infinitesimal limit, one recovers the standard continuous stationary (time-independent) Schrödinger equation, $$E\psi=-C\psi_{xx}+V\psi$$ with $$C=\hbar^2/2m$$.
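The localization contrast between GRW and MERW can be sketched numerically. A smaller ring (100 nodes, two defects; these sizes are illustrative assumptions, not the 1000-node example above) already shows the effect: GRW assigns a defect exactly 2/3 of a normal vertex's probability, while the MERW density at a defect is suppressed by orders of magnitude relative to the middle of a defect-free arc:

```python
import numpy as np

n = 100
defects = [0, 50]
A = np.zeros((n, n))
for x in range(n):
    A[x, (x - 1) % n] = A[x, (x + 1) % n] = 1   # ring edges
    if x not in defects:
        A[x, x] = 1                             # self-loop except at defects

# MERW stationary distribution: squared dominant eigenvector.
w, V = np.linalg.eigh(A)
psi = V[:, -1]
psi = psi * np.sign(psi.sum())
rho_merw = psi**2 / (psi @ psi)

# GRW stationary distribution: proportional to degree.
deg = A.sum(axis=1)
rho_grw = deg / deg.sum()

ratio_grw = rho_grw[0] / rho_grw[1]       # defect vs normal vertex: 2/3
ratio_merw = rho_merw[0] / rho_merw[25]   # defect vs mid-arc vertex: tiny
```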