Nested sampling algorithm

The nested sampling algorithm is a computational approach to the Bayesian statistics problems of comparing models and generating samples from posterior distributions. It was developed in 2004 by physicist John Skilling.

Background
Bayes' theorem can be applied to a pair of competing models $$M_1$$ and $$M_2$$ for data $$D$$, one of which may be true (though which one is unknown) but which cannot both be true simultaneously. The posterior probability for $$M_1$$ may be calculated as:



$$\begin{align} P(M_1\mid D) & = \frac{P(D\mid M_1) P(M_1)}{P(D)} \\ & = \frac{P(D\mid M_1) P(M_1)}{P(D\mid M_1) P(M_1) + P(D\mid M_2) P(M_2)} \\ & = \frac{1}{1 + \frac{P(D\mid M_2)}{P(D\mid M_1)} \frac{P(M_2)}{P(M_1)} } \end{align}$$
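Once the marginal likelihoods (evidences) of the two models are in hand, the formula above is simple arithmetic. A minimal Python sketch, using hypothetical log-evidence values purely for illustration:

```python
import math

def model_posterior(log_Z1, log_Z2, prior1=0.5, prior2=0.5):
    """P(M1|D) = 1 / (1 + K * P(M2)/P(M1)), where K = P(D|M2)/P(D|M1)
    is the Bayes factor; log evidences are used for numerical stability."""
    log_bayes_factor = log_Z2 - log_Z1
    return 1.0 / (1.0 + math.exp(log_bayes_factor) * prior2 / prior1)

# Hypothetical evidence values: M1 fits the data somewhat better than M2.
print(model_posterior(log_Z1=-10.0, log_Z2=-12.0))  # ≈ 0.88
```

With equal model priors the result depends only on the Bayes factor, so evidences of the kind nested sampling produces plug in directly.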

The prior probabilities $$P(M_1)$$ and $$P(M_2)$$ are already known, as they are chosen by the researcher ahead of time. However, the remaining Bayes factor $$P(D\mid M_2)/P(D\mid M_1)$$ is not so easy to evaluate, since in general it requires marginalizing over nuisance parameters. Generally, $$M_1$$ has a set of parameters that can be grouped together and called $$\theta$$, and $$M_2$$ has its own parameter vector, possibly of different dimensionality but still denoted $$\theta$$. The marginalization for $$M_1$$ is


 * $$P(D\mid M_1) = \int d \theta \, P(D\mid \theta,M_1) P(\theta\mid M_1)$$

and likewise for $$M_2$$. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution $$P(\theta\mid D,M_1)$$. It is an alternative to methods from the Bayesian literature such as bridge sampling and defensive importance sampling.
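For intuition, this integral can be approximated naively by averaging the likelihood over draws from the prior. The sketch below uses a hypothetical 1-D Gaussian likelihood with a uniform prior on $$[0,1]$$ (all names and numbers are illustrative only); this naive estimator works here but degrades badly when the posterior occupies a tiny fraction of the prior volume, which is the regime nested sampling targets:

```python
import math
import random

def log_likelihood(theta):
    # Hypothetical Gaussian likelihood centred at 0.5 with sigma = 0.1.
    return -0.5 * ((theta - 0.5) / 0.1) ** 2 - math.log(0.1 * math.sqrt(2 * math.pi))

# With a uniform prior on [0, 1], P(theta|M) = 1, so the evidence
# Z = integral of L(theta) d(theta) is the mean likelihood over prior draws.
random.seed(0)
n = 100_000
Z_hat = sum(math.exp(log_likelihood(random.random())) for _ in range(n)) / n
print(Z_hat)  # close to 1.0: nearly all the Gaussian mass lies inside [0, 1]
```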

Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal likelihood (evidence) $$Z=P(D\mid M)$$, where $$M$$ is $$M_1$$ or $$M_2$$:

Start with $$N$$ points $$\theta_1,\ldots,\theta_N$$ sampled from the prior, and set $$Z := 0$$, $$X_0 := 1$$.
for $$i=1$$ to $$j$$ do      % The number of iterations $$j$$ is chosen by guesswork.
    $$L_i := \min($$current likelihood values of the points$$)$$;
    $$X_i := \exp(-i/N)$$;
    $$w_i := X_{i-1} - X_i$$;
    $$Z := Z + L_i\cdot w_i$$;
    Save the point with least likelihood as a sample point with weight $$w_i$$.
    Update the point with least likelihood with some Markov chain Monte Carlo steps according to the prior, accepting only steps that keep the likelihood above $$L_i$$.
end
return $$Z$$;
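The loop above can be sketched as a runnable Python program. The Gaussian toy likelihood, uniform prior on $$[0,1]$$, random-walk step size, and iteration counts are illustrative assumptions, not part of the algorithm itself; a final term adds the contribution of the surviving live points:

```python
import math
import random

def log_likelihood(theta):
    # Toy likelihood: Gaussian centred at 0.5 with sigma = 0.1; with a
    # uniform prior on [0, 1] the true evidence Z is very close to 1.
    return -0.5 * ((theta - 0.5) / 0.1) ** 2 - math.log(0.1 * math.sqrt(2 * math.pi))

def nested_sampling(n_live=100, n_iter=1000, seed=0):
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]        # N points from the prior
    live_logL = [log_likelihood(t) for t in live]
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: live_logL[k])
        L_i = math.exp(live_logL[worst])                # least likelihood
        X_i = math.exp(-i / n_live)                     # estimated prior mass left
        Z += L_i * (X_prev - X_i)                       # w_i = X_{i-1} - X_i
        # Replace the worst point: random-walk MCMC started from a survivor,
        # accepting only moves that keep the likelihood above L_i.
        start = rng.randrange(n_live)
        while start == worst:
            start = rng.randrange(n_live)
        theta, log_L_min = live[start], live_logL[worst]
        step = 0.1 * X_prev + 1e-7                      # shrink steps with the volume
        for _ in range(20):
            prop = theta + rng.gauss(0.0, step)
            if 0.0 <= prop <= 1.0 and log_likelihood(prop) > log_L_min:
                theta = prop
        live[worst], live_logL[worst] = theta, log_likelihood(theta)
        X_prev = X_i
    # Contribution of the remaining live points over the leftover prior mass.
    Z += X_prev * sum(math.exp(l) for l in live_logL) / n_live
    return Z

print(nested_sampling())  # statistically scattered around the true Z of ~1
```

Saving each discarded point together with its weight $$w_i$$ (omitted here for brevity) yields the posterior samples described in the pseudocode.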

At each iteration, $$X_i$$ is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than $$L_i$$. The weight factor $$w_i$$ is an estimate of the amount of prior mass that lies between the two nested hypersurfaces $$\{ \theta \mid P(D\mid\theta,M) = L_{i-1} \}$$ and $$\{ \theta \mid P(D\mid\theta,M) = L_i \}$$. The update step $$Z := Z+L_i w_i$$ accumulates the sum over $$i$$ of $$L_i w_i$$, which numerically approximates the integral



$$\begin{align} P(D\mid M) &= \int P(D\mid \theta,M) P(\theta\mid M) \,d \theta \\ &= \int P(D\mid \theta,M) \,dP(\theta\mid M) \end{align}$$

In the limit $$j \to \infty$$, this estimator has a positive bias of order $$1/N$$, which can be removed by using the shrinkage factor $$(1 - 1/N)$$ in place of $$\exp(-1/N)$$ in the algorithm above.
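A quick numerical check of the two shrinkage factors (taking $$N = 100$$ live points for illustration):

```python
import math

N = 100                           # number of live points
shrink_biased = math.exp(-1 / N)  # factor used in the algorithm above
shrink_fixed = 1 - 1 / N          # alternative that removes the O(1/N) bias
print(shrink_biased, shrink_fixed)
# Compounded over N iterations (one e-fold of prior compression), the two
# prior-mass estimates differ by a relative factor of about 1/(2N):
print(shrink_biased ** N / shrink_fixed ** N)  # ≈ 1.005
```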

The idea is to subdivide the range of $$f(\theta) = P(D\mid\theta,M)$$ and estimate, for each interval $$[f(\theta_{i-1}), f(\theta_i)]$$, how likely it is a priori that a randomly chosen $$\theta$$ would map to this interval. This can be thought of as a Bayesian's way to numerically implement Lebesgue integration.

Implementations
Example implementations demonstrating the nested sampling algorithm are publicly available for download, written in several programming languages.
 * Simple examples in C, R, or Python are on John Skilling's website.
 * A Haskell port of the above simple codes is on Hackage.
 * An example in R originally designed for fitting spectra is on GitHub.
 * A NestedSampler is part of the Python toolbox BayesicFitting for generic model fitting and evidence calculation. It is available on GitHub.
 * An example in C++, named Diamonds, is on GitHub.
 * A highly modular parallel Python example for statistical physics and condensed matter physics is on GitHub.
 * pymatnest is a Python package for exploring the energy landscape of different materials, calculating thermodynamic variables at arbitrary temperatures, and locating phase transitions; it is on GitHub.
 * The MultiNest software package is capable of performing nested sampling on multi-modal posterior distributions. It has interfaces for C++, Fortran and Python inputs, and is available on GitHub.
 * PolyChord is another nested sampling software package available on GitHub. PolyChord's computational efficiency scales better than MultiNest's as the number of parameters increases, meaning PolyChord can be more efficient for high-dimensional problems.
 * NestedSamplers.jl, a Julia package implementing single- and multi-ellipsoidal nested sampling algorithms, is on GitHub.
 * Korali is a high-performance framework for uncertainty quantification, optimization, and deep reinforcement learning, which also implements nested sampling.

Applications
Since nested sampling was proposed in 2004, it has been used in many areas of astronomy. One paper suggested using nested sampling for cosmological model selection and object detection, as it "uniquely combines accuracy, general applicability and computational feasibility." A refinement of the algorithm to handle multimodal posteriors has been suggested as a means to detect astronomical objects in extant datasets. Nested sampling has also been applied to finite element model updating, where the algorithm is used to choose an optimal finite element model; this was applied to structural dynamics. The method has also been used in materials modeling, where it can be used to learn the partition function from statistical mechanics and derive thermodynamic properties.

Dynamic nested sampling
Dynamic nested sampling is a generalisation of the nested sampling algorithm in which the number of samples taken in different regions of the parameter space is dynamically adjusted to maximise calculation accuracy. This can lead to large improvements in accuracy and computational efficiency when compared to the original nested sampling algorithm, in which the allocation of samples cannot be changed and often many samples are taken in regions which have little effect on calculation accuracy.

Publicly available dynamic nested sampling software packages include:
 * A Python implementation of dynamic nested sampling which can be downloaded from GitHub.
 * dyPolyChord: a software package which can be used with Python, C++ and Fortran likelihood and prior distributions. dyPolyChord is available on GitHub.

Dynamic nested sampling has been applied to a variety of scientific problems, including analysis of gravitational waves, mapping distances in space and exoplanet detection.