Multilevel Monte Carlo method

Multilevel Monte Carlo (MLMC) methods in numerical analysis are algorithms for computing expectations that arise in stochastic simulations. Like standard Monte Carlo methods, they rely on repeated random sampling, but the samples are taken at different levels of accuracy. MLMC methods can greatly reduce the computational cost of standard Monte Carlo methods by taking most samples with low accuracy and correspondingly low cost, and only very few samples with high accuracy and correspondingly high cost.

Goal
The goal of a multilevel Monte Carlo method is to approximate the expected value $$\operatorname{E}[G]$$ of the random variable $$G$$ that is the output of a stochastic simulation. Suppose this random variable cannot be simulated exactly, but there is a sequence of approximations $$G_0, G_1, \ldots, G_L$$ with increasing accuracy, but also increasing cost, that converges to $$G$$ as $$L\rightarrow\infty$$. The basis of the multilevel method is the telescoping sum identity,

$$\operatorname{E}[G_{L}] = \operatorname{E}[G_{0}] + \sum_{\ell=1}^L \operatorname{E}[G_\ell - G_{\ell-1}],$$

that is trivially satisfied because of the linearity of the expectation operator. Each of the expectations $$\operatorname{E}[G_\ell - G_{\ell-1}]$$ is then approximated by a Monte Carlo method, resulting in the multilevel Monte Carlo method. Note that taking a sample of the difference $$G_\ell - G_{\ell-1}$$ at level $$\ell$$ requires a simulation of both $$G_{\ell}$$ and $$G_{\ell-1}$$.

The MLMC method works if the variances $$\operatorname{V}[G_\ell - G_{\ell-1}]\rightarrow0$$ as $$\ell\rightarrow\infty$$, which will be the case if both $$G_{\ell}$$ and $$G_{\ell-1}$$ approximate the same random variable $$G$$. By the Central Limit Theorem, this implies that one needs fewer and fewer samples to accurately approximate the expectation of the difference $$G_\ell - G_{\ell-1}$$ as $$\ell\rightarrow\infty$$. Hence, most samples will be taken on level $$0$$, where samples are cheap, and only very few samples will be required at the finest level $$L$$. In this sense, MLMC can be considered as a recursive control variate strategy.
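The telescoping estimator and the importance of coupling the two terms in each difference can be sketched in a few lines of Python. The problem below is a synthetic assumption chosen for illustration: the exact quantity is $$G = Z^2$$ with $$Z\sim N(0,1)$$, so $$\operatorname{E}[G]=1$$, and the level-$$\ell$$ approximation $$G_\ell = Z^2 + 2^{-\ell}Z$$ mimics a discretization error that halves with each level.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_diff(level, n):
    """n coupled samples of G_l - G_{l-1} (with G_{-1} := 0).

    The SAME Z is used for the fine and the coarse approximation; this
    coupling is what makes the variance of the difference decay with level.
    """
    z = rng.standard_normal(n)
    g_fine = z**2 + 2.0**(-level) * z          # G_l, synthetic approximation
    if level == 0:
        return g_fine
    g_coarse = z**2 + 2.0**(-(level - 1)) * z  # G_{l-1}, same random input z
    return g_fine - g_coarse

def mlmc_estimate(L, N):
    """Telescoping sum: E[G_L] ~ sum over levels of the sample means."""
    return sum(sample_diff(l, N[l]).mean() for l in range(L + 1))

# Most samples on the cheap level 0, few on the fine levels:
print(mlmc_estimate(5, [100000, 10000, 4000, 2000, 1000, 500]))  # close to 1.0
```

Note that the variance of the level-$$\ell$$ difference here is $$4^{-\ell}$$, so the sample counts can shrink rapidly with level while keeping the total statistical error small.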

Applications
The first application of MLMC is attributed to Mike Giles, in the context of stochastic differential equations (SDEs) for option pricing; however, earlier traces of the idea are found in the work of Heinrich in the context of parametric integration. Here, the random variable $$G=f(X(T))$$ is known as the payoff function, and each approximation $$G_\ell$$, $$\ell=0,\ldots,L$$, uses an approximation of the sample path $$X(t)$$ with time step $$h_\ell=2^{-\ell}T$$.
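A minimal sketch of this setting (an illustrative assumption, not Giles's exact setup) prices a European call $$G = \max(X(T)-K, 0)$$ under geometric Brownian motion $$dX = rX\,dt + \sigma X\,dW$$, discretized with Euler–Maruyama using time step $$h_\ell = 2^{-\ell}T$$. The coarse path at each level reuses the fine path's Brownian increments, summed in pairs, so the two payoffs are strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed illustrative parameters: X(0)=1, rate r, volatility sigma, strike K.
r, sigma, T, K = 0.05, 0.2, 1.0, 1.0

def payoff_diff(level, n):
    """n coupled samples of G_l - G_{l-1} (G_{-1} := 0) for the call payoff."""
    nf = 2 ** level                 # number of fine time steps
    hf = T / nf                     # fine step h_l = 2**(-l) * T
    dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
    xf = np.ones(n)                 # fine Euler-Maruyama path
    for k in range(nf):
        xf += r * xf * hf + sigma * xf * dW[:, k]
    if level == 0:
        return np.maximum(xf - K, 0.0)
    xc = np.ones(n)                 # coarse path: same Brownian increments,
    hc = 2.0 * hf                   # summed in pairs, with step 2*h_l
    for k in range(nf // 2):
        xc += r * xc * hc + sigma * xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return np.maximum(xf - K, 0.0) - np.maximum(xc - K, 0.0)

def mlmc_price(L, N):
    """Telescoping estimator of E[max(X(T)-K, 0)]."""
    return sum(payoff_diff(l, N[l]).mean() for l in range(L + 1))

# The closed-form value of E[max(X(T)-K, 0)] for these parameters is ~0.1099.
print(mlmc_price(4, [200000, 50000, 20000, 10000, 5000]))
```

The strong pathwise coupling, rather than any property of the payoff itself, is what drives the variance of $$G_\ell - G_{\ell-1}$$ to zero as the time step shrinks.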

The application of MLMC to problems in uncertainty quantification (UQ) is an active area of research. Important prototypical examples of these problems are partial differential equations (PDEs) with random coefficients. In this context, the random variable $$G$$ is known as the quantity of interest, and the sequence of approximations corresponds to a discretization of the PDE with different mesh sizes.

An algorithm for MLMC simulation
A simple level-adaptive algorithm for MLMC simulation is given below in pseudo-code.

    $$L\gets0$$
    repeat
        Take warm-up samples at level $$L$$
        Compute the sample variance on all levels $$\ell=0,\ldots,L$$
        Define the optimal number of samples $$N_\ell$$ on all levels $$\ell=0,\ldots,L$$
        Take additional samples on each level $$\ell$$ according to $$N_\ell$$
        if $$L\geq2$$ then
            Test for convergence
        end
        if not converged then
            $$L\gets L+1$$
        end
    until converged
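The loop above can be sketched in Python on a synthetic problem (an assumption for illustration): $$G_\ell = Z^2 + 2^{-\ell}Z$$ with $$Z\sim N(0,1)$$, so $$\operatorname{E}[G]=1$$, and the cost per sample is modelled as $$2^\ell$$. The allocation $$N_\ell \propto \sqrt{V_\ell/C_\ell}$$ is the standard choice that minimizes total cost for a given variance budget; the convergence test here simply uses the size of the last correction term as a bias proxy.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_diff(level, n):
    """n coupled samples of G_l - G_{l-1} (G_{-1} := 0), sharing the same Z."""
    z = rng.standard_normal(n)
    if level == 0:
        return z**2 + z
    return (2.0**(-level) - 2.0**(-(level - 1))) * z

def mlmc_adaptive(eps, max_level=10):
    """Level-adaptive MLMC loop following the pseudo-code above."""
    L = 0
    samples = []                                   # raw samples kept per level
    while True:
        samples.append(sample_diff(L, 100))        # warm-up samples at new level L
        V = np.array([s.var() for s in samples])   # sample variance on all levels
        C = 2.0 ** np.arange(L + 1)                # assumed cost model C_l = 2**l
        # Optimal number of samples N_l ~ sqrt(V_l/C_l), scaled so the total
        # variance of the estimator is at most eps**2 / 2.
        N = np.ceil(2.0 / eps**2 * np.sqrt(V / C)
                    * np.sum(np.sqrt(V * C))).astype(int)
        for l in range(L + 1):                     # take additional samples
            extra = N[l] - len(samples[l])
            if extra > 0:
                samples[l] = np.concatenate([samples[l], sample_diff(l, extra)])
        means = np.array([s.mean() for s in samples])
        # Convergence test: is the last correction term already negligible?
        if (L >= 2 and abs(means[-1]) < eps / np.sqrt(2.0)) or L == max_level:
            return means.sum()
        L += 1

print(mlmc_adaptive(0.01))  # close to E[G] = 1
```

A production implementation would also extrapolate the remaining bias from the observed decay rate of $$\operatorname{E}[G_\ell - G_{\ell-1}]$$ rather than testing only the last level.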

Extensions of MLMC
Recent extensions of the multilevel Monte Carlo method include multi-index Monte Carlo, where more than one direction of refinement is considered, and the combination of MLMC with the Quasi-Monte Carlo method.