Itô diffusion

In mathematics – specifically, in stochastic analysis – an Itô diffusion is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation used in physics to describe the Brownian motion of a particle subjected to a potential in a viscous fluid. Itô diffusions are named after the Japanese mathematician Kiyosi Itô.

Overview


A (time-homogeneous) Itô diffusion in n-dimensional Euclidean space $$\mathbf{R}^n$$ is a process X : [0, +∞) × Ω → Rn defined on a probability space (Ω, Σ, P) and satisfying a stochastic differential equation of the form


 * $$\mathrm{d} X_{t} = b(X_t) \, \mathrm{d} t + \sigma (X_{t}) \, \mathrm{d} B_{t},$$

where B is an m-dimensional Brownian motion and b : Rn → Rn and σ : Rn → Rn×m satisfy the usual Lipschitz continuity condition


 * $$| b(x) - b(y) | + | \sigma (x) - \sigma (y) | \leq C | x - y |$$

for some constant C and all x, y ∈ Rn; this condition ensures the existence of a unique strong solution X to the stochastic differential equation given above. The vector field b is known as the drift coefficient of X; the matrix field σ is known as the diffusion coefficient of X. Note that b and σ do not depend on time; if they did, X would be referred to only as an Itô process, not a diffusion. Itô diffusions have a number of nice properties, which include


 * sample and Feller continuity;
 * the Markov property;
 * the strong Markov property;
 * the existence of an infinitesimal generator;
 * the existence of a characteristic operator;
 * Dynkin's formula.

In particular, an Itô diffusion is a continuous, strongly Markovian process such that the domain of its characteristic operator includes all twice-continuously differentiable functions, so it is a diffusion in the sense defined by Dynkin (1965).
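Although the definition above is analytic, sample paths of an Itô diffusion can be approximated numerically. The following sketch uses the Euler–Maruyama scheme, a standard discretization not discussed in this article; the scalar drift b(x) = −x and constant diffusion coefficient σ = 1 are illustrative choices, not part of the definition.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, rng):
    """Approximate one path of dX_t = b(X_t) dt + sigma(X_t) dB_t
    on [0, T] using the Euler-Maruyama discretization."""
    dt = T / n_steps
    x = float(x0)
    path = np.empty(n_steps + 1)
    path[0] = x
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over dt
        x = x + b(x) * dt + sigma(x) * dB
        path[i + 1] = x
    return path

# Illustrative scalar example: drift b(x) = -x, diffusion sigma(x) = 1
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, lambda x: 1.0, x0=1.0, T=1.0,
                      n_steps=1000, rng=rng)
```

Because b and σ here are Lipschitz, the scheme converges (strongly) to the unique strong solution as the step size shrinks.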

Sample continuity
An Itô diffusion X is a sample continuous process, i.e., for almost all realisations Bt(ω) of the noise, Xt(ω) is a continuous function of the time parameter, t. More accurately, there is a "continuous version" of X, a continuous process Y so that


 * $$\mathbf{P} [ X_t = Y_t] = 1 \mbox{ for all } t.$$

This follows from the standard existence and uniqueness theory for strong solutions of stochastic differential equations.

Feller continuity
In addition to being (sample) continuous, an Itô diffusion X satisfies the stronger requirement to be a Feller-continuous process.

For a point x ∈ Rn, let Px denote the law of X given initial datum X0 = x, and let Ex denote expectation with respect to Px.

Let f : Rn → R be a Borel-measurable function that is bounded below and define, for fixed t ≥ 0, u : Rn → R by


 * $$u(x) = \mathbf{E}^{x}[ f(X_t) ].$$


 * Lower semi-continuity: if f is lower semi-continuous, then u is lower semi-continuous.
 * Feller continuity: if f is bounded and continuous, then u is continuous.

The behaviour of the function u above when the time t is varied is addressed by the Kolmogorov backward equation, the Fokker–Planck equation, etc. (See below.)

The Markov property
An Itô diffusion X has the important property of being Markovian: the future behaviour of X, given what has happened up to some time t, is the same as if the process had been started at the position Xt at time 0. The precise mathematical formulation of this statement requires some additional notation:

Let Σ∗ denote the natural filtration of (Ω, Σ) generated by the Brownian motion B: for t ≥ 0,


 * $$\Sigma_{t} = \Sigma_{t}^{B} = \sigma \left \{ B_{s}^{-1} (A) \subseteq \Omega \ : \ 0 \leq s \leq t, A \subseteq \mathbf{R}^{m} \mbox{ Borel} \right\}.$$

It is easy to show that X is adapted to Σ∗ (i.e. each Xt is Σt-measurable), so the natural filtration F∗ = F∗X of (Ω, Σ) generated by X has Ft ⊆ Σt for each t ≥ 0.

Let f : Rn → R be a bounded, Borel-measurable function. Then, for all t and h ≥ 0, the conditional expectation conditioned on the σ-algebra Σt and the expectation of the process "restarted" from Xt satisfy the Markov property:


 * $$\mathbf{E}^{x} \big[ f(X_{t+h}) \big| \Sigma_{t} \big] (\omega) = \mathbf{E}^{X_{t} (\omega)}[ f(X_{h})].$$

In fact, X is also a Markov process with respect to the filtration F∗, as the following shows:


 * $$\begin{align}

\mathbf{E}^{x} \left [ f(X_{t+h}) \big| F_{t} \right ] &= \mathbf{E}^{x} \left [ \mathbf{E}^{x} \left [ f(X_{t+h}) \big| \Sigma_{t} \right] \big| F_{t} \right] \\ &= \mathbf{E}^{x} \left [ \mathbf{E}^{X_{t}} \left [ f(X_{h}) \right] \big| F_{t} \right] \\ &= \mathbf{E}^{X_{t}} \left [ f(X_{h}) \right ]. \end{align}$$
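Taking expectations in the identity above gives E^0[f(B_{t+h})] = E^0[u(B_t)] with u(x) = E^x[f(B_h)]. For standard Brownian motion this can be checked by direct quadrature against Gaussian densities; the choices f = cos, t = 0.7, h = 0.3, and the grid sizes below are illustrative.

```python
import numpy as np

def gauss_expect(fun, mean, var, n=2001):
    """E[fun(Z)] for Z ~ N(mean, var), via a Riemann sum on a wide grid."""
    s = np.sqrt(var)
    z = np.linspace(mean - 10 * s, mean + 10 * s, n)
    dz = z[1] - z[0]
    pdf = np.exp(-(z - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return (fun(z) * pdf).sum() * dz

t, h, f = 0.7, 0.3, np.cos

# E^0[f(B_{t+h})], computed directly: B_{t+h} ~ N(0, t + h)
lhs = gauss_expect(f, 0.0, t + h)

# E^0[u(B_t)] with u(x) = E^x[f(B_h)]: restart the process from B_t
u = np.vectorize(lambda x: gauss_expect(f, x, h))
rhs = gauss_expect(u, 0.0, t)
```

Both quantities equal cos(0)·e^{−(t+h)/2} = e^{−1/2} up to quadrature error, as the Markov property predicts.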

The strong Markov property
The strong Markov property is a generalization of the Markov property above in which t is replaced by a suitable random time τ : Ω → [0, +∞] known as a stopping time. So, for example, rather than "restarting" the process X at time t = 1, one could "restart" whenever X first reaches some specified point p of Rn.

As before, let f : Rn → R be a bounded, Borel-measurable function. Let τ be a stopping time with respect to the filtration Σ∗ with τ < +∞ almost surely. Then, for all h ≥ 0,


 * $$\mathbf{E}^{x} \big[ f(X_{\tau+h}) \big| \Sigma_{\tau} \big] = \mathbf{E}^{X_{\tau}} \big[ f(X_{h}) \big].$$

The infinitesimal generator
Associated to each Itô diffusion, there is a second-order partial differential operator known as the generator of the diffusion. The generator is very useful in many applications and encodes a great deal of information about the process X. Formally, the infinitesimal generator of an Itô diffusion X is the operator A, which is defined to act on suitable functions f : Rn → R by


 * $$A f (x) = \lim_{t \downarrow 0} \frac{\mathbf{E}^{x} [f(X_{t})] - f(x)}{t}.$$

The set of all functions f for which this limit exists at a point x is denoted DA(x), while DA denotes the set of all f for which the limit exists for all x ∈ Rn. One can show that any compactly-supported C2 (twice differentiable with continuous second derivative) function f lies in DA and that


 * $$Af(x) = \sum_{i} b_{i} (x) \frac{\partial f}{\partial x_i} (x) + \tfrac{1}{2} \sum_{i, j} \left( \sigma (x) \sigma (x)^{\top} \right)_{i, j} \frac{\partial^{2} f}{\partial x_i \, \partial x_{j}} (x),$$

or, in terms of the gradient and scalar and Frobenius inner products,


 * $$A f (x) = b(x) \cdot \nabla_{x} f(x) + \tfrac1{2} \left( \sigma(x) \sigma(x)^{\top} \right ) : \nabla_{x} \nabla_{x} f(x).$$

An example
The generator A for standard n-dimensional Brownian motion B, which satisfies the stochastic differential equation dXt = dBt, is given by


 * $$A f (x) = \tfrac1{2} \sum_{i, j} \delta_{ij} \frac{\partial^{2} f}{\partial x_{i} \, \partial x_{j}} (x) = \tfrac1{2} \sum_{i} \frac{\partial^{2} f}{\partial x_{i}^{2}} (x)$$,

i.e., A = Δ/2, where Δ denotes the Laplace operator.
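The defining limit can be illustrated by Monte Carlo: for 1-dimensional Brownian motion and small t, the difference quotient (E^x[f(X_t)] − f(x))/t should approach ½f″(x). The choices f = cos, x = 0.3, t, and the sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.cos
x, t, n = 0.3, 1e-3, 2_000_000

# For standard 1-d Brownian motion, X_t = x + B_t with B_t ~ N(0, t)
samples = x + np.sqrt(t) * rng.normal(size=n)
quotient = (f(samples).mean() - f(x)) / t   # (E^x[f(X_t)] - f(x)) / t
exact = -0.5 * np.cos(x)                    # (1/2) f''(x), since f'' = -cos
```

The quotient agrees with ½f″(x) up to Monte Carlo noise and an O(t) bias.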

The Kolmogorov and Fokker–Planck equations
The generator is used in the formulation of Kolmogorov's backward equation. Intuitively, this equation tells us how the expected value of any suitably smooth statistic of X evolves in time: it must solve a certain partial differential equation in which time t and the initial position x are the independent variables. More precisely, if f ∈ C2(Rn; R) has compact support and u : [0, +∞) × Rn → R is defined by


 * $$u(t, x) = \mathbf{E}^{x} [ f(X_t)],$$

then u(t, x) is differentiable with respect to t, u(t, ·) ∈ DA for all t, and u satisfies the following partial differential equation, known as Kolmogorov's backward equation:


 * $$\begin{cases} \dfrac{\partial u}{\partial t}(t, x) = A u (t, x), & t > 0, x \in \mathbf{R}^{n}; \\ u(0, x) = f(x), & x \in \mathbf{R}^{n}. \end{cases}$$
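For standard Brownian motion, the backward equation can be checked numerically: u(t, x) = E^x[f(B_t)] is a Gaussian average, and finite differences should satisfy ∂u/∂t ≈ ½ ∂²u/∂x². The choices f = cos, the evaluation point, and the step sizes are illustrative.

```python
import numpy as np

def u(t, x, n=4001):
    """u(t, x) = E^x[f(B_t)] for standard Brownian motion, f = cos,
    computed by quadrature against the N(0, t) density."""
    s = np.sqrt(t)
    z = np.linspace(-8 * s, 8 * s, n)
    dz = z[1] - z[0]
    pdf = np.exp(-z ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return (np.cos(x + z) * pdf).sum() * dz

t0, x0, dt, dx = 0.5, 0.4, 1e-4, 1e-2
du_dt = (u(t0 + dt, x0) - u(t0 - dt, x0)) / (2 * dt)
d2u_dx2 = (u(t0, x0 + dx) - 2 * u(t0, x0) + u(t0, x0 - dx)) / dx ** 2
```

Here A = Δ/2, so the two finite-difference quantities should agree closely.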

The Fokker–Planck equation (also known as Kolmogorov's forward equation) is in some sense the "adjoint" to the backward equation, and tells us how the probability density functions of Xt evolve with time t. Let ρ(t, ·) be the density of Xt with respect to Lebesgue measure on Rn, i.e., for any Borel-measurable set S ⊆ Rn,


 * $$\mathbf{P} \left [ X_t \in S \right ] = \int_{S} \rho(t, x) \, \mathrm{d} x.$$

Let A∗ denote the Hermitian adjoint of A (with respect to the L2 inner product). Then, given that the initial position X0 has a prescribed density ρ0, ρ(t, x) is differentiable with respect to t, ρ(t, ·) ∈ DA* for all t, and ρ satisfies the following partial differential equation, known as the Fokker–Planck equation:


 * $$\begin{cases} \dfrac{\partial \rho}{\partial t}(t, x) = A^{*} \rho (t, x), & t > 0, x \in \mathbf{R}^{n}; \\ \rho(0, x) = \rho_{0} (x), & x \in \mathbf{R}^{n}. \end{cases}$$

The Feynman–Kac formula
The Feynman–Kac formula is a useful generalization of Kolmogorov's backward equation. Again, f is in C2(Rn; R) and has compact support, and q : Rn → R is taken to be a continuous function that is bounded below. Define a function v : [0, +∞) × Rn → R by


 * $$v(t, x) = \mathbf{E}^{x} \left[ \exp \left( - \int_{0}^{t} q(X_{s}) \, \mathrm{d} s \right) f(X_{t}) \right].$$

The Feynman–Kac formula states that v satisfies the partial differential equation


 * $$\begin{cases} \dfrac{\partial v}{\partial t}(t, x) = A v (t, x) - q(x) v(t, x), & t > 0, x \in \mathbf{R}^{n}; \\ v(0, x) = f(x), & x \in \mathbf{R}^{n}. \end{cases}$$

Moreover, if w : [0, +∞) × Rn → R is C1 in time, C2 in space, bounded on K × Rn for all compact K, and satisfies the above partial differential equation, then w must be v as defined above.

Kolmogorov's backward equation is the special case of the Feynman–Kac formula in which q(x) = 0 for all x ∈ Rn.
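A minimal Monte Carlo sanity check of the Feynman–Kac representation, assuming X is standard Brownian motion, f = cos, and a constant potential q ≡ λ (for which the exponential weight factors out of the expectation), is sketched below; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, x0, n = 0.8, 1.0, 0.2, 400_000

# For X = Brownian motion and q(x) = lam constant:
# v(t, x) = e^{-lam t} E^x[cos(B_t)] = e^{-lam t} e^{-t/2} cos(x)
B_t = x0 + np.sqrt(t) * rng.normal(size=n)   # B_t ~ N(x0, t)
v_mc = np.exp(-lam * t) * np.cos(B_t).mean()
v_exact = np.exp(-lam * t) * np.exp(-t / 2) * np.cos(x0)
```

For non-constant q the exponential weight must be accumulated along each discretized path; the constant-q case is chosen here only because it admits a closed form to compare against.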

The characteristic operator
The characteristic operator of an Itô diffusion X is a partial differential operator closely related to the generator, but somewhat more general. It is more suited to certain problems, for example in the solution of the Dirichlet problem.

The characteristic operator $$\mathcal{A}$$ of an Itô diffusion X is defined by


 * $$\mathcal{A} f (x) = \lim_{U \downarrow x} \frac{\mathbf{E}^{x} \left [ f(X_{\tau_{U}}) \right ] - f(x)}{\mathbf{E}^{x} [\tau_{U}]},$$

where the sets U form a sequence of open sets Uk that decrease to the point x in the sense that


 * $$U_{k + 1} \subseteq U_{k} \mbox{ and } \bigcap_{k = 1}^{\infty} U_{k} = \{ x \},$$

and


 * $$\tau_{U} = \inf \{ t \geq 0 \ : \ X_{t} \not \in U \}$$

is the first exit time from U for X. $$D_{\mathcal{A}}$$ denotes the set of all f for which this limit exists for all x ∈ Rn and all sequences {Uk}. If Ex[τU] = +∞ for all open sets U containing x, define


 * $$\mathcal{A} f (x) = 0.$$

Relationship with the generator
The characteristic operator and infinitesimal generator are very closely related, and even agree for a large class of functions. One can show that


 * $$D_{A} \subseteq D_{\mathcal{A}}$$

and that


 * $$A f = \mathcal{A} f \mbox{ for all } f \in D_{A}.$$

In particular, the generator and characteristic operator agree for all C2 functions f, in which case


 * $$\mathcal{A} f(x) = \sum_i b_i (x) \frac{\partial f}{\partial x_{i}} (x) + \tfrac1{2} \sum_{i, j} \left( \sigma (x) \sigma (x)^{\top} \right)_{i, j} \frac{\partial^{2} f}{\partial x_{i} \, \partial x_{j}} (x).$$
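For 1-dimensional Brownian motion the defining limit can be evaluated in closed form, using two standard facts for U = (x − r, x + r): the exit position is ±r with equal probability, so E^x[f(B_τ)] = (f(x+r) + f(x−r))/2, and E^x[τ_U] = r². The quotient then converges to ½f″(x); f = cos and x = 0.5 below are illustrative.

```python
import numpy as np

f, x = np.cos, 0.5

def char_quotient(r):
    """(E^x[f(B_tau)] - f(x)) / E^x[tau] for U = (x - r, x + r), using
    E^x[f(B_tau)] = (f(x+r) + f(x-r)) / 2 and E^x[tau] = r^2."""
    return ((f(x + r) + f(x - r)) / 2 - f(x)) / r ** 2

quotients = [char_quotient(r) for r in (0.1, 0.01, 0.001)]
limit = -0.5 * np.cos(x)   # (1/2) f''(x), since f'' = -cos
```

The quotients approach ½f″(x) as r shrinks, consistent with the agreement of the characteristic operator and the generator on C² functions.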

Application: Brownian motion on a Riemannian manifold


Above, the generator (and hence characteristic operator) of Brownian motion on Rn was calculated to be Δ/2, where Δ denotes the Laplace operator. The characteristic operator is useful in defining Brownian motion on an m-dimensional Riemannian manifold (M, g): a Brownian motion on M is defined to be a diffusion on M whose characteristic operator $$\mathcal{A}$$ in local coordinates xi, 1 ≤ i ≤ m, is given by ΔLB/2, where ΔLB is the Laplace-Beltrami operator given in local coordinates by


 * $$\Delta_{\mathrm{LB}} = \frac1{\sqrt{\det(g)}} \sum_{i = 1}^{m} \frac{\partial}{\partial x_{i}} \left( \sqrt{\det(g)} \sum_{j = 1}^{m} g^{ij} \frac{\partial}{\partial x_{j}} \right),$$

where $$[g^{ij}] = [g_{ij}]^{-1}$$ in the sense of the inverse of a square matrix.

The resolvent operator
In general, the generator A of an Itô diffusion X is not a bounded operator. However, if A is subtracted from a positive multiple of the identity operator I, the resulting operator is invertible, and its inverse can be expressed in terms of X itself using the resolvent operator.

For α > 0, the resolvent operator Rα, acting on bounded, continuous functions g : Rn → R, is defined by


 * $$R_{\alpha} g (x) = \mathbf{E}^{x} \left[ \int_{0}^{\infty} e^{- \alpha t} g(X_{t}) \, \mathrm{d} t \right].$$

It can be shown, using the Feller continuity of the diffusion X, that Rαg is itself a bounded, continuous function. Also, Rα and αI − A are mutually inverse operators:
 * if f : Rn → R is C2 with compact support, then, for all α > 0,
 * $$R_{\alpha} (\alpha \mathbf{I} - A) f = f;$$


 * if g : Rn → R is bounded and continuous, then Rαg lies in DA and, for all α > 0,
 * $$(\alpha \mathbf{I} - A) R_{\alpha} g = g.$$
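The second identity can be checked numerically for standard Brownian motion with f = sin: since Af = −½ sin, one has (αI − A)f = (α + ½) sin, and applying R_α should return f. The sketch below evaluates the time integral by a truncated midpoint rule and the inner expectation by Gaussian quadrature; the horizon T = 30 and grid sizes are illustrative.

```python
import numpy as np

alpha, x = 1.0, 0.7

def expect_bm(fun, x, t, n=2001):
    """E^x[fun(B_t)] for standard Brownian motion, by quadrature."""
    s = np.sqrt(t)
    z = np.linspace(-8 * s, 8 * s, n)
    dz = z[1] - z[0]
    pdf = np.exp(-z ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return (fun(x + z) * pdf).sum() * dz

# g = (alpha I - A) f for f = sin, since A f = -sin / 2 for Brownian motion
g = lambda y: (alpha + 0.5) * np.sin(y)

# R_alpha g(x) = int_0^infty e^{-alpha t} E^x[g(B_t)] dt, midpoint rule,
# truncated at T = 30 where the integrand is ~e^{-45}
T, m = 30.0, 3000
step = T / m
ts = (np.arange(m) + 0.5) * step
R = sum(np.exp(-alpha * t) * expect_bm(g, x, t) for t in ts) * step
```

Up to quadrature and truncation error, R agrees with f(x) = sin(x), as R_α(αI − A)f = f requires.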

Invariant measures
Sometimes it is necessary to find an invariant measure for an Itô diffusion X: a measure on Rn that does not change under the "flow" of X, meaning that if X0 is distributed according to such an invariant measure μ∞, then Xt is also distributed according to μ∞ for any t ≥ 0. The Fokker–Planck equation offers a way to find such a measure, at least if it has a probability density function ρ∞: if X0 is indeed distributed according to an invariant measure μ∞ with density ρ∞, then the density ρ(t, ·) of Xt does not change with t, so ρ(t, ·) = ρ∞, and ρ∞ must solve the (time-independent) partial differential equation


 * $$A^{*} \rho_{\infty} (x) = 0, \quad x \in \mathbf{R}^{n}.$$

This illustrates one of the connections between stochastic analysis and the study of partial differential equations. Conversely, a given second-order linear partial differential equation of the form Λf = 0 may be hard to solve directly, but if Λ = A∗ for some Itô diffusion X, and an invariant measure for X is easy to compute, then that measure's density provides a solution to the partial differential equation.

Invariant measures for gradient flows
An invariant measure is comparatively easy to compute when the process X is a stochastic gradient flow of the form


 * $$\mathrm{d} X_{t} = - \nabla \Psi (X_{t}) \, \mathrm{d} t + \sqrt{2 \beta^{-1}} \, \mathrm{d} B_{t},$$

where β > 0 plays the role of an inverse temperature and Ψ : Rn → R is a scalar potential satisfying suitable smoothness and growth conditions. In this case, the Fokker–Planck equation has a unique stationary solution ρ∞ (i.e. X has a unique invariant measure μ∞ with density ρ∞) and it is given by the Gibbs distribution:


 * $$\rho_{\infty} (x) = Z^{-1} \exp ( - \beta \Psi (x) ),$$

where the partition function Z is given by


 * $$Z = \int_{\mathbf{R}^{n}} \exp ( - \beta \Psi (x) ) \, \mathrm{d} x.$$
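The Gibbs form of the stationary density can be illustrated by simulating the gradient flow itself. The sketch below uses an Euler–Maruyama discretization (sometimes called unadjusted Langevin dynamics) with the illustrative double-well potential Ψ(x) = x⁴/4 − x²/2 and β = 1, and compares a second moment of the long-run empirical law with the corresponding Gibbs value.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0
grad_psi = lambda x: x ** 3 - x   # Psi(x) = x^4/4 - x^2/2 (double well)

# Euler-Maruyama simulation of dX = -grad Psi(X) dt + sqrt(2/beta) dB
dt, n_steps, burn = 0.01, 500_000, 50_000
noise = np.sqrt(2 * dt / beta) * rng.normal(size=n_steps)
x, xs = 0.0, np.empty(n_steps)
for i in range(n_steps):
    x += -grad_psi(x) * dt + noise[i]
    xs[i] = x
m2_sim = np.mean(xs[burn:] ** 2)   # long-run estimate of E[X^2]

# Second moment of the Gibbs density Z^{-1} exp(-beta Psi), by quadrature
grid = np.linspace(-5.0, 5.0, 4001)
w = np.exp(-beta * (grid ** 4 / 4 - grid ** 2 / 2))
m2_gibbs = (grid ** 2 * w).sum() / w.sum()
```

The time average agrees with the Gibbs value up to Monte Carlo noise and an O(dt) discretization bias.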

Moreover, the density ρ∞ satisfies a variational principle: it minimizes over all probability densities ρ on Rn the free energy functional F given by


 * $$F[\rho] = E[\rho] + \frac1{\beta} S[\rho],$$

where


 * $$E[\rho] = \int_{\mathbf{R}^{n}} \Psi(x) \rho(x) \, \mathrm{d} x$$

plays the role of an energy functional, and


 * $$S[\rho] = \int_{\mathbf{R}^{n}} \rho(x) \log \rho(x) \, \mathrm{d} x$$

is the negative of the Gibbs-Boltzmann entropy functional. Even when the potential Ψ is not well-behaved enough for the partition function Z and the Gibbs measure μ∞ to be defined, the free energy F[ρ(t, ·)] still makes sense for each time t ≥ 0, provided that the initial condition has F[ρ(0, ·)] < +∞. The free energy functional F is, in fact, a Lyapunov function for the Fokker–Planck equation: F[ρ(t, ·)] must decrease as t increases. Thus, F is an H-function for the X-dynamics.

Example
Consider the Ornstein-Uhlenbeck process X on Rn satisfying the stochastic differential equation


 * $$\mathrm{d} X_{t} = - \kappa ( X_{t} - m) \, \mathrm{d} t + \sqrt{2 \beta^{-1}} \, \mathrm{d} B_{t},$$

where m ∈ Rn and β, κ > 0 are given constants. In this case, the potential Ψ is given by


 * $$\Psi(x) = \tfrac{1}{2} \kappa |x - m|^2,$$

and so the invariant measure for X is a Gaussian measure with density ρ∞ given by


 * $$\rho_{\infty} (x) = \left( \frac{\beta \kappa}{2 \pi} \right)^{\frac{n}{2}} \exp \left( - \frac{\beta \kappa | x - m |^{2}}{2} \right)$$.

Heuristically, for large t, Xt is approximately normally distributed with mean m and variance (βκ)−1. The expression for the variance may be interpreted as follows: large values of κ mean that the potential well Ψ has "very steep sides", so Xt is unlikely to move far from the minimum of Ψ at m; similarly, large values of β mean that the system is quite "cold" with little noise, so, again, Xt is unlikely to move far away from m.
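This heuristic can be checked by simulation. The sketch below samples the Ornstein-Uhlenbeck process using its exact one-step Gaussian transition (a standard closed form, avoiding discretization bias) and compares the long-run mean and variance with m and (βκ)⁻¹; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
kappa, beta, m = 2.0, 1.0, 0.5
dt, n_steps = 0.05, 200_000

# Exact OU transition over a step of length dt:
# X_{t+dt} = m + (X_t - m) e^{-kappa dt} + N(0, (1 - e^{-2 kappa dt})/(beta kappa))
a = np.exp(-kappa * dt)
s = np.sqrt((1.0 - a ** 2) / (beta * kappa))
shocks = s * rng.normal(size=n_steps)
x, xs = 0.0, np.empty(n_steps)
for i in range(n_steps):
    x = m + (x - m) * a + shocks[i]
    xs[i] = x
xs = xs[n_steps // 10:]   # discard burn-in before the chain equilibrates
```

The empirical mean and variance match m = 0.5 and (βκ)⁻¹ = 0.5 up to Monte Carlo error.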

The martingale property
In general, an Itô diffusion X is not a martingale. However, for any f ∈ C2(Rn; R) with compact support, the process M : [0, +∞) × Ω → R defined by


 * $$M_{t} = f(X_{t}) - \int_{0}^{t} A f(X_{s}) \, \mathrm{d} s,$$

where A is the generator of X, is a martingale with respect to the natural filtration F∗ of (Ω, Σ) by X. The proof is quite simple: it follows from the usual expression of the action of the generator on smooth enough functions f and Itô's lemma (the stochastic chain rule) that


 * $$f(X_{t}) = f(x) + \int_{0}^{t} A f(X_{s}) \, \mathrm{d} s + \int_{0}^{t} \nabla f(X_{s})^{\top} \sigma(X_{s}) \, \mathrm{d} B_{s}.$$

Since Itô integrals are martingales with respect to the natural filtration Σ∗ of (Ω, Σ) by B, for t > s,


 * $$\mathbf{E}^{x} \big[ M_{t} \big| \Sigma_{s} \big] = M_{s}.$$

Hence, as required,


 * $$\mathbf{E}^{x}[M_t | F_s] = \mathbf{E}^{x} \left[ \mathbf{E}^{x} \big[ M_{t} \big| \Sigma_{s} \big] \big| F_{s} \right] = \mathbf{E}^{x} \big[ M_{s} \big| F_{s} \big] = M_{s},$$

since Ms is Fs-measurable.

Dynkin's formula
Dynkin's formula, named after Eugene Dynkin, gives the expected value of any suitably smooth statistic of an Itô diffusion X (with generator A) at a stopping time. Precisely, if τ is a stopping time with Ex[τ] < +∞, and f : Rn → R is C2 with compact support, then


 * $$\mathbf{E}^{x} [f(X_{\tau})] = f(x) + \mathbf{E}^{x} \left[ \int_{0}^{\tau} A f (X_{s}) \, \mathrm{d} s \right].$$

Dynkin's formula can be used to calculate many useful statistics of stopping times. For example, canonical Brownian motion on the real line starting at 0 exits the interval (−R, +R) at a random time τR with expected value


 * $$\mathbf{E}^{0} [\tau_{R}] = R^{2}.$$
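This expected exit time can be estimated by Monte Carlo with discretized Brownian paths; the step size and path count below are illustrative, and discrete monitoring of the exit slightly biases the estimate upward.

```python
import numpy as np

rng = np.random.default_rng(5)
R, dt, n_paths = 1.0, 1e-3, 20_000

x = np.zeros(n_paths)            # all paths start at 0
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
t = 0.0
while alive.any():
    t += dt
    x[alive] += np.sqrt(dt) * rng.normal(size=alive.sum())
    exited = alive & (np.abs(x) >= R)   # first time |X| reaches R
    tau[exited] = t
    alive &= ~exited
tau_mean = tau.mean()            # should be close to R^2 = 1
```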

Dynkin's formula provides information about the behaviour of X at a fairly general stopping time. For more information on the distribution of X at a hitting time, one can study the harmonic measure of the process.

The harmonic measure
In many situations, it is sufficient to know when an Itô diffusion X will first leave a measurable set H ⊆ Rn. That is, one wishes to study the first exit time


 * $$\tau_{H} (\omega) = \inf \{ t \geq 0 \ : \ X_{t} \not \in H \}.$$

Sometimes, however, one also wishes to know the distribution of the points at which X exits the set. For example, canonical Brownian motion B on the real line starting at 0 exits the interval (−1, 1) at −1 with probability 1/2 and at +1 with probability 1/2, so $$B_{\tau_{(-1, 1)}}$$ is uniformly distributed on the set {−1, 1}.
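More generally, for Brownian motion started at x ∈ (−1, 1), the classical gambler's-ruin identity gives probability (1 + x)/2 of exiting at +1 (consistent with the uniform case x = 0 above). A Monte Carlo sketch with discretized paths, using illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
x0, dt, n_paths = 0.2, 1e-3, 20_000

x = np.full(n_paths, x0)
hit_right = np.zeros(n_paths, dtype=bool)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    x[alive] += np.sqrt(dt) * rng.normal(size=alive.sum())
    exited = alive & (np.abs(x) >= 1.0)
    hit_right |= exited & (x >= 1.0)   # record which side was hit
    alive &= ~exited
p_right = hit_right.mean()   # classical value: (1 + x0)/2 = 0.6
```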

In general, if G is compactly embedded within Rn, then the harmonic measure (or hitting distribution) of X on the boundary ∂G of G is the measure μGx defined by


 * $$\mu_{G}^{x} (F) = \mathbf{P}^{x} \left [ X_{\tau_{G}} \in F \right ]$$

for x ∈ G and F ⊆ ∂G.

Returning to the earlier example of Brownian motion, one can show that if B is a Brownian motion in Rn starting at x ∈ Rn and D ⊂ Rn is an open ball centred on x, then the harmonic measure of B on ∂D is invariant under all rotations of D about x and coincides with the normalized surface measure on ∂D.

The harmonic measure satisfies an interesting mean value property: if f : Rn → R is any bounded, Borel-measurable function and φ is given by


 * $$\varphi (x) = \mathbf{E}^{x} \left [ f(X_{\tau_{H}}) \right],$$

then, for all Borel sets G ⊂⊂ H and all x ∈ G,


 * $$\varphi (x) = \int_{\partial G} \varphi (y) \, \mathrm{d} \mu_{G}^{x} (y).$$

The mean value property is very useful in the solution of partial differential equations using stochastic processes.

The Green measure and Green formula
Let A be a partial differential operator on a domain D ⊆ Rn and let X be an Itô diffusion with A as its generator. Intuitively, the Green measure of a Borel set H is the expected length of time that X stays in H before it leaves the domain D. That is, the Green measure of X with respect to D at x, denoted G(x, ·), is defined for Borel sets H ⊆ Rn by


 * $$G(x, H) = \mathbf{E}^{x} \left[ \int_{0}^{\tau_{D}} \chi_{H} (X_{s}) \, \mathrm{d} s \right],$$

or for bounded, continuous functions f : D → R by


 * $$\int_{D} f(y) \, G(x, \mathrm{d} y) = \mathbf{E}^{x} \left[ \int_{0}^{\tau_{D}} f(X_{s}) \, \mathrm{d} s \right].$$

The name "Green measure" comes from the fact that if X is Brownian motion, then


 * $$G(x, H) = \int_{H} G(x, y) \, \mathrm{d} y,$$

where G(x, y) is Green's function for the operator Δ/2 on the domain D.
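For 1-dimensional Brownian motion on D = (0, 1), Green's function for Δ/2 with absorbing endpoints has the standard closed form G(x, y) = 2 min(x, y)(1 − max(x, y)), which lets the identity above be checked by Monte Carlo; the set H = (0.4, 0.6), starting point, and discretization below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
x0, dt, n_paths = 0.3, 1e-3, 20_000
a, b = 0.4, 0.6                     # H = (a, b) inside D = (0, 1)

x = np.full(n_paths, x0)
occ = np.zeros(n_paths)             # occupation time of H before exiting D
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    inside = alive & (x > a) & (x < b)
    occ[inside] += dt
    x[alive] += np.sqrt(dt) * rng.normal(size=alive.sum())
    exited = alive & ((x <= 0.0) | (x >= 1.0))
    alive &= ~exited
g_mc = occ.mean()                   # Monte Carlo estimate of G(x0, H)

# G(x0, H) = int_a^b 2 min(x0, y) (1 - max(x0, y)) dy; here x0 < a, and the
# integrand 2 x0 (1 - y) is linear in y, so the trapezoid value is exact
g_exact = 2 * x0 * ((1 - a) + (1 - b)) / 2 * (b - a)
```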

Suppose that Ex[τD] < +∞ for all x ∈ D. Then the Green formula holds for all f ∈ C2(Rn; R) with compact support:


 * $$f(x) = \mathbf{E}^{x} \left[ f \left( X_{\tau_{D}} \right) \right] - \int_{D} A f (y) \, G(x, \mathrm{d} y).$$

In particular, if the support of f is compactly embedded in D,


 * $$f(x) = - \int_{D} A f (y) \, G(x, \mathrm{d} y).$$