Young measure

In mathematical analysis, a Young measure is a parameterized measure associated with certain subsequences of a given bounded sequence of measurable functions. It quantifies the oscillation of the sequence in the limit. Young measures have applications in the calculus of variations (especially in models from materials science) and in the study of nonlinear partial differential equations, as well as in various optimization and optimal control problems. They are named after Laurence Chisholm Young, who introduced them in 1937 in one dimension (for curves) and extended them to higher dimensions in 1942.

Young measures provide a solution to Hilbert’s twentieth problem, as a broad class of problems in the calculus of variations have solutions in the form of Young measures.

Intuition
Young constructed the Young measure in order to complete the set of ordinary curves in the calculus of variations; in this sense, Young measures are "generalized curves".

Consider the problem of minimizing $$I(u) = \int_0^1 \left((u'(x)^2-1)^2 +u(x)^2\right) dx$$ over continuously differentiable functions $$u$$ with $$u(0) = u(1) = 0$$. Clearly, $$u$$ should stay close to zero while its slope stays close to $$\pm 1$$; that is, the graph should be a tight jagged line hugging the x-axis. No function attains the infimum $$\inf I = 0$$, but we can construct a sequence of increasingly jagged functions $$u_1, u_2, \dots$$ such that $$I(u_n) \to 0$$.
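This can be sketched numerically. The triangular-wave construction below is one convenient choice of such a sequence (the specific `u` is our own, not the only one): each $$u_n$$ has slopes exactly $$\pm 1$$, so the first term of the integrand vanishes identically and $$I(u_n) = \int_0^1 u_n^2\,dx = \frac{1}{12 n^2} \to 0$$.

```python
import numpy as np

def u(n, x):
    """Triangular wave with n teeth on [0, 1]: slopes are exactly +1 or -1,
    u(0) = u(1) = 0, and max |u| = 1/(2n)."""
    t = (n * x) % 1.0
    return np.minimum(t, 1.0 - t) / n

def I(n, num_points=200_001):
    """Evaluate I(u_n) numerically.  Since u_n' = ±1 almost everywhere,
    the term (u'^2 - 1)^2 vanishes and only the integral of u^2 remains."""
    x = np.linspace(0.0, 1.0, num_points)
    return np.mean(u(n, x) ** 2)   # ≈ ∫_0^1 u_n(x)^2 dx = 1/(12 n^2)

for n in (1, 2, 4, 8):
    print(n, I(n))   # decreases like 1/(12 n^2)
```

The values shrink quadratically in $$n$$, confirming that the infimum $$0$$ is approached but never attained by any single $$u_n$$.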

The pointwise limit $$\lim_n u_n$$ is identically zero, but the pointwise limit $$\lim_n u_n'$$ does not exist. Instead, the derivatives behave in the limit like a fine mist with half of its weight on $$+1$$ and the other half on $$-1$$.

Suppose that $$F$$ is a functional defined by $$F(u) = \int_0^1 f(t, u(t), u'(t))\,dt$$, where $$f$$ is continuous. Then $$\lim_n F(u_n) = \frac 12 \int_0^1 f(t, 0, -1)\,dt + \frac 12 \int_0^1 f(t, 0, +1)\,dt,$$ so in this weak sense we can define $$\lim_n u_n$$ to be a "function" whose value is zero and whose derivative is the measure $$\frac 12 \delta_{-1} + \frac 12 \delta_{+1}$$. In particular, it would mean that $$I(\lim_n u_n ) = 0$$.
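The weak limit formula can be checked numerically for a sample integrand (the particular $$f$$ below is a hypothetical choice for illustration; any continuous $$f$$ behaves the same way), using a triangular-wave sequence with slopes $$\pm 1$$ as one convenient choice of $$u_n$$:

```python
import numpy as np

# Hypothetical test integrand f(t, u, p); any continuous f works.
def f(t, u, p):
    return np.exp(p) + t * u

def sawtooth(n, x):
    """Triangular wave with n teeth, amplitude 1/(2n), slopes ±1."""
    t = (n * x) % 1.0
    return np.minimum(t, 1.0 - t) / n

def sawtooth_prime(n, x):
    """Slope is +1 on the rising half of each tooth, -1 on the falling half."""
    return np.where((n * x) % 1.0 < 0.5, 1.0, -1.0)

x = np.linspace(0.0, 1.0, 400_001)
n = 200
F_n = np.mean(f(x, sawtooth(n, x), sawtooth_prime(n, x)))

# Predicted limit: (1/2)∫ f(t,0,-1) dt + (1/2)∫ f(t,0,+1) dt = cosh(1).
limit = 0.5 * np.mean(f(x, 0.0, -1.0)) + 0.5 * np.mean(f(x, 0.0, 1.0))
print(F_n, limit)
```

For this $$f$$ the predicted limit is $$\frac12(e + e^{-1}) = \cosh(1)$$, and $$F(u_n)$$ is already within a fraction of a percent of it at $$n = 200$$.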

Motivation
The definition of Young measures is motivated by the following theorem: Let $$m, n$$ be arbitrary positive integers, let $$U$$ be an open bounded subset of $$\mathbb{R}^n$$, and let $$\{ f_k \}_{k=1}^\infty$$ be a bounded sequence in $$L^p (U,\mathbb{R}^m)$$. Then there exist a subsequence $$\{ f_{k_j} \}_{j=1}^\infty \subset \{ f_k \}_{k=1}^\infty$$ and, for almost every $$x \in U$$, a Borel probability measure $$\nu_x$$ on $$\mathbb{R}^m$$ such that for each $$F \in C(\mathbb{R}^m)$$ we have
 * $$F \circ f_{k_j}(x) {\rightharpoonup} \int_{\mathbb{R}^m} F(y)d\nu_x(y)$$

weakly in $$L^p(U)$$ if the limit exists (or weakly* in $$L^\infty (U)$$ in the case $$p=+\infty$$). The measures $$\nu_x$$ are called the Young measures generated by the sequence $$\{ f_{k_j} \}_{j=1}^\infty$$.

A partial converse is also true: if for each $$x\in U$$ we have a Borel probability measure $$\nu_x$$ on $$\mathbb{R}^m$$ such that $$\int_U\int_{\mathbb{R}^m}\|y\|^p\,d\nu_x(y)\,dx<+\infty$$, then there exists a sequence $$\{f_k\}_{k=1}^\infty\subseteq L^p(U,\mathbb{R}^m)$$, bounded in $$L^p(U,\mathbb{R}^m)$$, that has the same weak convergence property as above.

More generally, for any Carathéodory function $$G(x,A) : U\times \mathbb{R}^m \to \mathbb{R}$$, the limit
 * $$\lim_{j\to \infty} \int_{U} G(x,f_j(x)) \ d x,$$

if it exists, will be given by
 * $$\int_{U} \int_{\mathbb{R}^m} G(x,A) \ d \nu_x(A) \ dx$$.

Young's original idea in the case $$G\in C_0(U \times \mathbb{R}^m) $$ was to consider for each integer $$j\ge1$$ the measure $$\Gamma_j:= (\mathrm{id} ,f_j)_\sharp \mathcal{L}^n\llcorner U,$$ concentrated on the graph of the function $$f_j.$$ (Here, $$\mathcal{L}^n\llcorner U$$ is the restriction of the Lebesgue measure onto $$U.$$) By taking the weak* limit of these measures as elements of $$C_0(U \times \mathbb{R}^m)^\star,$$ we have
 * $$\langle\Gamma_j, G\rangle = \int_{U} G(x,f_j(x)) \ d x \to  \langle\Gamma ,G\rangle,$$

where $$\Gamma$$ is this weak* limit. Disintegrating the measure $$\Gamma$$ on the product space $$U \times \mathbb{R}^m$$ yields the parameterized measure $$\nu_x$$.

General definition
Let $$m,n$$ be arbitrary positive integers, let $$U$$ be an open and bounded subset of $$\mathbb{R}^n$$, and let $$p\geq 1$$. A Young measure (with finite p-moments) is a family of Borel probability measures $$\{\nu_x : x\in U\}$$ on $$\mathbb{R}^m$$ such that $$\int_U\int_{\mathbb{R}^m} \|y\|^p\, d\nu_x(y)\,dx<+\infty$$.

Pointwise converging sequence
A trivial example of a Young measure arises when the sequence $$f_n$$ is bounded in $$L^\infty(U, \mathbb{R}^n )$$ and converges pointwise almost everywhere in $$U$$ to a function $$f$$. The Young measure is then the Dirac measure
 * $$\nu_x = \delta_{f(x)}, \quad x \in U.$$

Indeed, by the dominated convergence theorem, $$F(f_n(x))$$ converges weakly* in $$ L^\infty (U) $$ to
 * $$ F(f(x)) = \int F(y) \, \text{d} \delta_{f(x)} $$

for any $$ F \in C(\mathbb{R}^n)$$.
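A quick numerical illustration, with a sample sequence and test function chosen purely for this sketch:

```python
import numpy as np

# Sketch: f_n → f pointwise, so averages of F(f_n) converge to the average
# of F(f).  Here U = (0, 1), f(x) = x, and F = cos are sample choices.
x = np.linspace(0.0, 1.0, 100_001)
f = x                      # pointwise limit of the sequence below
F = np.cos

target = np.mean(F(f))     # ∫_0^1 F(f(x)) dx = ∫_0^1 ∫ F(y) dδ_{f(x)}(y) dx

for n in (1, 10, 100):
    f_n = x + np.sin(x) / n    # a bounded sequence converging pointwise to f
    print(n, np.mean(F(f_n)), target)
```

As $$n$$ grows, the averages of $$F(f_n)$$ approach the average of $$F(f)$$, as the Dirac Young measure $$\nu_x = \delta_{f(x)}$$ predicts.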

Sequence of sines
A less trivial example is a sequence
 * $$ f_n(x) = \sin (n x), \quad x \in (0,2\pi). $$

The corresponding Young measure satisfies
 * $$ \nu_x(E) = \frac{1}{\pi} \int_{E\cap [-1,1]} \frac{1}{\sqrt{1-y^2}} \, \text{d}y, $$

for any measurable set $$ E $$, independent of $$x \in (0,2\pi)$$. In other words, for any $$ F \in C(\mathbb{R})$$:
 * $$F(f_n) {\rightharpoonup}^* \frac{1}{\pi} \int_{-1}^1 \frac{F(y)}{\sqrt{1-y^2}} \, \text{d}y $$

in $$L^\infty((0,2\pi)) $$. Here, the Young measure does not depend on $$x$$ and so the weak* limit is always a constant.

To see this intuitively, consider that in the limit of large $$n$$, a rectangle $$[x, x+\delta x] \times [y, y + \delta y]$$ would capture part of the curve of $$f_n$$. Take that captured part, and project it down to the x-axis. The length of that projection is $$\frac{\delta x\, \delta y}{\pi\sqrt{1-y^2}}$$, which means that $$\lim_n f_n$$ should look like a fine mist with probability density $$\frac{1}{\pi\sqrt{1-y^2}}$$ at every $$x$$.
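This can be checked empirically: for $$F(y)=y^2$$ the arcsine average is $$\frac12$$, and for $$F(y)=y^4$$ it is $$\frac38$$ (both computed via the substitution $$y=\sin\theta$$).

```python
import numpy as np

# Empirical check of the weak* limit: the time average of F(sin(nx)) over
# (0, 2π) approaches the arcsine average (1/π) ∫_{-1}^{1} F(y)/√(1-y²) dy.
x = np.linspace(0.0, 2.0 * np.pi, 400_001)
n = 100

avg2 = np.mean(np.sin(n * x) ** 2)   # F(y) = y²  →  arcsine average 1/2
avg4 = np.mean(np.sin(n * x) ** 4)   # F(y) = y⁴  →  arcsine average 3/8
print(avg2, avg4)
```

The averages no longer see the oscillation itself, only the distribution of values of $$\sin(nx)$$, which is exactly what the Young measure records.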

Minimizing sequence
For every asymptotically minimizing sequence $$u_n$$ of
 * $$I(u) = \int_0^1 (u'(x)^2-1)^2 +u(x)^2 dx$$

subject to $$u(0)=u(1)=0$$ (that is, the sequence satisfies $$\lim_{n\to+\infty} I(u_n)=\inf_{u\in C^1([0,1])}I(u)$$), and perhaps after passing to a subsequence, the sequence of derivatives $$u'_n$$ generates the Young measure $$\nu_x= \frac 12 \delta_{-1} + \frac 12 \delta_1$$. This captures the essential features of all minimizing sequences for this problem, namely, their derivatives $$u'_n(x)$$ tend to concentrate along the minima $$\{-1,1\}$$ of the integrand $$(u'(x)^2-1)^2 +u(x)^2$$.

For contrast, if we take $$u_n(t) = \frac{\sin(2\pi n t)}{2\pi n}$$, then $$u_n \to 0$$ uniformly, while the derivatives $$u_n'(t) = \cos(2\pi n t)$$ generate the $$x$$-independent Young measure $$\nu( dy ) = \frac{1}{\pi\sqrt{1-y^2}}dy$$. The corresponding limit is $$\lim_n I(u_n) = \frac{1}{\pi} \int_{-1}^{+1} (1-y^2)^{3/2}dy = \frac 38 > 0$$, so this sequence is not minimizing.
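A short numerical check of this limit: since $$\int_{-1}^{1}(1-y^2)^{3/2}\,dy = \frac{3\pi}{8}$$, the limiting value of $$I$$ is $$\frac38$$.

```python
import numpy as np

# Numerical check that u_n(t) = sin(2πnt)/(2πn) is not minimizing:
# I(u_n) → (1/π) ∫_{-1}^{1} (1 - y²)^{3/2} dy = 3/8, not 0.
t = np.linspace(0.0, 1.0, 400_001)

def I(n):
    u = np.sin(2 * np.pi * n * t) / (2 * np.pi * n)
    du = np.cos(2 * np.pi * n * t)            # exact derivative of u_n
    return np.mean((du**2 - 1.0)**2 + u**2)   # ≈ ∫_0^1 (u'^2-1)^2 + u^2 dt

for n in (1, 10, 100):
    print(n, I(n))   # approaches 3/8 = 0.375
```

The $$u^2$$ term vanishes like $$1/n^2$$, but the oscillating derivative keeps the first term pinned at its arcsine average of $$\frac38$$.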