Riesz–Fischer theorem

In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.

For many authors, the Riesz–Fischer theorem refers to the fact that the spaces $$L^p$$ from Lebesgue integration theory are complete.

Modern forms of the theorem
The most common form of the theorem states that a measurable function on $$[-\pi, \pi]$$ is square integrable if and only if the corresponding Fourier series converges in the space $$L^2.$$ This means that if the Nth partial sum of the Fourier series corresponding to a square-integrable function f is given by $$S_N f(x) = \sum_{n=-N}^{N} F_n \, \mathrm{e}^{inx},$$ where $$F_n,$$ the nth Fourier coefficient, is given by $$F_n =\frac{1}{2\pi}\int_{-\pi}^\pi f(x)\, \mathrm{e}^{-inx}\, \mathrm{d}x,$$ then $$\lim_{N \to \infty} \left\Vert S_N f - f \right\|_2 = 0,$$ where $$\|\,\cdot\,\|_2$$ is the $$L^2$$-norm.
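This convergence can be illustrated numerically. The sketch below (an illustration, not part of the theorem's proof) uses the assumed test function f(x) = x on $$[-\pi, \pi],$$ approximates the coefficients $$F_n$$ by midpoint quadrature, and checks that $$\|S_N f - f\|_2$$ shrinks as N grows; the function name `fourier_l2_errors` and all numerical parameters are illustrative choices.

```python
import math
import cmath

def fourier_l2_errors(Ns, M=2000):
    """Approximate ||S_N f - f||_2 for f(x) = x on [-pi, pi], midpoint quadrature."""
    xs = [-math.pi + (j + 0.5) * 2 * math.pi / M for j in range(M)]
    dx = 2 * math.pi / M
    f = xs  # f(x) = x, a square-integrable test function

    def coeff(n):
        # F_n = (1/2pi) * integral of f(x) e^{-inx} dx
        return sum(fx * cmath.exp(-1j * n * x) for fx, x in zip(f, xs)) * dx / (2 * math.pi)

    errors = []
    for N in Ns:
        cs = {n: coeff(n) for n in range(-N, N + 1)}  # precompute coefficients
        err2 = 0.0
        for fx, x in zip(f, xs):
            s_N = sum(cs[n] * cmath.exp(1j * n * x) for n in range(-N, N + 1))
            err2 += abs(s_N - fx) ** 2 * dx
        errors.append(math.sqrt(err2))
    return errors
```

For this f the exact error satisfies $$\|S_N f - f\|_2^2 = 4\pi \sum_{n>N} 1/n^2,$$ which tends to 0, and the computed values decrease accordingly.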

Conversely, if $$ \{a_n\} \,$$ is a two-sided sequence of complex numbers (that is, its indices range from negative infinity to positive infinity) such that $$\sum_{n=-\infty}^\infty \left|a_n\right\vert^2 < \infty,$$ then there exists a function f such that f is square-integrable and the values $$a_n$$ are the Fourier coefficients of f.

This form of the Riesz–Fischer theorem is a stronger form of Bessel's inequality, and can be used to prove Parseval's identity for Fourier series.
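Parseval's identity can be checked by hand for a concrete function. The sketch below assumes f(x) = x on $$[-\pi, \pi],$$ for which integration by parts gives $$F_0 = 0$$ and $$F_n = i(-1)^n/n$$ for $$n \neq 0$$; the function name is an illustrative choice.

```python
import math

def parseval_sides(N):
    """Parseval for f(x) = x on [-pi, pi]: (1/2pi) int |f|^2 dx = sum |F_n|^2.

    Here F_0 = 0 and F_n = i(-1)^n / n for n != 0, so |F_n|^2 = 1/n^2.
    """
    lhs = math.pi ** 2 / 3                                # (1/2pi) * (2 pi^3 / 3)
    rhs = 2 * sum(1.0 / n ** 2 for n in range(1, N + 1))  # truncated two-sided sum
    return lhs, rhs
```

Both sides equal $$\pi^2/3$$; the truncated right-hand side approaches it from below as N grows.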

Other results are often called the Riesz–Fischer theorem. Among them is the theorem that, if A is an orthonormal set in a Hilbert space H, and $$x \in H,$$ then $$\langle x, y\rangle = 0$$ for all but countably many $$y \in A,$$ and $$\sum_{y\in A} |\langle x,y\rangle|^2 \le \|x\|^2.$$ Furthermore, if A is an orthonormal basis for H and x an arbitrary vector, the series $$\sum_{y\in A} \langle x,y\rangle \, y$$ converges unconditionally to x. This is equivalent to saying that for every $$\varepsilon > 0,$$ there exists a finite set $$B_0$$ in A such that $$\|x - \sum_{y\in B} \langle x,y\rangle y \| < \varepsilon$$ for every finite set B containing $$B_0.$$ Moreover, the following conditions on the set A are equivalent:
 * the set A is an orthonormal basis of H
 * for every vector $$x \in H,$$ $$\|x\|^2 = \sum_{y\in A} |\langle x,y\rangle|^2.$$
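The inequality $$\sum_{y\in A} |\langle x,y\rangle|^2 \le \|x\|^2$$ is easy to see in a finite-dimensional example. The sketch below (an illustration with an arbitrarily chosen vector and orthonormal set) uses the incomplete orthonormal set $$\{e_1, e_2\}$$ in $$\mathbb{R}^3,$$ where the inequality is strict because the set is not a basis.

```python
def bessel_inequality_demo():
    """Bessel's inequality in R^3 with the (incomplete) orthonormal set {e1, e2}."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    A = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # orthonormal, but not a basis of R^3
    x = (3.0, -4.0, 12.0)
    coeff_sum = sum(dot(x, y) ** 2 for y in A)  # sum of |<x, y>|^2 over A: 9 + 16 = 25
    norm_sq = dot(x, x)                         # ||x||^2 = 9 + 16 + 144 = 169
    return coeff_sum, norm_sq
```

Enlarging A to the full basis $$\{e_1, e_2, e_3\}$$ would turn the inequality into Parseval's identity, matching the equivalence stated above.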

Another result, which also sometimes bears the name of Riesz and Fischer, is the theorem that $$L^2$$ (or more generally $$L^p, 0 < p \leq \infty$$) is complete.

Example
The Riesz–Fischer theorem also applies in a more general setting. Let R be an inner product space consisting of functions (for example, measurable functions on the line, or analytic functions in the unit disc; in older literature sometimes called a Euclidean space), and let $$\{\varphi_n\}$$ be an orthonormal system in R (e.g. the Fourier basis, or the Hermite or Laguerre polynomials – see orthogonal polynomials), not necessarily complete (in an inner product space, an orthonormal set is complete if no nonzero vector is orthogonal to every vector in the set). The theorem asserts that if the normed space R is complete (so that R is a Hilbert space), then any sequence $$\{c_n\}$$ of finite $$\ell^2$$ norm defines a function f in the space R.

The function f is defined by $$f = \lim_{n \to \infty} \sum_{k=0}^n c_k \varphi_k,$$ limit in R-norm.

Combined with Bessel's inequality, we know the converse as well: if f is a function in R, then the Fourier coefficients $$(f,\varphi_n)$$ have finite $$\ell^2$$ norm.
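The key computation behind this is that orthonormality turns the R-norm of a block of the series into an $$\ell^2$$ tail: $$\left\| \sum_{k=m+1}^{n} c_k \varphi_k \right\|^2 = \sum_{k=m+1}^{n} c_k^2,$$ so an $$\ell^2$$ sequence makes the partial sums a Cauchy sequence in R. A numerical sketch, using the (assumed, illustrative) orthonormal system $$\varphi_k(x) = \sin(kx)/\sqrt{\pi}$$ in $$L^2([-\pi, \pi])$$ and $$c_k = 1/k$$:

```python
import math

def block_norm_vs_l2_tail(m, n, M=4000):
    """Compare ||sum_{k=m+1}^{n} c_k phi_k||_2^2 with sum_{k=m+1}^{n} c_k^2
    for phi_k(x) = sin(kx)/sqrt(pi) on [-pi, pi] and c_k = 1/k; they agree
    by orthonormality."""
    xs = [-math.pi + (j + 0.5) * 2 * math.pi / M for j in range(M)]
    dx = 2 * math.pi / M
    block2 = 0.0
    for x in xs:
        d = sum(math.sin(k * x) / (k * math.sqrt(math.pi)) for k in range(m + 1, n + 1))
        block2 += d * d * dx
    tail = sum(1.0 / k ** 2 for k in range(m + 1, n + 1))
    return block2, tail
```

The midpoint rule on a full period is exact for the trigonometric polynomials involved here, so the two returned values agree up to floating-point error.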

History: the Note of Riesz and the Note of Fischer (1907)
In his Note, Riesz states the following result (translated here into modern language at one point: the notation $$L^2([a, b])$$ was not used in 1907).


 * Let $$\left\{\varphi_n\right\}$$ be an orthonormal system in $$L^2([a, b])$$ and $$\left\{a_n\right\}$$ a sequence of reals. The convergence of the series $$ \sum a_n^2 $$ is a necessary and sufficient condition for the existence of a function f such that $$\int_a^b f(x) \varphi_n(x) \, \mathrm{d}x = a_n \quad \text{ for every } n.$$

Today, this result of Riesz is a special case of basic facts about series of orthogonal vectors in Hilbert spaces.

Riesz's Note appeared in March. In May, Fischer states explicitly in a theorem (almost in modern words) that a Cauchy sequence in $$L^2([a, b])$$ converges in $$L^2$$-norm to some function $$f \in L^2([a, b]).$$ In this Note, Cauchy sequences are called "sequences converging in the mean" and $$L^2([a, b])$$ is denoted by $$\Omega.$$ Also, convergence to a limit in $$L^2$$-norm is called "convergence in the mean towards a function". Here is the statement, translated from French:
 * Theorem. If a sequence of functions belonging to $$\Omega$$ converges in the mean, there exists in $$\Omega$$ a function f towards which the sequence converges in the mean.

Fischer goes on to prove the preceding result of Riesz as a consequence of the orthogonality of the system and of the completeness of $$L^2.$$

Fischer's proof of completeness is somewhat indirect. It uses the fact that the indefinite integrals of the functions gn in the given Cauchy sequence, namely $$G_n(x) = \int_a^x g_n(t) \, \mathrm{d}t,$$ converge uniformly on $$[a, b]$$ to some function G, continuous with bounded variation. The existence of the limit $$g \in L^2$$ for the Cauchy sequence is obtained by applying to G differentiation theorems from Lebesgue's theory.

Riesz uses a similar reasoning in his Note, but makes no explicit mention of the completeness of $$L^2,$$ although his result may be interpreted this way. He says that integrating term by term a trigonometric series with given square summable coefficients, he gets a series converging uniformly to a continuous function F with bounded variation. The derivative f of F, defined almost everywhere, is square summable and has the given coefficients as its Fourier coefficients.

Completeness of Lp, 0 < p ≤ ∞
For some authors, notably Royden, the Riesz–Fischer theorem is the result that $$L^p$$ is complete: that every Cauchy sequence of functions in $$L^p$$ converges to a function in $$L^p,$$ under the metric induced by the p-norm. The proof below is based on the convergence theorems for the Lebesgue integral; the result can also be obtained for $$p \in [1,\infty]$$ by showing that every Cauchy sequence has a rapidly Cauchy sub-sequence, that every Cauchy sequence with a convergent sub-sequence converges, and that every rapidly Cauchy sequence in $$L^p$$ converges in $$L^p.$$

When $$1 \leq p \leq \infty,$$ the Minkowski inequality implies that the space $$L^p$$ is a normed space. In order to prove that $$L^p$$ is complete, i.e. that $$L^p$$ is a Banach space, it is enough (see e.g. Banach space) to prove that every series $$\sum u_n$$ of functions in $$L^p(\mu)$$ such that $$\sum \|u_n\|_p < \infty$$ converges in the $$L^p$$-norm to some function $$f \in L^p(\mu).$$ For $$p < \infty,$$ the Minkowski inequality and the monotone convergence theorem imply that $$\int \left(\sum_{n=0}^\infty |u_n|\right)^p \, \mathrm{d}\mu \le \left(\sum_{n=0}^{\infty} \|u_n\|_p\right)^p< \infty, \ \ \text{ hence } \ \ f = \sum_{n=0}^\infty u_n$$ is defined $$\mu$$–almost everywhere and $$f \in L^p(\mu).$$ The dominated convergence theorem is then used to prove that the partial sums of the series converge to f in the $$L^p$$-norm, $$\int \left|f - \sum_{k=0}^{n} u_k\right|^p \, \mathrm{d}\mu \le \int \left( \sum_{\ell > n} |u_\ell| \right)^p \, \mathrm{d}\mu \rightarrow 0 \text{ as } n \rightarrow \infty.$$
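The series criterion can be seen in action on a concrete example (an illustrative choice, not from the source): take $$u_n(x) = (x/2)^n$$ on $$[0, 1]$$ with Lebesgue measure and $$p = 1.$$ Then $$\|u_n\|_1 = 1/((n+1)2^n)$$ is summable, and the partial sums converge in $$L^1$$-norm to the geometric-series limit $$f(x) = 2/(2 - x).$$

```python
def l1_series_demo(ns, M=2000):
    """u_n(x) = (x/2)^n on [0, 1]: sum ||u_n||_1 < infinity, and the partial
    sums of the series converge in L^1-norm to f(x) = 2 / (2 - x)."""
    xs = [(j + 0.5) / M for j in range(M)]  # midpoint quadrature nodes
    dx = 1.0 / M
    errors = []
    for n in ns:
        err = 0.0
        for x in xs:
            partial = sum((x / 2.0) ** k for k in range(n + 1))
            err += abs(2.0 / (2.0 - x) - partial) * dx  # L^1 distance to the limit
        errors.append(err)
    return errors
```

The $$L^1$$ error after n terms is bounded by the tail $$\sum_{k > n} \|u_k\|_1,$$ so it decays geometrically, which the computed values reflect.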

The case $$0 < p < 1$$ requires some modifications, because the p-norm is no longer subadditive. One starts with the stronger assumption that $$\sum \|u_n\|_p^p < \infty$$ and uses repeatedly that $$\left|\sum_{k=0}^n u_k \right|^p \le \sum_{k=0}^n |u_k|^p \text{ when } p < 1.$$ The case $$p = \infty$$ reduces to a simple question about uniform convergence outside a $$\mu$$-negligible set.