Cylindrical σ-algebra
In mathematics, specifically in measure theory and functional analysis, the cylindrical σ-algebra is a σ-algebra often used in the study of either product measures or probability measures of random variables on Banach spaces.

For a product space, the cylinder σ-algebra is the one generated by the cylinder sets. For products of countably many factors, the cylindrical σ-algebra coincides with the product σ-algebra.

In the context of a Banach space X, the cylindrical σ-algebra Cyl(X) is defined to be the coarsest σ-algebra (i.e. the one with the fewest measurable sets) such that every continuous linear functional on X is a measurable function. In general, Cyl(X) is not the same as the Borel σ-algebra on X, which is the coarsest σ-algebra containing all open subsets of X.

Covariance operator
In probability theory, for a probability measure P on a Hilbert space H with inner product $$\langle \cdot,\cdot\rangle $$, the covariance of P is the bilinear form Cov: H × H → R given by


 * $$\mathrm{Cov}(x, y) = \int_{H} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d} \mathbf{P} (z)$$

for all x and y in H. The covariance operator C is then defined by


 * $$\mathrm{Cov}(x, y) = \langle Cx, y \rangle$$

(by the Riesz representation theorem, such an operator exists if Cov is bounded). Since Cov is symmetric in its arguments, the covariance operator is self-adjoint (the infinite-dimensional analogue of the transposition symmetry in the finite-dimensional case). When P is a centred Gaussian measure, C is also a nuclear operator. In particular, it is a compact operator of trace class; that is, it has finite trace.
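In the finite-dimensional case H = R^n, the covariance operator is simply the covariance matrix C = E[z z^T] of the (centred) measure, and the defining identity Cov(x, y) = ⟨Cx, y⟩ can be checked directly. A minimal Monte Carlo sketch, assuming a centred Gaussian measure with a chosen covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# A centred Gaussian measure on H = R^3 with a chosen covariance matrix.
C_true = np.array([[2.0, 0.5, 0.0],
                   [0.5, 1.0, 0.3],
                   [0.0, 0.3, 0.5]])
z = rng.multivariate_normal(np.zeros(3), C_true, size=200_000)

# Empirical covariance operator: C = E[z z^T] (the measure is centred).
C_emp = z.T @ z / len(z)

# The bilinear form Cov(x, y) = E[<x, z><y, z>] equals <C x, y>.
x = np.array([1.0, -1.0, 2.0])
y = np.array([0.5, 0.0, 1.0])
cov_xy = np.mean((z @ x) * (z @ y))

print(np.allclose(cov_xy, x @ C_emp @ y))  # bilinear form recovered from C
print(np.allclose(C_emp, C_emp.T))         # self-adjoint (symmetric matrix)
```

The first identity holds exactly for the empirical measure, since the sample mean of ⟨x, z⟩⟨y, z⟩ is precisely x^T C_emp y.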

Even more generally, for a probability measure P on a Banach space B, the covariance of P is the bilinear form on the algebraic dual B#, defined by


 * $$\mathrm{Cov}(x, y) = \int_{B} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d} \mathbf{P} (z)$$

where $$ \langle x, z \rangle $$ is now the value of the linear functional x on the element z.

Quite similarly, the covariance function of a function-valued random element z (which in special cases is called a random process or a random field) is


 * $$\mathrm{Cov}(x, y) = \int z(x) z(y) \, \mathrm{d} \mathbf{P} (z) = E(z(x) z(y))$$

where z(x) is now the value of the function z at the point x, i.e., the value of the linear functional $$ u \mapsto u(x) $$ evaluated at z.

Trace class
A bounded linear operator A over a separable Hilbert space H is said to be in the trace class if for some (and hence all) orthonormal bases {ek}k of H, the sum of non-negative terms


 * $$\|A\|_1 = \operatorname{Tr}|A| := \sum_k \Big\langle (A^*A)^\frac{1}{2} \, e_k, e_k \Big\rangle$$

is finite. In this case, the trace of A, which is given by the sum
 * $$\operatorname{Tr} A := \sum_k \langle A e_k, e_k \rangle$$

is absolutely convergent and is independent of the choice of the orthonormal basis. When H is finite-dimensional, every operator is trace class and this definition of trace of A coincides with the definition of the trace of a matrix.

By extension, if A is a non-negative self-adjoint operator, we can also define the trace of A as an extended real number by the possibly divergent sum
 * $$\sum_k \langle A e_k, e_k \rangle.$$
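In finite dimensions every operator is trace class, so the two facts above, basis independence of the trace and the trace norm as a sum of singular values, can be verified numerically. A sketch with a random matrix (the columns of a random orthogonal matrix serve as an alternative orthonormal basis):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))

# Trace in the standard basis: sum_k <A e_k, e_k> = sum of diagonal entries.
tr_standard = sum(A[k, k] for k in range(n))

# Another orthonormal basis: the columns of a random orthogonal matrix Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
tr_rotated = sum(Q[:, k] @ A @ Q[:, k] for k in range(n))

# The trace norm ||A||_1 = Tr|A| is the sum of the singular values of A.
trace_norm = np.linalg.svd(A, compute_uv=False).sum()

print(np.isclose(tr_standard, tr_rotated))  # basis independence of Tr A
print(trace_norm >= abs(tr_standard))       # |Tr A| <= Tr|A| = ||A||_1
```

The second inequality reflects the absolute convergence claim: the trace is always dominated in absolute value by the trace norm.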

Kolmogorov continuity theorem
Let $$(S,d)$$ be some complete metric space, and let $$X : [0, + \infty) \times \Omega \to S$$ be a stochastic process. Suppose that for all times $$T > 0$$, there exist positive constants $$\alpha, \beta,  K$$ such that


 * $$\mathbb{E} [d(X_t, X_s)^\alpha] \leq K | t - s |^{1 + \beta}$$

for all $$0 \leq s, t \leq T$$. Then there exists a modification $$\tilde{X}$$ of $$X$$ that is a continuous process, i.e. a process $$\tilde{X} : [0, + \infty) \times \Omega \to S$$ such that


 * $$\tilde{X}$$ is sample-continuous;
 * for every time $$t \geq 0$$, $$\mathbb{P} (X_t = \tilde{X}_t) = 1.$$

Furthermore, the paths of $$\tilde{X}$$ are locally $$\gamma$$-Hölder-continuous for every $$0 < \gamma < \tfrac{\beta}{\alpha}$$.
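The standard example is Brownian motion: since increments satisfy E[|B_t − B_s|^4] = 3|t − s|^2, the hypothesis holds with α = 4, β = 1, K = 3, yielding local γ-Hölder continuity for every γ < 1/4 (and, using higher moments, for every γ < 1/2). A Monte Carlo check of this moment bound:

```python
import numpy as np

rng = np.random.default_rng(2)

# Brownian increments: B_t - B_s ~ N(0, t - s).
s, t = 0.3, 0.7
dB = rng.normal(0.0, np.sqrt(t - s), size=500_000)

# Kolmogorov's condition with alpha = 4, beta = 1, K = 3:
# E[|B_t - B_s|^4] = 3 (t - s)^2 <= 3 |t - s|^{1 + 1}.
fourth_moment = np.mean(dB**4)
print(np.isclose(fourth_moment, 3 * (t - s)**2, rtol=0.05))
```

The fourth Gaussian moment E[N(0, σ²)⁴] = 3σ⁴ gives the equality being verified here.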

Kolmogorov extension theorem
Let $$T$$ denote some interval (thought of as "time"), and let $$n \in \mathbb{N}$$. For each $$k \in \mathbb{N}$$ and finite sequence of distinct times $$t_{1}, \dots, t_{k} \in T$$, let $$\nu_{t_{1} \dots t_{k}}$$ be a probability measure on $$(\mathbb{R}^{n})^{k}$$. Suppose that these measures satisfy two consistency conditions:

1. for all permutations $$\pi$$ of $$\{ 1, \dots, k \}$$ and measurable sets $$F_{i} \subseteq \mathbb{R}^{n}$$,
 * $$\nu_{t_{\pi (1)} \dots t_{\pi (k)}} \left( F_{\pi (1)} \times \dots \times F_{ \pi(k)} \right) = \nu_{t_{1} \dots t_{k}} \left( F_{1} \times \dots \times F_{k} \right);$$

2. for all measurable sets $$F_{i} \subseteq \mathbb{R}^{n}$$ and all $$m \in \mathbb{N}$$,
 * $$\nu_{t_{1} \dots t_{k}} \left( F_{1} \times \dots \times F_{k} \right) = \nu_{t_{1} \dots t_{k}, t_{k + 1}, \dots, t_{k+m}} \left( F_{1} \times \dots \times F_{k} \times \underbrace{\mathbb{R}^{n} \times \dots \times \mathbb{R}^{n}}_{m} \right).$$

Then there exists a probability space $$(\Omega, \mathcal{F}, \mathbb{P})$$ and a stochastic process $$X : T \times \Omega \to \mathbb{R}^{n}$$ such that
 * $$\nu_{t_{1} \dots t_{k}} \left( F_{1} \times \dots \times F_{k} \right) = \mathbb{P} \left( X_{t_{1}} \in F_{1}, \dots, X_{t_{k}} \in F_{k} \right)$$

for all $$t_{i} \in T$$, $$k \in \mathbb{N}$$ and measurable sets $$F_{i} \subseteq \mathbb{R}^{n}$$, i.e. $$X$$ has $$\nu_{t_{1} \dots t_{k}}$$ as its finite-dimensional distributions relative to times $$t_{1} \dots t_{k}$$.

In fact, it is always possible to take as the underlying probability space $$\Omega = (\mathbb{R}^n)^T$$ and to take for $$X$$ the canonical process $$X\colon (t,Y) \mapsto Y_t$$. Therefore, an alternative way of stating Kolmogorov's extension theorem is that, provided that the above consistency conditions hold, there exists a (unique) measure $$\nu$$ on $$(\mathbb{R}^n)^T$$ with marginals $$\nu_{t_{1} \dots t_{k}}$$ for any finite collection of times $$t_{1} \dots t_{k}$$. Kolmogorov's extension theorem applies when $$T$$ is uncountable, but the price to pay for this level of generality is that the measure $$\nu$$ is only defined on the product σ-algebra of $$(\mathbb{R}^n)^T$$, which is not very rich.
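The two consistency conditions are concrete enough to check numerically in a toy setting. Below is a hypothetical finite-state sketch in which the state space replaces $$\mathbb{R}^n$$ and the finite-dimensional distributions are i.i.d. products $$\nu_{t_1 \dots t_k} = p \otimes \dots \otimes p$$ for a fixed probability vector p; such a family always satisfies both conditions:

```python
import numpy as np

# Probability vector on a 3-point state space (a toy stand-in for R^n).
p = np.array([0.2, 0.5, 0.3])

def nu(k):
    """Joint distribution of (X_{t_1}, ..., X_{t_k}) as a k-dim tensor."""
    out = np.array(1.0)
    for _ in range(k):
        out = np.multiply.outer(out, p)
    return out

nu3 = nu(3)

# Condition 1: invariance under permuting the time indices.
print(np.allclose(nu3, nu3.transpose(2, 0, 1)))

# Condition 2: summing out the extra time recovers the shorter distribution.
print(np.allclose(nu3.sum(axis=2), nu(2)))
```

For this family the extension measure guaranteed by the theorem is just the i.i.d. product measure on sequences.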

Bochner integral
Let (X, Σ, μ) be a measure space and B a Banach space. The Bochner integral is defined in much the same way as the Lebesgue integral. First, a simple function is any finite sum of the form


 * $$s(x) = \sum_{i=1}^n \chi_{E_i}(x) b_i$$

where the Ei are disjoint members of the σ-algebra Σ, the bi are distinct elements of B, and χE is the characteristic function of E. If μ(Ei) is finite whenever bi ≠ 0, then the simple function is integrable, and the integral is then defined by


 * $$\int_X \left[\sum_{i=1}^n \chi_{E_i}(x) b_i\right]\, d\mu = \sum_{i=1}^n \mu(E_i) b_i$$

exactly as it is for the ordinary Lebesgue integral.

A measurable function f : X → B is Bochner integrable if there exists a sequence of integrable simple functions sn such that


 * $$\lim_{n\to\infty}\int_X \|f-s_n\|_B\,d\mu = 0,$$

where the integral on the left-hand side is an ordinary Lebesgue integral.

In this case, the Bochner integral is defined by


 * $$\int_X f\, d\mu = \lim_{n\to\infty}\int_X s_n\, d\mu.$$

It can be shown that a function is Bochner integrable if and only if it lies in the Bochner space $$L^1$$.
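The definition is easy to make concrete: the integral of a simple function is the finite sum Σ μ(E_i) b_i, and a continuous vector-valued function can be approximated by simple functions on a grid. A sketch with X = [0, 1] (Lebesgue measure) and B = R²; the partition and the function f(x) = (x, x²) are illustrative choices:

```python
import numpy as np

# A simple function s = chi_{E_1} b_1 + chi_{E_2} b_2 with values in B = R^2,
# where E_1 = [0, 0.25) and E_2 = [0.25, 1] partition X = [0, 1].
mu = {"E1": 0.25, "E2": 0.75}
b = {"E1": np.array([2.0, 0.0]), "E2": np.array([0.0, 4.0])}

# Bochner integral of a simple function: sum_i mu(E_i) b_i.
integral = sum(mu[E] * b[E] for E in mu)
print(integral)  # [0.5, 3.0]

# Approximating f(x) = (x, x^2) by simple functions constant on N equal cells:
N = 10_000
xs = (np.arange(N) + 0.5) / N                   # midpoint of each cell
approx = np.array([xs, xs**2]).T.sum(axis=0) / N
print(np.allclose(approx, [0.5, 1/3], atol=1e-4))  # -> (1/2, 1/3)
```

As the grid is refined, ∫ ||f − s_N|| dμ → 0, so the limit of the simple-function integrals is the Bochner integral of f, here (1/2, 1/3) componentwise.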

Banach space
If $X$ and $Y$ are normed spaces over the same ground field $K$, the set of all continuous $K$-linear maps $T : X → Y$ is denoted by $B(X, Y)$. In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space $X$ to another normed space is continuous if and only if it is bounded on the closed unit ball of $X$. Thus, the vector space $B(X, Y)$ can be given the operator norm


 * $$\|T\| = \sup \left \{\|Tx\|_Y \mid x\in X,\ \|x\|_X\le 1 \right \}.$$

For $Y$ a Banach space, the space $B(X, Y)$ is a Banach space with respect to this norm.

If $X$ is a Banach space, the space $B(X) = B(X, X)$ forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps.
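In finite dimensions the operator norm of a matrix (with Euclidean norms on both sides) is its largest singular value, and the supremum over the unit ball can be estimated directly; submultiplicativity ‖ST‖ ≤ ‖S‖ ‖T‖ is what makes B(X) a Banach algebra. A numerical sketch of both facts:

```python
import numpy as np

rng = np.random.default_rng(3)

def op_norm_estimate(A, trials=20_000):
    """Estimate ||A|| = sup { ||A x|| : ||x|| <= 1 } over random unit vectors."""
    x = rng.standard_normal((A.shape[1], trials))
    x /= np.linalg.norm(x, axis=0)
    return np.linalg.norm(A @ x, axis=0).max()

sv_max = lambda A: np.linalg.svd(A, compute_uv=False)[0]  # largest singular value

S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

est, exact = op_norm_estimate(T), sv_max(T)
print(est <= exact + 1e-9)                 # the sup is approached from below
# Submultiplicativity of the operator norm: ||S T|| <= ||S|| ||T||.
print(sv_max(S @ T) <= sv_max(S) * sv_max(T) + 1e-12)
```

Each sampled unit vector gives a lower bound on the operator norm, so the estimate converges to the largest singular value from below.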

If $X$ and $Y$ are normed spaces, they are isomorphic normed spaces if there exists a linear bijection $T : X → Y$ such that $T$ and its inverse $T^{-1}$ are continuous. If one of the two spaces $X$ or $Y$ is complete (or reflexive, separable, etc.) then so is the other space. Two normed spaces $X$ and $Y$ are isometrically isomorphic if, in addition, $T$ is an isometry, i.e., $\|T(x)\| = \|x\|$ for every $x$ in $X$. The Banach–Mazur distance $d(X, Y)$ between two isomorphic but not isometric spaces $X$ and $Y$ gives a measure of how much the two spaces differ.

Uniform boundedness principle
Let X be a Banach space and Y a normed vector space. Suppose that F is a collection of continuous linear operators from X to Y. If


 * $$\sup\nolimits_{T \in F} \|T(x)\|_Y < \infty, $$

for all x in X, then


 * $$\sup\nolimits_{T \in F,\|x\|=1} \|T(x)\|_Y=\sup\nolimits_{T \in F} \|T\|_{B(X,Y)} < \infty.$$
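Completeness of X is essential here. A classical counterexample on the incomplete space c₀₀ of finitely supported sequences (with the sup norm) is the family of functionals fₙ(x) = n·xₙ: each fixed x has finitely many nonzero terms, so the family is pointwise bounded, yet ‖fₙ‖ = n is unbounded. A small illustrative sketch:

```python
import numpy as np

# On c_00 (finitely supported sequences, sup norm), f_n(x) = n * x_n is
# pointwise bounded but ||f_n|| = n grows without bound: the uniform
# boundedness principle fails because c_00 is not complete.
def f(n, x):
    return n * x[n] if n < len(x) else 0.0

x = np.array([1.0, -2.0, 0.5])  # a finitely supported sequence, ||x|| = 2
pointwise_sup = max(abs(f(n, x)) for n in range(100))
print(pointwise_sup)            # finite for this fixed x: 2.0

norms = [float(n) for n in range(100)]  # ||f_n|| = n on the unit ball of c_00
print(max(norms))               # 99.0 -- unbounded as n grows
```

Only finitely many fₙ act nontrivially on any fixed x, which is why the pointwise suprema stay finite while the operator norms diverge.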