Regular conditional probability

In probability theory, regular conditional probability is a concept that formalizes the notion of conditioning on the outcome of a random variable. The resulting conditional probability distribution is a parametrized family of probability measures called a Markov kernel.

Conditional probability distribution
Consider two random variables $$X, Y : \Omega \to \mathbb{R}$$. The conditional probability distribution of Y given X is a two-variable function $$\kappa_{Y\mid X}: \mathbb{R} \times \mathcal{B}(\mathbb{R}) \to [0,1]$$.

If the random variable X is discrete
 * $$\kappa_{Y\mid X}(x, A) = P(Y \in A \mid X = x) = \begin{cases} \frac{P(Y \in A, X = x)}{P(X=x)} & \text{ if } P(X = x) > 0 \\[3pt] \text{arbitrary value} & \text{ otherwise}. \end{cases}$$
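As a concrete illustration, the discrete case can be computed directly from a joint pmf. The following is a minimal Python sketch; the joint pmf values are illustrative assumptions, not taken from the text.

```python
# kappa_{Y|X}(x, A) = P(Y in A | X = x) for a discrete pair (X, Y).
# The joint pmf below is an illustrative assumption.
joint = {  # (x, y) -> P(X = x, Y = y)
    (0, 0): 0.1, (0, 1): 0.3,
    (1, 0): 0.2, (1, 1): 0.4,
}

def kappa(x, A):
    """Return P(Y in A | X = x); None stands in for the 'arbitrary value' case."""
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    if p_x == 0:
        return None  # P(X = x) = 0: the definition allows any value here
    return sum(p for (xi, yi), p in joint.items() if xi == x and yi in A) / p_x

print(kappa(0, {1}))  # P(Y = 1 | X = 0) = 0.3 / 0.4 = 0.75
```

The "arbitrary value" branch only matters for x-values that X never takes with positive probability, which is why the definition leaves it unconstrained.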

If the random variables X, Y are continuous with joint density $$f_{X,Y}(x,y)$$, then
 * $$\kappa_{Y\mid X}(x, A) = \begin{cases} \frac{\int_A f_{X,Y}(x, y) \, \mathrm{d}y}{\int_\mathbb{R} f_{X,Y}(x, y) \, \mathrm{d}y} & \text{ if } \int_\mathbb{R} f_{X,Y}(x, y) \, \mathrm{d}y > 0 \\[3pt] \text{arbitrary value} & \text{ otherwise}. \end{cases}$$
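The continuous formula can likewise be approximated numerically. The sketch below uses the assumed joint density f(x, y) = x + y on the unit square (which integrates to 1) together with midpoint Riemann sums; both the density and the quadrature scheme are illustrative choices.

```python
# Approximate kappa_{Y|X}(x, [a, b]) for an assumed joint density on [0, 1]^2.
def f(x, y):
    """Joint density f_{X,Y}(x, y) = x + y on the unit square (an assumption)."""
    return x + y if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def kappa(x, a, b, n=10_000):
    """Midpoint-rule approximation of P(Y in [a, b] | X = x)."""
    h = 1.0 / n
    marginal = sum(f(x, (j + 0.5) * h) for j in range(n)) * h  # f_X(x)
    if marginal == 0.0:
        return None  # arbitrary value where the marginal density vanishes
    ha = (b - a) / n
    numer = sum(f(x, a + (j + 0.5) * ha) for j in range(n)) * ha
    return numer / marginal

print(kappa(0.5, 0.0, 0.5))  # exact value is 0.375
```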

A more general definition can be given in terms of conditional expectation. Consider a function $$ e_{Y \in A} : \mathbb{R} \to [0,1]$$ satisfying
 * $$e_{Y \in A}(X(\omega)) = \operatorname E[1_{Y \in A} \mid X](\omega)$$

for almost all $$\omega$$. Then the conditional probability distribution is given by
 * $$\kappa_{Y\mid X}(x, A) = e_{Y \in A}(x).$$

As with conditional expectation, this can be further generalized to conditioning on a sigma algebra $$\mathcal{F}$$. In that case the conditional distribution is a function $$\Omega \times \mathcal{B}(\mathbb{R}) \to [0, 1]$$:
 * $$ \kappa_{Y\mid\mathcal{F}}(\omega, A) = \operatorname E[1_{Y \in A} \mid \mathcal{F}](\omega).$$

Regularity
For working with $$\kappa_{Y\mid X}$$, it is important that it be regular; that is, $$\kappa_{Y\mid X}$$ should be a Markov kernel:
 * 1) For almost all x, $$A \mapsto \kappa_{Y\mid X}(x, A)$$ is a probability measure
 * 2) For all A,  $$x \mapsto \kappa_{Y\mid X}(x, A)$$ is a measurable function
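For a kernel taking finitely many values both conditions can be checked mechanically: condition 2 is automatic for a function on a finite set, and condition 1 amounts to total mass one plus additivity over disjoint sets. A small Python check, with an assumed toy pmf:

```python
# Check the Markov-kernel conditions for a finite toy example (pmf assumed).
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}
Y_SPACE = {0, 1}

def kappa(x, A):
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return sum(p for (xi, yi), p in joint.items() if xi == x and yi in A) / p_x

for x in (0, 1):
    # condition 1: A -> kappa(x, A) is a probability measure
    assert abs(kappa(x, Y_SPACE) - 1.0) < 1e-12                            # total mass one
    assert abs(kappa(x, {0}) + kappa(x, {1}) - kappa(x, Y_SPACE)) < 1e-12  # additivity
print("kernel conditions hold on this example")
```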

The second condition holds trivially, but the proof of the first is more involved. It can be shown that if Y is a random element $$\Omega \to S$$ in a Radon space S, there exists a $$\kappa_{Y\mid X}$$ that satisfies the first condition. It is possible to construct more general spaces where a regular conditional probability distribution does not exist.

Relation to conditional expectation
For discrete and continuous random variables, the conditional expectation can be expressed as
 * $$\begin{aligned} \operatorname E[Y\mid X=x] &= \sum_y y \, P(Y=y\mid X=x) \\ \operatorname E[Y\mid X=x] &= \int y \, f_{Y\mid X}(x, y) \, \mathrm{d}y \end{aligned}$$

where $$f_{Y\mid X}(x, y)$$ is the conditional density of Y given X.

This result can be extended to measure theoretical conditional expectation using the regular conditional probability distribution:
 * $$\operatorname E[Y\mid X](\omega) = \int y \, \kappa_{Y\mid\sigma(X)}(\omega, \mathrm{d}y).$$
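In the discrete case the integral against the kernel reduces to a sum over its atoms. A short Python sketch, with an assumed toy pmf:

```python
# E[Y | X = x] = sum_y y * kappa_{Y|X}(x, {y}) for a discrete toy pmf (assumed).
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def cond_exp(x):
    """Conditional expectation of Y given X = x, computed from the kernel."""
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return sum(yi * p for (xi, yi), p in joint.items() if xi == x) / p_x

print(cond_exp(0))  # (0 * 0.1 + 1 * 0.3) / 0.4 = 0.75
```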

Formal definition
Let $$(\Omega, \mathcal F, P)$$ be a probability space, and let $$T:\Omega\rightarrow E$$ be a random variable, defined as a Borel-measurable function from $$\Omega$$ to its state space $$(E, \mathcal E)$$. One should think of $$T$$ as a way to "disintegrate" the sample space $$\Omega$$ into the fibers $$\{ T^{-1}(x) \}_{x \in E}$$. The disintegration theorem from measure theory then allows us to "disintegrate" the measure $$P$$ into a collection of measures, one for each $$x \in E$$. Formally, a regular conditional probability is defined as a function $$\nu:E \times\mathcal F \rightarrow [0,1],$$ called a "transition probability", where:
 * For every $$x \in E$$, $$\nu(x, \cdot)$$ is a probability measure on $$\mathcal F$$. Thus we provide one measure for each $$x \in E$$.
 * For all $$A\in\mathcal F$$, $$\nu(\cdot, A)$$ (a mapping $$E \to [0,1]$$) is $$\mathcal E$$-measurable, and
 * For all $$A\in\mathcal F$$ and all $$B\in\mathcal E$$
 * $$P\big(A\cap T^{-1}(B)\big) = \int_B \nu(x,A) \,(P\circ T^{-1})(\mathrm{d}x)$$

where $$P\circ T^{-1}$$ is the pushforward measure $$T_*P$$, i.e. the distribution of the random element $$T$$, and $$x$$ ranges over $$\operatorname{supp} T$$, the support of $$T_* P$$. Specifically, if we take $$B=E$$, then $$A \cap T^{-1}(E) = A$$, and so
 * $$P(A) = \int_E \nu(x,A) \, (P\circ T^{-1})(\mathrm{d}x),$$
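On a finite sample space this identity can be verified directly, since $$\nu(x, A)$$ is just P(A | T = x) and the integral against the pushforward becomes a weighted sum. A Python sketch; the space, measure, and random variable below are all illustrative assumptions.

```python
# Verify P(A) = sum_x nu(x, A) * P(T = x) on a finite Omega (all data assumed).
P = {"a": 0.2, "b": 0.3, "c": 0.5}   # probability measure on Omega = {a, b, c}
T = {"a": 0, "b": 0, "c": 1}         # random variable T : Omega -> {0, 1}
A = {"a", "c"}                       # an event A in F

def nu(x, A):
    """nu(x, A) = P(A | T = x) in the finite case."""
    fiber = {w for w in P if T[w] == x}  # T^{-1}({x})
    return sum(P[w] for w in A & fiber) / sum(P[w] for w in fiber)

total = sum(nu(x, A) * sum(P[w] for w in P if T[w] == x) for x in set(T.values()))
print(total, sum(P[w] for w in A))  # both sides equal P(A) = 0.7
```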

where $$\nu(x, A)$$ can be denoted, in more familiar terms, as $$P(A \mid T=x)$$.

Alternate definition
Consider a Radon space $$ \Omega $$ (that is, a probability space whose measure is defined on a Radon space endowed with the Borel sigma-algebra) and a real-valued random variable T. As discussed above, in this case there exists a regular conditional probability with respect to T. Moreover, we can alternatively define the regular conditional probability for an event A given a particular value t of the random variable T in the following manner:

 * $$ P (A\mid T=t) = \lim_{U\supset \{T= t\}} \frac {P(A\cap U)}{P(U)},$$

where the limit is taken over the net of open neighborhoods U of the event $$\{T = t\}$$ as they become smaller with respect to set inclusion. This limit is defined if and only if the probability space is Radon, and only for t in the support of T; it is the restriction of the transition probability to the support of T. To describe this limiting process rigorously:

For every $$\varepsilon > 0,$$ there exists an open neighborhood U of the event {T = t}, such that for every open V with $$\{T=t\} \subset V \subset U,$$
 * $$\left|\frac {P(A\cap V)}{P(V)}-L\right| < \varepsilon,$$

where $$L = P (A\mid T=t)$$ is the limit.
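This limiting process can be illustrated numerically. In the sketch below the sample space is the unit square with the uniform measure (approximated on a midpoint grid), T(w) is the first coordinate, and A is the region below the parabola y = x^2, so the exact limit is P(A | T = t) = t^2; all of these choices are illustrative assumptions.

```python
# Ratio P(A . U) / P(U) over shrinking neighborhoods U = {|T - t| < eps},
# for Omega = [0, 1]^2 with the uniform measure, T(w) = w[0], and
# A = { w : w[1] < w[0]**2 } (all assumed for illustration).
def ratio(t, eps, n=400):
    h = 1.0 / n
    pts = [((i + 0.5) * h, (j + 0.5) * h) for i in range(n) for j in range(n)]
    U = [w for w in pts if abs(w[0] - t) < eps]  # open neighborhood of {T = t}
    hits = [w for w in U if w[1] < w[0] ** 2]    # A intersect U
    return len(hits) / len(U)

for eps in (0.2, 0.05, 0.0125):
    print(eps, ratio(0.5, eps))  # approaches P(A | T = 0.5) = 0.25 as eps shrinks
```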