Probabilistic metric space

In mathematics, probabilistic metric spaces are a generalization of metric spaces in which the distance no longer takes values in the non-negative real numbers $\mathbb{R}_{\geq 0}$, but in distribution functions.

Let $D^+$ be the set of all probability distribution functions $F$ such that $F(0) = 0$; that is, $F$ is a nondecreasing, left-continuous mapping from $\mathbb{R}$ into $[0, 1]$ with $\sup_x F(x) = 1$.

Then given a non-empty set $S$ and a function $F\colon S \times S \to D^+$, where we denote $F(p, q)$ by $F_{p,q}$ for every $(p, q) \in S \times S$, the ordered pair $(S, F)$ is said to be a probabilistic metric space if:
 * For all $u$ and $v$ in $S$, $u = v$ if and only if $F_{u,v}(x) = 1$ for all $x > 0$.
 * For all $u$ and $v$ in $S$, $F_{u,v} = F_{v,u}$.
 * For all $u$, $v$ and $w$ in $S$, if $F_{u,v}(x) = 1$ and $F_{v,w}(y) = 1$ then $F_{u,w}(x + y) = 1$ for all $x, y > 0$.
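A simple way to see the axioms in action is the standard construction in which an ordinary metric $d$ induces a probabilistic metric via the step distribution $F_{p,q}(x) = 1$ for $x > d(p, q)$ and $0$ otherwise (the distance equals $d(p, q)$ with probability one). The following sketch, with illustrative helper names not taken from the text, checks all three axioms on a few sample points:

```python
# Sketch: an ordinary metric d induces a probabilistic metric via the
# step distribution F_{p,q}(x) = 1 if x > d(p, q), else 0.

def induced_F(d):
    """Turn an ordinary metric d into a map (p, q) -> distribution function."""
    def F(p, q):
        return lambda x: 1.0 if x > d(p, q) else 0.0
    return F

d = lambda p, q: abs(p - q)  # Euclidean metric on the real line
F = induced_F(d)

points = [0.0, 1.0, 2.5]
xs = [0.1 * k for k in range(1, 60)]  # sample arguments x > 0

# Axiom 1: F_{u,v}(x) = 1 for all x > 0 iff u = v.
for u in points:
    for v in points:
        assert all(F(u, v)(x) == 1.0 for x in xs) == (u == v)

# Axiom 2: symmetry, F_{u,v} = F_{v,u}.
for u in points:
    for v in points:
        assert all(F(u, v)(x) == F(v, u)(x) for x in xs)

# Axiom 3: F_{u,v}(x) = 1 and F_{v,w}(y) = 1 imply F_{u,w}(x + y) = 1.
for u in points:
    for v in points:
        for w in points:
            for x in xs:
                for y in xs:
                    if F(u, v)(x) == 1.0 and F(v, w)(y) == 1.0:
                        assert F(u, w)(x + y) == 1.0

print("all three axioms hold on the sample")
```

The third axiom holds here because $x > d(u, v)$ and $y > d(v, w)$ give $x + y > d(u, v) + d(v, w) \geq d(u, w)$ by the ordinary triangle inequality.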

History
Probabilistic metric spaces were first introduced by Karl Menger, who termed them statistical metric spaces. Shortly afterwards, Wald criticized the generalized triangle inequality and proposed an alternative one. However, both authors came to the conclusion that in some respects Wald's inequality was too stringent a requirement to impose on all probabilistic metric spaces, a view partly reflected in the work of Schweizer and Sklar. Probabilistic metric spaces were later found to be well suited for use with fuzzy sets, and in that setting are called fuzzy metric spaces.

Probability metric of random variables
A probability metric $D$ between two random variables $X$ and $Y$ may be defined, for example, as $$D(X, Y) = \int_{-\infty}^\infty \int_{-\infty}^\infty |x-y| F(x, y) \, dx \, dy,$$ where $F(x, y)$ denotes the joint probability density function of the random variables $X$ and $Y$. If $X$ and $Y$ are independent of each other, the equation above reduces to $$D(X, Y) = \int_{-\infty}^\infty \int_{-\infty}^\infty |x-y| f(x) g(y) \, dx \, dy,$$ where $f(x)$ and $g(y)$ are the probability density functions of $X$ and $Y$ respectively.
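For independent variables the double integral can be approximated numerically. As a sketch, with an illustrative choice of densities not taken from the text: for $X$ and $Y$ independent and uniform on $[0, 1]$ the integral equals $\mathrm{E}|X - Y| = 1/3$, which a simple midpoint-rule quadrature reproduces:

```python
# Midpoint-rule approximation of D(X, Y) = ∬ |x - y| f(x) g(y) dx dy
# for independent X, Y. Illustrative choice: both uniform on [0, 1],
# where the exact value is E|X - Y| = 1/3.

def probability_metric(f, g, a, b, n=400):
    """Approximate the double integral over [a, b] x [a, b] on an n x n grid."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        for j in range(n):
            y = a + (j + 0.5) * h
            total += abs(x - y) * f(x) * g(y)
    return total * h * h

uniform = lambda t: 1.0  # density of U(0, 1) on its support

d = probability_metric(uniform, uniform, 0.0, 1.0)
print(round(d, 4))  # close to 1/3
```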

One may easily show that such probability metrics do not satisfy the identity of indiscernibles axiom, or rather satisfy it if, and only if, both arguments $X$ and $Y$ are certain events described by Dirac delta probability density functions. In that case $$D(X, Y) = \int_{-\infty}^\infty \int_{-\infty}^\infty |x-y| \delta(x-\mu_x) \delta(y-\mu_y) \, dx \, dy = |\mu_x - \mu_y|,$$ and the probability metric simply reduces to the metric between the expected values $\mu_x$, $\mu_y$ of the variables $X$ and $Y$.

For all other random variables $X$, $Y$ the probability metric does not satisfy the identity of indiscernibles condition required of a metric, since in general $$D\left(X, X\right) > 0.$$

Example
For example, if the probability density functions of random variables $X$ and $Y$ are both normal distributions with the same standard deviation $\sigma$, integrating $D\left(X, Y\right)$ yields: $$D_{NN}(X, Y) = \mu_{xy} + \frac{2\sigma}{\sqrt\pi} \exp\left(-\frac{\mu_{xy}^2}{4\sigma^2}\right) - \mu_{xy} \operatorname{erfc} \left(\frac{\mu_{xy}}{2\sigma}\right),$$ where $\mu_{xy} = \left|\mu_x - \mu_y\right|$ and $\operatorname{erfc}(x)$ is the complementary error function.

In this case: $$\lim_{\mu_{xy} \to 0} D_{NN}(X, Y) = D_{NN}(X, X) = \frac{2\sigma}{\sqrt\pi}.$$
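The closed form above can be sanity-checked numerically. As a sketch (the parameter values below are arbitrary illustrations), a Monte Carlo estimate of $\mathrm{E}|X - Y|$ for independent normals with equal $\sigma$ agrees with $D_{NN}$:

```python
import math
import random

def d_nn(mu_x, mu_y, sigma):
    """Closed-form D_NN(X, Y) for independent normals with equal sigma."""
    m = abs(mu_x - mu_y)
    return (m
            + 2 * sigma / math.sqrt(math.pi) * math.exp(-m**2 / (4 * sigma**2))
            - m * math.erfc(m / (2 * sigma)))

# Monte Carlo estimate of E|X - Y| (arbitrary illustrative parameters).
random.seed(0)
mu_x, mu_y, sigma = 1.0, 2.5, 0.8
n = 200_000
mc = sum(abs(random.gauss(mu_x, sigma) - random.gauss(mu_y, sigma))
         for _ in range(n)) / n

print(round(d_nn(mu_x, mu_y, sigma), 3), round(mc, 3))
```

Setting `mu_x == mu_y` in `d_nn` returns $2\sigma/\sqrt{\pi}$, matching the limit stated above.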

Probability metric of random vectors
The probability metric of random variables may be extended into a metric $D(\mathbf{X}, \mathbf{Y})$ of random vectors $\mathbf{X}$, $\mathbf{Y}$ by substituting $|x-y|$ with any metric operator $d(\mathbf{x}, \mathbf{y})$: $$D(\mathbf{X}, \mathbf{Y}) = \int_\Omega \int_\Omega d(\mathbf{x}, \mathbf{y}) F(\mathbf{x}, \mathbf{y}) \, d\Omega_x \, d\Omega_y,$$ where $F(\mathbf{x}, \mathbf{y})$ is the joint probability density function of the random vectors $\mathbf{X}$ and $\mathbf{Y}$. For example, substituting $d(\mathbf{x}, \mathbf{y})$ with the Euclidean metric and assuming the vectors $\mathbf{X}$ and $\mathbf{Y}$ are mutually independent yields: $$D(\mathbf{X}, \mathbf{Y}) = \int_{\Omega} \int_{\Omega} \sqrt{\sum_i |x_i - y_i|^2} \, F(\mathbf{x}) G(\mathbf{y}) \, d\Omega_x \, d\Omega_y.$$
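As a numerical illustration of the vector case (the choice of distributions is an assumption for the sketch, not from the text): for independent standard normal vectors in $\mathbb{R}^3$, $\mathbf{X} - \mathbf{Y} \sim N(0, 2I_3)$, so $\|\mathbf{X} - \mathbf{Y}\|$ is $\sqrt{2}$ times a chi-distributed variable and $\mathrm{E}\|\mathbf{X} - \mathbf{Y}\| = 4/\sqrt{\pi} \approx 2.2568$, which a Monte Carlo estimate of the Euclidean-metric integral reproduces:

```python
import math
import random

# Monte Carlo estimate of D(X, Y) = E[d(X, Y)] with the Euclidean metric,
# for independent standard normal vectors X, Y in R^3 (illustrative choice).
# Since X - Y ~ N(0, 2 I_3), the exact value is 4 / sqrt(pi).

random.seed(0)
dim, n = 3, 100_000

def euclidean(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

total = 0.0
for _ in range(n):
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]
    y = [random.gauss(0.0, 1.0) for _ in range(dim)]
    total += euclidean(x, y)

mc = total / n
exact = 4 / math.sqrt(math.pi)
print(round(mc, 3), round(exact, 3))
```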