Asymptotic distribution

In mathematics and statistics, an asymptotic distribution is a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. One of the main uses of the idea of an asymptotic distribution is in providing approximations to the cumulative distribution functions of statistical estimators.

Definition
A sequence of distributions corresponds to a sequence of random variables Zi for i = 1, 2, .... In the simplest case, an asymptotic distribution exists if the probability distribution of Zi converges to a probability distribution (the asymptotic distribution) as i increases: see convergence in distribution. A special case arises when the random variables Zi converge to zero as i approaches infinity; here the asymptotic distribution is a degenerate distribution, concentrated at the value zero.

However, the most usual sense in which the term asymptotic distribution is used arises where the random variables Zi are modified by two sequences of non-random values. Thus if
 * $$Y_i=\frac{Z_i-a_i}{b_i}$$

converges in distribution to a non-degenerate distribution for some sequences of constants {ai} and {bi}, then Zi is said to have that distribution as its asymptotic distribution. If the distribution function of the asymptotic distribution is F, then for large n the following approximations hold:
 * $$P\left(\frac{Z_n-a_n}{b_n} \le x \right) \approx F(x) ,$$
 * $$P(Z_n \le z) \approx F\left(\frac{z-a_n}{b_n}\right) .$$
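The second approximation can be checked numerically. As a minimal sketch, take Z_n to be the sample mean of n Exponential(1) draws, so that a_n = μ = 1, b_n = σ/√n = 1/√n, and F is the standard normal distribution function; the function names below are illustrative, not from any particular library.

```python
import math
import random

def std_normal_cdf(x):
    """F(x) for the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def empirical_prob(n, z, trials=20_000, seed=0):
    """Monte Carlo estimate of P(Z_n <= z) for the mean of n Exp(1) draws."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.expovariate(1.0) for _ in range(n)) / n <= z
        for _ in range(trials)
    )
    return hits / trials

n, z = 100, 1.1                     # mean of 100 Exp(1) draws; mu = sigma = 1
a_n, b_n = 1.0, 1.0 / math.sqrt(n)  # centering and scaling sequences
approx = std_normal_cdf((z - a_n) / b_n)   # F((z - a_n) / b_n)
exact = empirical_prob(n, z)
print(f"asymptotic approximation: {approx:.3f}")
print(f"Monte Carlo estimate:     {exact:.3f}")
```

With n = 100 the two numbers agree to about two decimal places, illustrating how the asymptotic distribution serves as a finite-sample approximation.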

If an asymptotic distribution exists, it is not necessarily true that any one outcome of the sequence of random variables is a convergent sequence of numbers. It is the sequence of probability distributions that converges.

Central limit theorem
Perhaps the most common distribution to arise as an asymptotic distribution is the normal distribution. In particular, the central limit theorem provides an example where the asymptotic distribution is the normal distribution.


 * Central limit theorem:
 * Suppose $$\{X_1, X_2, \dots\}$$ is a sequence of i.i.d. random variables with $$\mathrm{E}[X_i] = \mu$$ and $$\operatorname{Var}[X_i] = \sigma^2 < \infty$$. Let $$S_n$$ be the average of $$\{X_1, \dots, X_n\}$$. Then as $$n$$ approaches infinity, the random variables $$\sqrt{n}(S_n - \mu)$$ converge in distribution to a normal $$N(0, \sigma^2)$$:
 * $$\sqrt{n}\left(S_n - \mu\right)\ \xrightarrow{d}\ N(0, \sigma^2).$$
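The statement above can be illustrated by simulation. In this sketch the X_i are Uniform(0, 1), so μ = 1/2 and σ² = 1/12, and the rescaled averages √n(S_n − μ) should have mean near 0 and variance near 1/12 for large n; the helper name is illustrative.

```python
import math
import random

def rescaled_mean(n, rng):
    """Draw sqrt(n) * (S_n - mu) for the mean S_n of n Uniform(0,1) variables."""
    s_n = sum(rng.random() for _ in range(n)) / n   # the average S_n
    return math.sqrt(n) * (s_n - 0.5)               # sqrt(n) * (S_n - mu)

rng = random.Random(42)
samples = [rescaled_mean(400, rng) for _ in range(10_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(f"sample mean of sqrt(n)(S_n - mu): {mean:+.4f}  (theory: 0)")
print(f"sample variance:                  {var:.4f}  (theory: {1/12:.4f})")
```

A histogram of `samples` would likewise trace out the bell curve of N(0, 1/12).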

The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it is reasonable only near the peak of the normal distribution; a very large number of observations is needed for the approximation to be accurate in the tails.
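This tail behaviour can be made concrete. As a sketch, for the mean of n = 10 Exponential(1) draws the exact tail probability P(S_n > z) is available in closed form via the Gamma–Poisson identity P(Gamma(n, 1) > x) = P(Poisson(x) ≤ n − 1), so it can be compared directly with the CLT normal approximation; the function names are illustrative.

```python
import math

def exact_tail(n, z):
    """Exact P(S_n > z) for the mean of n Exp(1) draws, via the
    Gamma-Poisson identity: S_n > z iff the sum Gamma(n, 1) exceeds n*z."""
    x = n * z
    return sum(math.exp(-x) * x**k / math.factorial(k) for k in range(n))

def normal_tail(n, z):
    """CLT approximation: mu = sigma = 1 for Exp(1), so
    sqrt(n) * (S_n - 1) is approximately N(0, 1)."""
    return 0.5 * (1.0 - math.erf(math.sqrt(n) * (z - 1.0) / math.sqrt(2.0)))

n = 10
for z in (1.0, 1.5, 2.0):
    print(f"z = {z}: exact = {exact_tail(n, z):.5f}, "
          f"normal approx = {normal_tail(n, z):.5f}")
```

Near the centre (z = 1.0) the two probabilities agree closely, but at z = 2.0 the normal approximation understates the true tail probability by several-fold, reflecting the skewness of the exponential distribution.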

Local asymptotic normality
Local asymptotic normality is a generalization of the central limit theorem. It is a property of a sequence of statistical models which allows this sequence to be asymptotically approximated by a normal location model, after a rescaling of the parameter. An important example in which local asymptotic normality holds is the case of independent and identically distributed sampling from a regular parametric model; this is just the central limit theorem.

Barndorff-Nielsen & Cox provide a direct definition of asymptotic normality.