Heavy-tailed distribution

In probability theory, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution. In many applications it is the right tail of the distribution that is of interest, but a distribution may have a heavy left tail, or both tails may be heavy.

There are three important subclasses of heavy-tailed distributions: the fat-tailed distributions, the long-tailed distributions, and the subexponential distributions. In practice, all commonly used heavy-tailed distributions belong to the subexponential class, introduced by Jozef Teugels.

There is still some disagreement over the use of the term heavy-tailed. Two other definitions are in use. Some authors use the term to refer to those distributions which do not have all their power moments finite; others restrict it to distributions that do not have a finite variance. The definition given in this article is the most general in use: it includes all distributions encompassed by the alternative definitions, as well as distributions such as the log-normal, which possesses all of its power moments yet is generally considered heavy-tailed. (Occasionally, heavy-tailed is used for any distribution that has heavier tails than the normal distribution.)

Definition of heavy-tailed distribution
The distribution of a random variable X with distribution function F is said to have a heavy (right) tail if the moment generating function of X, $$M_X(t)$$, is infinite for all t > 0.

That means

$$ \int_{-\infty}^\infty e^{t x} \,dF(x) = \infty \quad \mbox{for all } t>0. $$

This is also written in terms of the tail distribution function

$$ \overline{F}(x) \equiv \Pr[X>x] $$

as

$$ \lim_{x \to \infty} e^{t x}\overline{F}(x) = \infty \quad \mbox{for all } t >0. $$
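As a quick numerical illustration of this criterion (a sketch using assumed example tails, not distributions from any particular dataset), one can compare $$e^{tx}\overline{F}(x)$$ for a Pareto tail against an exponential tail:

```python
import math

# Numeric sketch (assumed example tails): a Pareto tail F(x) = x^(-2) is
# heavy-tailed, so e^(t*x) * F(x) -> infinity for every t > 0; the
# exponential tail F(x) = e^(-x) is not, since the product stays bounded
# whenever t < 1.
def pareto_tail(x, alpha=2.0):
    return x ** (-alpha)

def exp_tail(x):
    return math.exp(-x)

t = 0.1
for x in [10.0, 100.0, 1000.0]:
    print(x, math.exp(t * x) * pareto_tail(x), math.exp(t * x) * exp_tail(x))
# The Pareto column grows without bound; the exponential column decays.
```

The same divergence occurs for the Pareto case no matter how small t > 0 is chosen, which is exactly the content of the definition.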

Definition of long-tailed distribution
The distribution of a random variable X with distribution function F is said to have a long right tail if for all t > 0,

$$ \lim_{x \to \infty} \Pr[X>x+t\mid X>x] = 1, $$

or equivalently

$$ \overline{F}(x+t) \sim \overline{F}(x) \quad \mbox{as } x \to \infty. $$

This has the intuitive interpretation that if a long-tailed quantity is known to exceed some high level, the probability that it will exceed any given higher level approaches 1.

All long-tailed distributions are heavy-tailed, but the converse is false, and it is possible to construct heavy-tailed distributions that are not long-tailed.
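The defining conditional probability can be computed in closed form for simple cases. The following sketch (with assumed example distributions and illustrative parameter values) contrasts a long-tailed Pareto law with the memoryless exponential law:

```python
import math

# Illustrative sketch (assumed example distributions): for a Pareto(alpha)
# variable, Pr[X > x+t | X > x] = ((x+t)/x)^(-alpha), which tends to 1 as
# x -> infinity (long-tailed); for an exponential variable the conditional
# probability is e^(-t) for every x (memoryless, hence not long-tailed).
def pareto_overshoot(x, t, alpha=1.5):
    return ((x + t) / x) ** (-alpha)

def exp_overshoot(x, t):
    return math.exp(-t)    # independent of the level x

t = 5.0
for x in [10.0, 100.0, 1000.0]:
    print(x, pareto_overshoot(x, t), exp_overshoot(x, t))
# The Pareto probabilities climb toward 1; the exponential ones stay fixed.
```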

Subexponential distributions
Subexponentiality is defined in terms of convolutions of probability distributions. For two independent, identically distributed random variables $$ X_1,X_2$$ with a common distribution function $$F$$, the convolution of $$F$$ with itself, written $$F^{*2}$$ and called the convolution square, is defined using Lebesgue–Stieltjes integration by:



$$ \Pr[X_1+X_2 \leq x] = F^{*2}(x) = \int_{0}^x F(x-y)\,dF(y), $$

and the n-fold convolution $$F^{*n}$$ is defined inductively by the rule:

$$ F^{*n}(x) = \int_{0}^x F(x-y)\,dF^{*(n-1)}(y). $$

The tail distribution function $$\overline{F}$$ is defined as $$\overline{F}(x) = 1-F(x)$$.

A distribution $$F$$ on the positive half-line is subexponential if



$$ \overline{F^{*2}}(x) \sim 2\overline{F}(x) \quad \mbox{as } x \to \infty. $$

This implies that, for any $$n \geq 1$$,



$$ \overline{F^{*n}}(x) \sim n\overline{F}(x) \quad \mbox{as } x \to \infty. $$

The probabilistic interpretation of this is that, for a sum of $$n$$ independent random variables $$X_1,\ldots,X_n$$ with common distribution $$F$$,



$$ \Pr[X_1+ \cdots +X_n>x] \sim \Pr[\max(X_1, \ldots,X_n)>x] \quad \mbox{as } x \to \infty. $$

This is often known as the principle of the single big jump or catastrophe principle.
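The single-big-jump principle can be seen in simulation. The following Monte Carlo sketch uses an assumed Pareto example distribution with tail $$\overline{F}(x) = x^{-1}$$ for $$x \geq 1$$; the sample size, number of summands, and threshold are illustrative choices:

```python
import random

# Monte Carlo sketch of the single-big-jump principle for Pareto variables
# with tail F(x) = x^(-1), x >= 1 (an assumed example distribution).
random.seed(0)

def pareto_sample(alpha=1.0):
    # Inverse-transform sampling: for U uniform on (0, 1], X = U^(-1/alpha)
    # satisfies Pr[X > x] = x^(-alpha).
    return (1.0 - random.random()) ** (-1.0 / alpha)

n, trials, x = 5, 100_000, 100.0
sum_exceeds = max_exceeds = 0
for _ in range(trials):
    xs = [pareto_sample() for _ in range(n)]
    sum_exceeds += sum(xs) > x
    max_exceeds += max(xs) > x

print(sum_exceeds / trials, max_exceeds / trials)
# The two exceedance frequencies are close: a large sum is almost always
# driven by a single huge summand.
```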

A distribution $$F$$ on the whole real line is subexponential if the distribution $$F I([0,\infty))$$ is. Here $$I([0,\infty))$$ is the indicator function of the positive half-line. Alternatively, a random variable $$X$$ supported on the real line is subexponential if and only if $$X^+ = \max(0,X)$$ is subexponential.

All subexponential distributions are long-tailed, but examples can be constructed of long-tailed distributions that are not subexponential.

Common heavy-tailed distributions
All commonly used heavy-tailed distributions are subexponential.

Those that are one-tailed include:
 * the Pareto distribution;
 * the log-normal distribution;
 * the Lévy distribution;
 * the Weibull distribution with shape parameter greater than 0 but less than 1;
 * the Burr distribution;
 * the log-logistic distribution;
 * the log-gamma distribution;
 * the Fréchet distribution;
 * the q-Gaussian distribution;
 * the log-Cauchy distribution, sometimes described as having a "super-heavy tail" because it exhibits logarithmic decay producing a heavier tail than the Pareto distribution.

Those that are two-tailed include:
 * The Cauchy distribution, itself a special case of both the stable distribution and the t-distribution;
 * The family of stable distributions, excepting the special case of the normal distribution within that family. Some stable distributions are one-sided (or supported by a half-line), see e.g. Lévy distribution. See also financial models with long-tailed distributions and volatility clustering.
 * The t-distribution.
 * The skew log-normal cascade distribution.

Relationship to fat-tailed distributions
A fat-tailed distribution is a distribution for which the probability density function, for large x, goes to zero as a power $$x^{-a}$$. Since such a power eventually dominates the probability density function of any exponential distribution, fat-tailed distributions are always heavy-tailed. Some distributions, however, have a tail which goes to zero slower than an exponential function (meaning they are heavy-tailed) but faster than any power (meaning they are not fat-tailed); the log-normal distribution is an example. Many other heavy-tailed distributions, such as the log-logistic and Pareto distributions, are, however, also fat-tailed.
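The intermediate position of the log-normal can be checked numerically. In the sketch below, the standard log-normal tail is represented (up to slowly varying factors) by its dominant term $$\exp(-(\ln x)^2/2)$$, and t and a are illustrative choices of exponential rate and power:

```python
import math

# Numeric sketch: up to slowly varying factors, the standard log-normal tail
# behaves like exp(-(ln x)^2 / 2). Comparing log-tails shows it is eventually
# heavier than any exponential e^(-t*x) yet lighter than any power x^(-a)
# (t and a below are illustrative choices).
def lognormal_log_tail(x):
    return -(math.log(x) ** 2) / 2.0

t, a = 0.01, 10.0
for x in [1e2, 1e6, 1e10]:
    print(x, lognormal_log_tail(x), -t * x, -a * math.log(x))
# At x = 1e10 the log-normal log-tail lies above -t*x (heavier than
# exponential) but below -a*ln(x) (so it is not fat-tailed).
```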

Estimating the tail-index
There are parametric and non-parametric approaches to the problem of tail-index estimation.

To estimate the tail-index using the parametric approach, some authors fit the generalized extreme value (GEV) distribution or the Pareto distribution to the data, typically by maximum likelihood estimation (MLE).

Pickands' tail-index estimator
Let $$(X_n, n \geq 1)$$ be a sequence of independent and identically distributed random variables with distribution function $$F \in D(H(\xi))$$, the maximum domain of attraction of the generalized extreme value distribution $$H$$, where $$\xi \in \mathbb{R}$$. If $$\lim_{n\to\infty} k(n) = \infty$$ and $$\lim_{n\to\infty} \frac{k(n)}{n} = 0$$, then the Pickands tail-index estimator is

$$ \xi^\text{Pickands}_{(k(n),n)} = \frac{1}{\ln 2} \ln \left( \frac{X_{(n-k(n)+1,n)} - X_{(n-2k(n)+1,n)}}{X_{(n-2k(n)+1,n)} - X_{(n-4k(n)+1,n)}}\right), $$

where $$X_{(i,n)}$$ denotes the $$i$$-th order statistic of $$X_1,\ldots,X_n$$. This estimator converges in probability to $$\xi$$.
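A minimal sketch of this estimator, evaluated on simulated Pareto data with tail index $$\alpha = 2$$, for which the true extreme-value index is $$\xi = 1/\alpha = 0.5$$ (the sample size and the choice of k are illustrative, and the estimator has a fairly large variance):

```python
import math
import random

# Pickands estimator: xi = (1/ln 2) * ln((A - B) / (B - C)), where A, B, C
# are the order statistics X_(n-k+1,n), X_(n-2k+1,n), X_(n-4k+1,n).
def pickands_estimator(sample, k):
    xs = sorted(sample)        # ascending order statistics X_(1,n) .. X_(n,n)
    n = len(xs)
    a = xs[n - k]              # X_(n-k+1, n)
    b = xs[n - 2 * k]          # X_(n-2k+1, n)
    c = xs[n - 4 * k]          # X_(n-4k+1, n)
    return math.log((a - b) / (b - c)) / math.log(2)

random.seed(1)
alpha = 2.0   # assumed Pareto tail index; true xi = 1/alpha = 0.5
sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(100_000)]
print(pickands_estimator(sample, k=500))  # the true value is xi = 0.5
```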

Hill's tail-index estimator
Let $$(X_t, t \geq 1)$$ be a sequence of independent and identically distributed random variables with distribution function $$F \in D(H(\xi))$$, the maximum domain of attraction of the generalized extreme value distribution $$H$$, where $$\xi \in \mathbb{R}$$. The sample path is $$\{X_t: 1 \leq t \leq n\}$$, where $$n$$ is the sample size. If $$\{k(n)\}$$ is an intermediate order sequence, i.e. $$k(n) \in \{1,\ldots,n-1\}$$, $$k(n) \to \infty$$ and $$k(n)/n \to 0$$, then the Hill tail-index estimator is



$$ \xi^\text{Hill}_{(k(n),n)} = \frac 1 {k(n)} \sum_{i=n-k(n)+1}^n \left( \ln(X_{(i,n)}) - \ln (X_{(n-k(n)+1,n)}) \right), $$

where $$X_{(i,n)}$$ is the $$i$$-th order statistic of $$X_1, \dots, X_n$$. This estimator converges in probability to $$\xi$$, and is asymptotically normal provided the growth of $$k(n) \to \infty$$ is restricted by a higher-order regular variation property. Consistency and asymptotic normality extend to a large class of dependent and heterogeneous sequences, irrespective of whether $$X_t$$ is observed directly or is a computed residual or filtered value from a large class of models and estimators, including mis-specified models and models with dependent errors. Note that both the Pickands and Hill tail-index estimators are built from the upper order statistics of the sample.
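A minimal sketch of the Hill estimator, again on simulated Pareto data with $$\alpha = 2$$ and hence $$\xi = 0.5$$ (sample size and $$k(n)$$ are illustrative choices):

```python
import math
import random

# Hill estimator: the mean of ln X_(i,n) over the top k(n) order statistics,
# minus ln X_(n-k(n)+1,n), the log of the threshold order statistic.
def hill_estimator(sample, k):
    xs = sorted(sample)                   # ascending order statistics
    n = len(xs)
    log_threshold = math.log(xs[n - k])   # ln X_(n-k+1, n)
    return sum(math.log(xs[i]) for i in range(n - k, n)) / k - log_threshold

random.seed(2)
alpha = 2.0   # assumed Pareto tail index; true xi = 1/alpha = 0.5
sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(100_000)]
print(hill_estimator(sample, k=1000))  # converges to xi = 0.5 as n grows
```

For exact Pareto data the log-excesses above the threshold are exponentially distributed with mean $$\xi$$, which is why the estimate concentrates tightly around 0.5 here; on real data the choice of $$k(n)$$ trades bias against variance.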

Ratio estimator of the tail-index
The ratio estimator (RE-estimator) of the tail-index was introduced by Goldie and Smith. It is constructed similarly to Hill's estimator but uses a non-random "tuning parameter".

A comparison of Hill-type and RE-type estimators can be found in Novak.

Software

 * aest, a C tool for estimating the heavy-tail index.

Estimation of heavy-tailed density
Nonparametric approaches to the estimation of heavy- and superheavy-tailed probability density functions were given by Markovich. These include approaches based on variable-bandwidth and long-tailed kernel estimators; on a preliminary transform of the data to a new random variable on a finite or infinite interval that is more convenient for estimation, followed by an inverse transform of the resulting density estimate; and on a "piecing-together approach", which uses a parametric model for the tail of the density and a nonparametric model to approximate the mode of the density. Nonparametric estimators require an appropriate selection of tuning (smoothing) parameters, such as the bandwidth of a kernel estimator or the bin width of a histogram. Well-known data-driven methods for such selection include cross-validation and its modifications, as well as methods based on minimizing the mean squared error (MSE), its asymptotic form, and their upper bounds. A discrepancy method uses well-known nonparametric statistics, such as the Kolmogorov–Smirnov, von Mises, and Anderson–Darling statistics, as a metric in the space of distribution functions, with quantiles of these statistics serving as a known uncertainty or discrepancy value. The bootstrap is another tool for selecting smoothing parameters, using approximations of the unknown MSE obtained from different re-sampling schemes.