
In statistics and econometrics, the generalized beta distribution (or GB) is a family of continuous probability distributions of positive random variables with five parameters. The GB distribution includes as special or limiting cases such popular distributions as the beta (both first and second kinds), chi-squared, gamma, F, half-normal, and uniform distributions, among others. The generalized beta distribution allows for considerable flexibility in statistical modeling and testing.

Probability density function
The probability density function of the generalized beta distribution is:



 * $$f(x;a,b,c,p,q) = \frac{|a|x^{ap-1}(1-(1-c)(x/b)^a)^{q-1}}{b^{ap}\mathrm{B}(p,q)(1+c(x/b)^a)^{p+q}}$$

The density is nonzero for $$0 < x^a < \frac{b^a}{1-c}$$, with $$0 \le c \le 1$$ and $$b, p, q > 0$$.

where $$\mathrm{B}(p,q)$$ is the beta function.

A random variable X that is distributed generalized beta with parameters $$a, b, c, p, q$$ is denoted


 * $$X \sim \textrm{GB}(a, b, c, p, q)$$

Cumulative distribution function
There does not exist a closed-form expression for the cumulative distribution function, so it must be computed numerically. However, the cumulative distribution functions of most of the GB's special cases can be derived analytically. For example, by setting $$c=0$$ one obtains the GB1 distribution with CDF:


 * $$F(x;a,b,c=0,p,q) = \frac{_2F_1[p,1-q;p+1;z]\,z^p}{p\mathrm{B}(p,q)} = I_z(p,q)$$

where $$z=(x/b)^a$$, $$_2F_1$$ is the hypergeometric series, and $$I_z(p,q)$$ is the regularized incomplete beta function. Similarly, by setting $$c=1$$ one obtains the GB2 distribution with CDF:


 * $$F(x;a,b,c=1,p,q) = \frac{_2F_1[p,1-q;p+1;z]\,z^p}{p\mathrm{B}(p,q)} = I_z(p,q)$$

where $$z=\frac{(x/b)^a}{1+(x/b)^a}$$.
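
As a sanity check, the GB1 CDF identity can be verified numerically. The sketch below assumes SciPy is available; `scipy.special.betainc` computes the regularized incomplete beta function, which the hypergeometric form should reproduce.

```python
# Verify the GB1 CDF identity: z^p * 2F1(p, 1-q; p+1; z) / (p * B(p, q))
# equals the regularized incomplete beta function evaluated at z = (x/b)^a.
from scipy.special import beta as beta_fn, betainc, hyp2f1

def gb1_cdf_hypergeometric(x, a, b, p, q):
    """GB1 CDF via the hypergeometric-series form."""
    z = (x / b) ** a
    return z ** p * hyp2f1(p, 1 - q, p + 1, z) / (p * beta_fn(p, q))

def gb1_cdf_incomplete_beta(x, a, b, p, q):
    """GB1 CDF via the regularized incomplete beta function."""
    z = (x / b) ** a
    return betainc(p, q, z)

a, b, p, q = 2.0, 1.5, 0.7, 3.0   # illustrative parameter values
for x in (0.3, 0.8, 1.2):
    assert abs(gb1_cdf_hypergeometric(x, a, b, p, q)
               - gb1_cdf_incomplete_beta(x, a, b, p, q)) < 1e-10
```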

Properties
The mode of a Beta distributed random variable X with parameters α > 1 and β > 1 is:
 * $$\frac{\alpha - 1}{\alpha + \beta - 2}$$

The $$k$$th raw moment of a GB-distributed random variable X is:
 * $$\operatorname{E}[X^k] = \frac{b^k\mathrm{B}(p+k/a,q)}{\mathrm{B}(p,q)}{}_2F_1 \left[\begin{matrix} p+k/a & k/a \\ p+q+k/a \end{matrix} ; c \right]$$

where $$_2F_1$$ is the hypergeometric series. From these raw moments follow the mean ($$\mu$$), variance (second central moment), skewness (standardized third central moment), and kurtosis excess (standardized fourth central moment minus 3) of a GB-distributed random variable X. The mean is:
 * $$\mu = \operatorname{E}(X) = \frac{b\mathrm{B}(p+1/a,q)}{\mathrm{B}(p,q)}{}_2F_1 \left[\begin{matrix} p+1/a & 1/a \\ p+q+1/a \end{matrix} ; c \right]$$

The variance is:
 * $$\sigma^2 = \operatorname{E}(X - \mu)^2 = \frac{b^2\mathrm{B}(p+2/a,q)}{\mathrm{B}(p,q)}{}_2F_1 \left[\begin{matrix} p+2/a & 2/a \\ p+q+2/a \end{matrix} ; c \right]-\mu^2$$

The skewness is:
 * $$\gamma_1 = \frac{\operatorname{E}(X - \mu)^3}{[\operatorname{E}(X - \mu)^2]^{3/2}} = \frac{1}{\sigma^3}\left(\frac{b^3\mathrm{B}(p+3/a,q)}{\mathrm{B}(p,q)}{}_2F_1 \left[\begin{matrix} p+3/a & 3/a \\ p+q+3/a \end{matrix} ; c \right]-3\mu\sigma^2-\mu^3\right)$$

The kurtosis excess is:
 * $$\gamma_2 = \frac{\operatorname{E}(X - \mu)^4}{[\operatorname{E}(X - \mu)^2]^{2}}-3 = \frac{1}{\sigma^4}\left(\frac{b^4\mathrm{B}(p+4/a,q)}{\mathrm{B}(p,q)}{}_2F_1 \left[\begin{matrix} p+4/a & 4/a \\ p+q+4/a \end{matrix} ; c \right]-4\mu\gamma_1\sigma^3-6\mu^{2}\sigma^2-\mu^4\right)-3$$
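
The raw-moment formula can be checked on a special case. With $$a=b=1$$ and $$c=0$$ the GB reduces to the standard beta distribution, whose moments SciPy computes directly; the sketch below (assuming SciPy) compares the two.

```python
# The GB raw-moment formula, specialized to a = b = 1, c = 0 (the standard
# beta distribution), should reproduce the familiar beta moments.
from scipy.special import beta as beta_fn, hyp2f1
from scipy.stats import beta as beta_dist

def gb_raw_moment(k, a, b, c, p, q):
    """k-th raw moment of GB(a, b, c, p, q) via the hypergeometric formula."""
    return (b ** k * beta_fn(p + k / a, q) / beta_fn(p, q)
            * hyp2f1(p + k / a, k / a, p + q + k / a, c))

p, q = 2.0, 5.0
for k in (1, 2, 3, 4):
    formula = gb_raw_moment(k, a=1.0, b=1.0, c=0.0, p=p, q=q)
    exact = beta_dist(p, q).moment(k)   # k-th raw moment of Beta(p, q)
    assert abs(formula - exact) < 1e-12
```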

Quantities of information
Given two beta distributed random variables, X ~ Beta(α, β) and Y ~ Beta(α', β'), the differential entropy of X is
 * $$h(X) = \ln\mathrm{B}(\alpha,\beta)-(\alpha-1)\psi(\alpha)-(\beta-1)\psi(\beta)+(\alpha+\beta-2)\psi(\alpha+\beta)$$

where $$\psi$$ is the digamma function.

The cross entropy is
 * $$H(X,Y) = \ln\mathrm{B}(\alpha',\beta')-(\alpha'-1)\psi(\alpha)-(\beta'-1)\psi(\beta)+(\alpha'+\beta'-2)\psi(\alpha+\beta).\,$$

It follows that the Kullback–Leibler divergence between these two beta distributions is



 * $$D_{\mathrm{KL}}(X,Y) = \ln\frac{\mathrm{B}(\alpha',\beta')}{\mathrm{B}(\alpha,\beta)} - (\alpha'-\alpha)\psi(\alpha) - (\beta'-\beta)\psi(\beta) + (\alpha'-\alpha+\beta'-\beta)\psi(\alpha+\beta).$$
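
The closed form can be checked against a direct numerical evaluation of the divergence integral. A minimal sketch, assuming SciPy and NumPy, with illustrative parameter values:

```python
# Closed-form KL divergence between two beta distributions, checked against
# direct numerical integration of p(x) * log(p(x) / q(x)) over (0, 1).
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln, digamma
from scipy.stats import beta as beta_dist

def kl_beta(a1, b1, a2, b2):
    """D_KL(Beta(a1, b1) || Beta(a2, b2)) via the digamma formula."""
    return (betaln(a2, b2) - betaln(a1, b1)
            - (a2 - a1) * digamma(a1)
            - (b2 - b1) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

a1, b1, a2, b2 = 2.0, 3.0, 4.0, 1.5
p_pdf, q_pdf = beta_dist(a1, b1).pdf, beta_dist(a2, b2).pdf
numeric, _ = quad(lambda x: p_pdf(x) * np.log(p_pdf(x) / q_pdf(x)),
                  1e-9, 1 - 1e-9)   # avoid the singular endpoints
assert abs(kl_beta(a1, b1, a2, b2) - numeric) < 1e-6
assert kl_beta(a1, b1, a1, b1) == 0.0   # divergence of a distribution from itself
```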

Shapes
The beta density function can take on different shapes depending on the values of the two parameters:
 * $$\alpha = 1,\ \beta = 1$$ is the uniform [0,1] distribution
 * $$\alpha < 1,\ \beta < 1$$ is U-shaped (blue plot)
 * $$\alpha = \tfrac{1}{2},\ \beta = \tfrac{1}{2}$$ is the arcsine distribution
 * $$\alpha < 1,\ \beta \geq 1$$ or $$\alpha = 1,\ \beta > 1$$ is strictly decreasing (red plot)
 * $$\alpha = 1,\ \beta > 2$$ is strictly convex
 * $$\alpha = 1,\ \beta = 2$$ is a straight line
 * $$\alpha = 1,\ 1 < \beta < 2$$ is strictly concave
 * $$\alpha = 1,\ \beta < 1$$ or $$\alpha > 1,\ \beta \leq 1$$ is strictly increasing (green plot)
 * $$\alpha > 2,\ \beta = 1$$ is strictly convex
 * $$\alpha = 2,\ \beta = 1$$ is a straight line
 * $$1 < \alpha < 2,\ \beta = 1$$ is strictly concave
 * $$\alpha > 1,\ \beta > 1$$ is unimodal (magenta & cyan plots)

Moreover, if $$\alpha = \beta$$ then the density function is symmetric about 1/2 (blue & teal plots).
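
A few of these shape claims can be spot-checked numerically. The sketch below assumes SciPy and NumPy are available.

```python
# Spot-check shape claims with scipy.stats.beta:
# Beta(1, 1) is uniform, Beta(1, 2) has a straight-line density 2(1 - x),
# and Beta(a, a) is symmetric about 1/2.
import numpy as np
from scipy.stats import beta

x = np.linspace(0.05, 0.95, 19)

# Beta(1, 1): constant density 1 on [0, 1].
assert np.allclose(beta(1, 1).pdf(x), 1.0)

# Beta(1, 2): density 2(1 - x), a straight line.
assert np.allclose(beta(1, 2).pdf(x), 2 * (1 - x))

# Beta(3, 3): density symmetric about 1/2.
assert np.allclose(beta(3, 3).pdf(x), beta(3, 3).pdf(1 - x))
```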

Parameter estimation
Let


 * $$\bar{x} = \frac{1}{N}\sum_{i=1}^N x_i$$

be the sample mean and


 * $$v = \frac{1}{N-1}\sum_{i=1}^N (x_i - \bar{x})^2$$

be the sample variance. The method-of-moments estimates of the parameters are


 * $$\hat{\alpha} = \bar{x} \left(\frac{\bar{x} (1 - \bar{x})}{v} - 1 \right),$$


 * $$\hat{\beta} = (1-\bar{x}) \left(\frac{\bar{x} (1 - \bar{x})}{v} - 1 \right).$$

When the distribution is required over an interval other than [0, 1], say $$[\ell,h]$$, replace $$\bar{x}$$ with $$\frac{\bar{x}-\ell}{h-\ell}$$ and $$v$$ with $$\frac{v}{(h-\ell)^2}$$ in the above equations.
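
The method-of-moments estimators above can be sketched directly; the example below (assuming NumPy) checks them on data simulated from a known beta distribution.

```python
# Method-of-moments estimation for Beta(alpha, beta) from a sample on [0, 1],
# checked on data simulated from Beta(2, 5).
import numpy as np

def beta_method_of_moments(x):
    """Return (alpha_hat, beta_hat) from the sample mean and variance."""
    xbar = x.mean()
    v = x.var(ddof=1)                       # unbiased sample variance
    common = xbar * (1 - xbar) / v - 1
    return xbar * common, (1 - xbar) * common

rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=200_000)
alpha_hat, beta_hat = beta_method_of_moments(sample)
assert abs(alpha_hat - 2.0) < 0.05
assert abs(beta_hat - 5.0) < 0.1
```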

There is no closed form for the maximum likelihood estimates of the parameters; they must be computed numerically.

Generating beta-distributed random variates
If $$X$$ and $$Y$$ are independent, with $$X \sim {\rm \Gamma}(\alpha, \theta)\,$$ and $$Y \sim {\rm \Gamma}(\beta, \theta)\,$$ then $$\tfrac{X}{X+Y} \sim {\rm Beta}(\alpha, \beta)\,$$, so one algorithm for generating beta variates is to generate X/(X+Y), where X is a gamma variate with parameters ($$\alpha, 1$$) and Y is an independent gamma variate with parameters ($$\beta, 1$$).

Also, the $$k$$th order statistic of $$n$$ uniformly distributed variates is $${\rm Beta}(k, n+1-k)$$, so an alternative when $$\alpha$$ and $$\beta$$ are small integers is to generate $$\alpha + \beta - 1$$ uniform variates and take the $$\alpha$$-th smallest.
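
Both generation methods can be sketched in a few lines; the example below (assuming NumPy) compares the sample means of the two approaches for integer parameters.

```python
# Two ways to generate Beta(alpha, beta) variates: as a ratio of gamma
# variates, and (for integer parameters) as an order statistic of uniforms.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 3, 2
n = 100_000

# Gamma-ratio method: X / (X + Y) with X ~ Gamma(alpha, 1), Y ~ Gamma(beta, 1).
x = rng.gamma(alpha, 1.0, size=n)
y = rng.gamma(beta, 1.0, size=n)
gamma_ratio = x / (x + y)

# Order-statistic method: the alpha-th smallest of alpha + beta - 1 uniforms.
u = rng.uniform(size=(n, alpha + beta - 1))
order_stat = np.sort(u, axis=1)[:, alpha - 1]

# Both samples should have mean close to alpha / (alpha + beta) = 0.6.
assert abs(gamma_ratio.mean() - 0.6) < 0.01
assert abs(order_stat.mean() - 0.6) < 0.01
```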

Transformations

 * If $$X \sim {\rm Beta}(a, b)\,$$ then $$ 1-X \sim {\rm Beta}(b, a) \,$$
 * If $$X \sim {\rm Beta}(a,b)\,$$ then $$ \tfrac{X}{1-X} \sim {\rm BetaPrime}(a,b) \,$$, the beta prime distribution (also called the beta distribution of the second kind).
 * If $$X \sim {\rm Beta}(\tfrac{n}{2}, \tfrac{m}{2})\,$$ then $$\tfrac{mX}{n(1-X)} \sim F(n,m)$$ (assuming n>0 and m>0)
 * If $$X \sim {\rm Beta}\left(1+\lambda\tfrac{c-min}{max-min},1+\lambda\tfrac{max-c}{max-min}\right) \!\! \,$$ then  $$\!\! min+X(max-min) \sim PERT(min,max,c,\lambda) \,,$$ where PERT denotes a distribution used in PERT analysis.
 * If $$X \sim {\rm Beta}(1, \beta)\,$$ then $$X \sim \,$$ Kumaraswamy distribution with parameters $$(1,\beta)\,$$
 * If $$X \sim {\rm Beta}(\alpha, 1)\,$$ then $$X \sim \,$$ Kumaraswamy distribution with parameters $$(\alpha,1)\,$$
 * If $$ X \sim {\rm Beta}(\alpha, 1)\,$$ then $$ -\ln(X) \sim \textrm{Exponential}(\alpha)\,$$

Special and limiting cases

 * $${\rm Beta}(1, 1) \sim {\rm U}(0,1) \,$$ the standard uniform distribution.
 * If $$X \sim {\rm Beta}(\tfrac{3}{2}, \tfrac{3}{2})\,$$ and $$r>0\,$$ then $$2rX-r \sim \,$$ Wigner semicircle distribution.
 * $${\rm Beta}(\tfrac{1}{2},\tfrac{1}{2})\ $$ is the Jeffreys prior for a proportion and is equivalent to the arcsine distribution.
 * $$\lim_{n \to \infty}n{\rm Beta}(1,n) = {\rm Exp}(1) \,$$  the exponential distribution
 * $$\lim_{n \to \infty} n {\rm Beta}(k,n) = \textrm{Gamma}(k,1 ) \,$$ the gamma distribution

Derived from other distributions

 * The kth order statistic of a sample of size n from the uniform distribution is a beta random variable, $$U_{(k)} \sim B(k,n+1-k).$$
 * If $$X \sim {\rm \Gamma}(\alpha, \theta)\,$$ and $$Y \sim {\rm \Gamma}(\beta, \theta)\,$$ then $$\tfrac{X}{X+Y} \sim {\rm Beta}(\alpha, \beta)\,$$
 * If $$X \sim \chi^2(\alpha)\,$$ and $$Y \sim \chi^2(\beta)\,$$ then $$\tfrac{X}{X+Y} \sim {\rm Beta}(\tfrac{\alpha}{2}, \tfrac{\beta}{2})\,$$
 * If $$X \sim \operatorname{Unif}(0,1)$$ and $$\alpha\,>0$$ then $$X^{\frac{1}{\alpha}}\sim\operatorname{Beta}(\alpha, 1)$$.
 * If $$X \sim {\rm U}(0, 1]\,$$, then $$X^2 \sim {\rm Beta}(\tfrac{1}{2},1) \ $$, which is a special case of the Beta distribution called the power-function distribution.

Combination with other distributions

 * If $$X \sim {\rm Beta}(\alpha, \beta)\,$$ and $$Y \sim F(2\alpha,2\beta)\,$$ then $$\Pr(X \leq \tfrac{\alpha}{\alpha+\beta x}) = \Pr(Y \geq x)\,$$ for all x > 0.

Compounding with other distributions

 * If $$ p \sim \mathrm{Beta}(\alpha,\beta)\,$$ and $$ X \sim \operatorname{Bin}(k,p)\,$$ then $$ X \sim \,$$ beta-binomial distribution
 * If $$ p \sim \mathrm{Beta}(\alpha,\beta)\,$$ and $$ X \sim \operatorname{NB}(r,p)\,$$ then $$ X \sim \,$$ beta negative binomial distribution
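The first compounding relation can be checked by simulation; the sketch below assumes a recent SciPy (which provides `scipy.stats.betabinom`) and NumPy.

```python
# Compounding: drawing p ~ Beta(alpha, beta) and then X ~ Binomial(k, p)
# yields the beta-binomial distribution.
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(2)
alpha, beta, k = 2.0, 3.0, 10
n = 200_000

p = rng.beta(alpha, beta, size=n)
x = rng.binomial(k, p)

# Compare the empirical pmf with scipy's beta-binomial pmf.
empirical = np.bincount(x, minlength=k + 1) / n
exact = betabinom(k, alpha, beta).pmf(np.arange(k + 1))
assert np.max(np.abs(empirical - exact)) < 0.01
```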

Generalisations

 * The Dirichlet distribution is a multivariate generalization of the beta distribution. Univariate marginals of the Dirichlet distribution have a beta distribution.
 * The beta distribution is a special case of the Pearson type I distribution
 * $$ {\rm Beta}(\alpha, \beta) = \lim_{\delta \to 0}{\rm NonCentralBeta}(\alpha,\beta,\delta)\,$$ the noncentral beta distribution

Other

 * Binomial opinions in subjective logic are equivalent to Beta distributions.

Order statistics
The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution. This result is summarized as:
 * $$U_{(k)} \sim B(k,n+1-k).$$

From this, and application of the theory related to the probability integral transform, the distribution of any individual order statistic from any continuous distribution can be derived.

Rule of succession
A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with success probability p, p should be estimated as $$\frac{s+1}{n+2}$$. This estimate is the expected value of the posterior distribution over p, namely Beta(s + 1, n − s + 1), which is given by Bayes' rule if one assumes a uniform prior over p (i.e., Beta(1, 1)) and then observes s successes in n trials.
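
The estimate can be sketched in a couple of lines; the example below (assuming SciPy) confirms that (s + 1)/(n + 2) is exactly the mean of the Beta(s + 1, n − s + 1) posterior.

```python
# Rule of succession: (s + 1) / (n + 2) is the mean of the posterior
# Beta(s + 1, n - s + 1) obtained from a uniform Beta(1, 1) prior.
from scipy.stats import beta

def rule_of_succession(s, n):
    """Laplace's estimate of p after s successes in n trials."""
    return (s + 1) / (n + 2)

s, n = 7, 10   # illustrative counts
posterior = beta(s + 1, n - s + 1)
assert abs(posterior.mean() - rule_of_succession(s, n)) < 1e-12
```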

Bayesian inference
Beta distributions are used extensively in Bayesian inference, since beta distributions provide a family of conjugate prior distributions for binomial (including Bernoulli) and geometric distributions. The Beta(0,0) distribution is an improper prior and is sometimes used to represent ignorance of parameter values.

The domain of the beta distribution can be viewed as a probability, and the beta distribution is often used to describe the distribution of an unknown probability value, typically as the prior distribution over a probability parameter such as the probability of success in a binomial or Bernoulli distribution. Indeed, the beta distribution is the conjugate prior of the binomial and Bernoulli distributions.

The beta distribution is the special case of the Dirichlet distribution with only two parameters, and the beta is conjugate to the binomial and Bernoulli distributions in exactly the same way as the Dirichlet distribution is conjugate to the multinomial distribution and categorical distribution.

In Bayesian inference, the beta distribution can be derived as the posterior distribution of the parameter p of a binomial distribution after observing α − 1 successes (with probability p of success) and β − 1 failures (with probability 1 − p of failure). Another way to express this is that placing a $$\mathrm{Beta}(\alpha,\beta)$$ prior on the parameter p of a binomial distribution is equivalent to adding α pseudo-observations of "success" and β pseudo-observations of "failure" to the actual numbers observed, then estimating p by the proportion of successes over both real and pseudo-observations. If α and β are greater than 0, this smooths the estimate by ensuring that some positive probability mass is assigned to every value of p even when no corresponding observations occur. Values of α and β less than 1 favor sparsity, i.e. distributions where p is close to either 0 or 1. In effect, α and β together function as a concentration parameter; see that article for more details.
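
The pseudo-count interpretation can be sketched directly; the example below (assuming SciPy, with illustrative prior values) checks that the posterior mean equals the smoothed proportion.

```python
# Conjugate updating: a Beta(a0, b0) prior combined with k successes in n
# Bernoulli trials gives a Beta(a0 + k, b0 + n - k) posterior, whose mean is
# the pseudo-count-smoothed proportion (k + a0) / (n + a0 + b0).
from scipy.stats import beta

a0, b0 = 2.0, 2.0   # prior pseudo-counts (illustrative values)
k, n = 3, 10        # observed successes and trials

post = beta(a0 + k, b0 + n - k)
smoothed = (k + a0) / (n + a0 + b0)
assert abs(post.mean() - smoothed) < 1e-12
```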

Task duration modeling
The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, critical path method (CPM) and other project management / control systems to describe the time to completion of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution:


 * $$\mu(X) = \frac{a + 4b + c}{6}$$
 * $$\sigma(X) = \frac{c - a}{6}$$

where a is the minimum, c is the maximum, and b is the most likely value.

This set of approximations is known as three-point estimation. It is exact only for particular values of α and β, specifically when:


 * $$\alpha = 3 - \sqrt2 \, $$
 * $$\beta = 3 + \sqrt2 \, $$

or vice versa.

For most other beta distributions these are notably poor approximations, exhibiting average errors of 40% in the mean and 549% in the variance.
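
The exactness claim can be verified with a short calculation: taking b to be the mode of a beta distribution on [a, c] with shapes α = 3 − √2 and β = 3 + √2, the exact mean and standard deviation reduce to the PERT shorthand. A minimal check using only the standard library, with illustrative endpoints:

```python
# The PERT shorthand mean (a + 4b + c)/6 and sd (c - a)/6 are exact for a
# four-parameter beta with shapes alpha = 3 - sqrt(2), beta = 3 + sqrt(2),
# where b is the mode of that distribution.
import math

a, c = 2.0, 14.0                 # minimum and maximum task durations
alpha = 3 - math.sqrt(2)
beta = 3 + math.sqrt(2)

# Mode, mean, and sd of a beta on [a, c] with shapes (alpha, beta).
b = a + (c - a) * (alpha - 1) / (alpha + beta - 2)   # mode = "most likely"
mean = a + (c - a) * alpha / (alpha + beta)
sd = (c - a) * math.sqrt(alpha * beta
                         / ((alpha + beta) ** 2 * (alpha + beta + 1)))

assert abs(mean - (a + 4 * b + c) / 6) < 1e-9
assert abs(sd - (c - a) / 6) < 1e-9
```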

Mean and sample size
The beta distribution may also be reparameterized in terms of its mean μ (0 ≤ μ ≤ 1) and sample size ν = α + β (ν > 0). This is useful in Bayesian parameter estimation if one wants to place an unbiased (uniform) prior over the mean. For example, one may administer a test to a number of individuals. If it is assumed that each person's score (0 ≤ θ ≤ 1) is drawn from a population-level Beta distribution, then an important statistic is the mean of this population-level distribution. The mean and sample size parameters are related to the shape parameters α and β via


 * $$\alpha = \mu \nu$$
 * $$\beta = (1 - \mu) \nu$$

Under this parameterization, one can place a uniform prior over the mean, and a vague prior (such as an exponential or gamma distribution) over the positive reals for the sample size.
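
This reparameterization can be sketched directly; the example below (assuming SciPy) checks that the resulting distribution has mean μ and that larger ν concentrates it.

```python
# Mean / sample-size parameterization: alpha = mu * nu, beta = (1 - mu) * nu.
from scipy.stats import beta

def beta_from_mean_size(mu, nu):
    """Beta distribution with mean mu and 'sample size' nu = alpha + beta."""
    return beta(mu * nu, (1 - mu) * nu)

mu, nu = 0.3, 10.0
dist = beta_from_mean_size(mu, nu)
assert abs(dist.mean() - mu) < 1e-12

# Larger nu concentrates the distribution around the same mean.
assert beta_from_mean_size(mu, 100.0).var() < dist.var()
```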

The Balding–Nichols model is a similar two-parameter reparameterization of the beta distribution.

Four parameters
A beta distribution with the two shape parameters α and β is supported on the range [0,1]. It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum and maximum values of the distribution.

The probability density function of the four-parameter beta distribution is given by

 * $$f(y; \alpha, \beta, a, b) = \frac{1}{\mathrm{B}(\alpha, \beta)} \frac{(y-a)^{\alpha-1}(b-y)^{\beta-1}}{(b-a)^{\alpha+\beta-1}}$$

for $$a \le y \le b$$.

The mean, mode and variance of the four-parameter beta distribution are:


 * $$ \text{mean} = \frac{\alpha b+ \beta a}{\alpha+\beta}\ $$


 * $$ \text{mode} = \frac{(\alpha-1) b+(\beta-1) a}{\alpha+\beta-2} \qquad \text{for} \ \alpha>1, \beta>1\ $$


 * $$ \text{variance} = \frac{\alpha\beta (b-a)^2}{(\alpha+\beta)^2(\alpha+\beta+1)}\ $$

The standard form can be obtained by letting

 * $$x = \frac{y-a}{b-a}.$$
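
In practice the four-parameter beta corresponds to SciPy's two-parameter beta shifted and scaled; the sketch below (assuming SciPy, with illustrative values) checks the mean and variance formulas against SciPy's own.

```python
# Four-parameter beta as scipy's beta with loc = a (minimum) and
# scale = b - a. Check the mean and variance formulas directly.
from scipy.stats import beta

alpha, bshape, a, b = 2.0, 5.0, 1.0, 4.0
dist = beta(alpha, bshape, loc=a, scale=b - a)

mean = (alpha * b + bshape * a) / (alpha + bshape)
variance = (alpha * bshape * (b - a) ** 2
            / ((alpha + bshape) ** 2 * (alpha + bshape + 1)))

assert abs(dist.mean() - mean) < 1e-12
assert abs(dist.var() - variance) < 1e-12
```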