Modified half-normal distribution

In probability theory and statistics, the modified half-normal distribution (MHN) is a three-parameter family of continuous probability distributions supported on the positive part of the real line. It can be viewed as a generalization of multiple families, including the half-normal distribution, the truncated normal distribution, the gamma distribution, and the square root of the gamma distribution, all of which are special cases of the MHN distribution. It is therefore a flexible probability model for analyzing real-valued positive data. The name of the distribution is motivated by the similarity of its density function to that of the half-normal distribution.

In addition to being used as a probability model, the MHN distribution also appears in Markov chain Monte Carlo (MCMC)-based Bayesian procedures, including Bayesian modeling of directional data, Bayesian binary regression, and Bayesian graphical modeling.

In Bayesian analysis, new distributions often appear as conditional posterior distributions; the usage of many such probability distributions is too contextual, and they may not carry significance in a broader perspective. Additionally, many such distributions lack a tractable representation of their distributional properties, such as a known functional form of the normalizing constant. The MHN distribution, however, occurs in diverse areas of research, signifying its relevance to contemporary Bayesian statistical modeling and the associated computation.

The moments (including the variance and skewness) of the MHN distribution can be represented via the Fox–Wright Psi functions. There exists a recursive relation among three consecutive moments of the distribution; this is helpful in developing an efficient approximation for the mean of the distribution, as well as in constructing a moment-based estimation of its parameters.

Definitions
The probability density function of the modified half-normal distribution is $$ f(x)= \frac{2\beta^{\alpha/2} x^{\alpha-1} \exp(-\beta x^2+ \gamma x )}{\Psi\left(\frac \alpha 2, \frac \gamma {\sqrt\beta}\right)} \text{ for } x>0 $$ where $$ \Psi\left(\frac \alpha 2, \frac \gamma {\sqrt\beta}\right) = {}_1 \Psi_1 \left[\begin{matrix}(\frac \alpha 2 ,\frac 1 2) \\ (1,0) \end{matrix}; \frac \gamma {\sqrt\beta} \right]$$ denotes the Fox–Wright Psi function. The connection between the normalizing constant of the distribution and the Fox–Wright function is provided in Sun, Kong, and Pal.
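Because the normalizing constant is defined through the Fox–Wright function, a convenient way to work with the density in practice is to evaluate the normalizing integral numerically: since the density integrates to one, $$\Psi\left(\frac\alpha2, \frac\gamma{\sqrt\beta}\right) = 2\beta^{\alpha/2}\int_0^\infty x^{\alpha-1} e^{-\beta x^2+\gamma x}\,dx$$. The sketch below (Python with NumPy/SciPy, which are assumptions here, not part of the source) uses quadrature; for $$\gamma=0$$ the constant reduces to $$\Gamma(\alpha/2)$$, which serves as a sanity check.

```python
import math

import numpy as np
from scipy.integrate import quad


def mhn_unnorm(x, alpha, beta, gamma):
    """Unnormalized MHN kernel: x^(alpha-1) * exp(-beta*x^2 + gamma*x)."""
    return x ** (alpha - 1) * np.exp(-beta * x ** 2 + gamma * x)


def mhn_psi(alpha, beta, gamma):
    """Psi(alpha/2, gamma/sqrt(beta)), evaluated via the normalizing integral."""
    integral, _ = quad(mhn_unnorm, 0, np.inf, args=(alpha, beta, gamma))
    return 2 * beta ** (alpha / 2) * integral


def mhn_pdf(x, alpha, beta, gamma):
    """Normalized MHN density f(x) for x > 0."""
    return (2 * beta ** (alpha / 2) * mhn_unnorm(x, alpha, beta, gamma)
            / mhn_psi(alpha, beta, gamma))
```

With $$\gamma=0$$ the kernel integral equals $$\Gamma(\alpha/2)/(2\beta^{\alpha/2})$$, so `mhn_psi(alpha, beta, 0.0)` should agree with `math.gamma(alpha / 2)`.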

The cumulative distribution function (CDF) is $$ F_{_{\text{MHN}}}(x\mid \alpha, \beta, \gamma)= \frac{2\beta^{\alpha/2}}{\Psi \left(\frac \alpha 2, \frac \gamma {\sqrt\beta} \right)} \sum_{i=0}^\infty \frac{\gamma^i}{2 i!} \beta^{-(\alpha+i)/2} \gamma \left(\frac{\alpha +i}{2}, \beta x^2\right) \text{ for } x\ge0,$$ where $$\gamma (s,y)=\int_0^y t^{s-1} e^{-t} \, dt$$ denotes the lower incomplete gamma function.
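The series above converges quickly because of the $$i!$$ in the denominator, so truncating it gives a practical way to evaluate the CDF. A hedged sketch follows (Python with NumPy/SciPy assumed; note that SciPy's `gammainc` is the *regularized* lower incomplete gamma, so it must be multiplied by $$\Gamma(s)$$ to recover $$\gamma(s,y)$$):

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc


def lower_gamma(s, y):
    """Unregularized lower incomplete gamma; SciPy's gammainc is regularized."""
    return gammainc(s, y) * math.gamma(s)


def mhn_cdf(x, alpha, beta, gam, n_terms=100):
    """MHN CDF via the truncated series; Psi is evaluated by quadrature."""
    kernel = lambda t: t ** (alpha - 1) * np.exp(-beta * t ** 2 + gam * t)
    psi = 2 * beta ** (alpha / 2) * quad(kernel, 0, np.inf)[0]
    total = sum(
        gam ** i / (2 * math.factorial(i))
        * beta ** (-(alpha + i) / 2)
        * lower_gamma((alpha + i) / 2, beta * x ** 2)
        for i in range(n_terms)
    )
    return 2 * beta ** (alpha / 2) * total / psi
```

The result can be cross-checked against direct numerical integration of the density over $$[0, x]$$.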

Properties
The modified half-normal distribution is an exponential family of distributions, and thus inherits the properties of exponential families.

Moments
Let $$X\sim \text{MHN}(\alpha, \beta, \gamma)$$. Choose a real value $$k\geq 0$$ such that $$\alpha+k>0$$. Then the $$k$$th moment is $$ E(X^k)= \frac{\Psi\left(\frac{\alpha+k}2, \frac \gamma {\sqrt\beta}\right) }{ \beta^{k/2} \Psi\left(\frac\alpha2, \frac \gamma{\sqrt\beta}\right)}.$$ Additionally, $$ E(X^{k+2}) =\frac{\alpha+k}{2\beta} E(X^k) +\frac \gamma {2 \beta} E(X^{k+1}).$$ The variance of the distribution is $$\operatorname{Var}(X)= \frac \alpha{2\beta} + E(X) \left( \frac\gamma{2\beta}-E(X)\right). $$ The moment generating function of the MHN distribution is given as $$ M_{X}(t) = \frac{\Psi\left(\frac{\alpha}{2}, \frac{ \gamma+t}{\sqrt{\beta}}\right) }{ \Psi\left(\frac{\alpha}{2}, \frac{ \gamma}{\sqrt{\beta}}\right)}.$$
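Note that the variance identity follows from the $$k=0$$ case of the recursion, since $$E(X^2)=\frac\alpha{2\beta}+\frac\gamma{2\beta}E(X)$$. Both the recursion and the variance identity can be verified numerically by computing raw moments through quadrature; the sketch below (Python with NumPy/SciPy, an assumption rather than anything from the source) checks them for one arbitrary parameter choice:

```python
import numpy as np
from scipy.integrate import quad


def mhn_moment(k, alpha, beta, gamma):
    """k-th raw moment E(X^k), computed by direct numerical integration."""
    kernel = lambda x: x ** (alpha - 1) * np.exp(-beta * x ** 2 + gamma * x)
    z = quad(kernel, 0, np.inf)[0]
    return quad(lambda x: x ** k * kernel(x), 0, np.inf)[0] / z


alpha, beta, gamma = 3.0, 1.5, 0.7

# Recursion: E(X^{k+2}) = (alpha+k)/(2 beta) E(X^k) + gamma/(2 beta) E(X^{k+1})
for k in range(3):
    lhs = mhn_moment(k + 2, alpha, beta, gamma)
    rhs = ((alpha + k) * mhn_moment(k, alpha, beta, gamma)
           + gamma * mhn_moment(k + 1, alpha, beta, gamma)) / (2 * beta)
    assert abs(lhs - rhs) < 1e-6

# Variance identity: Var(X) = alpha/(2 beta) + E(X) * (gamma/(2 beta) - E(X))
m1 = mhn_moment(1, alpha, beta, gamma)
var = mhn_moment(2, alpha, beta, gamma) - m1 ** 2
assert abs(var - (alpha / (2 * beta) + m1 * (gamma / (2 * beta) - m1))) < 1e-6
```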

Modal characterization
Consider $$\text{MHN} (\alpha, \beta, \gamma)$$ with $$\alpha>0$$, $$\beta >0$$, and $$\gamma \in \mathbb{R}$$.


 * If $$\alpha\geq 1$$, then the probability density function of the distribution is log-concave.
 * If $$\alpha>1$$, then the mode of the distribution is located at $$\frac{\gamma + \sqrt{\gamma^2+8\beta(\alpha-1)}}{4 \beta} .$$
 * If $$\gamma>0$$ and $$1- \frac{\gamma^2}{8 \beta} \leq \alpha < 1$$, then the density has a local maximum at $$\frac{\gamma + \sqrt{\gamma^2+8\beta(\alpha-1)}}{4 \beta}$$ and a local minimum at $$\frac{\gamma - \sqrt{\gamma^2+8\beta(\alpha-1)}}{4 \beta}.$$
 * The density function is monotonically decreasing on $$\mathbb{R}_{+}$$, and the mode of the distribution does not exist, if either $$\gamma>0$$ and $$0 <\alpha <1-\frac{\gamma^2}{8 \beta}$$, or $$\gamma<0$$ and $$\alpha\leq 1$$.
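The mode expression comes from setting the derivative of the log-density to zero: $$\frac{\alpha-1}{x} - 2\beta x + \gamma = 0$$, i.e. $$2\beta x^2 - \gamma x - (\alpha-1) = 0$$, whose larger root is the mode when $$\alpha>1$$. A quick numerical sanity check (Python with NumPy assumed; the grid search is purely illustrative):

```python
import numpy as np


def mhn_mode(alpha, beta, gamma):
    """Mode for alpha > 1: larger root of 2*beta*x^2 - gamma*x - (alpha-1) = 0."""
    return (gamma + np.sqrt(gamma ** 2 + 8 * beta * (alpha - 1))) / (4 * beta)


def log_kernel(x, alpha, beta, gamma):
    """Log of the unnormalized density (normalizing constant is irrelevant here)."""
    return (alpha - 1) * np.log(x) - beta * x ** 2 + gamma * x


# Grid search over the log-density agrees with the closed form.
alpha, beta, gamma = 2.5, 1.0, -0.3
xs = np.linspace(1e-4, 10.0, 400_001)
grid_mode = xs[np.argmax(log_kernel(xs, alpha, beta, gamma))]
```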

Additional properties involving mode and expected values
Let $$X\sim \text{MHN}(\alpha,\beta,\gamma)$$ for $$\alpha \geq 1$$, $$\beta>0$$, and $$\gamma\in \mathbb{R}$$, and let the mode of the distribution be denoted by $$X_{\text{mode}}=\frac{\gamma+\sqrt{\gamma^2+8\beta(\alpha-1)}}{4 \beta}.$$

If $$\alpha>1$$, then $$X_{\text{mode}} \leq E(X)\leq  \frac{\gamma+\sqrt{\gamma^2+8 \alpha\beta}}{4 \beta}$$for all $$\gamma\in \mathbb{R}$$. As $$\alpha$$ gets larger, the difference between the upper and lower bounds approaches zero. Therefore, this also provides a high precision approximation of $$E(X)$$ when $$\alpha$$ is large.
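These two-sided bounds differ only in replacing $$\alpha-1$$ by $$\alpha$$ under the square root, which is why the bracket tightens as $$\alpha$$ grows. A hedged numerical check (Python with NumPy/SciPy assumed; the quadrature mean is for verification only):

```python
import numpy as np
from scipy.integrate import quad


def mhn_mean(alpha, beta, gamma):
    """E(X) by numerical integration of the unnormalized density."""
    kernel = lambda x: x ** (alpha - 1) * np.exp(-beta * x ** 2 + gamma * x)
    return quad(lambda x: x * kernel(x), 0, np.inf)[0] / quad(kernel, 0, np.inf)[0]


def mean_bounds(alpha, beta, gamma):
    """Lower bound is the mode; upper bound replaces alpha-1 by alpha."""
    lo = (gamma + np.sqrt(gamma ** 2 + 8 * beta * (alpha - 1))) / (4 * beta)
    hi = (gamma + np.sqrt(gamma ** 2 + 8 * beta * alpha)) / (4 * beta)
    return lo, hi


# The bracket contains the mean and narrows as alpha increases.
for a in (2.0, 5.0, 50.0):
    lo, hi = mean_bounds(a, 2.0, 1.0)
    assert lo <= mhn_mean(a, 2.0, 1.0) <= hi
```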

On the other hand, if $$\gamma>0$$ and $$\alpha\geq 4$$, then $$\log(X_{\text{mode}}) \leq E(\log(X))\leq \log\left( \frac{\gamma+\sqrt{\gamma^2+8  \alpha\beta}}{4 \beta} \right). $$ The condition $$\alpha\geq 4$$ is sufficient, but not necessary, for this inequality to hold. For all $$\alpha>0$$, $$\beta>0$$, and $$\gamma\in \mathbb{R}$$, $$\text{Var}(X)\leq \frac{1}{2\beta}$$. The fact that $$X_{\text{mode}}\leq E(X)$$ implies that the distribution is positively skewed.
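The uniform variance bound $$\operatorname{Var}(X)\leq \frac{1}{2\beta}$$ can be spot-checked numerically across dissimilar parameter settings; a sketch (Python with NumPy/SciPy assumed, parameter triples chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.integrate import quad


def mhn_variance(alpha, beta, gamma):
    """Var(X) = E(X^2) - E(X)^2, computed by numerical integration."""
    kernel = lambda x: x ** (alpha - 1) * np.exp(-beta * x ** 2 + gamma * x)
    z = quad(kernel, 0, np.inf)[0]
    m1 = quad(lambda x: x * kernel(x), 0, np.inf)[0] / z
    m2 = quad(lambda x: x ** 2 * kernel(x), 0, np.inf)[0] / z
    return m2 - m1 ** 2


# Spot-check Var(X) <= 1/(2*beta) for a few arbitrary parameter choices.
for alpha, beta, gamma in [(0.5, 1.0, -2.0), (3.0, 2.0, 1.0), (10.0, 0.5, 4.0)]:
    assert mhn_variance(alpha, beta, gamma) <= 1 / (2 * beta) + 1e-9
```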

Mixture representation
Let $$X\sim \operatorname{MHN}(\alpha, \beta, \gamma)$$. If $$\gamma>0$$, then there exists a random variable $$V$$ such that $$V\mid X\sim \operatorname{Poisson}(\gamma X)$$ and $$X^2\mid V \sim \operatorname{Gamma} \left( \frac{\alpha+V}2, \beta \right)$$. Conversely, if $$\gamma<0$$, then there exists a random variable $$U$$ such that $$U\mid X\sim \text{GIG}\left(\frac 1 2, 1, \gamma^2 X^2\right)$$ and $$X^2\mid U \sim \text{Gamma}\left(\frac \alpha 2, \left( \beta + \frac{\gamma^2} U \right) \right)$$, where $$\text{GIG} $$ denotes the generalized inverse Gaussian distribution.
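For $$\gamma>0$$, the Poisson–Gamma mixture suggests a simple data-augmentation (Gibbs) sampler: alternately draw $$V\mid X\sim\operatorname{Poisson}(\gamma X)$$ and then $$X$$ given $$V$$ by drawing $$X^2\sim\operatorname{Gamma}\left(\frac{\alpha+V}2,\beta\right)$$ and taking the square root. A hedged sketch under that reading (Python with NumPy/SciPy is an assumption; note NumPy's `gamma` takes a *scale* parameter, so the rate $$\beta$$ enters as $$1/\beta$$):

```python
import numpy as np
from scipy.integrate import quad


def mhn_gibbs(alpha, beta, gamma, n_iter=30_000, burn=3_000, seed=0):
    """Data-augmentation sampler for MHN(alpha, beta, gamma), valid for gamma > 0.

    Alternates V | X ~ Poisson(gamma*X) and X^2 | V ~ Gamma((alpha+V)/2, rate=beta).
    """
    assert gamma > 0
    rng = np.random.default_rng(seed)
    x, draws = 1.0, []
    for t in range(n_iter):
        v = rng.poisson(gamma * x)                            # V | X
        x = np.sqrt(rng.gamma((alpha + v) / 2, 1.0 / beta))   # X | V (scale = 1/rate)
        if t >= burn:
            draws.append(x)
    return np.asarray(draws)


# Compare the sampler's mean against the quadrature mean for one setting.
alpha, beta, gamma = 3.0, 1.0, 1.0
samples = mhn_gibbs(alpha, beta, gamma)
kernel = lambda x: x ** (alpha - 1) * np.exp(-beta * x ** 2 + gamma * x)
exact_mean = quad(lambda x: x * kernel(x), 0, np.inf)[0] / quad(kernel, 0, np.inf)[0]
```

The conditional for $$X$$ follows because the joint kernel $$\gamma^v x^{\alpha+v-1} e^{-\beta x^2}/v!$$ gives $$p(x\mid v)\propto x^{\alpha+v-1}e^{-\beta x^2}$$, which is exactly the law of the square root of a $$\operatorname{Gamma}\left(\frac{\alpha+v}2,\beta\right)$$ variable.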