Talk:Normal-inverse-gamma distribution

The CDF does not look right: as $$\sigma^2 \to \infty$$ the CDF goes to zero, if I am not mistaken, which does not make sense, no? DoubleMatchPoint (talk) 19:57, 8 November 2017 (UTC)

We probably ought to make the parameters match with those listed in the Conjugate prior article or vice-versa. --Rhaertel80 (talk) 16:56, 29 May 2008 (UTC)

Notation is inconsistent with regard to the distribution of $$\mu$$. In the definition, mean is $$\lambda$$ and variance is $$\sigma^2 / v$$, but in the section about generating values from the distribution, mean is $$v$$ and variance is $$\sigma^2 / \lambda$$. I don't have a reference handy so I'm not sure which one is correct. Ksimek (talk) 23:40, 12 June 2009 (UTC)

User:rewtnode: I'm trying to verify the formula for the expectation $$E[\sigma^2]$$, which is given now as $$\beta/(\alpha-1/2)$$ but was just recently changed; a few days ago it was $$\beta/(2(\alpha-1))$$. I tried to carry out the integral myself and come to the old result. The issue is whether we assume this is a density function of $$(x, \sigma^2)$$ or of $$(x, \sigma)$$. If I assume it's a function of $$(x, \sigma)$$ I get the old result $$E[\sigma^2] = \beta/(2(\alpha-1))$$. However, if I assume that it is a function of $$(x, \sigma^2)$$ I need to do a different variable substitution in the integral and get the result $$E[\sigma^2] = \beta^{1/2}\,\Gamma(\alpha-3/2)/\Gamma(\alpha)$$. Very confusing. So I wonder if this distribution should indeed be seen as a function of $$(x, \sigma)$$, that is, a joint density over the domain $$(x, \sigma)$$. — Preceding unsigned comment added by Rewtnode (talk • contribs) 00:40, 8 June 2018 (UTC)

Derivation of expected value of $$\sigma^2$$
There seems to be some disagreement about whether $$E[\sigma^2]$$ is $$\beta/(\alpha-1)$$ (correct) or $$\beta/(\alpha-0.5)$$ (incorrect). To get this right, you have to remember to integrate with respect to $$\sigma^2$$ and not $$\sigma$$ since the support of the distribution is defined in terms of $$x$$ and $$\sigma^2$$.

To make the above clear, all instances of $$\sigma^2$$ below are written as $$[\sigma^2]$$ to indicate that $$\sigma^2$$ is our variable and not $$\sigma$$.

From the definition of $$E[\sigma^2]$$:

$$ E[\sigma^2] = \int_0^{\infty} \int_{-\infty}^{\infty} [\sigma^2]\ \text{Normal-inverse-gamma}\left(x, [\sigma^2]\ |\ \mu, \lambda, \alpha, \beta\right)\ \text{d}x\,\text{d}[\sigma^2] $$

From the definition of the normal-inverse-gamma distribution:

$$ = \int_0^{\infty} \int_{-\infty}^{\infty} [\sigma^2] \frac{\sqrt{\lambda}}{\sqrt{2\pi[\sigma^2]}} \frac{\beta^{\alpha}}{\Gamma(\alpha)} [\sigma^2]^{-(\alpha+1)} \exp\left(-\frac{2\beta + \lambda(x-\mu)^2}{2[\sigma^2]}\right) \ \text{d}x\,\text{d}[\sigma^2] $$

Rearrange:

$$ = \frac{\sqrt{\lambda}\beta^{\alpha}}{\sqrt{2\pi}\Gamma(\alpha)} \int_0^{\infty} [\sigma^2] \frac{1}{\sqrt{[\sigma^2]}} [\sigma^2]^{-(\alpha+1)} \exp\left(-\frac{\beta}{[\sigma^2]}\right) \int_{-\infty}^{\infty} \exp\left(-\frac{(x-\mu)^2}{2\lambda^{-1}[\sigma^2]}\right) \ \text{d}x\,\text{d}[\sigma^2] $$

Integrate out $$x$$, which appears as a squared exponential function (proportional to the pdf of the normal distribution):

$$ = \frac{\sqrt{\lambda}\beta^{\alpha}}{\sqrt{2\pi}\Gamma(\alpha)} \int_0^{\infty} [\sigma^2] \frac{1}{\sqrt{[\sigma^2]}} [\sigma^2]^{-(\alpha+1)} \exp\left(-\frac{\beta}{[\sigma^2]}\right) \sqrt{2\pi\lambda^{-1}[\sigma^2]} \ \text{d}[\sigma^2] $$

Simplify:

$$ = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} [\sigma^2]^{-\alpha} \exp\left(-\frac{\beta}{[\sigma^2]}\right) \ \text{d}[\sigma^2] $$

Integrate out $$[\sigma^2]$$: the integrand is the unnormalized pdf of an inverse-gamma distribution in $$[\sigma^2]$$ with shape $$\alpha-1$$ and scale $$\beta$$, so the integral equals the reciprocal of that distribution's normalizing constant:

$$ = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \frac{\Gamma(\alpha-1)}{\beta^{\alpha-1}} $$

$$ = \frac{\beta}{\alpha-1} $$
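As a quick numerical sanity check on that last integral (not part of the derivation; the parameter values $$\alpha = 5$$, $$\beta = 3$$ are illustrative), a simple trapezoidal sum of $$\frac{\beta^{\alpha}}{\Gamma(\alpha)} y^{-\alpha} e^{-\beta/y}$$ should come out close to $$\beta/(\alpha-1) = 0.75$$:

```python
import math

# Illustrative check values (any alpha > 1, beta > 0 would do).
alpha, beta = 5.0, 3.0

def integrand(y):
    # beta^alpha / Gamma(alpha) * y^(-alpha) * exp(-beta / y),
    # i.e. y times the inverse-gamma pdf evaluated at y.
    return beta**alpha / math.gamma(alpha) * y**(-alpha) * math.exp(-beta / y)

# Trapezoidal rule on (0, 50]; the integrand decays like y^(-alpha),
# so the truncated tail beyond 50 is negligible for alpha = 5.
n, lo, hi = 200_000, 1e-6, 50.0
h = (hi - lo) / n
total = 0.5 * (integrand(lo) + integrand(hi))
total += sum(integrand(lo + i * h) for i in range(1, n))
total *= h

print(total)               # numerical value of the integral
print(beta / (alpha - 1))  # closed form: 0.75
```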

We can further confirm this result by generating samples from the normal-inverse-gamma distribution (the sampling procedure is described in the article) and estimating the expected value of the $$\sigma^2$$ argument empirically. It converges to $$\frac{\beta}{\alpha-1}$$.
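A minimal stdlib-only sketch of that empirical check (parameter values are illustrative, and the hierarchical sampling follows the scheme described in the article: draw $$\sigma^2$$ from an inverse-gamma, then $$x$$ conditionally normal). It uses the fact that if $$G \sim \text{Gamma}(\alpha, \text{rate} = \beta)$$, then $$1/G \sim \text{Inverse-Gamma}(\alpha, \beta)$$:

```python
import random

random.seed(42)

# Illustrative parameter choices (any alpha > 1 works).
mu, lam, alpha, beta = 1.0, 2.0, 5.0, 3.0

sigma2_draws = []
for _ in range(200_000):
    # random.gammavariate takes (shape, scale), so rate beta means scale 1/beta.
    sigma2 = 1.0 / random.gammavariate(alpha, 1.0 / beta)
    # x | sigma2 ~ N(mu, sigma2 / lam); drawn only to mirror the full procedure.
    x = random.gauss(mu, (sigma2 / lam) ** 0.5)
    sigma2_draws.append(sigma2)

est = sum(sigma2_draws) / len(sigma2_draws)
print(est)  # close to beta / (alpha - 1) = 0.75
```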

--CarlS (talk) 14:05, 27 September 2018 (UTC)

On the marginal distribution of x in the univariate case
The formula for the marginal distribution of $$ x $$ is wrong (as can be readily seen by comparison with the result of the proof reported below for the case $$ \lambda = 1 $$); it appears to have been taken from the posterior predictive formula in the table of the Conjugate prior article.

I think the correct formula is $$ t_{2\alpha}\left( \mu, \frac{\beta}{\alpha\lambda} \right) $$. A derivation with a different notation for the parameters can be found in https://bookdown.org/aramir21/IntroductionBayesianEconometricsGuidedTour/sec42.html#sec42:~:text=%2C%20respectively.-,The%20marginal%20posterior%20of,is,-%CF%80 Br1 Ursino (talk) 19:05, 21 September 2023 (UTC)