Talk:Gamma distribution/Archive 1

Relation to Maxwell–Boltzmann
I removed the following text, which doesn't make any sense: "$$Y \sim \mathrm{Maxwell}(\beta)$$ is a Maxwell-Boltzmann distribution if $$X \sim \mathrm{Gamma}(\alpha = 3/2, \beta)$$." I couldn't figure out how to fix it by reading Maxwell-Boltzmann distribution. A5 23:13, 16 April 2006 (UTC)


 * It's fixed (but check it, please). PAR 00:30, 17 April 2006 (UTC)
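Not part of the original thread, but the fixed statement is easy to sanity-check numerically. A hedged sketch using only the Python standard library, relying on the fact that a Maxwell(a) variate is the norm of three independent N(0, a²) components, so its square should be Gamma(k = 3/2, θ = 2a²):

```python
import random

random.seed(13)
a = 1.5
n = 200_000

# Y ~ Maxwell(a) is the norm of a 3-vector of N(0, a^2) components,
# so Y^2 = a^2 * chi^2(3) = Gamma(k = 3/2, theta = 2 a^2)
y2 = [sum(random.gauss(0, a) ** 2 for _ in range(3)) for _ in range(n)]
g = [random.gammavariate(1.5, 2 * a * a) for _ in range(n)]

# Both samples should share the mean k * theta = 3 * a^2 = 6.75
print(sum(y2) / n, sum(g) / n)
```

Matching higher moments (or a quantile plot) would make a stronger check, but the shared mean already distinguishes this parameterization from the broken one that was removed.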

Cite? Approximation when some x = 0?
The approximation provided for k is very useful in general, but what is one to do when $$x_i = 0$$, for some i?

Also, it would be nice to have a citation here, but neither of the listed references has this formula, AFAICT.

Ken K 01:28, 30 October 2006 (UTC)
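For reference, the approximation being asked about is (as the article gives it) $$k \approx \frac{3 - s + \sqrt{(s-3)^2 + 24s}}{12s}$$ with $$s = \ln(\bar{x}) - \frac{1}{N}\sum \ln x_i$$. A hedged Python sketch, which also shows concretely why any $$x_i = 0$$ breaks it (ln 0 is undefined):

```python
import math
import random

def k_approx(xs):
    # s = ln(sample mean) - mean of ln(x_i); then the closed-form
    # approximation to the shape MLE from the article
    n = len(xs)
    s = math.log(sum(xs) / n) - sum(math.log(x) for x in xs) / n
    return (3 - s + math.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)

random.seed(2)
xs = [random.gammavariate(4.0, 1.5) for _ in range(100_000)]
print(k_approx(xs))  # close to the true shape 4.0

# If any x_i == 0, math.log(0) raises ValueError: the estimate relies on
# sum(ln x_i), which diverges to -infinity as soon as one observation is 0
```

So the approximation simply isn't usable on data containing exact zeros; in practice such data argue for a different model (or a zero-inflated one) rather than a patched estimator.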

Theta or 1/theta ?
To me it seems that in the probability density function we should have $$\theta\,^{-k}$$ instead of $$\theta\,^k$$, since all the other characteristics that are shown seem to be calculated with $$1/\theta\,$$. I checked http://mathworld.wolfram.com/GammaDistribution.html to verify this, but I would be happier if a more experienced wiki-editor/mathematician changed the article. Sorry if I am incorrect about this.

Artagas 20 Nov 2006


 * Both parameterizations exist, and both are already covered in the article (someone introduced a mistake recently, now corrected). In the first version, with parameters $$(k,\theta)$$, the parameter θ is a scale parameter.  The second parameterization, in terms of $$(\alpha,\beta)$$, uses an inverse scale parameter and has advantages when the Gamma distribution is used as a conjugate prior (see e.g. exponential distribution). --MarkSweep (call me collect) 02:59, 20 November 2006 (UTC)
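To make the two conventions concrete, a hedged sketch of both densities, written directly from the formulas under discussion; with β = 1/θ they agree, which is exactly why θ appears as $$\theta^{-k}$$ when moved into the numerator:

```python
import math

def pdf_scale(x, k, theta):
    # (k, theta) convention: f(x) = x^(k-1) e^(-x/theta) / (Gamma(k) * theta^k)
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def pdf_rate(x, alpha, beta):
    # (alpha, beta) convention with inverse scale beta:
    # f(x) = beta^alpha * x^(alpha-1) * e^(-beta x) / Gamma(alpha)
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

x, k, theta = 1.7, 2.5, 2.0
print(pdf_scale(x, k, theta))
print(pdf_rate(x, k, 1.0 / theta))  # same value with beta = 1/theta
```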

Graphs of pdf misleading
The graph presented on the main page for k = 1, $$\theta = 2$$ is misleading. The function actually diverges to infinity as x tends to zero under these parameters. This is an interesting property of the gamma distribution and should be indicated in the graph.
 * isn't it just an exponential distribution in the case of k=1? (in other words, what you say is not true.) MisterSheik 02:07, 27 March 2007 (UTC)
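A hedged numerical check of the reply: with k = 1 the density is just the exponential density $$(1/\theta)e^{-x/\theta}$$, which stays finite (→ 0.5 here) at the origin, while for k < 1 it really does diverge:

```python
import math

def gamma_pdf(x, k, theta):
    # Density in the (k, theta) scale parameterization
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

# k = 1, theta = 2: exponential density, tends to 1/theta = 0.5 at the origin
for x in (1e-1, 1e-3, 1e-6):
    print(gamma_pdf(x, 1, 2))

# k = 0.5, theta = 2: the x^(k-1) factor makes the density blow up near 0
for x in (1e-1, 1e-3, 1e-6):
    print(gamma_pdf(x, 0.5, 2))
```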

Real-world examples
It would be interesting to see some real-world examples of gamma distributions; the article is a bit technical at the moment. It would also be nice to learn why they might be distributed in that way. I know for instance that reaction times in psychological experiments are usually gamma-distributed (rather than the normal distribution that is assumed in the statistical tests based on them) but I'm not sure why. Junes 10:30, 25 May 2006 (UTC)
 * Yes, more insight into the function in plain language (vs formulae) would be very nice!


 * Gamma distribution is used to model claim severity in the general insurance industry 195.28.231.13 11:41, 20 February 2007 (UTC)
 * I also would like more plain language on what generates such distributions and when they are found. Some can be found in the Wikipedia article on the exponential distribution, which is a special case of the gamma. It is described as the natural distribution of intervals between events that occur at a constant random rate (for example, phone calls within a certain period where the rate is steady). Timothy Mak on the AllStat list has also told me that gamma is the expected distribution of time until $$n$$ events have occurred.


 * Also on the AllStat list (in the archive) I have read that on "theoretical grounds" the distribution of rates of return across companies could be expected to be gamma distributed.


 * I also have read James V. Bradley's article on the "L-shaped" distribution of response times, among other things, and the implications for normality-assuming statistics. Of course some just assume that the sampling distribution, the distribution of sample means around the population one, is normal, which is often true even when the actual data are distributed nonnormally. I have also heard of the insurance example.


 * I am currently working with an L-shaped distribution of the number of times a court case is "distinguished" to those it is "followed". I think it might be a gamma, and specifically an exponential, distribution, but am having trouble finding a way to test this hypothesis.


 * Yours Sincerely,
 * Alan E. Dunne24.235.165.89 15:32, 26 March 2007 (UTC)


 * Kotz and Norman Lloyd Johnson in Continuous Univariate Distributions 1970, chapter 17, give several examples of the "time-to-event" and "insured casualty" types, and also fibre-diameter measurements of wool tops, "internal comparisons in multipurpose experiments" and unspecified "medical applications".
 * Yours Sincerely,
 * Alan E. Dunne24.235.165.89 15:45, 2 April 2007 (UTC), With Respect


 * Further work with my ratio of court citation types has shown that it does not approximate the exponential but rather a gamma distribution with k less than 1 (a bit more than 0.5). I would be interested to hear what this might mean.
 * Yours Sincerely
 * Alan E. Dunne

infinitely divisible
It would be nice to add that the Gamma distribution is infinitely divisible, and to provide its Lévy measure


 * Yes... MisterSheik 17:12, 17 April 2007 (UTC)
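A sketch of the two facts being requested, stated from memory and worth checking against a reference before adding to the article. Infinite divisibility follows directly from the characteristic function quoted elsewhere on this page:

 * $$(1 - \theta\,i\,t)^{-k} = \left[(1 - \theta\,i\,t)^{-k/n}\right]^n$$

so Gamma(k, θ) is the n-fold convolution of Gamma(k/n, θ) for every n. The Lévy measure, in the same (k, θ) parameterization, should be

 * $$\nu(dx) = k\,x^{-1} e^{-x/\theta}\,dx, \qquad x > 0.$$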

Confusion about parameter names
With Respect


 * In the article gamma is a function of x, k, and theta, or alternatively parametrized by alpha and beta, but there seem to be many other names floating around. Gamma is sometimes said to have parameters r and lambda, or n and lambda. I have seen alpha and beta called A and B. There is also a 1/lambda parameter. Which of alpha or beta is 1/theta, and which parameter equals one, are also sources of confusion. I have also seen k (I think) called kappa.


 * There are also references on the Allstat list and elsewhere to "three-parameter gamma" and a constant a.


 * Yours Sincerely,
 * Alan E. Dunne

24.235.165.89 15:55, 2 April 2007 (UTC)


 * They're just different names for the same things, except three-parameter gamma, which just adds a location parameter and is a trivial modification.  I think it would be more confusing to include it. MisterSheik 17:44, 17 April 2007 (UTC)

exponential family
In the article it says

The Gamma distribution is a two-parameter exponential family ...

Shouldn't it rather be "... is a one-parameter exponential family ..." ? I am aware that the Gamma distribution itself has two parameters, but in the context of exponential families, the number of parameters has a different meaning, in my opinion. Unfortunately this distinction is not made in the article on exponential families (maybe it should be?). Can another mathematician/statistician verify this? 134.60.66.52 14:37, 17 April 2007 (UTC)


 * No, the exponential family also has two parameters :) MisterSheik 17:06, 17 April 2007 (UTC)


 * Yes, meanwhile I noticed that too. I apologize for my mistake, I got confused by a particular setting where one of the parameters was considered a nuisance parameter, effectively making it a one parameter exponential family. Sorry again. --134.60.66.52 12:30, 18 April 2007 (UTC)


 * No worries... :) MisterSheik 12:41, 18 April 2007 (UTC)

Proof of some of the basic stuff
I've made this little example on the page concerning characteristic functions
 * The Gamma distribution with scale parameter θ and a shape parameter k has the characteristic function
 * $$(1 - \theta\,i\,t)^{-k}\,\!$$
 * Now suppose that we have
 * $$X \sim \Gamma(k_1,\theta) \mbox{ and } Y \sim \Gamma(k_2,\theta)$$
 * with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are
 * $$\varphi_X(t)=(1 - \theta\,i\,t)^{-k_1},\,\qquad \varphi_Y(t)=(1 - \theta\,i\,t)^{-k_2}$$

which by independence and the basic properties of characteristic functions leads to
 * $$\varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=(1 - \theta\,i\,t)^{-k_1}(1 - \theta\,i\,t)^{-k_2}=\left(1 - \theta\,i\,t\right)^{-(k_1+k_2)}$$
 * This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k1 + k2, and we therefore conclude
 * $$X+Y \sim \Gamma(k_1+k_2,\theta)$$
 * The result can be expanded to n independent gamma distributed random variables with the same scale parameter, and we get
 * $$\forall i \in \{1,\ldots, n\} : X_i \sim \Gamma(k_i,\theta) \qquad \Rightarrow \qquad \sum_{i=1}^n X_i \sim \Gamma\left(\sum_{i=1}^nk_i,\theta\right)$$

I think it might be nice to have on the gamma distribution page as well. Any thoughts? - Aastrup 12:00, 28 July 2007 (UTC)
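The conclusion is easy to confirm by simulation. A hedged sketch using the Python standard library's random.gammavariate (which takes shape and scale):

```python
import random

random.seed(42)
k1, k2, theta = 1.5, 2.5, 2.0
n = 200_000

# Sum of independent Gamma(k1, theta) and Gamma(k2, theta) variates
s = [random.gammavariate(k1, theta) + random.gammavariate(k2, theta)
     for _ in range(n)]

mean = sum(s) / n
var = sum((v - mean) ** 2 for v in s) / n

# Gamma(k1+k2, theta) has mean (k1+k2)*theta = 8, variance (k1+k2)*theta^2 = 16
print(mean, var)
```

Matching two moments is of course weaker than the characteristic-function proof above, but it catches parameterization mistakes quickly.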

Sir, I need some information about statistical distributions
Sir, please tell me the applications from real life and some solved examples of the Gamma distribution, Weibull distribution, and exponential distribution. I will be very thankful to you. You can send this information to "a_smile4me@yahoo.com". I will await your response. —Preceding unsigned comment added by 58.65.201.212 (talk) 18:48, 20 October 2007 (UTC)

Parameter Estimation
Is the MLE parameter estimation for theta wrong? should it be k*xbar instead of xbar/k? --67.109.70.3 (talk) 01:41, 19 June 2008 (UTC)
 * Nah, it seems correct. Recall that the mean = k * theta and is estimated by xbar. So it makes sense that theta is estimated by xbar/k. --Fangz (talk) 11:27, 19 June 2008 (UTC)

I think I have found a mistake of the same kind in Chow, V. T., Maidment, D. and Mays, L. W. "Applied Hydrology" 1988. McGraw-Hill, International Edition. In table 11.5.1 the scale parameter is named lambda and corresponds to the inverse of theta, the scale parameter described in this article. It is stated that the estimator for lambda is sx/sqrt(k), where sx is the standard deviation and k is the shape parameter. Since lambda is the inverse of theta, its estimator should be sqrt(k)/sx. Could anyone confirm this? J. (talk) 12:33, 19 December 2008
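A hedged simulation of Fangz's point above, with the shape k treated as known; the recovered scale is x̄/k, not k·x̄:

```python
import random

random.seed(0)
k, theta = 3.0, 2.0
xs = [random.gammavariate(k, theta) for _ in range(100_000)]
xbar = sum(xs) / len(xs)

# Since E[X] = k * theta, the MLE of theta for known k is xbar / k
print(xbar / k)   # close to the true theta = 2.0
print(xbar * k)   # nowhere near it
```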

Right margin too near to tables in some math articles
(Question moved from MediaWiki talk:Common.css; this would have been best asked at the Help desk)

With Firefox 3.0.7 on Ubuntu, when I look at Gamma distribution, I see that the right margin of the left text column is too near to the right column where the picture and the table are. There should be some more margin between the two columns. --Pot (talk) 08:46, 11 March 2009 (UTC)


 * This article uses the template Probability distribution. Look at the documentation for the template and you will see it has a parameter for . I suggest you start with a value of   and fudge from there. The default margin is 1em; if you think that should be changed, discuss it on the template talk. --——  Gadget850 (Ed)  talk  -  10:47, 11 March 2009 (UTC)

Relation to Dirac distribution
I'm pretty sure it shouldn't be in the article, but it's interesting anyway. The Gamma(α, c/α) distribution tends to a Dirac distribution centered around c as α → ∞. This is easily visible by taking β = 4/α (so that αβ = 4); for large α it tends to a Dirac distribution. (Or you can see that the variance is c^2/α, which tends to zero in such a process.) —Preceding unsigned comment added by 131.155.212.223 (talk) 13:49, 4 November 2010 (UTC)
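A hedged sketch of the limit (Python standard library only): holding αβ = c fixed, the mean of Gamma(α, c/α) stays at c while the variance c²/α collapses:

```python
import random

random.seed(1)
c = 4.0
n = 50_000
results = []
for alpha in (1, 10, 100, 1000):
    xs = [random.gammavariate(alpha, c / alpha) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    results.append((alpha, mean, var))
    print(alpha, mean, var)  # mean stays near c = 4; variance ~ c^2/alpha -> 0
```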

Gamma distribution in rainfall analysis
There is a comment on the main page that a citation is needed in relation to the use of the gamma distribution for rainfall analysis.

There is a good discussion of the use of the gamma distribution to fit monthly rainfall data on p303-307 of Jones et al. (2009)

Jones, O., Maillardet, R. and Robinson, A. (2009) Introduction to scientific programming and simulation using R. CRC Press.

Tony.ladson (talk) 01:29, 1 September 2011 (UTC)

KL Divergence
Anyone mind if I rephrase the Kullback-Leibler Divergence? The pdf is defined above in terms of shape k and scale theta, but the KL divergence uses theta as shape, and inverse scale beta = theta^-1. Pretty confusing. — Preceding unsigned comment added by 152.3.196.217 (talk) 20:45, 21 September 2011 (UTC)

Reference for entropy
Can someone post a reference for the entropy computation for the gamma density ? It's mentioned on the side table without a reference. —Preceding unsigned comment added by 128.95.224.31 (talk) 17:55, 13 October 2008 (UTC)


 * as in the reference from the Differential_entropy article.
 * Lazo, A. and P. Rathie. On the entropy of continuous probability distributions Information Theory, IEEE Transactions on, 1978. 24(1): p. : 120-122 129.242.167.37 (talk) 15:54, 26 October 2011 (UTC)

Generating variables
Can anyone review the changes made by ClaudeLo? I wrote the original algorithm (rather adapted and fixed it based on some book) but no longer study math and my work is not math-related, so I do not quite trust my skills. -- Paul Pogonyshev (talk) 00:04, 22 November 2007 (UTC)


 * I'm not sure if the current version is these changes, but the current algorithm does not work for me. It consistently generates too many extremely high values. I also can't tell what the point of V1 is in that algorithm; it chooses between two branches, but there should be no need to do that in an acceptance-rejection method.


 * Here is a quickly hacked together C implementation. It's easily verified due to the simple rejection criterion:

--Dyfrgi (talk) 09:26, 6 January 2010 (UTC)

The code above is bogus for the simple reason that it samples from the area below the probability distribution but assumes that the density lives within the unit square, which is very false. Close to zero the PDF far exceeds 1, and appreciable mass is distributed on x > 1. Sorry! —Preceding unsigned comment added by 140.247.239.43 (talk) 05:04, 6 March 2010 (UTC)

Four standard references to reliable algorithms for generating gamma variates have been inserted above the explicit algorithm that is given without proof or source. It does not appear to be the algorithm in Ahrens and Dieter. Can anyone identify a source for the algorithm given? Does the article need to include an explicit (but unsourced) algorithm when one is available in the linked PDF of Ahrens and Dieter? Mathstat (talk) 11:49, 26 February 2012 (UTC)
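Not the article's algorithm, but for comparison here is a hedged Python sketch of one of the standard, well-sourced generators, the Marsaglia–Tsang (2000) squeeze method; the constants are transcribed from memory and should be checked against the paper before relying on it:

```python
import math
import random

def gamma_variate(shape, scale=1.0, rng=random):
    # Marsaglia-Tsang squeeze method for shape >= 1;
    # shape < 1 is boosted via Gamma(a) = Gamma(a + 1) * U^(1/a)
    if shape < 1.0:
        return gamma_variate(shape + 1.0, scale, rng) * rng.random() ** (1.0 / shape)
    d = shape - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue  # reject: would take the log of a non-positive number
        u = rng.random()
        if u < 1.0 - 0.0331 * x ** 4:          # fast acceptance (squeeze)
            return d * v * scale
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v * scale               # full acceptance test

random.seed(7)
n = 100_000
xs = [gamma_variate(2.0, 3.0) for _ in range(n)]
print(sum(xs) / n)  # mean should be near k * theta = 6
```

Unlike the simple rejection-from-a-box idea criticized above, this method handles both the unbounded density near 0 (for shape < 1) and the unbounded support.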

Distinguishing between k and alpha
What's the purpose of using two different letters for the shape parameter in the two different representations of the distribution? I think it just adds clutter. Domminico (talk) 19:12, 25 April 2012 (UTC)

MLE
Rather than waving our swords around showing how clever we are at solving for the MLE under the standard parameterisation, would it not be simpler to present the alpha, mu parameterisation and solve that instead? — Preceding unsigned comment added by 90.218.192.173 (talk) 19:56, 11 October 2012 (UTC)

Gamma or &Gamma;?
I must admit that most of my Statistics books aren't written in English, but when it comes to the two ways of notation
 * $$X \sim \Gamma(k, \theta) \,\,\mathrm{ or }\,\, X \sim \textrm{Gamma}(k, \theta) $$

it is clearly the first which is used most often. This is why I'm changing the notation in the article. Aastrup 21:58, 18 July 2007 (UTC)

The use of \Gamma is confusing, as it refers to the gamma function; either Gamma(a,b) or Ga(a,b) is more common.

Gentry White —Preceding unsigned comment added by 152.1.95.168 (talk) 19:20, 15 July 2008 (UTC)

Do not use the gamma function to refer to a gamma distribution. There are too many dang gamma symbols on the page, which makes it confusing which is being referred to. Gamma(a, b) is obviously the clearest. —Preceding unsigned comment added by 69.204.243.36 (talk) 22:08, 7 December 2008 (UTC)


 * Another plea to restrict $$\Gamma$$ to the gamma function (as it's universal throughout mathematics) and use Gamma for the distribution.


 * A similar thing happens with the beta distribution. Since its parameters are conventionally $$\alpha$$ and $$\beta$$, it's just unacceptable to refer to the distribution as $$\beta(\alpha, \beta)$$! Much better to write $$\mathrm{Beta}(\alpha, \beta)$$, and similarly $$\mathrm{Gamma}(\cdot, \cdot)$$ (choose your parameters). --88.109.216.145 (talk) 23:02, 5 November 2009 (UTC)

The top of the webpage says not to confuse the "Gamma" distribution with the Gamma function. Throughout the webpage, however, the single-parameter $$\Gamma$$ function (Greek symbol) is used, but nowhere is this explained as the "Gamma" function referred to at the top. This should be explained, and a link to the Wikipedia page for the Gamma function should be provided. Craniator (talk) 23:34, 17 March 2013 (UTC) — Preceding unsigned comment added by Craniator (talk • contribs) 23:31, 17 March 2013 (UTC)

Relation with Chi Squared
There is a typo in the relations with the chi-squared distribution: it says $$\theta = \frac{1}{2}$$, but for cX, $$\theta = 2c$$, which gives a contradiction at c = 1. The correct version is $$\theta = 2$$. — Preceding unsigned comment added by Bitlemon (talk • contribs) 19:15, 3 December 2013 (UTC)
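A hedged numeric check of the claimed relation $$\chi^2(\nu) = \mathrm{Gamma}(k = \nu/2,\ \theta = 2)$$, using the Python standard library:

```python
import random

random.seed(5)
nu = 6
n = 200_000

# If chi-squared(nu) = Gamma(k = nu/2, theta = 2), the sample should have
# mean k * theta = nu and variance k * theta^2 = 2 * nu
xs = [random.gammavariate(nu / 2, 2.0) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)  # near nu = 6 and 2*nu = 12
```

With θ = 1/2 instead, the mean would be ν/4, which matches neither moment of a chi-squared variate; that is consistent with the typo report.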

Scaling section
The Scaling section currently reads:

If


 * $$X \sim \mathrm{Gamma}(k, \theta),$$

then for any c > 0,


 * $$cX \sim \mathrm{Gamma}( k, c\theta).$$

Hence the use of the term "scale parameter" to describe θ.

Equivalently, if


 * $$X \sim \mathrm{Gamma}(\alpha, \beta),$$

then for any c > 0,


 * $$cX \sim \mathrm{Gamma}( \alpha, \beta/c).$$

Hence the use of the term "inverse scale parameter" to describe β.

The second set of equations contradicts the first: the final maths line should read


 * $$cX \sim \mathrm{Gamma}( \alpha, \beta c).$$

I'm not sure what correction should be made, though, because I don't know what the point of the "inverse scale parameter" thing was.

Smaug123 (talk) 13:59, 4 February 2014 (UTC)
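One way to settle which rate goes with cX is a quick Monte Carlo moment check. A hedged sketch (Python's random.gammavariate takes shape and scale, so rate β corresponds to scale 1/β here):

```python
import random

random.seed(11)
alpha, beta, c = 2.0, 0.5, 5.0   # beta is an inverse scale (rate)
n = 200_000

# Sample c * X for X ~ Gamma(alpha, rate beta)
xs = [c * random.gammavariate(alpha, 1.0 / beta) for _ in range(n)]
mean = sum(xs) / n

print(mean)                 # Monte Carlo mean of c*X
print(alpha / (beta / c))   # mean of Gamma(alpha, rate beta/c) = 20
print(alpha / (beta * c))   # mean of Gamma(alpha, rate beta*c) = 0.8
```

Whichever candidate rate reproduces the sampled mean is the one the article should state; this makes the scale/inverse-scale question checkable rather than a matter of convention-juggling.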

Image
It would be useful if the images at the top of the page would include variations of θ for constant k. Currently, no two curves have the same k value. --EyrianAtWork 13:50, 11 July 2007 (UTC)
 * I disagree. θ is a scale parameter, and the curve doesn't change shape with θ. A plot in the scale parameter article might be useful though. -- Aastrup 20:19, 18 July 2007 (UTC)


 * I agree with @EyrianAtWork. I think at least one of the curves having the same k value would be helpful, since a beginner might not exactly know what's meant by a "scale parameter", but will easily see the effect of changing θ if it was shown. Monsterman222 (talk) 06:53, 29 July 2014 (UTC)

Notation
I find the inconsistency of notation throughout this long article quite confusing in some places. The most significant problem is the mixing of the two different sets of parametrization in different places. For example, in the Related Distribution section, it is often not clear which parametrization is used (e.g. "If X ~ Gamma(1, λ), then X has an exponential distribution with rate parameter λ"). As this article belongs to the area of Probability, I would suggest to preferentially use the α, β parametrization that is much more common in probability and statistics literature and more consistent with other Wikipedia articles on probability distributions (e.g. exponential distribution). Also, α and k are both shape parameters, so it will be much clearer to stick with one notation. I would suggest using α throughout and only using k where the shape parameter is assumed to be an integer (i.e. where the Erlang distribution is mentioned). Besides this, log and ln are also used interchangeably in different places. The Greek letter \Gamma is used for both the Gamma function and the Gamma distribution, which can be potentially confusing as well. — Preceding unsigned comment added by 131.111.20.117 (talk) 14:43, 8 October 2014 (UTC)

alternative parameterization
Isn't there an alternative parameterization for the gamma? Is there a general way Wikipedia deals with these? --Pdbailey 17:47, 16 Apr 2005 (UTC)


 * Okay, foot in mouth... I was confused. --Pdbailey 18:04, 16 Apr 2005 (UTC)

There is in fact an alternative parametrization ... but I'm too lazy to look it up now. Rp 01:46, 6 May 2006 (UTC)

Too bad. The C++ manual uses alpha and beta in place of $$k$$ and $$\theta$$. — Preceding unsigned comment added by 90.229.159.222 (talk) 10:34, 26 June 2015 (UTC)

Main image of probability distributions is incorrect
The legend of the main image is incorrect: the thetas do not generate the graphed distributions. One of the following corrections needs to be made (but not both):
 * change theta to beta
 * change the theta value to equal the reciprocal of the current theta value
— Preceding unsigned comment added by 2602:306:33DA:45D0:E5D9:115F:4571:C00D (talk) 07:47, 20 March 2016 (UTC)

Assessment comment
Substituted at 19:58, 1 May 2016 (UTC)

No explanation or introduction to what the distribution is.
This piece seems incredibly technical and targeted only at people who already know everything. Nowhere in the article does it mention what the gamma distribution is or what it's used to model. It might be useful to even say what it is before talking about how to parametrize it. There's a tiny applications section at the bottom but it hasn't been updated in 7 years. 129.21.141.232 (talk) 21:23, 12 March 2016 (UTC)


 * The lede does mention one place it comes up: "If k is a positive integer, then the distribution represents an Erlang distribution; i.e., the sum of k independent exponentially distributed random variables, each of which has a mean of θ."  Dicklyon (talk) 01:24, 13 March 2016 (UTC)


 * I agree. I think it would be helpful to place this fact, and any other simple uses of the distribution, up front before the three ways of parametrizing. Nonstandard (talk) 21:02, 15 May 2016 (UTC)

Domain of PDF
On the article it's written
 * $$f(x;\alpha,\beta) = \frac{\beta^{\alpha} x^{\alpha-1} e^{-x\beta}}{\Gamma(\alpha)} \quad \text{ for } x \geq 0 \text{ and } \alpha, \beta > 0$$

But f is not defined at $$x=0$$ for $$0<\alpha<1$$. Wouldn't this
 * $$f(x;\alpha,\beta) = \frac{\beta^{\alpha} x^{\alpha-1} e^{-x\beta}}{\Gamma(\alpha)} \quad \text{ for } x, \alpha, \beta > 0$$

be a better definition? 189.32.102.151 (talk) 09:52, 18 June 2016 (UTC)
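A hedged sketch of the point: for 0 < α < 1 the factor $$x^{\alpha-1}$$ makes the density blow up as x → 0⁺, and evaluating literally at x = 0 fails (0 to a negative power):

```python
import math

def pdf(x, alpha, beta):
    # Density in the (alpha, beta) inverse-scale parameterization
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

print(pdf(1e-4, 0.5, 1.0))  # already large
print(pdf(1e-8, 0.5, 1.0))  # larger still: diverges as x -> 0+
# pdf(0.0, 0.5, 1.0) raises ZeroDivisionError (0.0 ** -0.5), so the
# definition only makes sense on x > 0 when alpha < 1
```

This supports restricting the stated domain to x > 0, as the poster suggests.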

Important Expectations
I've found it important to know some expectations which I couldn't find on this page. For example, for the $$g(x;\alpha,\beta)$$ inverse scale parametrization,


 * $$E[\ln{x}]=\psi(\alpha)-\ln{\beta}$$

Where should this expectation go? Bazugb07 (talk) 16:58, 20 May 2009 (UTC)

According to this derivation the correct result should be:


 * $$E[\ln{x}]=\psi(\alpha)+\ln{\beta}$$

Note the sign change of the latter term. I also found the result with the addition to be the right expression. Can anyone confirm this? — Preceding unsigned comment added by 134.2.118.242 (talk) 14:19, 22 May 2017 (UTC)

The above derivation used the other parametrization. Hence the result:


 * $$E[\ln{x}]=\psi(\alpha)-\ln{\beta}$$

is correct. — Preceding unsigned comment added by 134.2.118.242 (talk) 14:28, 22 May 2017 (UTC)
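A hedged Monte Carlo check of the sign, using only the Python standard library; ψ is approximated by a central difference of lgamma, and random.gammavariate takes shape and scale = 1/β:

```python
import math
import random

def digamma(a, h=1e-5):
    # Crude numerical derivative of log Gamma; good enough for a sign check
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

random.seed(3)
alpha, beta = 2.5, 1.7   # inverse-scale (rate) parameterization
n = 200_000
mean_log = sum(math.log(random.gammavariate(alpha, 1.0 / beta))
               for _ in range(n)) / n

print(mean_log)                          # Monte Carlo E[ln X]
print(digamma(alpha) - math.log(beta))   # psi(alpha) - ln(beta): matches
print(digamma(alpha) + math.log(beta))   # the "+" version: does not
```

Under the scale parameterization (scale θ = 1/β), the same experiment instead matches ψ(α) + ln θ, which is exactly the sign discrepancy discussed above.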

Moments in Info box
Should expectations of the log of the variates be included in the infobox? They are included in the Dirichlet distribution and Beta distribution. However there seems to be a bit of debate on the edits about this. — Preceding unsigned comment added by 108.179.159.162 (talk) 18:48, 23 March 2018 (UTC)
 * No, and they shouldn't really be included at those other articles, either. The infobox is for fairly specific information.  The logarithm of the random variable follows a different distribution and its expected value shouldn't be listed along with that of the main variable.  This information is already included in the body of the article, so nothing is really lost by cutting it out of the infobox.  –Deacon Vorbis (carbon • videos) 19:12, 23 March 2018 (UTC)
 * Yes they should be. They provide relevant intuition to the parameters, expectations of transformations of random variables are present in many info boxes in many different distributions, and have been for many years. All information in the infobox should be present in the body of the article anyway, so that is not an argument to exclude something from the infobox. If these really should be excluded from all probability distributions, that is a larger discussion but until then, the status quo should be maintained. — Preceding unsigned comment added by 205.175.119.182 (talk) 17:56, 28 March 2018 (UTC)
 * "...expectations of transformations of random variables are present in many info boxes in many different distributions, and have been for many years." Other than the two other distributions you mentioned above, I couldn't find any articles that include this additional information in the infobox.  That would seem like the standard is not to include.  In any case, a fairly common sense reading is that the infobox entry for "mean" includes the mean, and not other information.  –Deacon Vorbis (carbon • videos) 20:42, 28 March 2018 (UTC)
 * They do not belong in the infobox. If they "provide relevant intuition to the parameters" then those expectations can be discussed in the body of the article with a discussion of the relevant intuition they provide.  But not everything in the article belongs in the infobox. Rlendog (talk) 21:26, 28 March 2018 (UTC)
 * The inverse Gaussian and generalized inverse Gaussian also have this in the info box. Not every distribution article has expectations of transformations in the text, but when they are included in the text, it appears that the standard is to include them in the infobox as well. — Preceding unsigned comment added by 205.175.119.182 (talk) 22:55, 28 March 2018 (UTC)
 * If that is being done in other articles it should not be and that should be removed from the infoboxes of the other articles. Rlendog (talk) 14:18, 29 March 2018 (UTC)
 * On what basis? Deacon Vorbis suggested that we should look to other articles to establish standard practice. I agree with this practice. This meets the guidelines for the infobox of being comparable, materially relevant, concise, and already cited elsewhere in the article. — Preceding unsigned comment added by 205.175.119.182 (talk) 17:52, 30 March 2018 (UTC)
 * I see you went ahead and changed the articles. This appears inappropriate and a bit sloppy, since you left in the E[x] bits that appear no longer relevant. If you feel strongly that this information should not be included in the mean field, perhaps there is another way to include it? The information has been there for 5+ years without a problem and has become fairly standard, so I don't see why you are rushing to strip it out now. — Preceding unsigned comment added by 205.175.119.182 (talk) 22:07, 30 March 2018 (UTC)
 * There is an element in the infobox for a mean. The mean is E[X].  E[1/X] is not the mean so including that within the mean item of the infobox is misleading and incorrect.  The information about E[1/X] or other such moments can absolutely be discussed in the article but that doesn't mean they need to be included in the infobox.  And if they are included in the infobox they need to be included in an appropriate section, which would mean adding elements to the infobox for these types of moments, not just throwing them into an infobox section where they clearly do not belong. Rlendog (talk) 12:50, 3 April 2018 (UTC)
 * While it may be clear to you that they don't belong there, someone put them there and they have remained in that prominent position for 5+ years and been viewed millions of times, so it's hardly as cut and dry as you claim. Anyway, it does seem like the infobox could be reorganized to make this clearer, either by adding sections or renaming existing sections, while maintaining the standard of keeping these additional moments in the infobox. — Preceding unsigned comment added by 205.175.119.182 (talk) 04:11, 5 April 2018 (UTC)
 * On a similar note, some distributions have other properties in their infobox, such as Fisher information, and this should stay in even though it hasn't yet been added to all of the distributions. Not that I am advocating keeping everything in all the infoboxes. The "Notation" field in the lognormal distribution should probably go. — Preceding unsigned comment added by 205.175.119.182 (talk) 21:39, 5 April 2018 (UTC)

Please, sign your comments with 4 tildes like it says when you click to add a comment, thanks. How long they've been there is kind of irrelevant; stuff that shouldn't be around can persist for quite a while if no one ever takes a close look at it. Adding another item to an infobox isn't totally off-the-wall, but in this case, I still don't think it's a good idea. We're talking about adding the expected values of functions of the random variables. There are already examples with $$\log X,\ X\log X,$$ and $$1/X.$$ There doesn't seem to be any sort of end to what people could add here. I'm really at a loss why you think it's so vital to keep these things in infoboxes. –Deacon Vorbis (carbon • videos) 22:10, 5 April 2018 (UTC)
 * Sure, I made an account to continue the discussion. This is a piece of information I consult regularly, more so than other aspects of the infobox. If you had taken out the moment generating function for example, I probably would not have noticed. I didn't start this because I was watching edits on the page and this raised a red flag. I came to wikipedia looking for that information, and was surprised that it had been removed. That it was noticed just two weeks after the edit gives an indication about the frequency of its use. It's true that stuff can persist if no one takes a look at it, but being in the infobox is a fairly prominent place on the page and this page has been viewed literally millions of times since that was put there, and had hundreds of edits so I would argue that the length of time it has been there combined with the number of page views and edits suggests that it belongs in the infobox.
 * While in theory there are infinitely many quantities that could be included, in practice these are only included when they reflect something meaningful about the distribution. For example, for the gamma distribution $$E[X]$$ and $$E[\ln(X)]$$ are sufficient statistics for the distribution. For the inverse Gaussian, $$E[X]$$ and $$E[1/X]$$ are sufficient statistics. If it is inappropriate to include these in the Mean infobox, perhaps a new section could be created for sufficient statistics. This would allow the information to retain its prominence in the infobox, provide clear criteria for which quantities should be included, and prevent it from growing endlessly. Altzarisal (talk) 01:36, 6 April 2018 (UTC)
 * I don't see why sufficient statistic(s) couldn't be added as another infobox parameter. If no one else feels like doing so, I can add it, but it would probably be a few days.  In the mean time, it might be worth thinking about a standard way that infoboxes should display this information.  –Deacon Vorbis (carbon • videos) 02:48, 6 April 2018 (UTC)

Implementing cdf
So I'm trying to get the CDF into gnuplot to generate a graph, except it's not working right:

 cgamma(x,k,t) = igamma(k, x/t) / gamma(k)
 set xtics 0,2
 set ytics 0,0.1
 set samples 1001
 set terminal postscript enhanced color solid lw 2 "Times-Roman" 27
 set output
 plot [0:100] \
      cgamma(x,1,2) title "{/Times-Italic k} = 1, {/Symbol q} = 2", \
      cgamma(x,2,2) title "{/Times-Italic k} = 2, {/Symbol q} = 2", \
      cgamma(x,3,2) title "{/Times-Italic k} = 3, {/Symbol q} = 2", \
      cgamma(x,4,2) title "{/Times-Italic k} = 4, {/Symbol q} = 2", \
      cgamma(x,5,2) title "{/Times-Italic k} = 5, {/Symbol q} = 2", \
      cgamma(x,5,.5) title "{/Times-Italic k} = 5, {/Symbol q} = 0.5"

but this doesn't give me the correct plots
 * the first two tend to 1
 * third tends to 0.5
 * fourth tends to like .175
 * fifth & sixth tend to the same value at about 0.05

The cgamma function above is the incomplete gamma function over the gamma function...as shown in the article. Which is wrong: the cdf in the article or my gnuplot setup? Cburnett 09:00, 10 Mar 2005 (UTC)


 * I use the following gnuplot definitions (using names similar to those used by R/Splus):

 _ln_dgamma(x, a, b) = a*log(b) - lgamma(a) + (a-1)*log(x) - b*x
 dgamma(x, shape, rate) =\
     (x<0)? 0 :\
     (x==0)? ((shape<1)? 1/0 : (shape==1)? rate : 0) :\
     (rate==0)? 0 :\
     exp(_ln_dgamma(x, shape, rate))
 pgamma(x, shape, rate) = (x<0)? 0 : igamma(shape, x*rate)


 * The problem is that "incomplete gamma function" is ambiguous, referring sometimes to the regularized incomplete gamma function. --MarkSweep 16:50, 10 March 2005 (UTC)
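This is easy to demonstrate from scratch. A hedged Python sketch using the standard power series for the regularized lower incomplete gamma P(a, x): if gnuplot's igamma is already the regularized form, as the limits reported above suggest, then dividing by gamma(k) a second time makes curve k level off at 1/Γ(k), which matches the observed values (1, 1, 0.5, ≈0.17, ≈0.04):

```python
import math

def reg_lower_gamma(a, x, terms=200):
    # Regularized lower incomplete gamma P(a, x) via its power series:
    # P(a,x) = x^a e^-x / Gamma(a) * sum_{n>=0} x^n / (a (a+1) ... (a+n))
    total, term = 0.0, 1.0 / a
    for n in range(terms):
        total += term
        term *= x / (a + 1 + n)
    return math.exp(a * math.log(x) - x - math.lgamma(a)) * total

theta, x = 2.0, 100.0
for k in (1, 2, 3, 4, 5):
    cdf = reg_lower_gamma(k, x / theta)   # the correct CDF: -> 1 for every k
    print(k, cdf, cdf / math.gamma(k))    # extra /gamma(k): levels off at 1/Gamma(k)
```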

PDF/CDF confusion
Ok, so I wrote gnuplot code taken directly from the pdf listed in the article. It's under commons:Image:Gamma distribution pdf.png (see the cdf as well that uses MarkSweep's code from above) but if I take your pdf implementation I get much different curves. The CDF drastically does not agree (see yellow line).

If I can't take the PDF & CDF from the article and get correct plots then I think we need to change the article (even if it's gnuplot that's wrong and explain how some plotters could implement, say, the incomplete gamma function differently). Cburnett 19:47, 10 Mar 2005 (UTC)


 * I was just about to point out that the PDFs visually don't integrate to the CDFs. I also realize that the gnuplot snippet I posted above uses the same parameterization that R uses, which I suspect is different from the first parameterization used in the article. There is always the issue whether one should use a scale parameter directly or use its inverse instead. I suspect that's the underlying confusion here. --MarkSweep 20:07, 10 Mar 2005 (UTC)


 * Ah, there we go. I was using theta and you were using beta.  I inverted the rate parameter and got the matching CDF.  Uploading new one. Cburnett 20:13, 10 Mar 2005 (UTC)


 * There. *sigh* Finally. :) Cburnett 20:23, 10 Mar 2005 (UTC)


 * Aargh. I was working on the same plots and just replaced both your versions with new matching PDFs and CDFs. Note that the width is now 1300px, so that scaling it down to 325px will be easier or look better. I used fewer examples to avoid cyan-on-white and yellow-on-white. --MarkSweep 20:40, 10 Mar 2005 (UTC)


 * Related to this, the mgf is expressed in terms of alpha and beta too. What do you think of adding a note about alternative parameterizations and their uses (see e.g. the more convenient parameterization in Exponential distribution)? --MarkSweep 20:47, 10 Mar 2005 (UTC)


 * mgf updated to match the rest. I don't really care what parameterization we use as long as it's consistent.  I prefer Greek letters for parameters (just cuz I guess....) so I'd go for the alpha/beta notation over the k/theta.  Either way....  More plots to generate if we change it. Cburnett 21:44, 10 March 2005 (UTC)