Talk:Cumulant

d/dt log E(e^{tX}) as cumulant generating function
The formula


 * $$E\left(e^{tX}\right)=\exp\left(\sum_{n=1}^\infty\kappa_n t^n/n!\right)\,$$

might be written
 * $$\log E\left(e^{tX}\right)=\sum_{n=1}^\infty\kappa_n t^n/n!\,$$

The constant term is found by setting t = 0:
 * $$\log E\left(e^0\right)=0$$

Zero is not a cumulant, and so the function


 * $$\frac{d}{dt}\log E\left(e^{tX}\right)=\sum_{n=0}^\infty\kappa_{n+1} t^n/n!=\mu+\sigma^2t+\cdots$$

better deserves the name 'cumulant generating function'.

Bo Jacoby 12:55, 4 January 2006 (UTC)
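The expansion K'(t) = μ + σ²t + ⋯ above can be checked numerically for a finite multiset of equally likely numbers. The sketch below (illustrative code of my own; the multiset is an arbitrary choice) approximates K'(0) and K''(0) by central finite differences of K(t) = log E(e^{tX}) and compares them with the mean and the population variance.

```python
import math

# An arbitrary finite multiset of numbers, each equally likely.
xs = [1.0, 2.0, 2.0, 5.0]

def K(t):
    """Cumulant generating function log E(exp(t X)) of the multiset."""
    return math.log(sum(math.exp(t * x) for x in xs) / len(xs))

h = 1e-4
# Central finite differences approximate K'(0) and K''(0).
K1 = (K(h) - K(-h)) / (2 * h)            # should equal the mean mu
K2 = (K(h) - 2 * K(0.0) + K(-h)) / h**2  # should equal the population variance

mu = sum(xs) / len(xs)
var = sum((x - mu) ** 2 for x in xs) / len(xs)
```

This only illustrates the first two coefficients; higher derivatives at 0 give the higher cumulants in the same way.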


 * I agree that there's no zeroth-order cumulant. But I don't think that's a reason to change the convention to what you've given here. In that version, the coefficient of $$t^n/n!$$ is not the nth cumulant, and that is potentially confusing. Besides, to speak of a zeroth cumulant and say that it's zero regardless of the probability distribution seems harmless at worst. Michael Hardy 00:03, 9 January 2006 (UTC)

I understand your reservations against changing conventions. Note, however, the tempting simplification obtained by differentiation.


 * The (new cumulant generating function of the) degenerate distribution is 0;
 * The (..) normal distribution is $$t$$;
 * The (..) Bernoulli distribution is $$(1+(p^{-1}-1)e^{-t})^{-1}$$;
 * The (..) binomial distribution is $$n(1+(p^{-1}-1)e^{-t})^{-1}$$;
 * The (..) geometric distribution is $$(-1+(1-p)^{-1}e^{-t})^{-1}$$;
 * The (..) negative binomial distribution is $$n(-1+(1-p)^{-1}e^{-t})^{-1}$$;
 * The (..) Poisson distribution is $$\lambda e^t$$.

Bo Jacoby 12:21, 31 January 2006 (UTC)
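As a numerical sanity check of the list (my own addition, not part of the original exchange; p = 0.3 is an arbitrary choice), the Bernoulli entry can be compared against a finite-difference derivative of K(t) = log(1 − p + pe^t):

```python
import math

p = 0.3
h = 1e-6

def K(t):
    """Cumulant generating function of a Bernoulli(p) variable."""
    return math.log(1 - p + p * math.exp(t))

def K1_listed(t):
    """The listed form of K'(t): (1 + (1/p - 1) e^{-t})^{-1}."""
    return 1.0 / (1.0 + (1.0 / p - 1.0) * math.exp(-t))

# Compare a central finite difference of K with the listed closed form.
max_err = max(
    abs((K(t + h) - K(t - h)) / (2 * h) - K1_listed(t))
    for t in [-2.0, -0.5, 0.0, 0.7, 1.5]
)
```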


 * The last point kills your proposal: one wants to be able to speak not only of compositions of cumulant-generating functions, but of compositional inverses in cases where the expected value is not 0. So one wants the graph of the function to pass through (0, 0), with nonzero slope when the expected value is not 0. Michael Hardy 22:54, 31 January 2006 (UTC)

What do you mean? Please explain first and conclude later. The two definitions allow the same operations; the new definition just does not contain a superfluous zero constant term. The graph of the new cumulant-generating function passes through (0, μ) with slope σ². For the normal distribution the function is the straight line μ + σ²t, so curvature shows departure from normality. Bo Jacoby 09:14, 1 February 2006 (UTC)


 * I don't like that kind of definition for the cumulant generating function. Imagine you have a random variable X and a constant a. If K(t) is the cumulant generating function for X, then what is the cumulant generating function for aX? Using the standard definition it's K(at) whereas using your definition it would be aK(at) which is more complicated. Ossi 22:41, 4 April 2006 (UTC)
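Ossi's scaling point rests on K_{aX}(t) = K_X(at) under the standard convention. That identity can be illustrated numerically (a sketch of my own; the multiset and the factor a are arbitrary choices):

```python
import math

xs = [0.0, 1.0, 1.0, 4.0]   # an arbitrary finite multiset
a = 2.5

def K(values, t):
    """log E(exp(t X)) for equally likely values."""
    return math.log(sum(math.exp(t * v) for v in values) / len(values))

# Standard convention: the CGF of aX at t equals the CGF of X at a*t.
diffs = [abs(K([a * x for x in xs], t) - K(xs, a * t)) for t in [-0.4, 0.1, 0.3]]
```

With the differentiated definition K'(t), the same change of variable picks up an extra factor of a, which is the complication Ossi describes.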

I'll comment further on Bo Jacoby's comments some day. But for now, let's note that what is in the article has been the standard convention in books and articles for more than half a century, and Wikipedia is not the place to introduce novel ideas. Michael Hardy 22:55, 4 April 2006 (UTC)

cumulant (encyclopedia)
I came to the article at random: no explanation of what a cumulant is was in view. A TOC was followed by formulas.

We math people love what we do. Let us try to do more: explain what we do (for this, I need help).
 * What class of mathematical object is this?
 * Who uses it, and for what?
 * Are there plainer related concepts to invoke, etc.? Thanks --DLL 18:47, 9 June 2006 (UTC)

P.S. Wolfram, for example, gives links to: Characteristic Function, Cumulant-Generating Function, Fourier Transform, k-Statistic, Kurtosis, Mean, Moment, Sheppard's Correction, Skewness, Unbiased Estimator, Variance. [Pages Linking Here]. Though I cannot tell whether it is pertinent here, maybe a little check might be done? Thanks again. --DLL 18:56, 9 June 2006 (UTC)


 * The very first sentence in the article says what cumulants are. Michael Hardy 21:20, 9 June 2006 (UTC)


 * It only describes what cumulants are, I do not see a formal definition anywhere. As a general rule, shouldn't the first thing in such an article be the definition? Can someone please add the definition? --Innerproduct (talk) 20:45, 2 April 2010 (UTC)
 * No, that's a very very bad proposed general rule. Sometimes the first sentence should be a definition; more often it should not.  One must begin by acquainting the lay reader with the fact that this is a concept in mathematics; sometimes stating the general definition in that same sentence conflicts with that goal. Michael Hardy (talk) 22:27, 2 April 2010 (UTC)

"Innerproduct", I see that you commented nearly four years after the comment you're replying to. It should be perfectly obvious that that comment was about the article as it existed in 2006, and is no longer relevant to the article in its present form.

Your proposed general rule is very bad (even though in some particular cases it makes sense); following it extensively would require people to clean up after you. Michael Hardy (talk) 22:36, 2 April 2010 (UTC)

joint cumulant
(please forgive my english)

I think that in the formula

$$\kappa(X_1,\dots,X_n) =\sum_\pi\prod_{B\in\pi}(|B|-1)!(-1)^{|B|-1}E\left(\prod_{i\in B}X_i\right)$$

the number |B| of elements in B should be replaced by the number $$|\pi|$$ of blocks in $$\pi$$.

For example, in the given case n=3


 * $$\kappa(X,Y,Z)=E(XYZ)-E(XY)E(Z)-E(XZ)E(Y)-E(YZ)E(X)+2E(X)E(Y)E(Z).\,$$

the coefficient of the term E(XYZ) (which corresponds to $$\pi=\{\{1,2,3\}\}$$: only one block of 3 items, i.e. $$|\pi|=1$$ and $$\pi=\{B\}$$ with $$|B|=3$$) is $$1=(|\pi|-1)!$$ and not $$2=(|B|-1)!$$.


 * That is certainly correct in this case, and I think probably more generally. I've changed it; I'll come back and look more closely later. Michael Hardy 17:41, 19 September 2006 (UTC)
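The correction can be verified mechanically. The sketch below (illustrative code of my own; the pmf is an arbitrary choice) lists the five partitions of {1,2,3}, applies the corrected weight (|π|−1)!(−1)^{|π|−1} outside the product over blocks, and checks the result against the displayed expansion for κ(X,Y,Z), taking X = Y = Z so that the joint cumulant is the ordinary third cumulant.

```python
import math

# A small pmf for X; with X = Y = Z, kappa(X, Y, Z) is the third cumulant of X.
vals = [0.0, 1.0, 3.0]
probs = [0.5, 0.3, 0.2]

def E(power):
    """Raw moment E(X^power)."""
    return sum(p * v ** power for v, p in zip(vals, probs))

# The five partitions of {1, 2, 3}, each block recorded by its size only
# (all three variables are the same X, so only block sizes matter).
partitions = [
    [3],        # {{1,2,3}}
    [2, 1],     # {{1,2},{3}}
    [2, 1],     # {{1,3},{2}}
    [2, 1],     # {{2,3},{1}}
    [1, 1, 1],  # {{1},{2},{3}}
]

# Corrected formula: weight (|pi|-1)! (-1)^{|pi|-1} outside the product over blocks.
kappa = sum(
    math.factorial(len(blocks) - 1) * (-1) ** (len(blocks) - 1)
    * math.prod(E(b) for b in blocks)
    for blocks in partitions
)

# Expansion from the article: E(XYZ) minus the three pair terms plus 2 E(X)E(Y)E(Z).
kappa_direct = E(3) - 3 * E(2) * E(1) + 2 * E(1) ** 3
```

With the weight placed per block as (|B|−1)!(−1)^{|B|−1}, the singleton partition would still contribute +E(X)³ per block but the one-block partition would pick up a factor 2, breaking the agreement, which is exactly the point made above.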

Intro
We need an intro --dudzcom 04:52, 24 December 2006 (UTC)


 * Seconded. I'm a stats n00b and without a basic intro, the encyclopaedic content lacks a basic context. Jddriessen 14:12, 3 March 2007 (UTC)

k-statistics
I think we should have a section on k-statistics. Could someone knowledgeable write a section describing them and explaining why they are unbiased estimators for the cumulants. Ossi 18:04, 30 December 2006 (UTC)

I have been trying to find information about unbiased and other estimators for ratios of powers of cumulants. In particular, I am interested in estimators for the particular ratio $$\frac{\kappa_1^2}{\kappa_2}$$. I can use k-statistics to estimate this ratio as simply $$\frac{k_1^2}{k_2}$$, where $$k_n$$ is the nth k-statistic. This should work, but it is a biased estimator. Are there better, unbiased estimators? User:155.101.22.76 (Talk) 28 Oct 2008


 * You may find something about this in Kendall & Stuart Vol. 1. If there is nothing there, it may not be possible to do anything straightforwardly. Unbiasedness may not necessarily be particularly relevant to you, but if it is, you might try the jackknifing and bootstrapping methods for reducing bias. Melcombe (talk) 09:44, 29 October 2008 (UTC)


 * The library at my university does not have Kendall & Stuart Vol. 1. However, they do have K&S Vol. 2. The first chapter of Vol. 2 is on estimation. Problem 17.10 turns out to be very close to what I need. Assuming normally distributed r.v.s and some limits on the allowed values for m and r, the Minimum Variance Unbiased Estimator (MVUE) for $$\kappa_{1}^{r}\,\kappa_{2}^{m}$$ is given by:

$$\sum_{i=0}^{r/2}{\frac{\left(-1\right)^{i}r!\,\Gamma\left(\frac{n-1}{2}\right){k_{1}}^{r-2\,i}\left(\frac{n-1}{2}\,k_{2}\right)^{m+i}}{i!\,\left(r-2\,i\right)!\,\Gamma\left(\frac{n-1}{2}+m+i\right)\left(2\,n\right)^{i}}}$$

In my case the target is $$\kappa_{1}^{2}\,\kappa_{2}^{-1}$$, so r = 2, m = -1, and this reduces to:

$$ \left(\frac{\left(n-3\right)}{\left(n-1\right)}\frac{k_{1}^2}{k_{2}} - \frac{1}{n}\right) $$

This is precisely the same estimator I derived using other methods, also assuming a normal distribution. I did not know that you could reduce bias using jackknifing and bootstrapping methods. Doh! That is great. The distributions I am working with are close to normal. I should be able to use the above MVUE and then reduce any remaining bias using jackknifing or bootstrapping. Thanks. --Stanthomas (talk) 17:34, 30 October 2008 (UTC)
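A quick Monte Carlo check of the claimed unbiasedness (my own sketch; the sample size n = 10 and the parameters μ = σ = 1 are arbitrary choices): under normality, the average of (n−3)/(n−1)·k₁²/k₂ − 1/n over many samples should approach μ²/σ².

```python
import random
import statistics

random.seed(0)
n, mu, sigma = 10, 1.0, 1.0
true_ratio = mu ** 2 / sigma ** 2   # the target kappa_1^2 / kappa_2

def estimate(sample):
    """The estimator (n-3)/(n-1) * k1^2 / k2 - 1/n from the reduction above."""
    k1 = statistics.fmean(sample)            # k-statistic k1 = sample mean
    k2 = statistics.variance(sample)         # k-statistic k2 = unbiased sample variance
    return (n - 3) / (n - 1) * k1 ** 2 / k2 - 1.0 / n

reps = 20_000
avg = statistics.fmean(
    estimate([random.gauss(mu, sigma) for _ in range(n)]) for _ in range(reps)
)
```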

Cumulant "basis"?
It appears that you can reconstruct a function from its cumulants; that is, it seems like the cumulants define a "basis" of sorts the same way the sin and cos functions define a Fourier basis. Of course, a function isn't a linear combination of its cumulants, so it's not a linear basis, but in some sense it still seems like a basis. Comments? 155.212.242.34 (talk) 22:23, 11 December 2007 (UTC)
 * If two finite multisets of numbers have the same cumulant generating function, they are equal. The concept of a random variable is somewhat more general than a multiset of numbers, and complications arise. It is worthwhile to understand multisets before trying to understand random variables. The derivative of the cumulative distribution function of a continuous random variable should be considered the limiting case of a series of derivatives of cumulative distribution functions of finite multisets of numbers. As probability density functions are nonnegative, they do not form a vector space, and so the concept of basis does not immediately apply. Bo Jacoby (talk) 16:33, 19 March 2008 (UTC).

Error in formula
Quote: Some writers prefer to define the cumulant generating function, via the characteristic function, as h(t) where
 * $$   h(t)=\log(E (e^{i t X}))=\sum_{n=1}^\infty\kappa_n \cdot\frac{(it)^n}{n!}=\mu\cdot t - \sigma^2\cdot\frac{ t^2}{2} +\cdots\,.$$

I suppose the formula should be:
 * $$   h(t)=\log(E (e^{i t X}))=\sum_{n=1}^\infty\kappa_n \cdot\frac{(it)^n}{n!}=\mu\cdot i\cdot t - \sigma^2\cdot\frac{ t^2}{2} +\cdots\,.$$

Is there a reference? Bo Jacoby (talk) 00:43, 20 March 2008 (UTC).


 * You're right; the factor of i was missing. I don't think it should be too hard to find references.  I wouldn't be surprised if this is in McCullagh's book. Michael Hardy (talk) 18:06, 21 March 2008 (UTC)
 * References added to article, for this point at least. The Kendall and Stuart ref would be good for many other of the results quoted (but a later edition might be sought out?). Melcombe (talk) 09:18, 17 April 2008 (UTC)
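The corrected leading term can also be confirmed numerically with complex arithmetic (a sketch of my own; the distribution is an arbitrary choice): the derivative of h(t) = log E(e^{itX}) at t = 0 is iμ, not μ.

```python
import cmath

vals = [0.0, 1.0, 3.0]
probs = [0.5, 0.3, 0.2]
mu = sum(p * v for v, p in zip(vals, probs))

def h_fn(t):
    """h(t) = log E(exp(i t X)), the log of the characteristic function."""
    return cmath.log(sum(p * cmath.exp(1j * t * v) for v, p in zip(vals, probs)))

step = 1e-5
deriv = (h_fn(step) - h_fn(-step)) / (2 * step)   # should be i * mu, purely imaginary
```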

Improve intro ?
At the end of the intro, the final sentence says: "This characterization of cumulants is valid even for distributions whose higher moments do not exist." This seems to dangle somewhat... Melcombe (talk) 09:30, 17 April 2008 (UTC)
 * exactly what is referred to by "this characterisation"?
 * it seems to imply there are other characterisations?
 * it seems to imply that cumulants might exist even if higher moments do not exist?


 * Probably could be improved; I'll think about it. When higher moments do not exist, then neither do higher cumulants.  In that case, the characterization of cumulants that says the cumulant-generating function is the logarithm of the moment-generating function is problematic.  That is what is meant.  As far as other characterizations go, yes of course there are. Michael Hardy (talk) 21:10, 17 April 2008 (UTC)

Joint cumulants
I want to know more about Joint Cumulants, but this section made no reference to any books or papers. Any suggestions? Thanks! Yongtwang (talk) 13:47, 13 May 2010 (UTC)
 * You could try the existing Kendall&Stuart reference. It is old but covers the multivariate case, both for theoretical and sample versions of the joint cumulants. Melcombe (talk) 14:52, 14 May 2010 (UTC)
 * Hey, thanks for the information. I am reading it. Yongtwang (talk) 12:05, 16 May 2010 (UTC)

Some properties of the cumulant-generating function
The article states that the cumulant-generating function is always convex (not too hard to prove). I wonder if the converse holds: any convex function (+ maybe some regularity conditions) can be a cumulant-generating function of some random variable. //  st pasha  »  20:03, 2 March 2011 (UTC)
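For what it's worth, the convexity claim can be seen concretely: K''(t) equals the variance of X under the exponentially tilted distribution (weights proportional to e^{tx}), hence is nonnegative. A small numeric illustration of that identity (my own sketch; the multiset is an arbitrary choice):

```python
import math

xs = [-1.0, 0.0, 2.0, 2.0]   # an arbitrary finite multiset

def K(t):
    """Cumulant generating function log E(exp(t X))."""
    return math.log(sum(math.exp(t * x) for x in xs) / len(xs))

def tilted_variance(t):
    """Variance of X under weights proportional to exp(t x); equals K''(t)."""
    w = [math.exp(t * x) for x in xs]
    z = sum(w)
    m1 = sum(wi * x for wi, x in zip(w, xs)) / z
    m2 = sum(wi * x * x for wi, x in zip(w, xs)) / z
    return m2 - m1 ** 2

h = 1e-4
# Second central differences of K agree with the tilted variance, which is >= 0.
checks = []
for t in [-1.0, -0.2, 0.0, 0.5, 1.0]:
    kpp = (K(t + h) - 2 * K(t) + K(t - h)) / h ** 2
    checks.append((kpp, tilted_variance(t)))
```

This does not settle the converse question asked above, of course; it only makes the forward direction tangible.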

Applicability to Quantum Mechanics
I was reading this article to get a more broad background on the cumulant expansion, which is useful in quantum mechanical simulations of spectroscopic signals (absorption, pump-probe, raman, etc). I was somewhat surprised not to see quantum mechanics mentioned at all in the article. The source that I'm currently following on this topic:

Shaul Mukamel's "Principles of Nonlinear Optical Spectroscopy" (ISBN: 0-19-513291-2).

The expansions are debuted in Ch2, "Magnus Expansion". Ch 8 is also devoted entirely to their practical use.

Side note: It amused me that there were "citation needed" marks on the phrase, "Note that expectation values are sometimes denoted by angle brackets". This notation is so ubiquitous in quantum mechanics that one could literally pick up any quantum textbook and insert it as a "source" to verify that this is common practice. Certainly the book I just mentioned could count as such a source. —Preceding unsigned comment added by 24.11.171.13 (talk) 23:26, 20 May 2011 (UTC)

I looked in the comments specifically to discuss the "citation needed" marks for angle bracket denotation of expectation values. It's like asking for a citation that addition is sometimes denoted with a plus sign. I'm going to remove it and someone can put it back in if they feel it's really necessary. Gregarobinson (talk) 16:14, 17 June 2011 (UTC)

Should "Relation to statistical physics" be deleted?
Some formulas in section "Relation to statistical physics" are wrong. Instead of:
 * $$Z(\beta) = \langle\exp(-\beta E)\rangle$$

the formula should read:
 * $$Z(\beta) = \sum_i\exp(-\beta E_i)$$

as found in any relevant textbook or the wiki page for the partition function itself. This breaks the following argument linking $$F(\beta)$$ to $$\log Z$$, the cumulant generating function for the energy, as $$Z$$ is no longer an average. The same criticism holds for the grand potential at the end of the section, which is also a sum, not an average.

The equations linking E and C to the corresponding cumulants of the energy are still valid, since these cumulants equal the moments (section "Some properties of cumulants"). However, the interpretation in terms of moments is quite widespread, and in fact the equation:
 * $$E = \langle E_i \rangle,$$

is considered a postulate, in which the energy is linked to an average. A usage of cumulants in statistical mechanics that cannot be expressed more naturally in terms of moments should be added, if such a usage exists.

Furthermore, the section doesn't cite any source, and none of the article's sources seems relevant at first sight.

These three points make me feel the whole section is rather weak. I suggest it should be deleted. --Palatosa (talk) 19:53, 19 April 2013 (UTC)
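Whichever way the section goes, one fact worth keeping in view (my own illustration, with an arbitrary three-level spectrum): with Z(β) = Σᵢ exp(−βEᵢ), the quantity −∂log Z/∂β is the Boltzmann-weighted mean energy and ∂²log Z/∂β² is the energy variance. Multiplying Z by a constant (e.g. 1/N, turning the sum into an average) only shifts log Z by a constant and leaves these derivatives unchanged, which is why the cumulant identities survive either convention.

```python
import math

energies = [0.0, 1.0, 2.5]   # an arbitrary discrete spectrum E_i
beta = 0.7

def logZ(b):
    """log of the partition function Z(b) = sum_i exp(-b * E_i)."""
    return math.log(sum(math.exp(-b * e) for e in energies))

# Boltzmann-weighted mean and variance of the energy at inverse temperature beta.
w = [math.exp(-beta * e) for e in energies]
z = sum(w)
mean_E = sum(wi * e for wi, e in zip(w, energies)) / z
var_E = sum(wi * e * e for wi, e in zip(w, energies)) / z - mean_E ** 2

h = 1e-4
d1 = (logZ(beta + h) - logZ(beta - h)) / (2 * h)                  # = -<E>
d2 = (logZ(beta + h) - 2 * logZ(beta) + logZ(beta - h)) / h ** 2  # = Var(E)
```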

I very much liked the section, except that I was very disappointed that it did not actually include the definition of the CGF for this system. Chris2crawford (talk) 11:44, 17 July 2020 (UTC)

Minor error in the section "Some properties of the cumulant generating function" ?
There is something strange about the statement "The cumulant-generating function will have vertical asymptote(s) at the infimum of such c, if such an infimum exists etc". Note that x is a negative number here. Being O(exp(2x)) is a tougher requirement than being O(exp(x)) when x tends to minus infinity. You get the toughest requirement possible by taking the supremum over c. But changing infimum to supremum doesn't seem right either. Should there be some sign change also? — Preceding unsigned comment added by 89.236.1.222 (talk) 22:12, 17 October 2014 (UTC)

Given definition is only a special case
The definition given requires the moment generating function to exist. Rather than change the definition to use the characteristic function, we just need a note that the relation between moments and cumulants given later, can be used as the definition. TerryM--re (talk) 04:18, 12 February 2015 (UTC)
 * If the definition given applies to the examples given, then generalizations may be postponed or omitted in order not to confuse the readers unnecessarily. Bo Jacoby (talk) 19:26, 25 May 2015 (UTC).

Problems with statistics
This article is very good and informative, but someone has flagged it for improved citations. Standard approaches do not apply to mathematical subjects, where inline citations are not as frequent; usually one cites a theorem or a result, with plenty of references at the back. Limit-theorem (talk)

Raw vs. standardized cumulants
The article on moments includes a table mapping named properties of distributions (mean, variance, skewness, etc.) to the various kinds of moments and cumulants. This table suggests there is a distinction between "raw" and "standardized" cumulants, though I can find no specific explanation of how these two concepts differ. I gather that raw cumulants are the ordinary cumulants discussed in this article, while standardized cumulants are those computed for distributions normalized to have zero mean and unit variance. Accordingly, the first two standardized cumulants are 0 and 1, but all following standardized cumulants appear to be something like $$\frac{\kappa_i}{\kappa_2^{i/2}}$$ for i > 2 (extrapolated from the formulae for skewness and kurtosis when expressed in terms of raw cumulants). Rriegs (talk) 21:58, 1 April 2016 (UTC)
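The extrapolation can be checked numerically (a sketch of my own; the pmf is an arbitrary choice): computing κ₂, κ₃, κ₄ from central moments and forming κᵢ/κ₂^{i/2} reproduces the skewness and, for i = 4, the excess kurtosis of the standardized variable.

```python
vals = [0.0, 1.0, 2.0, 6.0]
probs = [0.4, 0.3, 0.2, 0.1]

mu = sum(p * v for v, p in zip(vals, probs))

def central_moment(k):
    """Central moment E[(X - mu)^k]."""
    return sum(p * (v - mu) ** k for v, p in zip(vals, probs))

# Cumulants in terms of central moments:
# kappa_2 = mu_2, kappa_3 = mu_3, kappa_4 = mu_4 - 3 mu_2^2.
k2, k3 = central_moment(2), central_moment(3)
k4 = central_moment(4) - 3 * k2 ** 2

# Standardized cumulants kappa_i / kappa_2^{i/2} for i = 3, 4.
std3 = k3 / k2 ** 1.5
std4 = k4 / k2 ** 2

# Direct computation on the standardized variable z = (x - mu) / sigma.
sigma = k2 ** 0.5
z_moment = lambda k: sum(p * ((v - mu) / sigma) ** k for v, p in zip(vals, probs))
skewness = z_moment(3)
excess_kurtosis = z_moment(4) - 3.0
```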

As an aside, the value $$\frac{\kappa_1^2}{\kappa_2}$$ (i.e. the mean squared divided by the variance) discussed elsewhere in this talk page is just the above formula squared for i = 1, though this is not a standardized cumulant. Is there a name or special significance for this value? Rriegs (talk) 21:58, 1 April 2016 (UTC)

On the Multilinearity of Joint Cumulants
Can anyone point to a reference which proves the multi-linearity property for joint cumulants? Or if the possible give a short argument? I find it very intriguing. Manoguru (talk) 06:21, 25 August 2017 (UTC)

Duplicated content
Thank you for your recent edits. However, they seem to duplicate sections already present later in the article. I leave it to you how best to condense things so that there isn't that duplication. For example, the discussion of the relationship of cumulants to central moments is later done in terms of $κ$ and $μ$ relationships. — Q uantling (talk | contribs) 02:48, 9 November 2021 (UTC)

Wrong formula
The section Cumulants_of_some_discrete_probability_distributions contains the wrong formula
 * $$K''(t)=g'(t)\cdot(1+e^t\cdot (\varepsilon^{-1}-1))^{-1}$$

Replacing it with
 * $$K''(t)=\mu\varepsilon e^t(\varepsilon-(\varepsilon-1)e^t)^{-2}$$

https://www.wolframalpha.com/input/?i2d=true&i=differentiate%5C%2840%29m*Power%5B%5C%2840%291%2Bg*%5C%2840%29Power%5Be%2C-t%5D-1%5C%2841%29%5C%2841%29%2C-1%5D%5C%2841%29

Bo Jacoby (talk) 16:38, 10 November 2021 (UTC)
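The replacement can be double-checked without a computer algebra system. Taking K'(t) = μ(1 + ε(e^{−t} − 1))^{−1} (the expression differentiated in the WolframAlpha link; μ = 2 and ε = 0.4 below are arbitrary choices), a finite-difference derivative agrees with the proposed K''(t):

```python
import math

mu, eps = 2.0, 0.4
h = 1e-6

def K1(t):
    """K'(t) = mu * (1 + eps*(exp(-t) - 1))^{-1}, as in the linked derivative."""
    return mu / (1.0 + eps * (math.exp(-t) - 1.0))

def K2_proposed(t):
    """Proposed K''(t) = mu * eps * e^t * (eps - (eps - 1) * e^t)^{-2}."""
    return mu * eps * math.exp(t) / (eps - (eps - 1.0) * math.exp(t)) ** 2

# Compare a central finite difference of K' with the proposed closed form.
max_err = max(
    abs((K1(t + h) - K1(t - h)) / (2 * h) - K2_proposed(t))
    for t in [-1.0, -0.3, 0.0, 0.4, 1.0]
)
```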

Definition of joint cumulant(s?)
Currently it starts by mentioning a (joint) cumulant generating function $$K(t_1,t_2,\dots,t_n)=\log E(\mathrm e^{\sum_{j=1}^n t_j X_j})$$, and then it jumps to: A consequence is that $$\kappa(X_1,\dots,X_n) =\sum_\pi (|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}X_i\right)$$ ...

I don't think this is very clear. At least I don't understand it. My confusion starts from the fact that the cumulants form a sequence of numbers $$\kappa_1, \kappa_2, \dots$$ which are the coefficients of a univariate power series. Here $$K(t_1,t_2,\dots,t_n)$$ is multivariate. So I would expect not one single joint cumulant but a collection of joint cumulants $$\kappa_{{i_1},...,{i_n}}(X_1,...,X_n)$$.

If someone could clarify this, it would be fantastic.

Also a (book/article) reference for the joint cumulants would be very welcome, as in many books only the (simple) cumulants are treated. Bongilles (talk) 08:42, 24 November 2023 (UTC)
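For the n = 2 case the quoted statement can at least be made concrete (a sketch of my own; the joint pmf is an arbitrary choice): the mixed partial ∂²K/∂t₁∂t₂ of K(t₁,t₂) = log E(e^{t₁X+t₂Y}) at (0,0) is the joint cumulant κ(X,Y), which equals Cov(X,Y).

```python
import math

# An arbitrary joint pmf on pairs (x, y).
pmf = {(0.0, 0.0): 0.3, (0.0, 1.0): 0.2, (1.0, 0.0): 0.1, (1.0, 2.0): 0.4}

def K(t1, t2):
    """Joint cumulant generating function log E(exp(t1 X + t2 Y))."""
    return math.log(sum(p * math.exp(t1 * x + t2 * y) for (x, y), p in pmf.items()))

h = 1e-4
# Central mixed second difference approximates d^2 K / dt1 dt2 at (0, 0).
mixed = (K(h, h) - K(h, -h) - K(-h, h) + K(-h, -h)) / (4 * h * h)

ex = sum(p * x for (x, y), p in pmf.items())
ey = sum(p * y for (x, y), p in pmf.items())
cov = sum(p * (x - ex) * (y - ey) for (x, y), p in pmf.items())
```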


 * That definitely is confusing. It appears that $$\kappa(X_1, \ldots, X_n)$$ means the cumulant with indices $$i_1 = \ldots = i_n = 1$$ and index $$i_j = 0$$ for $$j > n$$ if there are additional dimensions beyond n. If I have that right, it should be clarified or changed. — Q uantling (talk | contribs) 18:14, 28 November 2023 (UTC)
 * I made an attempt to address this issue. Your feedback and/or edits are welcome. — Q uantling (talk | contribs) 19:53, 28 November 2023 (UTC)
 * Thank you. From what I read, I agree that the joint cumulant is the coefficient with all indices equal to 1. See Section 3.1 in [Peccati, Taqqu, 2011].
 * I have not found anywhere a terminology for the other coefficients of the multivariate Maclaurin series, so it seems that the convention is that there is only one joint cumulant for a given collection of random variables (the coefficient with all indices equal to 1). The other coefficients do not seem to have a name.
 * I made changes accordingly. I also added a tiny bit of structure by adding the subsection title Relation with mixed moments after the definition.
 * In this subsection, the first equation and its description are very unclear (to me). I'm not sure what the indices $$i_1$$ and $$i_2$$ refer to in the description. On the left-hand side I have replaced A_1,...,A_n by X_1,...,X_n, but that might have been a mistake. Bongilles (talk) 10:09, 30 November 2023 (UTC)

I have made further changes. This should be clearer and more consistent now. I also added references. Bongilles (talk) 10:09, 30 November 2023 (UTC)


 * Thank you for the efforts. Overall, this is a much needed improvement.  However, I don't like that the first definition is for $$\kappa_{11\dots}$$ rather than for $$\kappa_{k_1 k_2 \dots}$$.  All of the latter are cumulants.  Instead of defining $$\kappa(X_1, \dots, X_n)$$ (as $$\kappa_{11\dots}$$), I'd jump straight to defining $$\kappa_{k_1 k_2 \dots}$$.  Later, we can introduce the $$\kappa(X_1, \dots, X_n)$$ notation; there we will indicate that its simplest meaning is $$\kappa_{11\dots}$$ but, as later explained, it is also useful when some variable is repeated, as in the $$\kappa(X, X, Z)$$ example.  Also, I would go back to using $$\kappa(A_1, \dots, A_n)$$ because we then have the freedom to write $$A_1 = A_2 = X_1$$ and $$A_3 = X_3$$ to be equivalent to $$\kappa_{201}$$.  — Q uantling (talk | contribs) 18:20, 30 November 2023 (UTC)
 * I understand the logic in calling all the coefficients joint cumulants. Have you seen this terminology used in some books? If yes, which one? Otherwise, I would refrain from giving a more general meaning to the term joint cumulant than what is standard in the literature. Bongilles (talk) 19:30, 30 November 2023 (UTC)
 * I have Kendall and Stuart at home; I'll take a look. — Q uantling (talk | contribs) 21:26, 30 November 2023 (UTC)
 * Concerning A vs X: the point you make about equivalences has a better place in the other paragraph I created, where I explained that all the coefficients are joint cumulants if we repeat variables.
 * I find that including this within the mixed-moments formula is rather confusing. Bongilles (talk) 19:42, 30 November 2023 (UTC)