Wikipedia:Reference desk/Archives/Mathematics/2006 September 24

Random Variable X and its Mean Cancelling
If I have a random variable X with a mean mu(X), and both are in the same equation, one being negative, can the two simply cancel out?

Basically, how do the two relate?


 * Like in X − mu(X)? No, they won't simply cancel out. This is a new random variable, say Z. Have you read the articles Random variable and Expected value? Take for example that X is: the result of throwing two dice and adding the numbers of pips. The outcomes of X range from 2 to 12, with an expected value (arithmetic population mean) of 7. Suppose you throw the dice 10 times and observe for X this sample: [3, 7, 10, 9, 9, 7, 2, 7, 2, 7]. (I actually threw two dice ten times here.) That gives for Z = X − 7 this sample: [−4, 0, 3, 2, 2, 0, −5, 0, −5, 0]. Occasionally the outcome of Z is 0, but the random variable Z itself is clearly not the constant 0. However, mu(Z) = 0. To see this, we need three facts: mu(V) = E(V) for a random variable V – which is true by definition of mu(·); E(X − Y) = E(X) − E(Y); and E(C) = C for a constant C. Then mu(Z) = E(Z) = E(X − mu(X)) = E(X) − E(mu(X)) = mu(X) − mu(X) = 0. Indeed, the sample mean for our sample of 10 outcomes for Z is −0.7 – not quite 0 due to random fluctuations, but fairly close. --Lambiam Talk 01:13, 24 September 2006 (UTC)
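Lambiam's dice experiment is easy to reproduce in code. A minimal sketch (the simulation and the helper name `throw_z` are illustrative additions, not from the thread): simulate many throws of X, the sum of two fair dice, and check that the sample mean of Z = X − 7 hovers near 0.

```python
# Illustrative simulation (not from the thread): estimate the mean of
# Z = X - mu(X), where X is the sum of two fair dice and mu(X) = 7.
import random

random.seed(42)  # fixed seed so the run is reproducible

def throw_z(n_throws):
    """Return the sample mean of Z = X - 7 over n_throws throws of two dice."""
    total = 0.0
    for _ in range(n_throws):
        x = random.randint(1, 6) + random.randint(1, 6)  # one outcome of X
        total += x - 7                                   # the matching outcome of Z
    return total / n_throws

sample_mean_z = throw_z(100_000)
print(sample_mean_z)  # close to 0, up to random fluctuation
```

With 100,000 throws the standard error of the mean is about 0.008, so the printed value should be very near zero even though Z itself is certainly not the constant 0.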

Okay, but in the long run, they will cancel? Like if I have a function Y equal to the equation you said, Y = X − mu(X), then E(Y) will be 0, simply due to the rule that X can be transformed into mu(X)?


 * Yes, E(Y)=0, but I'm not sure what you meant by "X can be transformed into mu (X)". -- Meni Rosenfeld (talk) 16:38, 24 September 2006 (UTC)

Well, in order for E(Y) = 0, X must be equal to mu(X) to cancel... how does this come about? How do you show that E(Y) = 0?


 * That's what I showed above, except that I called it Z instead of Y. --Lambiam Talk 16:59, 24 September 2006 (UTC)

Wow, sorry, you are completely correct; that makes perfect sense, thank you. Just as a final clarification, the E(mu(X)) is the part that pertains to E(C) = C for constant C?


 * Exactly. E(X) (aka μ(X)) is a constant, therefore E(E(X)) = E(X). -- Meni Rosenfeld (talk) 17:11, 24 September 2006 (UTC)
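The identities used in this thread, E(X − E(X)) = E(X) − E(E(X)) = 0, can also be verified exactly by enumerating all 36 equally likely outcomes of the two dice. This sketch (not from the thread) uses exact rational arithmetic so there is no floating-point fuzz:

```python
# Exact check (illustrative, not from the thread): enumerate all 36 equally
# likely two-dice outcomes to verify mu(X) = 7 and mu(Z) = mu(X - mu(X)) = 0.
from fractions import Fraction
from itertools import product

outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]  # all values of X
mu_x = Fraction(sum(outcomes), len(outcomes))                  # E(X), exactly
mu_z = Fraction(sum(x - mu_x for x in outcomes), len(outcomes))  # E(X - E(X))

print(mu_x, mu_z)  # 7 0
```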

The other delta function?
OK, so I am trying to understand how the Fourier transform of a pulse train in the time domain gives also a pulse train in the frequency domain. Actually I'm stuck on a little detail. I get that: $$F[\delta(t)] = \int \delta(t)\, e^{-jwt}\, dt = 1$$, that $$F^{-1}[1] = \int e^{jwt}\, dw = \frac{1}{jt} e^{jwt}$$, and that $$F^{-1}F[\delta(t)] = \int\left(\int \delta(t)\, e^{-jwt}\, dt\right) e^{jwt}\, dw = \frac{1}{jt} e^{jwt}$$. And that in the end, you make the jump and say that $$\delta(t) = \frac{1}{jt} e^{jwt}$$...

I don't see it. Did I miss something? Does anybody have some intuition (hopefully on physical grounds) for how the expression $$\frac{1}{jt} e^{jwt}$$ is equivalent to a delta function? Please? --crj


 * What makes you think a pulse train in the time domain should give a pulse train in the frequency domain? On the contrary, it should be all over the place in the frequency domain (see "Localization property" in the article Continuous Fourier transform). This should be intuitively obvious: you can't assign any one frequency to a single spike. It also fits with your result: 1, which is hardly a spike. In the expression you give for $$F^{-1}{[1]}$$, there is an occurrence of the variable $$w$$ (the traditional notation is $$\omega$$). That can't be right; the result should only depend on $$t$$ . This is not an indefinite integral (antiderivative; primitive function) but a definite integral for $$w$$ from $$-\infty$$ to $$+\infty$$. --Lambiam Talk 21:30, 24 September 2006 (UTC)
 * Oops. I meant to type t instead of w after the integration. There are some notes on the Fourier transform of a pulse train in the article Dirac comb. Maybe I am doing the problem the wrong way, but something smells fishy here... because what happens at t = 0? Oh dear. Thanks anyway. --crj 00:48, 25 September 2006 (UTC)


 * Whoa! Careful there. The transform of a single impulse is a constant; the transform of a periodic sequence of impulses is again a periodic sequence of impulses, whose period is inversely proportional to the original period. --KSmrqT 22:34, 24 September 2006 (UTC)
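KSmrq's point about the inversely proportional periods can be illustrated with a small discrete analogue (a sketch, not from the thread; the naive `dft` and `spike_spacing` helpers are illustrative names): in an N-sample window, an impulse train of period P transforms to an impulse train whose nonzero frequency bins are spaced N/P apart, so halving the time-domain period doubles the frequency-domain spacing.

```python
# Discrete analogue (illustrative, not from the thread): the DFT of a
# periodic impulse train is again a periodic impulse train, with the
# frequency-domain spacing inversely proportional to the time-domain period.
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def spike_spacing(P, N=64):
    """Spacing between nonzero DFT bins of an impulse train of period P."""
    x = [1.0 if n % P == 0 else 0.0 for n in range(N)]  # impulses at 0, P, 2P, ...
    X = dft(x)
    spikes = [k for k in range(N) if abs(X[k]) > 1e-6]  # indices of nonzero bins
    return spikes[1] - spikes[0]

print(spike_spacing(4), spike_spacing(8))  # 16 8  (spacing is N/P)
```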


 * So the Dirac delta function is a tiny bit confusing? Since it's not really a normal function at all, that's not surprising. One way we approach it is through its properties within an integral. That is,
 * $$\int_{-\infty}^{+\infty} \delta(t-t_0)\, f(t)\, dt$$
 * selects the value $$f(t_0)$$. We can be more formal by using a limiting process. For example, we know that a Gaussian bell curve,
 * $$\frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right),$$
 * integrates to 1, and can be made as narrow as we like by letting σ approach zero. We also know (and can easily verify) that the Fourier transform of a Gaussian is again a Gaussian, but with width inversely proportional to σ. As we pinch the Gaussian to its delta-function limit, its transform spreads and flattens to a constant.
 * A delta function cannot be written as you propose, as a simple exponential, which may account for your confusion.
 * If making a single impulse requires some "funny business", making an impulse train requires more. But perhaps we should stop here for now. --KSmrqT 23:24, 24 September 2006 (UTC)
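The limiting argument above can be checked numerically. The sketch below (not from the thread; the transform convention $$F(w) = \int f(x)\, e^{-iwx}\, dx$$ and the helper name `gauss_ft` are assumptions) approximates the transform of a unit-area Gaussian of width sigma by a Riemann sum and compares it with the analytic answer exp(−sigma² w² / 2): as sigma shrinks, the transform flattens toward the constant 1.

```python
# Numeric sketch (assumed convention: F(w) = integral of f(x) e^{-iwx} dx).
# The transform of a unit-area Gaussian of width sigma is exp(-sigma^2 w^2 / 2),
# itself a Gaussian of width 1/sigma; shrinking sigma flattens it toward 1.
import math

def gauss_ft(sigma, w, half_width=20.0, steps=20001):
    """Riemann-sum approximation of the Fourier transform at frequency w."""
    dx = 2 * half_width / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = -half_width + i * dx
        f = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += f * math.cos(w * x) * dx  # imaginary part vanishes by symmetry
    return total

for sigma in (1.0, 0.1):
    print(sigma, gauss_ft(sigma, 2.0))  # approaches 1 as sigma -> 0
```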

According to the Dirac comb article the Fourier transform of an impulse train is: $$\sum_{n=-\infty}^{\infty} \delta(t - n T) \quad \Longleftrightarrow \quad {1\over T}\sum_{k=-\infty}^{\infty} \delta \left( f - {k\over T} \right) \quad = \sum_{n=-\infty}^{\infty} e^{-i2\pi fnT}$$. My question is, on what grounds are the last two terms (the sum of delta functions in the frequency domain, and the sum of complex exponentials) equivalent? -- Crj 02:34, 25 September 2006 (UTC)
 * Let me rephrase the question:
 * Let's look at the symmetric partial sum. $$\sum_{n=-N}^{N} e^{-i2\pi fnT} = \frac{e^{-2\pi ifT(N+1)}-e^{2\pi ifTN}}{e^{-2\pi ifT}-1} = \frac{\sin 2\pi fT(N+{1\over 2})}{\sin \pi fT}$$. Now let N approach infinity. It's possible to check that this sum approaches infinity if f = k/T (with integer k), oscillates boundedly otherwise, and that its integral over any interval containing exactly one point where f = k/T approaches 1/T. So, in the limit, it behaves as the sum of delta functions $${1\over T}\sum_{k=-\infty}^{\infty} \delta\left(f - {k\over T}\right)$$. Conscious 11:53, 25 September 2006 (UTC)
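This partial-sum argument can be checked numerically. A quick sketch (T = 1 assumed; not from the thread): the symmetric partial sum agrees with the Dirichlet-kernel closed form $$\sin(2\pi f(N+{1\over 2}))/\sin(\pi f)$$, and its integral over one period of f is exactly 1 for every N, just as a unit-weight delta would give.

```python
# Numeric sketch with T = 1 (illustrative, not from the thread):
# D_N(f) = sum_{n=-N}^{N} exp(-2 pi i f n) equals the Dirichlet-kernel
# closed form sin(2 pi f (N + 1/2)) / sin(pi f), and its integral over
# one period (-1/2, 1/2) is 1 for every N.
import cmath
import math

def partial_sum(f, N):
    """Directly evaluated symmetric partial sum D_N(f)."""
    return sum(cmath.exp(-2j * math.pi * f * n) for n in range(-N, N + 1)).real

def closed_form(f, N):
    """Dirichlet-kernel closed form, valid for f not an integer."""
    return math.sin(2 * math.pi * f * (N + 0.5)) / math.sin(math.pi * f)

N = 25
print(abs(partial_sum(0.3, N) - closed_form(0.3, N)))  # ~0: the two forms agree

# Midpoint-rule integral of D_N over one period (-1/2, 1/2).
steps = 4000
integral = sum(partial_sum(-0.5 + (i + 0.5) / steps, N) for i in range(steps)) / steps
print(integral)  # ~1, independent of N: only the n = 0 term survives integration
```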


 * The question is based on an elementary slip. If the sum 1+2+3 equals the sum 2+2+2, we have no grounds for assuming equality between respective terms. This is true for infinite sums as well. --KSmrqT 14:41, 25 September 2006 (UTC)
 * By the way, you are wrong in saying (in your initial post) that $$F^{-1}[1] = \frac{1}{it} e^{i\omega t}$$. You have to use the definite integral, i.e. $$F^{-1}[1] = \int_{-\infty}^{+\infty} e^{i\omega t}\, d\omega$$, to obtain the result. Conscious 17:02, 25 September 2006 (UTC)


 * The question of the equivalence of the sum of deltas and the sum of exponentials is discussed in the article Nyquist–Shannon sampling theorem (in the Mathematical basis for the theorem section). There, it is explained that the sum of exponentials only agrees with the sum of deltas in the sense of tempered distributions. This is slightly weaker than pointwise equality. -- Fuzzyeric 02:26, 28 September 2006 (UTC)

Wow, this turned out to be a funny business. My difficulty really was in recovering the delta function intact after taking the Fourier transform of an impulse train: "delta functions in... delta functions out", or so I thought. Thanks for all the responses! -- Crj 14:17, 26 September 2006 (UTC)