Wikipedia:Reference desk/Archives/Mathematics/2012 August 26

= August 26 =

Rapidly oscillating function times a slowly oscillating function
As my textbook was presenting the solution to a (coupled) ODE, the following equations came up: $$\frac{\omega_1}{4}\left( e^{i(\omega_0+\omega)t}+ e^{i(\omega_0 -\omega)t}\right)d(t)=i\dot{c}(t), \ \frac{\omega_1}{4}\left( e^{-i(\omega_0+\omega)t}+ e^{-i(\omega_0 -\omega)t}\right)c(t)=i\dot{d}(t)$$.

$$c(t)$$ and $$d(t)$$ are the unknown functions we're solving for, and $$\omega$$, $$\omega_0$$, and $$\omega_1$$ are given constants with $$\omega_1 \ll \omega, \omega_0$$.

At this point the author claims that "[u]nless $$\omega$$ is chosen to be very near to $$\omega_0$$, both the exponentials in [the above equations] are rapidly oscillating functions that when multiplied by a more slowly oscillating function such as $$c(t)$$ or $$d(t)$$, whose time scale is set by $$\omega_1$$, will cause the right-hand side of [the equations] to average to zero." He then goes on to assume that $$\omega=\omega_0$$ and sets the terms with $$e^{\pm i(\omega_0+\omega)t}$$ to zero.

That $$c(t)$$ and $$d(t)$$ oscillate more slowly than the exponentials in the equations is pretty clear from the fact that their derivatives are proportional to $$i\omega_1$$. What I don't see is why a rapidly oscillating function vanishes when it is multiplied by a more slowly oscillating one. It certainly isn't true for, say, $$\sin(t)\cos(100t)$$.

Can anyone decipher what he means? Thanks. 65.92.7.148 (talk) 03:24, 26 August 2012 (UTC)


 * The rapidly oscillating function when multiplied by a more slowly oscillating one does not vanish, but its average vanishes. Bo Jacoby (talk) 11:46, 26 August 2012 (UTC).


 * First: why does the average vanish? (never mind, I get it now; it's really obvious) Second: why does a vanishing average allow you to remove the function wholesale? 65.92.7.148 (talk) 16:36, 26 August 2012 (UTC)
 * The average tends to zero for (fast oscillation tending to infinity) essentially by the Riemann–Lebesgue lemma. (This is similar to the weak convergence of sin(nx) to 0, which is given as an example in the weak convergence article). You could also check out Stationary phase approximation. —Kusma (t·c) 17:19, 26 August 2012 (UTC)
 * Thanks. Any idea why the function can just be ignored if its average is zero? 65.92.7.148 (talk) 18:22, 26 August 2012 (UTC)
 * It is just an approximation. But a reason to ignore it is that not just its average is zero (well, that is true for any periodic function), but that as $$\omega\to\infty$$, any local average of it tends to zero as well. So if we only look at the function "on time scales much larger than its period" (or if we integrate over it), it somewhat looks like the zero function. (I'm not sure I'm helping here, so ignore me if I am confusing you. I don't quite know what your textbook is trying to do, but I think that at the heart of it is some form of weak convergence). —Kusma (t·c) 18:43, 26 August 2012 (UTC)
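The averaging claim can be illustrated numerically (a quick sketch added for illustration, not part of the original discussion): the product $$\sin(t)\cos(\omega t)$$ stays of order one pointwise, but its average over a fixed window decays as $$\omega$$ grows, consistent with the Riemann–Lebesgue lemma.

```python
import numpy as np

def local_average(omega, t0=0.0, window=1.0, n=200_001):
    """Average of sin(t) * cos(omega * t) over [t0, t0 + window],
    approximated by the mean of a dense uniform sample."""
    t = np.linspace(t0, t0 + window, n)
    return float(np.mean(np.sin(t) * np.cos(omega * t)))

# The pointwise product stays O(1), but its local average decays roughly
# like 1/omega as the fast frequency grows:
for omega in (10.0, 100.0, 1000.0):
    print(omega, local_average(omega))
```

This is the sense in which the counter-rotating terms "average to zero" on the slow time scale set by $$\omega_1$$.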

This integration question is taken from WP:WIKISPEAK
This might have been asked before, but I can't find it in the archives: how do you evaluate $$\int_0^\infty \frac{\cos(x)-1}{x^2} \,dx$$? — Preceding unsigned comment added by Ibicdlcod (talk • contribs) 06:28, 26 August 2012 (UTC)
 * I'm not sure if this is okay, but try writing $$\cos(x) = 1 - 2\sin^2(x/2)$$, so that the integral becomes $$\int_0^\infty \frac{\cos(x)-1}{x^2}\,dx = \int_0^\infty \frac{-2\sin^2(x/2)}{x^2}\,dx = -\frac{1}{2} \int_0^\infty \frac{\sin^2(x/2)}{(x/2)^2}\,dx,$$ i.e. the remaining integrand is $$\operatorname{sinc}^2(x/2)$$, which someone else has solved via Fourier transform. Hopefully... 94.72.217.246 (talk) 10:55, 26 August 2012 (UTC)
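As a sanity check on this reduction (an illustrative numerical sketch added here, not part of the original thread): $$\int_0^\infty \operatorname{sinc}^2(u)\,du = \pi/2$$, and substituting $$u = x/2$$ in the last integral gives $$-\tfrac{1}{2}\cdot 2\cdot \tfrac{\pi}{2} = -\tfrac{\pi}{2}$$. The $$\pi/2$$ value can be confirmed numerically with a large cutoff (the tail beyond $$L$$ contributes less than $$1/L$$):

```python
import numpy as np

# numpy's sinc(x) is the normalized sin(pi*x)/(pi*x), so the unnormalized
# sinc(u) = sin(u)/u used above is np.sinc(u / np.pi).
L = 2000.0                      # cutoff; the tail beyond L contributes < 1/L
n = 2_000_001
u = np.linspace(0.0, L, n)
g = np.sinc(u / np.pi) ** 2
du = L / (n - 1)
integral = (g.sum() - 0.5 * (g[0] + g[-1])) * du   # trapezoid rule
print(integral)                 # close to pi/2 ~ 1.5708
```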


 * Wolfram link. hydnjo (talk) 13:02, 26 August 2012 (UTC)

Contour integration is a lot easier; in fact, it is so easy that you can do it in your head, this way. The integral is one half of the integral from minus to plus infinity, and cos(x) - 1 = Re[exp(i x) - 1]. Modify the integration interval by leaving out the part from minus epsilon to plus epsilon (taking the limit epsilon to zero when we're done, of course); this allows us to move the Re operator outside the integral and work with (exp(i z) - 1)/z^2, which has a simple pole at z = 0. Close the contour by a half circle in the upper half plane and a small half circle in the upper half plane connecting the points minus epsilon and plus epsilon. The big half circle's contribution tends to zero as the radius goes to infinity; the small half circle's contribution is not zero, and you need to subtract this term.

Then, since there are no poles inside the contour, the contour integral is zero, and you are left with minus the small half circle from minus to plus epsilon in the limit that epsilon tends to zero. That is, of course, the half circle from plus to minus epsilon, and by Plemelj's theorem it is pi i times the residue at zero.

We have

exp(i z) = 1 + i z + O(z^2),

so the residue is i, and pi i times i gives -pi for the integral from minus to plus infinity; divide by two to get the integral from zero to infinity, and you find the result -pi/2. Count Iblis (talk) 15:49, 26 August 2012 (UTC)
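The residue computation at the heart of this argument can be checked symbolically (an illustrative sketch using SymPy, added here and not part of the original thread):

```python
import sympy as sp

z = sp.symbols('z')
f = (sp.exp(sp.I * z) - 1) / z**2

# Laurent expansion around 0: i/z - 1/2 - i*z/6 + ..., matching
# "exp(i z) = 1 + i z + O(z^2)" above, so the residue at 0 is i.
res = sp.residue(f, z, 0)
print(res)    # I
```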

Calculating error of slope and intercept for ordinary least squares
I remember seeing a formula for calculating the standard error of the estimators for the intercept and slope of a regression model. Is this directly calculable from r or R^2? If not, what is the nature of the relationship between standard error of regression and standard error of the estimators? I would like to compare estimators for different experimental populations, because I am studying a speed-curvature power law relationship (log curvature versus log speed) and I keep getting a slope value between 0.39 and 0.41 and intercept value between 0.59 and 0.65. I want to argue that the slopes of the power law relationship I am measuring for each of the populations are the same. I eventually also want to put the (fruit fly) populations on cocaine and compare the difference. Nothing gold can stay (talk) 23:51, 26 August 2012 (UTC)
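For reference, the standard textbook formulas can be sketched in code (an illustrative sketch added here; the function name is my own): with residual variance $$s^2 = \mathrm{SSE}/(n-2)$$, the standard errors are $$\mathrm{SE}(b) = s/\sqrt{S_{xx}}$$ for the slope and $$\mathrm{SE}(a) = s\sqrt{1/n + \bar{x}^2/S_{xx}}$$ for the intercept. The slope's standard error also relates to $$r$$ via $$\mathrm{SE}(b) = |b|\sqrt{(1/r^2 - 1)/(n-2)}$$, so it is computable from $$r$$ together with the slope estimate and the sample size.

```python
import numpy as np

def ols_line_with_se(x, y):
    """Ordinary least squares for y = a + b*x, returning the estimates and
    their standard errors via the standard textbook formulas (a sketch)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b = np.sum((x - xbar) * (y - ybar)) / sxx       # slope
    a = ybar - b * xbar                             # intercept
    resid = y - (a + b * x)
    s2 = np.sum(resid ** 2) / (n - 2)               # residual variance, df = n - 2
    se_b = np.sqrt(s2 / sxx)
    se_a = np.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return a, b, se_a, se_b
```

Slopes from two independent fits can then be compared with an approximate statistic such as $$(b_1 - b_2)/\sqrt{\mathrm{SE}_1^2 + \mathrm{SE}_2^2}$$.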