Wikipedia:Reference desk/Archives/Mathematics/2010 April 25

= April 25 =

Square of wavefunction
Why is it that:

$$|t|^2= \frac{1}{1+\frac{V_0^2\sinh^2(k_1 a)}{4E(V_0-E)}}$$

when:


 * $$t=\frac{4 k_0k_1 e^{-i a(k_0-k_1)}}{(k_0+k_1)^2-e^{2ia k_1}(k_0-k_1)^2}$$

and k1 is imaginary? How do you calculate the absolute value of t anyhow? --99.237.234.104 (talk) 00:50, 25 April 2010 (UTC)
 * So you're going to have to give us a bit more information. Your expression for $$t$$ doesn't contain $$E$$ or $$V_0$$, so without additional facts you can't just manipulate one into the other. However, in general for a complex number $$|t|^2 = t^{\!*}t$$. That'll be the gist of your calculation. Martlet1215 (talk) 10:58, 25 April 2010 (UTC)
 * Right, I didn't realize the two expressions use different variables. The two equations came from the rectangular potential barrier article, and k0 and k1 are defined in terms of E and V0 there.  --99.237.234.104 (talk) 15:45, 25 April 2010 (UTC)
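To illustrate Martlet1215's point that $$|t|^2 = t^{\!*}t$$ works fine even when k1 is imaginary, here is a sketch in Python (not from the thread; the parameter values E, V0, a below are arbitrary, in units where ℏ = 2m = 1). It evaluates the quoted expression for t, takes $$t^*t$$, and compares against the closed form with sinh, writing k1 = iκ:

```python
import cmath
import math

# Illustrative sketch: check |t|^2 = conj(t)*t numerically for the
# rectangular-barrier transmission amplitude.  Parameter values are
# made up, in units where hbar = 2m = 1, with E < V0 so k1 is imaginary.
E, V0, a = 0.5, 1.0, 2.0
k0 = cmath.sqrt(E)            # wavenumber outside the barrier (real)
k1 = cmath.sqrt(E - V0)       # purely imaginary inside the barrier

# Transmission amplitude as quoted in the question.
t = (4 * k0 * k1 * cmath.exp(-1j * a * (k0 - k1))
     / ((k0 + k1)**2 - cmath.exp(2j * a * k1) * (k0 - k1)**2))

# |t|^2 two equivalent ways: conjugate product and abs().
T_conj = (t.conjugate() * t).real
T_abs = abs(t)**2

# Closed form from the question, with kappa = |k1|; sinh appears
# because sin(i*kappa*a) = i*sinh(kappa*a).
kappa = abs(k1)
T_closed = 1 / (1 + V0**2 * math.sinh(kappa * a)**2 / (4 * E * (V0 - E)))

print(T_conj, T_abs, T_closed)   # all three agree
```

The imaginary k1 causes no trouble: `cmath` carries the complex arithmetic through, and the conjugate product automatically produces the sinh-form result.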

Polynomials
"In projective n-space over a field, a homogeneous multivariable polynomial of degree d-1 cannot have d roots on a line (projective line) without vanishing identically on the line." According to my textbook, this is not "too hard to see". I've been stuggling to see this. Do I need to do some heavy calculations, or am I supposed to visualise it? I can visualise why this is true somewhat, but I just can't see why a degree 2 multivariable polynomial for example cannot have 3 roots on a line algebraically, especially in projective n-space which I can't visualize for dimension greater than 3. How do I show that a d-1 degree multivariable homegeneous polynomial cannot have d roots on a given line in projective space without vanishing on the entire line? This is not homework (I'm just trying to understand the text). Thank you in advance. --Annonymous —Preceding unsigned comment added by 122.109.239.224 (talk) 03:47, 25 April 2010 (UTC)


 * Say the polynomial is $$P(X_0,\ldots,X_n)$$. After a linear change of coordinates, you can assume that the line consists of all points with homogeneous coordinates of the form $$(\lambda,\mu,0,\ldots,0)$$. (The change of coordinates will change your polynomial, but it will still be homogeneous of the same degree.) Setting the variables $$X_2,\ldots,X_n$$ equal to zero, you end up with a polynomial
 * $$a_0 X_0^{d-1} + a_1 X_0^{d-2}X_1 + \cdots + a_{d-1} X_1^{d-1}$$
 * Now you need only evaluate it at points with homogeneous $$(X_0,X_1)$$ coordinates $$(\lambda,1)$$ and $$(1,0)$$. Depending on whether $$(1,0)$$ is a zero of the polynomial or not, you get a polynomial of degree d-1 or (at most) d-2 in $$\lambda$$ when you evaluate at $$(\lambda,1)$$. The number of zeros of a nonzero polynomial is at most the degree of the polynomial. 86.205.30.114 (talk) 04:25, 25 April 2010 (UTC)
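The argument above can be walked through concretely. Here is a sketch in Python for a toy case (the particular polynomial and the brute-force root search are illustrative choices, not from the thread): restrict a homogeneous polynomial to the line $$(\lambda,\mu,0)$$, read off the univariate polynomial in λ, and observe that its root count is bounded by its degree.

```python
def restrict_to_line(monomials):
    """Given a homogeneous polynomial in (X0, X1, X2) as a dict
    {(e0, e1, e2): coeff}, set X2 = 0 and substitute (X0, X1) = (lam, 1),
    returning coefficients of the resulting polynomial in lam
    (index k holds the coefficient of lam**k)."""
    deg = max(sum(e) for e in monomials)
    coeffs = [0] * (deg + 1)
    for (e0, e1, e2), c in monomials.items():
        if e2 == 0:            # terms containing X2 vanish when X2 = 0
            coeffs[e0] += c    # X0^e0 * X1^e1 -> lam^e0 at (lam, 1)
    return coeffs

# Toy example: P = X0^2 - X1^2 + X0*X2, homogeneous of degree 2
# (so d - 1 = 2, i.e. d = 3 roots on a line would force vanishing).
P = {(2, 0, 0): 1, (0, 2, 0): -1, (1, 0, 1): 1}
coeffs = restrict_to_line(P)   # lam^2 - 1
print(coeffs)                  # [-1, 0, 1]

# A nonzero univariate polynomial of degree 2 has at most 2 roots, so P
# cannot vanish at 3 points of the line unless its restriction is zero.
roots = [lam for lam in range(-5, 6)
         if sum(c * lam**k for k, c in enumerate(coeffs)) == 0]
print(roots)                   # [-1, 1]
```

The restriction has 2 roots, which is the maximum allowed by its degree; a third root on the line would force every coefficient, and hence the whole restriction, to be zero.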

Reverse of an n-gram model
Given a table that yields the probability distribution for a term in a series given the n terms before it, is there an algorithm to invert the table so that it instead yields the distribution of a term given the n terms after it (i.e. if the table is empirically derived, to give the table we would have come up with had we reversed the order of the sample data)? Neon Merlin  04:09, 25 April 2010 (UTC)
 * Shouldn't there be an obvious brute force method? Are you asking if there is something better? 69.228.170.24 (talk) 06:59, 25 April 2010 (UTC)
 * I think there is missing data. Let's take a simple case where $$n=1$$. Given $$P(X_{k+1}|X_k)$$, you have $$P(X_k|X_{k+1})=\frac{P(X_k)P(X_{k+1}|X_k)}{P(X_{k+1})}$$ by Bayes' theorem. But you need to know the prior $$P(X_k)$$. -- Meni Rosenfeld (talk) 08:16, 25 April 2010 (UTC)


 * The step you're missing is that, for all terms X, $$\textstyle P(X) = \sum_Y P(X|Y)$$, where Y ranges over all possible terms. This of course generalizes to $$n > 1$$:
 * $$\textstyle P(X) = \sum_Y P(X|Y) = \sum_Y \sum_Z P(X|Y,Z) = \sum_Y \sum_Z \sum_W P(X|Y,Z,W)$$, etc.
 * In general, the simplest way to reverse an n-gram model of arbitrary length is to first convert it into the symmetric form, consisting of the unconditional probabilities of all the (n+1)-grams:
 * $$P(A,B,C,\ldots,X,Y,Z) = P(A) P(B|A) P(C|A,B) \cdots P(Y|A,B,\ldots,X) P(Z|A,B,\ldots,X,Y)$$.
 * The last conditional probability comes straight from your original model, while for the others you need to sum over the excess preceding terms like above. Once you have the unconditional (n+1)-gram probabilities, it should be obvious how to change the direction: just let
 * $$P'(A,B,C,\ldots,X,Y,Z) = P(Z,Y,X,\ldots,C,B,A)$$.
 * Then the conversion back to conditional probabilities is even simpler than the first step:
 * $$P'(Z|A,B,C,\ldots,X,Y) = \frac{P'(A,B,C,\ldots,X,Y,Z)}{P'(A,B,C,\ldots,X,Y)} = \frac{P'(A,B,C,\ldots,X,Y,Z)}{\sum_{Z'} P'(A,B,C,\ldots,X,Y,Z')}$$.
 * —Ilmari Karonen (talk) 18:13, 27 April 2010 (UTC)
 * I'm confused. Usually $$\textstyle P(X) = \sum_Y P(Y)P(X|Y)$$; where does $$\textstyle P(X) = \sum_Y P(X|Y)$$ come from? It looks like you've shifted the problem of knowing the prior of X to knowing the prior of Y.
 * I'll note that the problem of priors may be resolved if we assume we are at a stationary distribution. -- Meni Rosenfeld (talk) 19:09, 27 April 2010 (UTC)
 * You're right, I screwed that up. (Can you tell I'm more used to starting from the unconditional n-gram probabilities?)  Let's see now... yes, I think you're right that the real solution is to define $$P(A,B,C,\ldots,X,Y,Z)$$ as the stationary distribution of the time-homogeneous n-th order Markov chain given by $$P(Z|A,B,C,\ldots,X,Y)$$.  In general, of course, such a stationary distribution need neither exist nor be unique, but any Markov chain arising as an n-gram model of a finite data series should (assuming that we identify the start-of-data and end-of-data states) be irreducible and positive recurrent, which guarantees the existence and uniqueness of the stationary distribution.  —Ilmari Karonen (talk) 02:05, 28 April 2010 (UTC)
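The thread's conclusion — take the prior from the stationary distribution, then apply Bayes' theorem — can be sketched for the simplest case n = 1. The two-state transition table below is made up for illustration:

```python
# Illustrative sketch, n = 1: reverse a bigram model using the
# stationary distribution as the prior, per the discussion above.
# The transition probabilities are invented example data.
states = ["a", "b"]
P = {("a", "a"): 0.9, ("a", "b"): 0.1,   # P[next | current]
     ("b", "a"): 0.5, ("b", "b"): 0.5}

# Stationary distribution pi by power iteration: pi(y) = sum_x pi(x) P(y|x).
pi = {s: 1.0 / len(states) for s in states}
for _ in range(200):
    pi = {y: sum(pi[x] * P[(x, y)] for x in states) for y in states}

# Reversed model via Bayes: P'(prev = x | next = y) = pi(x) P(y|x) / pi(y).
P_rev = {(y, x): pi[x] * P[(x, y)] / pi[y] for x in states for y in states}

print(pi)       # approaches {'a': 5/6, 'b': 1/6}
print(P_rev)    # each row (fixed y) sums to 1
```

At stationarity the reversed rows automatically normalise, since $$\sum_x \pi(x)P(y|x) = \pi(y)$$; for n > 1 the same recipe applies with (n+1)-gram states.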

Quantum Harmonic Oscillator
I was trying to solve the QHO and I found a power series expansion for each energy eigenstate (in the position basis). This was $$f(x) = \sum_{n=0}^\infty a_n x^n$$ where $$a_n = \frac{a_{n-4}-E a_{n-2}}{n^2-n},$$ with $$a_{-\left|n\right|} = 0, \quad a_0 = 1, \quad a_1 = 0$$ and E measured in units of $$\hbar \omega / 2$$. What prevents me from using an arbitrary value of E rather than only odd integers? 74.14.111.225 (talk) 05:40, 25 April 2010 (UTC)


 * You will probably find a more receptive audience if you transfer this question to WP:Reference desk/Mathematics. (ie delete it from the Science Reference Desk and post it at the Mathematics Reference Desk.)  Dolphin  ( t ) 06:20, 25 April 2010 (UTC)
 * Okay. 74.14.111.225 (talk) 07:01, 25 April 2010 (UTC)
 * Mathematically, E can be arbitrary, and the series will converge nicely on the entire real line. Whatever reason there is for E to be an odd integer, it has to do with the physics of it. Perhaps the function needs to have zeros at specific places or something of the sort. -- Meni Rosenfeld (talk) 08:36, 25 April 2010 (UTC)
 * I wonder why there are so many QM questions lately? -- Meni Rosenfeld (talk) 08:36, 25 April 2010 (UTC)
 * It's not that the series fails to converge for other E, but that the Hamiltonian in the QHO has discrete eigenvalues which are described by that equation. 69.228.170.24 (talk) 08:45, 25 April 2010 (UTC)


 * I haven't checked your relation or anything but the usual reason to impose specific eigenvalues on the QHO is the additional requirement that a physical wavefunction must be square-integrable (therefore normalisable). Martlet1215 (talk) 11:19, 25 April 2010 (UTC)
 * As far as I'm aware it's just a big coincidence that there have been two Quantum Harmonic Oscillator questions in the last two days; I certainly could have asked mine at any time in the last week, but perhaps there's something far more sinister going on... :_: With regard to E taking only odd integer eigenvalues: when you solve the differential equation $$(-\frac{d^2}{dx^2}+x^2)\chi=E\chi$$ (which you get from a minor substitution for E and x), it's best to write your solution as $$\chi(x)=f(x)e^{-\frac{1}{2}x^2}$$ and then solve for f(x). You get a recurrence relation for the coefficients which is something like $$a_{n+2}=\frac{2n-E+1}{(...)}a_n$$ (or something similar), where '...' is something irrelevant here — a quadratic in n if I recall correctly — on the denominator. So your series terminates only when 2n-E+1=0 for some n, i.e. E=2n+1. If the series doesn't terminate, the large-x behaviour of f(x) is the same as that of $$e^{x^2}$$, so overall $$\chi(x)\approx e^{-\frac{1}{2}x^2}e^{x^2}=e^{\frac{1}{2}x^2}$$, which is obviously not a bounded wavefunction. That's why you want to take E=2n+1 for some n, so your series for f(x) terminates; reverting the original substitution then gives the actual energy $$\epsilon = \hbar \omega \frac{E}{2}=(n+\frac{1}{2})\hbar \omega$$, which is where your eigenvalues come from, as in the Wikipedia article on the QHO. Still, this is entirely self-taught and from memory, so someone please correct me if I'm wrong! :) Simba31415 (talk) 11:21, 25 April 2010 (UTC)
 * It is sad that E is usually measured in units of $$\scriptstyle \hbar \omega$$ rather than $$\scriptstyle \hbar \omega / 2$$. Once you discover that an atomic unit is not atomic after all, you should use the new atom as a unit. $$\scriptstyle (n+\frac 1 2)\hbar \omega$$ should be written $$\scriptstyle (2n+1)\frac{\hbar \omega}2$$. Half odd integral spin (for fermions) is awkward compared to odd spin. Bo Jacoby (talk) 07:41, 26 April 2010 (UTC).
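Meni's point — the series converges for any E, and it is normalisability that selects the eigenvalues — can be checked directly from the recurrence in the question. The sketch below (illustrative, not from the thread) generates the coefficients for E = 1 (in units of ℏω/2) and verifies that they reproduce the normalisable ground state $$e^{-x^2/2} = 1 - \frac{x^2}{2} + \frac{x^4}{8} - \cdots$$:

```python
import math

# Sketch: build the series coefficients from the question's recurrence
#   a_n = (a_{n-4} - E a_{n-2}) / (n^2 - n),   a_0 = 1, a_1 = 0,
# and check that E = 1 reproduces exp(-x^2/2), the QHO ground state.
def coefficients(E, n_max):
    a = [0.0] * (n_max + 1)
    a[0] = 1.0                          # a_1 = 0 already; a_n = 0 for n < 0
    for n in range(2, n_max + 1):
        a_nm4 = a[n - 4] if n >= 4 else 0.0
        a[n] = (a_nm4 - E * a[n - 2]) / (n * n - n)
    return a

a = coefficients(E=1, n_max=40)
print(a[2], a[4])                       # -0.5 0.125, matching exp(-x^2/2)

x = 1.0
f = sum(c * x**n for n, c in enumerate(a))
print(f, math.exp(-x**2 / 2))           # both ~ 0.6065

# The series also converges for, say, E = 1.3 -- nothing breaks
# mathematically; it is the large-x growth (~ exp(+x^2/2)) of the
# resulting solution that rules out generic E physically.
```

So the recurrence itself never objects to an arbitrary E; only the boundary condition at infinity does.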

Two sample t-test in R
Dear Wikipedians:

I am doing a two sample t-test in R on two samples shown below:

x = c(9, 10, 1, 2, 4, 2, 1, 7, 9, 2, 2, 5, 6, 7, 6, 3, 9, 2, 10, 4, 8, 5, 6, 9, 2)

y = c(20, 18, 19, 11, 10, 17, 12, 20, 12, 15, 11, 13, 19, 17, 12, 16, 15, 11, 14, 12, 19, 10, 12, 20, 17)

However, R outputs the following:

Welch Two Sample t-test

data:  x and y
t = -10.383, df = 47.182, p-value = 8.962e-14
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -11.507589  -7.772411
sample estimates:
mean of x mean of y
     5.24     14.88

where my textbook shows the following output:

Two sample t-test: assuming same variance
Data: ER visits by asthma status
t = -10.383, df = 48, p-value = 7.29 x 10^-14
Sample estimates:
 Mean in Non-Asthmatics  5.24
 Mean in Asthmatics     14.88
95 percent confidence interval:
 (0.000, 11.234)  (Non-Asthmatics)
 (8.036, 21.724)  (Asthmatics)

My question is: how do I make R output two confidence intervals?

Thanks,

174.88.242.191 (talk) 16:13, 25 April 2010 (UTC)


 * Hi. I can't quite get exact agreement with the two CIs that you give, but I can offer the following as a closer match to what you're trying.

> x <- c(9, 10, 1, 2, 4, 2, 1, 7, 9, 2, 2, 5, 6, 7, 6, 3, 9, 2, 10, 4, 8, 5, 6, 9, 2)
> y <- c(20, 18, 19, 11, 10, 17, 12, 20, 12, 15, 11, 13, 19, 17, 12, 16, 15, 11, 14, 12, 19, 10, 12, 20, 17)
> jj <- data.frame(asthma=rep(c("no", "yes"), times=c(length(x), length(y))), d=c(x, y))
> t.test(jj$d ~ jj$asthma, var.equal=TRUE)

Two Sample t-test

data:  jj$d by jj$asthma
t = -10.383, df = 48, p-value = 7.294e-14
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -11.506753  -7.773247
sample estimates:
 mean in group no mean in group yes
             5.24             14.88


 * HTH, Robinh (talk) 20:08, 25 April 2010 (UTC)
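One guess (an assumption, not something the thread confirms) is that the textbook's two intervals are meant to be separate one-sample confidence intervals for each group mean; in R, running t.test(x) and t.test(y) individually would report one conf.int each. The Python sketch below does the same arithmetic, mean ± t·s/√n, with the t quantile for 24 df hardcoded. Note the results still do not reproduce the textbook's (0.000, 11.234), so the textbook may be computing something else entirely:

```python
import math
from statistics import mean, stdev

# Sketch (assumption): separate one-sample 95% CIs for each group mean,
# mean +/- t * s / sqrt(n), as t.test(x) and t.test(y) would report in R.
x = [9, 10, 1, 2, 4, 2, 1, 7, 9, 2, 2, 5, 6, 7, 6, 3, 9, 2, 10, 4,
     8, 5, 6, 9, 2]
y = [20, 18, 19, 11, 10, 17, 12, 20, 12, 15, 11, 13, 19, 17, 12, 16,
     15, 11, 14, 12, 19, 10, 12, 20, 17]

T_CRIT = 2.0639   # two-sided 97.5% quantile of t with 24 df (hardcoded)

def ci(sample):
    m, s, n = mean(sample), stdev(sample), len(sample)
    h = T_CRIT * s / math.sqrt(n)
    return m - h, m + h

print(mean(x), ci(x))   # 5.24, roughly (4.0, 6.5)
print(mean(y), ci(y))   # 14.88, roughly (13.4, 16.3)
```

These intervals are much narrower than the textbook's, which is further evidence (alongside Robinh's attempt above) that the textbook output is nonstandard.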