Wikipedia:Reference desk/Archives/Mathematics/2018 November 27

= November 27 =

Calculating the probability for difference between two samples
I have two samples of different sizes on which I'm measuring a certain variable. On one sample I get $$x_1 \pm \sigma_1$$ and on the other one $$x_2 \pm \sigma_2$$. There are rules of thumb I can use when the standard deviations are equal, and when one standard deviation is much smaller than both the other one and the difference between the means, I can get a close enough result for my needs by treating it as zero, which turns this into a z-value problem. How do I solve the general case? Assume we're trying to show that the true expectations of the two samples differ, as opposed to the null hypothesis that the different results were caused by random noise.

In my particular case, $$\sigma_1 \approx 2 \sigma_2, \left\vert x_1 - x_2 \right\vert \approx \frac{4}{3} \sigma_1$$ 93.139.59.187 (talk) 09:03, 27 November 2018 (UTC)
 * According to our article on the normal distribution, "if X1 and X2 are two independent normal random variables, with means μ1, μ2 and standard deviations σ1, σ2, then their sum X1 + X2 will also be normally distributed, with mean μ1 + μ2 and variance σ1² + σ2²." Then, since −X2 will have mean −μ2 and s.d. σ2, X = X1 − X2 will have mean μ1 − μ2 and s.d. √(σ1² + σ2²). So you can apply your hypothesis-testing machinery to μ(X) = 0 with σ(X) given as √(σ1² + σ2²). I'm assuming normally distributed variables here; you didn't explicitly say so, but it seemed like you were assuming that implicitly. --RDBury (talk) 10:19, 27 November 2018 (UTC)
 * Ah yes of course, summing the variances! A little ashamed I didn't think of that :-) 93.139.59.187 (talk) 11:06, 27 November 2018 (UTC)
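The test described above can be sketched numerically. This is a minimal illustration using the questioner's stated ratios (σ1 = 2σ2 and |x1 − x2| = (4/3)σ1); the concrete values σ1 = 2, σ2 = 1 are hypothetical units chosen to satisfy those ratios, not data from the thread.

```python
import math

# Hypothetical values satisfying the ratios given in the question:
# sigma1 = 2*sigma2 and |x1 - x2| = (4/3)*sigma1.  Units are arbitrary.
sigma1, sigma2 = 2.0, 1.0
diff = (4.0 / 3.0) * sigma1          # |x1 - x2|

# The difference of two independent normals has s.d. sqrt(sigma1^2 + sigma2^2)
sd_diff = math.sqrt(sigma1**2 + sigma2**2)

z = diff / sd_diff                   # z-score under H0: mu1 - mu2 = 0
p = math.erfc(z / math.sqrt(2))      # two-sided p-value for a normal z-score

print(f"z = {z:.3f}, two-sided p = {p:.3f}")
```

With these numbers z ≈ 1.19 and the two-sided p-value is roughly 0.23, so a difference of this size would not, on its own, reject the null hypothesis at conventional significance levels.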