Wikipedia:Reference desk/Archives/Mathematics/2012 January 5

= January 5 =

Goodness of Fit
Hello. Why is a hypothesis rejected as false if there is a probability of less than 5% of observing a deviation from expectation at least as large as the result? Suppose the probability of observing some difference (or a more extreme difference) is 99%. Does this mean it is almost certain that you will record an outcome that falls outside the hypothesis? Thanks in advance. --Mayfare (talk) 03:56, 5 January 2012 (UTC)
 * The 5% level is usually considered to be the minimum significance that is worth considering. For many purposes, the 1% level is more appropriate, and 0.1% is used in some tests.  It all depends on what you mean by "almost certain".  See Statistical significance for some detail.    D b f i r s   09:23, 5 January 2012 (UTC)

If it is almost certain that an outcome will fall outside of observation (which already deviates from expectation), why does the goodness of fit make sense? --Mayfare (talk) 14:54, 5 January 2012 (UTC)
 * The Goodness of fit article explains that if there is a large discrepancy between observed values and the values expected under the model in question, then the model is not a good fit at all. You might be asking about Type I and type II errors, but I'm not sure that I understand your question.  Perhaps an expert here can see what you mean .... awaiting expert view ... (otherwise, could you give an example?)    D b f i r s   20:51, 5 January 2012 (UTC)

Let's refer to the example in goodness of fit. "The probability of observing that difference (or a more extreme difference than that) if men and women were equally numerous in the population is 0.23." Why does a higher probability not refute the hypothesis? --Mayfare (talk) 03:04, 6 January 2012 (UTC)
 * It's not the probability that refutes the hypothesis, it's a low-probability observation. The example with a significance level of 23% is similar to that of someone getting two heads in a row and concluding that the coin is two-headed.  This would be a type 1 error, where the person rejects the null hypothesis of an unbiased coin without having sufficient evidence for the rejection.  A test at the 0.1% level would be (approximately) throwing the coin ten times.  Getting ten heads in a row (with fair tosses) would be good evidence that the coin was two-headed. (I'm glossing over the fact that this is a one-sided test with the alternative hypothesis being "two-headed", not just "biased".)    D b f i r s   10:36, 6 January 2012 (UTC)
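The coin-toss arithmetic above can be checked directly. This is an illustrative sketch added for clarity (not part of the archived thread), using plain Python: the one-sided p-value for all-heads against the alternative "two-headed" is just 0.5 raised to the number of tosses.

```python
def p_all_heads(n):
    """One-sided p-value: probability a fair coin gives heads on all n tosses."""
    return 0.5 ** n

# Two heads in a row: p = 0.25, well above any usual threshold,
# so rejecting the fair-coin hypothesis here risks a type 1 error.
print(p_all_heads(2))   # 0.25

# Ten heads in a row: p is just under 0.001, significant even at the 0.1% level.
print(p_all_heads(10))  # 0.0009765625
```

This matches the figures in the reply: two heads gives roughly the 23%-level evidence mentioned, while ten heads clears the 0.1% level.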


 * Mayfare - I think you have this the wrong way round. Given an observation O, we can calculate the probability that O or a more extreme observation than O will occur given the assumptions of the null hypothesis H0; let's call this probability P(>O|H0). We set a threshold value before the experiment of, say, 5%, and then we reject the null hypothesis if P(>O|H0) is smaller than this threshold. The lower we set the threshold value, the less likely we are to incorrectly reject the null hypothesis (which, as Dbfirs says, is called a "Type 1 error"). The trade-off is that the lower we set the threshold value, the more likely we are to incorrectly accept the null hypothesis (a "Type 2 error"). In the example from Goodness of fit, an observed 44:56 ratio in a sample of 100 has P(>O|H0) of 23%, which is not less than 5%, suggesting that this observation alone is not sufficient evidence to reject the null hypothesis of a 50:50 ratio in the whole population. Gandalf61 (talk) 10:32, 6 January 2012 (UTC)
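The 23% figure for the 44:56 example can be reproduced with a short calculation. This sketch is added for illustration (it is not part of the archived thread) and uses only the Python standard library; the 0.23 comes from Pearson's chi-square statistic with one degree of freedom, for which the tail probability reduces to a complementary error function.

```python
import math

def chi2_test_1df(observed, expected):
    """Pearson's chi-square statistic and its p-value for 1 degree of freedom.
    For 1 df, P(chi-square > x) = erfc(sqrt(x / 2))."""
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# 44 men and 56 women observed; 50:50 expected under the null hypothesis.
chi2, p = chi2_test_1df([44, 56], [50, 50])
print(chi2)         # 1.44
print(round(p, 2))  # 0.23 -- greater than 0.05, so the null is not rejected
```

The statistic is (44-50)²/50 + (56-50)²/50 = 1.44, equivalent here to a two-sided normal test with z = 1.2, which gives the p-value of about 0.23 quoted in the Goodness of fit article.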

Power series
Hello. This probably isn't the type of question you usually get, having little really to do with math. Does anybody know, or think they vaguely recall, any quotations by famous mathematicians about power series / Taylor series / Maclaurin series? I'm looking for something not too long - I have Niels Henrik Abel quoted in Spivak, but the quote is rather long and explicit. Something like "Power series are wonderful" or something would be ideal (though not necessarily that concise :) Thanks. 24.92.85.35 (talk) 23:57, 5 January 2012 (UTC)


 * "The master forbids it." -- Mittag-Leffler.  (Context here.)  Sławomir Biały  (talk) 01:32, 6 January 2012 (UTC)
 * “Divergent series converge faster than convergent series because they don't have to converge.” -- George F. Carrier. -- Meni Rosenfeld (talk) 04:35, 6 January 2012 (UTC)
 * Lackadaisically googled for Hardy's?, Littlewood's? 'This series diverges, so we can do something with it' but didn't nail down the locus classicus. John Z (talk)