User:Grizthedog

Estimating Confidence Intervals
Honorable Melcombe: Is it valid to estimate a confidence interval by multiplying the Student's critical t value (e.g., +/- 1.96 for a 95% confidence range in a normal population) by the standard deviation of the data range? For example, if the standard deviation of the series happens to be 10 and the estimated value is 100, then the 95% confidence range would be from 80.4 (100 - 1.96 x 10) to 119.6 (100 + 1.96 x 10).

If this isn't the appropriate forum to ask this question, please forgive my ignorance and intrusion. Mike. --Grizthedog (talk) 21:50, 13 June 2008 (UTC)

You should really find a better forum for discussing all this. You might try the newsgroup called sci.stat.math ... if you don't have a news-reader, it is accessible at http://groups.google.com/group/sci.stat.math/topics.

You need to be clearer about what it is you want to construct. The most usual case is a confidence interval for the mean, in which case your formula would be wrong, as you would need to divide the standard deviation of the sample by the square root of the sample size. If you want a confidence interval for a further observation from the same population, then your formula is approximately correct ... for modest sample sizes you would need to multiply the sample standard deviation by the square root of (1 + 1/N), where N is the sample size (you might find something about this in the article on "tolerance intervals"). All this would be making the usual assumptions, but your use of the word "series" might indicate that you have a "time-series" in which there might be serial correlation, in which case the assumptions would not hold. Melcombe (talk) 09:07, 16 June 2008 (UTC)
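The distinction Melcombe draws can be sketched in a few lines of Python. This is a minimal illustration using the standard library only and a made-up sample; it uses the normal critical value (≈ 1.96) as in the original question, which is itself an approximation for small samples where a Student's t critical value would be more accurate.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical sample (any data would do for illustration).
data = [92, 105, 98, 110, 95, 101, 99, 107, 96, 103]
n = len(data)
xbar = mean(data)
s = stdev(data)  # sample standard deviation

# Normal critical value for a two-sided 95% interval (about 1.96).
# For small n, a Student's t quantile with n-1 df would be preferable.
z = NormalDist().inv_cdf(0.975)

# Confidence interval for the MEAN: divide s by sqrt(n).
ci_mean = (xbar - z * s / sqrt(n), xbar + z * s / sqrt(n))

# Approximate interval for one FURTHER observation from the same
# population: multiply s by sqrt(1 + 1/n), as Melcombe describes.
pi_obs = (xbar - z * s * sqrt(1 + 1 / n), xbar + z * s * sqrt(1 + 1 / n))

print("CI for the mean:       ", ci_mean)
print("Interval for a new obs:", pi_obs)
```

The interval for a further observation is much wider than the interval for the mean, which is the point of the correction: the questioner's formula (z times s, with no adjustment) approximates the latter, not the former.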