Talk:Pivotal quantity

a second example
Is there a well known example outside of the t-statistic for pivotal statistics? O18 (talk) 23:04, 16 April 2009 (UTC)
 * The uniform distribution on (0,b) is an example (sample maximum divided by b), given by Young & Smith (2005), Essentials of Statistical Inference, CUP, ISBN 0-521-83971-8, Example 7.5. Melcombe (talk) 10:07, 27 April 2009 (UTC)


 * Can you try to explain that one more time? What is the statistic that is pivotal? 018 (talk) 13:16, 3 February 2010 (UTC)


 * As stated ... the pivotal quantity is T = (sample maximum divided by b) ... which has a distribution on (0,1) with cdf $F(t)=t^n$, where n is the sample size ... a distribution that does not depend on the unknown parameter.
 * Another example can be found in the normal-distribution case (with either known or unknown mean), where the sample variance divided by the population variance is a pivotal quantity; if this quantity is multiplied by a suitable constant, the distribution is a chi-squared distribution. This example is in Statistics: An Introduction, by D.A.S. Fraser (1958), who gives some other normal-theory examples involving differences of means and regression, but these might be thought similar to the t-distribution.
 * Melcombe (talk) 13:41, 3 February 2010 (UTC)
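The uniform example above is easy to check numerically; here is a minimal sketch (the function name and simulation settings are my own, not from the cited reference):

```python
# Hypothetical simulation of Young & Smith's Example 7.5: for X_1..X_n iid
# Uniform(0, b), the quantity T = max(X_i) / b has cdf F(t) = t^n on (0, 1),
# whatever the value of b.
import random

def pivot_samples(b, n, reps, seed):
    """Draw `reps` realisations of T = max(X_1..X_n) / b."""
    rng = random.Random(seed)
    return [max(rng.uniform(0, b) for _ in range(n)) / b for _ in range(reps)]

n, reps = 5, 20000
for b in (1.0, 7.3):
    t = pivot_samples(b, n, reps, seed=int(10 * b))
    # The empirical cdf at t = 0.5 should be near 0.5**n = 0.03125 for every b.
    ecdf = sum(x <= 0.5 for x in t) / reps
    print(f"b={b}: P(T <= 0.5) ~ {ecdf:.4f}")
```

Both runs give a value near 0.031 however b is chosen, which is exactly the "distribution free of the parameter" property being claimed.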

Thanks. I realize now that I asked the wrong question. The t-distribution does not contain a population parameter in it (such as mu or sigma); it only has sample quantities in it (such as the sample mean and the sample standard deviation). Are there other pivotal statistics where you don't need to use population parameters to pivot? 018 (talk) 14:51, 3 February 2010 (UTC)
 * But for the "t-distribution" case, the pivotal quantity does contain an unknown parameter, specifically the unknown mean: thus
 * $$ T=\frac{ \sqrt{n}(\bar X - \mu)}{s} .$$
 * Melcombe (talk) 12:34, 4 February 2010 (UTC)
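To illustrate the point: T depends on the unknown mu, yet its sampling distribution does not depend on (mu, sigma). A rough simulation sketch (parameter values and function names are my own choices):

```python
# Simulate T = sqrt(n) * (xbar - mu) / s under two different (mu, sigma)
# settings; the distribution of T (Student's t with n-1 df) is the same in
# both, even though computing T requires knowing mu.
import random
import statistics

def t_samples(mu, sigma, n, reps, seed):
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = statistics.mean(x)
        s = statistics.stdev(x)  # sample standard deviation
        out.append(n ** 0.5 * (xbar - mu) / s)
    return out

for mu, sigma in [(0.0, 1.0), (50.0, 4.0)]:
    t = t_samples(mu, sigma, n=10, reps=20000, seed=int(mu) + 3)
    frac = sum(abs(v) <= 2.262 for v in t) / len(t)  # t_{0.975, 9} ~ 2.262
    print(f"mu={mu}, sigma={sigma}: P(|T| <= 2.262) ~ {frac:.3f}")
```

Both fractions come out near 0.95, matching the Student t distribution with 9 degrees of freedom regardless of the parameter values used to generate the data.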


 * Right, I don't really think of that as a parameter because I'm usually thinking of it as a test rather than about constructing the test statistic. Even the one-sample comparison of means test has a parameter in it (when stated in its most general form). Thanks for pointing that out. 018 (talk) 14:31, 5 February 2010 (UTC)

Bayesian uninformative prior distributions
Mention might be added about how pivotal quantities can relate to the construction of uninformative priors by Bayesians. (For example, Gelman et al. mention this in their Bayesian Data Analysis, pp. 54–55 in the first edition, 1995.)

So, for example, when in a normal distribution one finds that the distribution of $s^2/\sigma^2$ conditioned on $\sigma^2$ is independent of $\sigma^2$, then, turning to the Bayesian analysis, one might seek a prior for $\sigma^2$ such that $s^2/\sigma^2$, now conditioned on $s^2$, remains a pivotal quantity, i.e. independent of the value of $s^2$.

And it turns out that using the standard uninformative prior for a scale-parameter, $$\scriptstyle{p(\sigma) \;\propto\; 1/\sigma ; \;\;p(\sigma^2) \;\propto\; 1/\sigma^2}$$, does indeed have this result, that
 * $$p(\tfrac{s^2}{\sigma^2}|{s^2}) = p(\tfrac{s^2}{\sigma^2}|{\sigma^2}) = g(\tfrac{s^2}{\sigma^2})$$
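This can be checked by simulation; a sketch under the stated assumptions (n normal observations with sample variance $s^2$ and the prior $p(\sigma^2) \propto 1/\sigma^2$; the function names are my own):

```python
# With prior p(sigma^2) proportional to 1/sigma^2 and n normal observations
# with sample variance s^2, the posterior satisfies
#   (n-1) * s^2 / sigma^2 | s^2  ~  chi-squared(n-1),
# so the posterior distribution of s^2/sigma^2 is free of s^2.
import random

def posterior_pivot(s2, n, reps, seed):
    """Draw `reps` values of s^2/sigma^2 with sigma^2 from its posterior."""
    rng = random.Random(seed)
    df = n - 1
    out = []
    for _ in range(reps):
        chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(df))  # chi^2_{n-1} draw
        sigma2 = df * s2 / chi2  # a draw from the posterior of sigma^2
        out.append(s2 / sigma2)  # equals chi2 / df, which is free of s2
    return out

for s2 in (0.5, 9.0):
    q = posterior_pivot(s2, n=8, reps=20000, seed=int(2 * s2))
    print(f"s^2={s2}: mean of s^2/sigma^2 given s^2 ~ {sum(q) / len(q):.3f}")
```

The mean comes out near 1 (the mean of $\chi^2_{n-1}/(n-1)$) for both values of $s^2$, consistent with $s^2/\sigma^2$ remaining pivotal under this prior.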

Refs probably need beefing up, but might be worth mentioning? Jheald (talk) 18:23, 9 November 2012 (UTC)