
In statistics, Tukey's test for interaction or Tukey's test for non-additivity, named after John Tukey, is a test of the null hypothesis that there is no interaction between two categorical predictor variables in a two-way analysis of variance.

Tukey's 1949 paper was the first to show how to test for interaction when there is no replication, i.e., there is only one observation in each cell.

Without replication, one cannot partition the sum of squares due to error into a lack-of-fit sum of squares and a "pure-error" sum of squares.
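This can be seen directly: the "pure error" sum of squares is computed from deviations of each observation about its own cell mean, so with one observation per cell both the sum and its degrees of freedom vanish. A minimal sketch with invented numbers (the cell layout and the helper `pure_error` are illustrative, not from any particular source):

```python
# With replication, the within-cell ("pure error") sum of squares is
# computed from deviations of each observation about its cell mean;
# with n = 1 per cell, every observation IS its cell mean, so this sum
# and its degrees of freedom are both zero.  Toy numbers for illustration.
cells = {(0, 0): [4.0, 6.0], (0, 1): [7.0, 9.0]}   # n = 2 per cell

def pure_error(cells):
    ss, df = 0.0, 0
    for obs in cells.values():
        m = sum(obs) / len(obs)
        ss += sum((y - m) ** 2 for y in obs)
        df += len(obs) - 1
    return ss, df

print(pure_error(cells))                       # (4.0, 2)
single = {k: v[:1] for k, v in cells.items()}  # keep one obs per cell
print(pure_error(single))                      # (0.0, 0): no pure-error df left
```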

The "additive model" in two-way ANOVA is:



$$
\begin{align}
Y_{jk\ell} & = \mu + \alpha_j + \beta_k + \varepsilon_{jk\ell}, \\
& \quad j = 1,\dots,J,\quad k = 1,\dots,K,\quad \ell = 1,\dots,n,
\end{align}
$$

where



$$ \mu + \alpha_j + \beta_k $$

is the mean value of the variable Y among members of the population falling into the jth row and the kth column of the table, and


 * $$ \sum_{j=1}^J \alpha_j = 0, $$

and


 * $$ \sum_{k=1}^K \beta_k = 0. $$

The last term


 * $$ \varepsilon_{jk\ell} \, $$

is the "error": the amount added to μ + α_j + β_k to get the measurement of the ℓth individual unit in the sample from the jth row and kth column. We assume the errors are independent random variables with expected value 0 and with equal variances.
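Under the sum-to-zero constraints above, the least-squares estimates of the additive model's parameters have a simple closed form: μ is estimated by the grand mean, α_j by the jth row mean minus the grand mean, and β_k by the kth column mean minus the grand mean. A small sketch with made-up data:

```python
# Least-squares estimates of the additive model's parameters from a
# J x K table with one observation per cell.  Under the sum-to-zero
# constraints, mu-hat is the grand mean, alpha_j-hat the row-mean
# deviation, and beta_k-hat the column-mean deviation.  Data invented
# for illustration.
Y = [[10.0, 12.0, 14.0],
     [11.0, 13.0, 15.0]]          # J = 2 rows, K = 3 columns
J, K = len(Y), len(Y[0])

grand = sum(sum(row) for row in Y) / (J * K)
row_means = [sum(row) / K for row in Y]
col_means = [sum(Y[j][k] for j in range(J)) / J for k in range(K)]

mu_hat = grand
alpha_hat = [m - grand for m in row_means]   # sums to 0
beta_hat = [m - grand for m in col_means]    # sums to 0

print(mu_hat, alpha_hat, beta_hat)           # 12.5 [-0.5, 0.5] [-2.0, 0.0, 2.0]
```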


Tukey's interaction model adds a single multiplicative interaction term to the additive model:



$$
\begin{align}
Y_{jk\ell} & = \mu + \alpha_j + \beta_k + \lambda\alpha_j\beta_k + \varepsilon_{jk\ell}, \\
& \quad j = 1,\dots,J,\quad k = 1,\dots,K,\quad \ell = 1,\dots,n.
\end{align}
$$
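The null hypothesis of no interaction is then λ = 0, which Tukey's test examines with a single degree of freedom. A sketch of the test statistic as it is usually presented in textbooks (the data are invented, and the variable names are illustrative): the non-additivity sum of squares is formed from the contrast Σ Y_jk α̂_j β̂_k, subtracted from the residual sum of squares of the additive fit, and compared by an F ratio with 1 and (J−1)(K−1)−1 degrees of freedom.

```python
# A sketch of Tukey's one-degree-of-freedom test for non-additivity
# with one observation per cell (n = 1).  Invented data.
Y = [[7.0, 9.0, 12.0],
     [8.0, 10.0, 15.0],
     [6.0, 7.0, 10.0]]            # J = 3 rows, K = 3 columns
J, K = len(Y), len(Y[0])

grand = sum(map(sum, Y)) / (J * K)
a = [sum(Y[j]) / K - grand for j in range(J)]                       # alpha-hat
b = [sum(Y[j][k] for j in range(J)) / J - grand for k in range(K)]  # beta-hat

# One-degree-of-freedom sum of squares for non-additivity.
num = sum(Y[j][k] * a[j] * b[k] for j in range(J) for k in range(K))
ss_nonadd = num**2 / (sum(x * x for x in a) * sum(x * x for x in b))

# Residual (interaction) sum of squares under the additive fit.
ss_resid = sum((Y[j][k] - grand - a[j] - b[k]) ** 2
               for j in range(J) for k in range(K))

# F statistic with 1 and (J-1)(K-1)-1 degrees of freedom; a large value
# is evidence against the additive model (i.e., against lambda = 0).
df2 = (J - 1) * (K - 1) - 1
F = ss_nonadd / ((ss_resid - ss_nonadd) / df2)
print(F)
```

The resulting F would be compared to the F distribution with 1 and (J−1)(K−1)−1 degrees of freedom; the non-additivity sum of squares is a one-degree-of-freedom component of the additive model's residual sum of squares, so it can never exceed it.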


 * George A. Milliken and Dallas E. Johnson, ''Analysis of Messy Data, Volume 2: Nonreplicated Experiments'', CRC Press, 1989, pp. 7–8.