Cochran's C test

Cochran's $$C$$ test, named after William G. Cochran, is a one-sided upper-limit variance outlier test. The C test is used to decide whether a single estimate of a variance (or a standard deviation) is significantly larger than a group of variances (or standard deviations) with which the single estimate is supposed to be comparable. The C test is discussed in many textbooks and has been recommended by IUPAC and ISO. Cochran's C test should not be confused with Cochran's Q test, which applies to the analysis of two-way randomized block designs.

The C test assumes a balanced design, i.e., the full data set under consideration consists of individual data series that all have equal size. The C test further assumes that each individual data series is normally distributed. Although primarily an outlier test, the C test is also used as a simple alternative to regular homoscedasticity tests such as Bartlett's test, Levene's test, and the Brown–Forsythe test for checking a statistical data set for homogeneity of variances. An even simpler way to check homoscedasticity is Hartley's Fmax test, but Hartley's Fmax test has the disadvantage that it accounts only for the minimum and the maximum of the variance range, while the C test accounts for all variances within the range.

Description
The C test detects one exceptionally large variance value at a time. The corresponding data series is then omitted from the full data set. According to ISO standard 5725 the C test may be iterated until no further exceptionally large variance values are detected, but such practice may lead to excessive rejections if the underlying data series are not normally distributed. The C test evaluates the ratio:


 * $$C_j = \frac{S_j^2}{\displaystyle \sum_{i=1}^N S_i^2}$$

where:
 * Cj = Cochran's C statistic for data series j
 * Sj = standard deviation of data series j
 * N = number of data series that remain in the data set; N is decreased in steps of 1 upon each iteration of the C test
 * Si = standard deviation of data series i (1 ≤ i ≤ N)
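As a minimal sketch (not from the source), the ratio above can be computed with NumPy; the function name `cochran_c` and the use of sample variances (`ddof=1`) are assumptions:

```python
import numpy as np

def cochran_c(series):
    """Return Cochran's C statistic C_j for every data series j.

    C_j = S_j^2 / sum_i S_i^2, where S_i is the sample standard
    deviation of series i; assumes a balanced design (equal lengths).
    """
    variances = np.array([np.var(s, ddof=1) for s in series])
    return variances / variances.sum()

# Three series of equal size; the last one has a much larger spread.
data = [[4.1, 4.2, 4.0], [5.0, 5.1, 4.9], [3.0, 4.0, 5.0]]
c = cochran_c(data)  # the third entry dominates the ratio
```

The statistics sum to 1 by construction, and the series with the largest sample variance yields the largest Cj, which is the value compared against the critical value described below.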

The C test evaluates the null hypothesis (H0) against the alternative hypothesis (Ha):


 * H0: All variances are equal.
 * Ha: At least one variance value is significantly larger than the other variance values.

Critical values
The sample variance of data series j is considered an outlier at significance level α if Cj exceeds the upper limit critical value CUL. CUL depends on the desired significance level α, the number of considered data series N, and the number of data points (n) per data series. Values of CUL have been tabulated for significance levels α = 0.01, α = 0.025, and α = 0.05. CUL can also be calculated from:
 * $$C_\text {UL}(\alpha,n,N) = \left [ 1+ \frac{N-1}{F_\text {c}(\alpha/N,(n-1),(N-1)(n-1))} \right ]^{-1} .$$

Here:
 * CUL = upper limit critical value for a one-sided test on a balanced design
 * α = significance level, e.g., 0.05
 * n = number of data points per data series
 * Fc = critical value of Fisher's F ratio; Fc can be obtained from tables of the F distribution or computed with statistical software.
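As a sketch, the formula for CUL can be evaluated with SciPy's F-distribution quantile function (`scipy.stats.f.ppf`); the function name `cochran_cul` is an assumption:

```python
from scipy.stats import f

def cochran_cul(alpha, n, N):
    """Upper-limit critical value C_UL(alpha, n, N) for the one-sided
    C test on a balanced design of N series with n points each."""
    # Upper critical value of the F distribution at level alpha/N with
    # (n - 1) and (N - 1)(n - 1) degrees of freedom.
    fc = f.ppf(1.0 - alpha / N, n - 1, (N - 1) * (n - 1))
    return 1.0 / (1.0 + (N - 1) / fc)

# Example: alpha = 0.05, N = 3 data series, n = 3 points per series.
cul = cochran_cul(0.05, 3, 3)  # approx. 0.8709
```

A data series j is flagged as an outlier when Cj exceeds this value; after the series is removed, N is decreased by 1 and the test can be repeated on the remaining series.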

Generalization
The C test can be generalized to include unbalanced designs, one-sided lower limit tests and two-sided tests at any significance level α, for any number of data series N, and for any number of individual data points nj in data series j.