Wald–Wolfowitz runs test

The Wald–Wolfowitz runs test (or simply runs test), named after statisticians Abraham Wald and Jacob Wolfowitz, is a non-parametric statistical test that checks a randomness hypothesis for a two-valued data sequence. More precisely, it can be used to test the hypothesis that the elements of the sequence are mutually independent.

Definition
A run of a sequence is a maximal non-empty segment of the sequence consisting of adjacent equal elements. For example, the 22-element-long sequence

+ + + + − − − + + + − − + + + + + + − − − −

consists of 6 runs, with lengths 4, 3, 3, 2, 6, and 4. The runs test is based on the null hypothesis that each element in the sequence is independently drawn from the same distribution.
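Run counting can be sketched in a few lines of Python (a minimal illustration; the string below is one sequence with the run lengths quoted above):

```python
from itertools import groupby

def run_lengths(seq):
    """Lengths of the maximal runs of adjacent equal elements."""
    return [sum(1 for _ in grp) for _, grp in groupby(seq)]

# One 22-element sequence whose runs have lengths 4, 3, 3, 2, 6, 4.
seq = "++++---+++--++++++----"
print(len(run_lengths(seq)))  # 6
print(run_lengths(seq))       # [4, 3, 3, 2, 6, 4]
```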

Under the null hypothesis, the number of runs in a sequence of $$N$$ elements is a random variable whose conditional distribution given the observation of $$N_+$$ positive values and $$N_-$$ negative values ($$N = N_+ + N_-$$) is approximately normal, with:



$$\begin{align} \text{mean: } & \mu=\frac{2\,N_+\,N_-}{N} + 1, \\[6pt] \text{variance: } & \sigma^2=\frac{2\,N_+\,N_-\,(2\,N_+\,N_- - N)}{N^2\,(N-1)}=\frac{(\mu-1)(\mu-2)}{N-1}. \end{align}$$

Equivalently, writing each element as $$x_i \in \{+1, -1\}$$, the number of runs is

$$R = \frac 12 \left(N_+ + N_- + 1 - \sum_{i=1}^{N-1}x_ix_{i+1} \right).$$

These parameters do not assume that the positive and negative elements have equal probabilities of occurring, but only assume that the elements are independent and identically distributed. If the number of runs is significantly higher or lower than expected, the hypothesis of statistical independence of the elements may be rejected.
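The normal approximation above can be turned into a test statistic; the following is a minimal Python sketch (the function name and interface are illustrative, not from any standard library):

```python
import math

def runs_test_z(xs):
    """Approximate z-statistic of the runs test for a sequence of +1/-1 values.

    Uses the conditional mean and variance given N+ and N- from the
    normal approximation.
    """
    n = len(xs)
    n_pos = sum(1 for x in xs if x > 0)
    n_neg = n - n_pos
    # A run starts at position 1 and after every sign change.
    runs = 1 + sum(1 for a, b in zip(xs, xs[1:]) if a != b)
    mu = 2 * n_pos * n_neg / n + 1
    var = (mu - 1) * (mu - 2) / (n - 1)
    return (runs - mu) / math.sqrt(var)

# A perfectly alternating sequence has far more runs than expected
# (large positive z); a fully sorted one has far fewer (large negative z).
print(runs_test_z([1, -1] * 10))
print(runs_test_z([1] * 10 + [-1] * 10))
```

In both extreme cases the magnitude of $$z$$ is well beyond conventional critical values, so the independence hypothesis would be rejected.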

Moments
The number of runs is

$$R = \frac 12 \left(N_+ + N_- + 1 - \sum_{i=1}^{N-1}x_ix_{i+1} \right).$$

By linearity of expectation, and since each product $$x_i x_{i+1}$$ has the same distribution, the expectation is

$$E[R] = \frac 12 \left(N+1 - (N-1)\, E[x_1x_2]\right).$$

Conditioning on $$N_+$$ and $$N_-$$ and writing out all possibilities, we find

$$x_1 x_2 = \begin{cases} +1 \quad & \text{ with probability }\frac{N_+ (N_+-1) + N_- (N_- - 1)}{N(N-1)}, \\ -1 \quad & \text{ with probability }\frac{2N_+N_-}{N(N-1)}. \end{cases}$$

Thus

$$E[x_1x_2] = \frac{(N_+-N_-)^2 - N}{N(N-1)},$$

and simplifying yields

$$E[R] = \frac{2\,N_+\,N_-}{N} + 1.$$

Similarly, the variance of the number of runs is

$$\operatorname{Var}[R] = \frac 14 \operatorname{Var}\!\left[\sum_{i=1}^{N-1}x_i x_{i+1}\right] = \frac 14 \left((N-1)\,E[(x_1x_2)^2] + 2(N-2)\,E[x_1x_2^2x_3] + (N-2)(N-3)\, E[x_1x_2x_3x_4] - (N-1)^2\, E[x_1x_2]^2\right),$$

where the three expectations correspond to the $$N-1$$ squared terms, the $$2(N-2)$$ cross-terms sharing one index, and the $$(N-2)(N-3)$$ cross-terms with all indices distinct. Since $$x_i^2 = 1$$, the first expectation equals 1 and the second reduces to $$E[x_1x_3]$$; simplifying, we obtain the variance.
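The closed forms for the mean and variance can be sanity-checked by simulation. The sketch below (with illustrative parameter choices) shuffles a fixed multiset of signs, which realizes the conditional distribution given $$N_+$$ and $$N_-$$:

```python
import random
import statistics

def count_runs(xs):
    """Number of maximal runs in a sequence."""
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if a != b)

random.seed(0)
n_pos, n_neg = 12, 8            # illustrative choice of N+ and N-
n = n_pos + n_neg
base = [1] * n_pos + [-1] * n_neg

samples = []
for _ in range(20000):
    random.shuffle(base)        # uniform over orderings: the conditional null
    samples.append(count_runs(base))

mu = 2 * n_pos * n_neg / n + 1         # theoretical mean, = 10.6 here
var = (mu - 1) * (mu - 2) / (n - 1)    # theoretical variance, about 4.35 here
print(statistics.mean(samples), mu)
print(statistics.pvariance(samples), var)
```

The empirical mean and variance of the simulated run counts should match the formulas to within Monte Carlo error.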

In the same way, all higher moments of $$R$$ can be calculated, though the algebra becomes increasingly tedious.

Asymptotic normality
Theorem. If we sample longer and longer sequences, with $$\lim_{N\to\infty} N_+ / N = p$$ for some fixed $$p \in (0, 1)$$, then $$\frac{R-\mu}{\sigma} \sim \sqrt N \left(R/\mu - 1\right)$$ converges in distribution to the standard normal distribution (mean 0, variance 1).

Proof sketch. It suffices to prove the asymptotic normality of the sequence $$\sum_{i=1}^{N-1} x_i x_{i+1} $$, which can be proven by a martingale central limit theorem.
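The theorem can be illustrated empirically (a hedged sketch with arbitrary simulation parameters): for i.i.d. signs with $$P(x_i = +1) = p$$, the standardized run count should fall inside $$\pm 1.96$$ about 95% of the time.

```python
import math
import random

random.seed(1)
N, p, trials = 500, 0.3, 4000   # illustrative parameters
covered = 0
for _ in range(trials):
    xs = [1 if random.random() < p else -1 for _ in range(N)]
    n_pos = sum(1 for x in xs if x == 1)
    n_neg = N - n_pos
    runs = 1 + sum(1 for a, b in zip(xs, xs[1:]) if a != b)
    mu = 2 * n_pos * n_neg / N + 1
    var = (mu - 1) * (mu - 2) / (N - 1)
    if abs((runs - mu) / math.sqrt(var)) < 1.96:
        covered += 1

print(covered / trials)  # close to 0.95 if the normal approximation holds
```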

Applications
Runs tests can be used to test:
 * the randomness of a distribution, by taking the data in the given order and marking with + the data greater than the median, and with − the data less than the median (numbers equal to the median are omitted);
 * whether a function fits well to a data set, by marking the data exceeding the function value with + and the other data with −. For this use, the runs test, which takes into account the signs but not the distances, is complementary to the chi-squared test, which takes into account the distances but not the signs.
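The first application can be sketched as follows (a minimal illustration; `median_runs_test` is a hypothetical helper, not a library function):

```python
import math
import statistics

def median_runs_test(data):
    """Z-statistic of the runs test for randomness about the median.

    Values equal to the median are omitted, as described above.
    """
    med = statistics.median(data)
    signs = [1 if x > med else -1 for x in data if x != med]
    n = len(signs)
    n_pos = sum(1 for s in signs if s == 1)
    n_neg = n - n_pos
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2 * n_pos * n_neg / n + 1
    var = (mu - 1) * (mu - 2) / (n - 1)
    return (runs - mu) / math.sqrt(var)

# A monotone (trending) series stays below the median, then above it:
# only 2 runs, far fewer than expected, giving a large negative z.
print(median_runs_test(list(range(20))))
```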

Related tests
The Kolmogorov–Smirnov test has been shown to be more powerful than the Wald–Wolfowitz test for detecting differences between distributions that differ solely in their location. However, the reverse is true if the distributions differ in variance and have at most a small difference in location.

The Wald–Wolfowitz runs test has been extended for use with several samples.