User:Talgalili/sandbox/Holm–Bonferroni method

In statistics, the Holm–Bonferroni method is used to counteract the problem of multiple comparisons. It is intended to control the familywise error rate and offers a simple test that is uniformly more powerful than the Bonferroni correction. It is one of the earliest usages of stepwise algorithms in simultaneous inference.

It is named after Sture Holm, who published the method in 1979, and Carlo Emilio Bonferroni.

Introduction
When considering several hypotheses in the same test, the problem of multiplicity arises. Intuitively, the more hypotheses we check, the higher the probability of witnessing a rare result. With 10 different hypotheses and a significance level of 0.05, there is a more than 40% chance of having one or more type I errors. The Holm–Bonferroni method is one of many ways to address this issue. It modifies the rejection criteria so that the overall probability of witnessing one or more type I errors (also called the familywise error rate) is controlled at a predetermined level.
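
The 40% figure follows from assuming the 10 tests are independent; a quick check (illustrative only, not from the original article):

```python
# Probability of at least one type I error across m independent tests,
# each performed at significance level alpha, when all nulls are true.
m, alpha = 10, 0.05
fwer = 1 - (1 - alpha) ** m
print(round(fwer, 3))  # 0.401, i.e. more than 40%
```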

Formulation
The algorithm is as follows:
 * Let $$H_{1},...,H_{m}$$ be a family of hypotheses and $$P_{1},...,P_{m}$$ the corresponding P-values.
 * Start by ordering the p-values $$P_{(1)} \ldots P_{(m)}$$ and let the associated hypotheses be $$H_{(1)} \ldots H_{(m)}$$
 * For a given significance level $$\alpha$$, let $$k$$ be the minimal index such that $$P_{(k)} > \frac{\alpha}{m+1-k}$$
 * Reject the null hypotheses $$H_{(1)} \ldots H_{(k-1)}$$ and do not reject $$H_{(k)} \ldots H_{(m)}$$
 * If $$k=1$$, do not reject any of the hypotheses; if no such $$k$$ exists, reject all of them.
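
The steps above can be sketched in Python (a minimal illustration; `holm_bonferroni` is a hypothetical helper name, not part of any standard library):

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Step-down Holm-Bonferroni: return a list of booleans, True where
    the corresponding null hypothesis is rejected at FWER level alpha."""
    m = len(pvalues)
    # Sort indices by ascending p-value: P_(1) <= ... <= P_(m).
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for step, i in enumerate(order):         # step = 0 corresponds to k = 1
        if pvalues[i] > alpha / (m - step):  # threshold alpha / (m + 1 - k)
            break                            # stop at the first failure
        reject[i] = True
    return reject
```

With the p-values used in the example below, `holm_bonferroni([0.01, 0.04, 0.03, 0.005])` rejects the first and last hypotheses only.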

This procedure controls the familywise error rate: $$FWER\leq\alpha$$, where $$FWER$$ is the familywise error rate.

Proof that Holm-Bonferroni controls the FWER
Let $$H_{(1)}\ldots H_{(m)}$$ be a family of hypotheses sorted by their p-values $$P_{(1)}\leq P_{(2)}\leq\ldots\leq P_{(m)}$$. Let $$I_{0}$$ be the set of indices corresponding to the (unknown) true null hypotheses, having $$m_{0}$$ members, and define $$i_{0}=m-m_{0}+1$$.

Define the event $$A=\left\{ P_{i}>\frac{\alpha}{m_{0}},\forall i\in I_{0}\right\}$$. From the Bonferroni inequalities we get that $$Pr(A)\geq1-\alpha$$. Suppose the procedure rejects some true null hypothesis, and let $$h$$ be the smallest sorted position of a rejected true null. Every hypothesis at a position $$1,\ldots,h-1$$ is then also rejected and, by the minimality of $$h$$, is a false null; hence $$h-1\leq m-m_{0}$$, so $$h\leq i_{0}$$ and $$m-h+1\geq m_{0}$$. Since $$H_{(h)}$$ is rejected, $$P_{(h)}\leq\frac{\alpha}{m-h+1}\leq\frac{\alpha}{m_{0}}$$, so the event $$A$$ does not occur. We can therefore conclude that $$FWER=Pr\left\{\text{some true null is rejected}\right\}\leq Pr(A^{c})\leq\alpha$$, as required.

Proof that Holm-Bonferroni controls the FWER using the closure principle
The Holm-Bonferroni method can be viewed as a closed testing procedure, with the Bonferroni correction applied locally to each intersection of null hypotheses.

It is a shortcut procedure, since in practice it requires at most $$m$$ comparisons, while the number of intersections of null hypotheses to be tested is of order $$2^m$$.

The closure principle states that a hypothesis $$H_i$$ in a family of hypotheses $$H_1,...,H_m$$ is rejected - while controlling the family-wise error rate at level $$\alpha$$ - if and only if every intersection hypothesis that contains $$H_i$$ is rejected at local level $$\alpha$$.

In the Holm-Bonferroni procedure, we first test $$H_{(1)}$$. If it is not rejected, then the intersection of all null hypotheses $$\bigcap\nolimits_{i = 1}^m H_i$$ is not rejected either. In that case there exists, for each of the elementary hypotheses $$H_1,...,H_m$$, at least one non-rejected intersection hypothesis containing it, and thus we reject none of the elementary hypotheses.

If $$H_{(1)}$$ is rejected at level $$\alpha/m$$, then all the intersection sub-families that contain it are rejected too, and thus $$H_{(1)}$$ itself is rejected. This is because $$P_{(1)}$$ is the smallest p-value in each of these intersection sub-families, and the size of a sub-family is at most $$m$$, so that its Bonferroni threshold is at least $$\alpha/m$$.

The same rationale applies for $$H_{(2)}$$. However, since $$H_{(1)}$$ has already been rejected, it suffices to reject all the intersection sub-families of $$H_{(2)}$$ that do not contain $$H_{(1)}$$. Each of these contains at most $$m-1$$ hypotheses, so once $$P_{(2)}\leq\alpha/(m-1)$$ holds, all the remaining intersections that contain $$H_{(2)}$$ are rejected.

The same applies for each $$1\leq i \leq m$$.

Example
Consider four null hypotheses $$H_1,...,H_4$$ with unadjusted p-values $$p_1=0.01$$, $$p_2=0.04$$, $$p_3=0.03$$ and $$p_4=0.005$$, to be tested at significance level $$\alpha=0.05$$. Since the procedure is step-down, we first test $$H_4=H_{(1)}$$, which has the smallest p-value $$p_4=p_{(1)}=0.005$$. This p-value is compared to $$\alpha/4=0.0125$$; since it is smaller, the null hypothesis is rejected and we continue to the next one. Since $$p_1=p_{(2)}=0.01<0.0167=\alpha/3$$, we reject $$H_1=H_{(2)}$$ as well and continue. The next hypothesis, $$H_3$$, is not rejected, since $$p_3=p_{(3)}=0.03 > 0.025=\alpha/2$$. We stop testing and conclude that $$H_1$$ and $$H_4$$ are rejected while $$H_2$$ and $$H_3$$ are not, controlling the familywise error rate at level $$\alpha=0.05$$. Note that even though $$p_2=p_{(4)}=0.04 < 0.05=\alpha$$ holds, $$H_2$$ is not rejected; the testing procedure stops as soon as a hypothesis fails to be rejected.
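
The comparisons in this example can be checked step by step (a small sketch, assuming the four p-values above):

```python
pvalues = sorted([0.01, 0.04, 0.03, 0.005])  # P_(1) <= ... <= P_(4)
alpha, m = 0.05, 4
steps = []
for k, p in enumerate(pvalues, start=1):
    threshold = alpha / (m + 1 - k)          # Holm threshold at step k
    steps.append((k, p, round(threshold, 4), p <= threshold))
    print(steps[-1])
# Steps 1 and 2 pass (0.005 <= 0.0125 and 0.01 <= 0.0167); step 3 fails
# (0.03 > 0.025), so the procedure stops and H_2 (p = 0.04) is never tested.
```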

Adjusted P-value
The adjusted P-values for Holm–Bonferroni method are:
 * $$\widetilde{p}_{(i)}=\max_{j\leq i}\left\{ (m-j+1)p_{(j)}\right\} _{1}$$, where $$\{x\}_{1}\equiv \min(x,1)$$.

In the earlier example, the adjusted p-values are $$\widetilde{p}_1 = 0.03$$, $$\widetilde{p}_2 = 0.06$$, $$\widetilde{p}_3 = 0.06$$ and $$\widetilde{p}_4 = 0.02$$. Only hypotheses $$H_1$$ and $$H_4$$ are rejected at level $$\alpha=0.05$$.
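
These adjusted p-values can be reproduced with the running-maximum formula above (a minimal sketch; variable names are illustrative):

```python
pvalues = [0.01, 0.04, 0.03, 0.005]  # p_1, ..., p_4
m = len(pvalues)
order = sorted(range(m), key=lambda i: pvalues[i])
adjusted = [0.0] * m
running_max = 0.0
for j, i in enumerate(order):        # j = 0 corresponds to j = 1 in the formula
    running_max = max(running_max, (m - j) * pvalues[i])
    adjusted[i] = min(running_max, 1.0)   # {x}_1 = min(x, 1)
print([round(x, 4) for x in adjusted])    # [0.03, 0.06, 0.06, 0.02]
```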

Šidák version
It is possible to replace $$\frac{\alpha}{m},\frac{\alpha}{m-1},...,\frac{\alpha}{1}$$ with
 * $$1-(1-\alpha)^{1/m},1-(1-\alpha)^{1/(m-1)},...,1-(1-\alpha)^{1}$$,

which provides a slightly more powerful test; the increase in power, however, is not large.
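
The size of the gain is easy to see numerically; for instance, with four hypotheses at $$\alpha=0.05$$ (a quick illustrative check, not from the source):

```python
m, alpha = 4, 0.05
# Holm thresholds alpha/m, ..., alpha/1 vs. their Sidak counterparts.
holm = [alpha / (m - k) for k in range(m)]
sidak = [1 - (1 - alpha) ** (1.0 / (m - k)) for k in range(m)]
for h, s in zip(holm, sidak):
    print(round(h, 5), round(s, 5))
# The first Sidak threshold is about 0.01274 vs. Holm's 0.0125:
# each step's threshold is only marginally larger.
```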

Weighted version
Let $$P_{(1)},...,P_{(m)}$$ be the ordered unadjusted p-values, and let $$H_{(i)}$$, with weight $$w_{(i)}\geq 0$$, correspond to $$P_{(i)}$$. Reject $$H_{(i)}$$ as long as

 * $$P_{(j)}\leq\frac{w_{(j)}}{\sum^m_{k=j}{w_{(k)}}}\alpha,\quad j=1,...,i$$

The adjusted weighted p-values are: $$\widetilde{p}_{(i)}=\max_{j\leq i}\left\{\frac{\sum^m_{k=j}{w_{(k)}}}{w_{(j)}} p_{(j)}\right\} _{1}$$, where $$\{x\}_{1}\equiv \min(x,1)$$.
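
The weighted rule can be sketched as follows (a minimal illustration; `weighted_holm` is a hypothetical name, and ties among p-values are ignored):

```python
def weighted_holm(pvalues, weights, alpha=0.05):
    """Weighted Holm: stepping through the sorted p-values, reject while
    P_(j) <= w_(j) / (sum of weights not yet rejected) * alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    remaining = sum(weights)                 # sum_{k=j}^{m} w_(k)
    for i in order:
        if pvalues[i] > weights[i] / remaining * alpha:
            break                            # stop at the first failure
        reject[i] = True
        remaining -= weights[i]
    return reject
```

With all weights equal, this reduces to the unweighted Holm–Bonferroni procedure.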

Alternatives and usage
The Holm-Bonferroni method is uniformly more powerful than the classic Bonferroni correction. Since it requires no assumptions, it can always be substituted for the Bonferroni correction. However, it is not the most powerful simultaneous inference procedure available. There are many other methods that control the family-wise error rate, many of which are more powerful than Holm-Bonferroni; among them are the Hochberg procedure (1988) and the Hommel procedure.

In the Hochberg procedure, rejection of $$H_{(1)} \ldots H_{(k)}$$ is made after finding the maximal index $$k$$ such that $$P_{(k)} \leq \frac{\alpha}{m+1-k}$$. Thus, the Hochberg procedure is more powerful by construction. However, the Hochberg procedure requires the hypotheses to be independent (or to satisfy certain forms of positive dependence), whereas Holm-Bonferroni can be applied with no further assumptions on the data.
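
The step-up contrast can be sketched alongside the step-down code above (an illustration; `hochberg` is a hypothetical helper name):

```python
def hochberg(pvalues, alpha=0.05):
    """Step-up Hochberg: find the LARGEST k with P_(k) <= alpha/(m+1-k)
    and reject H_(1), ..., H_(k)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    # Scan from the largest p-value down; the first success fixes k.
    for step in range(m - 1, -1, -1):        # step = k - 1
        if pvalues[order[step]] <= alpha / (m - step):
            for i in order[: step + 1]:
                reject[i] = True
            break
    return reject
```

For the example p-values above (0.005, 0.01, 0.03, 0.04 at $$\alpha=0.05$$), the largest p-value already satisfies 0.04 ≤ 0.05/1, so Hochberg rejects all four hypotheses, whereas Holm rejects only two.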

Bonferroni contribution
Carlo Emilio Bonferroni did not take part in inventing the method described here. Holm originally called the method the "sequentially rejective Bonferroni test", and it became known as Holm-Bonferroni only after some time. Holm's motives for naming his method after Bonferroni are explained in the original paper: "The use of the Boole inequality within multiple inference theory is usually called the Bonferroni technique, and for this reason we will call our test the sequentially rejective Bonferroni test."