Berkson's paradox

Berkson's paradox, also known as Berkson's bias, collider bias, or Berkson's fallacy, is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks, and conditioning on a collider in graphical models.

It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.

Overview
The most common example of Berkson's paradox is a false observation of a negative correlation between two desirable traits: members of a population that have one desirable trait appear to tend to lack the other. The paradox occurs when this negative correlation is observed even though the two traits are actually unrelated (or even positively correlated) in the full population, because members who lack both traits are under-observed. For example, a person may observe from their experience that fast food restaurants in their area which serve good hamburgers tend to serve bad fries and vice versa; but because they would likely not eat anywhere where both were bad, they fail to account for the large number of restaurants in this category, whose inclusion would weaken or even flip the observed correlation.
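
The selection effect described above can be checked with a short Monte Carlo sketch. The 50% quality rates and the rule that diners skip restaurants where both items are bad are illustrative assumptions, not figures from the text:

```python
import random

random.seed(0)

# Hypothetical model: burger and fry quality are independent coin flips.
restaurants = [(random.random() < 0.5, random.random() < 0.5)
               for _ in range(100_000)]  # (good_burgers, good_fries)

# Selection: diners never visit restaurants where both items are bad.
visited = [(b, f) for b, f in restaurants if b or f]

def corr(pairs):
    """Pearson correlation of two binary variables."""
    n = len(pairs)
    mb = sum(b for b, _ in pairs) / n
    mf = sum(f for _, f in pairs) / n
    cov = sum((b - mb) * (f - mf) for b, f in pairs) / n
    return cov / (mb * (1 - mb) * mf * (1 - mf)) ** 0.5

print(corr(restaurants))  # near 0: the traits are independent overall
print(corr(visited))      # clearly negative within the visited subset
```

Restricting attention to the three surviving cells (good/good, good/bad, bad/good) is what creates the negative correlation; no causal link between burgers and fries is needed.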

Original illustration
Berkson's original illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. Because samples are taken from a hospital in-patient population, rather than from the general public, this can result in a spurious negative association between the disease and the risk factor. For example, if the risk factor is diabetes and the disease is cholecystitis, a hospital patient without diabetes is more likely to have cholecystitis than a member of the general population, since the patient must have had some non-diabetes (possibly cholecystitis-causing) reason to enter the hospital in the first place. That result will be obtained regardless of whether there is any association between diabetes and cholecystitis in the general population.
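
The hospital mechanism can be sketched in a small simulation. The disease rates below, and the assumption that anyone with at least one condition is admitted, are illustrative choices, not Berkson's original figures:

```python
import random

random.seed(0)

# Hypothetical, illustrative rates; all three conditions independent.
P_DIABETES, P_CHOLECYSTITIS, P_OTHER = 0.10, 0.05, 0.15

people = [(random.random() < P_DIABETES,
           random.random() < P_CHOLECYSTITIS,
           random.random() < P_OTHER)   # some other reason for admission
          for _ in range(200_000)]

# In-patients: anyone with at least one condition is admitted.
hospital = [(d, c) for d, c, other in people if d or c or other]
everyone = [(d, c) for d, c, _ in people]

def cholecystitis_rate(pairs, diabetic):
    """Fraction with cholecystitis among diabetics or non-diabetics."""
    group = [c for d, c in pairs if d == diabetic]
    return sum(group) / len(group)

# In the general population the rates are (about) equal ...
print(cholecystitis_rate(everyone, True), cholecystitis_rate(everyone, False))

# ... but among in-patients, non-diabetics show markedly more
# cholecystitis: a non-diabetic must have some other admission reason,
# which is often cholecystitis itself.
print(cholecystitis_rate(hospital, True), cholecystitis_rate(hospital, False))
```

The spurious negative association appears purely from the admission filter, even though the two diseases are independent in the simulated population.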

Ellenberg example
An example presented by Jordan Ellenberg: Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold. Then nicer men do not have to be as handsome to qualify for Alex's dating pool. So, among the men that Alex dates, Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex's selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population (since even among nice men, the ugliest portion of the population is skipped). Berkson's negative correlation is an effect that arises within the dating pool: the rude men that Alex dates must have been even more handsome to qualify.
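
The dating-pool effect can be simulated directly. Treating niceness and handsomeness as independent standard-normal traits, with a selection threshold of 1.0, are arbitrary modeling assumptions for the sketch:

```python
import random

random.seed(1)

# Hypothetical model: niceness and handsomeness are independent
# standard-normal traits; Alex dates a man only if their sum
# exceeds a threshold (1.0 here, an arbitrary choice).
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
pool = [(n, h) for n, h in population if n + h > 1.0]

def corr(pairs):
    """Pearson correlation."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

print(corr(population))  # near 0: traits independent in the population
print(corr(pool))        # clearly negative within the dating pool

# The pool still compares favorably with the population: its average
# handsomeness is well above the population mean of 0.
print(sum(h for _, h in pool) / len(pool))
```

Both observations in the paragraph show up at once: a negative correlation inside the pool, and a pool whose members are above average on each trait.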

Quantitative example
As a quantitative example, suppose a collector has 1000 postage stamps, of which 300 are pretty and 100 are rare, with 30 being both pretty and rare. 10% of all his stamps are rare and 10% of his pretty stamps are rare, so prettiness says nothing about rarity. He puts the 370 stamps which are pretty or rare on display. Just over 27% of the stamps on display are rare (100/370), but still only 10% (30/300) of the pretty stamps are rare (while 100% of the 70 not-pretty stamps on display are rare). If an observer only considers stamps on display, they will observe a spurious negative relationship between prettiness and rarity as a result of the selection bias: not-prettiness strongly indicates rarity in the display, but not in the collection as a whole.
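
The arithmetic of the stamp example can be verified in a few lines (a sketch using only the counts stated above):

```python
# Numbers from the stamp example: 1000 stamps, 300 pretty, 100 rare,
# 30 both pretty and rare.
total, pretty, rare, both = 1000, 300, 100, 30

# In the whole collection, prettiness says nothing about rarity:
assert rare / total == 0.10   # 10% of all stamps are rare
assert both / pretty == 0.10  # 10% of pretty stamps are rare

# The display holds the stamps that are pretty or rare:
display = pretty + rare - both        # 370 stamps on display
rare_on_display = rare / display      # 100/370, just over 27%
print(rare_on_display)

# Among the not-pretty stamps on display, every one is rare:
not_pretty_on_display = display - pretty          # 70
assert (rare - both) / not_pretty_on_display == 1.0
```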

Statement
Two independent events become conditionally (negatively) dependent given that at least one of them occurs. Symbolically:
 * If $$P(A) > 0$$, $$P(B) > 0$$, $$P(A\cap B)=P(A)P(B)$$, and $$P(A\cup B) < 1$$, then

$$P(A\cap B|A\cup B) < P(A|A\cup B)P(B|A\cup B)$$

Proof: Note that $$P(A|A\cup B)=P(A)/P(A\cup B)$$ and $$P(B|A\cup B)=P(B)/P(A\cup B)$$, which, together with $$P(A\cap B)=P(A)P(B)$$, $$P(A)P(B) > 0$$, and $$P(A\cup B) < 1$$ (so $$\frac{1}{P(A \cup B)} < \frac{1}{[P(A \cup B)]^2}$$), implies that

$$ \begin{align} P(A\cap B|A\cup B) = \frac{P(A\cap B)}{P(A\cup B)} = \frac{P(A)P(B)}{P(A\cup B)} < \frac{P(A)P(B)}{[P(A\cup B) ]^2} = P(A|A\cup B)P(B|A\cup B). \end{align} $$
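
As a sanity check of the inequality, one can evaluate both sides with exact rational arithmetic; the choice $$P(A)=P(B)=1/2$$ matches the numerical example worked through below:

```python
from fractions import Fraction as F

# Independent events with P(A) = P(B) = 1/2.
pA, pB = F(1, 2), F(1, 2)
pAB = pA * pB           # independence: P(A ∩ B) = 1/4
pAorB = pA + pB - pAB   # P(A ∪ B) = 3/4 < 1

lhs = pAB / pAorB                  # P(A ∩ B | A ∪ B) = 1/3
rhs = (pA / pAorB) * (pB / pAorB)  # (2/3)(2/3) = 4/9

assert lhs < rhs  # conditional negative dependence, as the theorem states
print(lhs, rhs)
```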

One can see this in tabular form as follows, where ~A means "not A" and every cell except the (~A, ~B) cell contains outcomes where at least one event occurs:

|        | A      | ~A      |
|--------|--------|---------|
| B      | A ∩ B  | ~A ∩ B  |
| ~B     | A ∩ ~B | ~A ∩ ~B |

For instance, if one has a sample of $$100$$, and both $$A$$ and $$B$$ occur independently half the time ($$P(A) = P(B) = 1 / 2$$), one obtains:

|        | A  | ~A |
|--------|----|----|
| B      | 25 | 25 |
| ~B     | 25 | 25 |

So in $$75$$ outcomes, either $$A$$ or $$B$$ occurs, of which $$50$$ have $$A$$ occurring. By comparing the conditional probability of $$A$$ to the unconditional probability of $$A$$:
 * $$P(A|A \cup B) = 50 / 75 = 2 / 3 > P(A) = 50 / 100 = 1 / 2$$

We see that the probability of $$A$$ is higher ($$2 / 3$$) in the subset of outcomes where ($$A$$ or $$B$$) occurs than in the overall population ($$1 / 2$$). On the other hand, the probability of $$A$$ given both $$B$$ and ($$A$$ or $$B$$) is simply the unconditional probability of $$A$$, $$P(A)$$, since $$A$$ is independent of $$B$$. In the numerical example, we have conditioned on being in the top row:

|        | A  | ~A |
|--------|----|----|
| B      | 25 | 25 |

Here the probability of $$A$$ is $$25 / 50 = 1 / 2$$.

Berkson's paradox arises because the conditional probability of $$A$$ given $$B$$ within the three-cell subset equals the conditional probability of $$A$$ given $$B$$ in the overall population, while the unconditional probability of $$A$$ within the subset is inflated relative to its value in the overall population. Hence, within the subset, the presence of $$B$$ decreases the conditional probability of $$A$$, bringing it back down to its overall unconditional value:


 * $$P(A|B, A \cup B) = P(A|B) = P(A)$$
 * $$P(A|A \cup B) > P(A)$$
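
Both bullet points can be verified directly from the cell counts of the 100-outcome example (a small sketch; the four cell counts of 25 follow from independence with $$P(A)=P(B)=1/2$$):

```python
# Counts from the 100-outcome example: each of the four cells holds 25.
a_and_b, a_not_b, b_not_a, neither = 25, 25, 25, 25
total = a_and_b + a_not_b + b_not_a + neither  # 100

p_a = (a_and_b + a_not_b) / total                          # 1/2
p_a_given_union = (a_and_b + a_not_b) / (total - neither)  # 50/75 = 2/3
p_a_given_b = a_and_b / (a_and_b + b_not_a)                # 25/50 = 1/2

assert p_a_given_union > p_a  # conditioning on A∪B inflates P(A) ...
assert p_a_given_b == p_a     # ... but learning B brings it back down
```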

Because the effect of conditioning on $$(A \cup B)$$ derives from the relative size of $$P(A|A \cup B)$$ and $$P(A)$$, the effect is particularly large when $$A$$ is rare ($$P(A) \ll 1$$) but very strongly correlated with $$B$$ ($$P(A|B) \approx 1$$). For example, consider the case below, where $$N$$ is very large:

|        | A | ~A |
|--------|---|----|
| B      | 1 | 0  |
| ~B     | 0 | N  |
For the case without conditioning on $$(A \cup B)$$ we have


 * $$P(A) = 1/(N+1)$$
 * $$P(A|B) = 1$$

So $$A$$ occurs rarely unless $$B$$ is present, in which case $$A$$ always occurs. Thus $$B$$ dramatically increases the likelihood of $$A$$.

For the case with conditioning on $$(A \cup B)$$ we have


 * $$P(A|A \cup B) = 1$$
 * $$P(A|B, A \cup B) = P(A|B) = 1$$

Now $$A$$ always occurs, whether $$B$$ is present or not, so $$B$$ has no impact on the likelihood of $$A$$. Thus we see that, for highly correlated data, a large positive correlation between $$B$$ and $$A$$ can be effectively removed when one conditions on $$(A \cup B)$$.
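
The collapse of the association can be shown numerically (a sketch using the counts of the extreme case; $$N = 10^6$$ is an arbitrary large choice):

```python
# Cell counts for the extreme case: one outcome where both A and B
# occur, N outcomes where neither does (N = 10**6 is arbitrary).
N = 10**6
a_and_b, neither = 1, N
total = a_and_b + neither

# Without conditioning on A∪B:
p_a = a_and_b / total            # 1/(N+1): A is very rare
p_a_given_b = a_and_b / a_and_b  # 1: B guarantees A

# Conditioning on A∪B discards the 'neither' outcomes, leaving only
# the single (A and B) outcome, so both probabilities become 1:
p_a_given_union = a_and_b / a_and_b
p_a_given_b_and_union = a_and_b / a_and_b

print(p_a, p_a_given_b, p_a_given_union, p_a_given_b_and_union)
```

Before conditioning, learning $$B$$ raises the probability of $$A$$ from nearly 0 to 1; after conditioning, it changes nothing.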