Horvitz–Thompson estimator

In statistics, the Horvitz–Thompson estimator, named after Daniel G. Horvitz and Donovan J. Thompson, is a method for estimating the total and mean of a pseudo-population in a stratified sample by applying inverse probability weighting to account for the difference in sampling distribution between the collected data and the target population. The Horvitz–Thompson estimator is frequently applied in survey analyses and can be used to account for missing data, as well as for many sources of unequal selection probabilities.

The method
Formally, let $$Y_i, i = 1, 2, \ldots, n$$ be an independent sample from n of N ≥ n distinct strata with a common mean μ. Suppose further that $$\pi_i$$ is the inclusion probability that a randomly sampled individual in a superpopulation belongs to the ith stratum. The Horvitz–Thompson estimator of the total is given by:

$$ \hat{Y}_{HT} = \sum_{i=1}^n \pi_i^{-1} Y_i, $$

and the Horvitz–Thompson estimate of the mean is given by:

$$ \hat{\mu}_{HT} = N^{-1}\hat{Y}_{HT} = N^{-1}\sum_{i=1}^n \pi_i^{-1} Y_i. $$
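
These two formulas amount to summing the observations weighted by their inverse inclusion probabilities. A minimal sketch in Python, with illustrative values for the sample and with the inclusion probabilities $$\pi_i$$ assumed known:

```python
import numpy as np

def horvitz_thompson(y, pi, N):
    """Horvitz-Thompson estimates of the population total and mean.

    y  : observed values for the n sampled units
    pi : first-order inclusion probability of each sampled unit
    N  : size of the target population
    """
    y = np.asarray(y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    total = np.sum(y / pi)   # hat{Y}_HT = sum_i pi_i^{-1} y_i
    mean = total / N         # hat{mu}_HT = N^{-1} hat{Y}_HT
    return total, mean

# Three units sampled with unequal probabilities (illustrative numbers):
total, mean = horvitz_thompson(y=[8.0, 11.0, 14.0],
                               pi=[0.4, 0.5, 0.7], N=10)
```

Units with small inclusion probabilities receive large weights, so each sampled unit stands in for the unsampled units it represents.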

In a Bayesian probabilistic framework, $$\pi_i$$ is considered the proportion of individuals in a target population belonging to the ith stratum. Hence, $$\pi_i^{-1} Y_i$$ can be thought of as an estimate of the total over the complete sample of persons within the ith stratum. The Horvitz–Thompson estimator can also be expressed as the limit of a weighted bootstrap resampling estimate of the mean, and it can be viewed as a special case of multiple imputation approaches.

For post-stratified study designs, estimation of $$\pi$$ and $$\mu$$ is done in distinct steps. In such cases, computing the variance of $$\hat{\mu}_{HT}$$ is not straightforward. Resampling techniques such as the bootstrap or the jackknife can be applied to obtain consistent estimates of the variance of the Horvitz–Thompson estimator. The "survey" package for R conducts analyses for post-stratified data using the Horvitz–Thompson estimator.
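
The resampling idea can be sketched as follows. This is a naive with-replacement bootstrap of the sampled $$(Y_i, \pi_i)$$ pairs, for illustration only; production survey bootstraps (such as the rescaled bootstraps implemented in R's "survey" package) adjust the weights rather than simply resampling pairs:

```python
import numpy as np

def ht_mean(y, pi, N):
    """Horvitz-Thompson estimate of the mean."""
    return np.sum(np.asarray(y, float) / np.asarray(pi, float)) / N

def bootstrap_var(y, pi, N, B=2000, seed=0):
    """Naive bootstrap variance of the HT mean (illustrative sketch).

    Resamples the n observed (y, pi) pairs with replacement B times
    and returns the sample variance of the replicate HT means.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    pi = np.asarray(pi, float)
    n = len(y)
    reps = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)      # resample unit indices
        reps[b] = ht_mean(y[idx], pi[idx], N)
    return reps.var(ddof=1)

v = bootstrap_var(y=[8.0, 11.0, 14.0], pi=[0.4, 0.5, 0.7], N=10)
```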

Proof of Horvitz–Thompson unbiased estimation of the mean
The Horvitz–Thompson estimator can be shown to be unbiased by evaluating its expectation, $$\operatorname E \bar{X}_n^{HT}$$, as follows:

$$ \begin{align} \operatorname E\bar{X}_n^{HT} & = \operatorname E\left[ \frac{1}{N} \sum_{i=1}^n \frac{X_{I_i}}{\pi_{I_i}} \right] \\[6pt]
& = \operatorname E\left[ \frac{1}{N} \sum_{i=1}^N \frac{X_i}{\pi_i} 1_{i\in D_n} \right] \\[6pt]
& = \sum_{b=1}^B P\left(D_n^{(b)}\right)\left[\frac{1}{N}\sum_{i=1}^N \frac{X_i}{\pi_i} 1_{i\in D_n^{(b)}} \right] \\[6pt]
& = \frac{1}{N} \sum_{i=1}^N \frac{X_i}{\pi_i}\sum_{b=1}^B 1_{i\in D_n^{(b)}} P\left(D_n^{(b)}\right) \\[6pt]
& = \frac{1}{N}\sum_{i=1}^N \frac{X_i}{\pi_i}\,\pi_i \\[6pt]
& = \frac{1}{N}\sum_{i=1}^N X_i, \end{align} $$

where $$D_n = \{X_{I_1}, X_{I_2}, \ldots, X_{I_n}\}$$ is the realized sample, $$D_n^{(b)}, b = 1, \ldots, B$$ ranges over the possible samples, and the penultimate step uses $$\sum_{b=1}^B 1_{i\in D_n^{(b)}} P(D_n^{(b)}) = \pi_i$$, the inclusion probability of unit i.
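
The unbiasedness argument can be checked numerically by enumerating every possible sample of a small population under an arbitrary sampling design, deriving the inclusion probabilities $$\pi_i$$ from the design, and averaging the HT mean over all samples. The population values and design probabilities below are illustrative:

```python
from itertools import combinations

X = [3.0, 7.0, 1.0, 9.0]                 # small population, N = 4
N, n = len(X), 2

# An arbitrary sampling design: one probability per size-n sample.
samples = list(combinations(range(N), n))       # 6 possible samples
P = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]        # sums to 1

# First-order inclusion probabilities:
# pi_i = sum of P over samples containing unit i
pi = [sum(p for s, p in zip(samples, P) if i in s) for i in range(N)]

# E[HT mean] = sum_b P(b) * (1/N) * sum_{i in b} X_i / pi_i
e_ht = sum(p * sum(X[i] / pi[i] for i in s) / N
           for s, p in zip(samples, P))
pop_mean = sum(X) / N   # e_ht matches this, confirming unbiasedness
```

Swapping the order of summation, exactly as in the proof, collapses the expectation to the population mean regardless of the design probabilities chosen, provided every $$\pi_i$$ is positive.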

The Hansen–Hurwitz (1943) estimator is known to be inferior to the Horvitz–Thompson (1952) strategy, which is associated with a number of Inclusion Probabilities Proportional to Size (IPPS) sampling procedures.