Bradley–Terry model

The Bradley–Terry model is a probability model for the outcome of pairwise comparisons between items, teams, or objects. Given a pair of items $i$ and $j$ drawn from some population, it estimates the probability that the pairwise comparison $i > j$ turns out true, as

 * $$\Pr(i > j) = \frac{p_i}{p_i + p_j},$$

where $p_i$ is a positive real-valued score assigned to individual $i$. The comparison $i > j$ can be read as "$i$ is preferred to $j$", "$i$ ranks higher than $j$", or "$i$ beats $j$", depending on the application.

For example, $p_i$ might represent the skill of a team in a sports tournament and $$\Pr(i>j)$$ the probability that team $i$ wins a game against team $j$. Or $p_i$ might represent the quality or desirability of a commercial product and $$\Pr(i>j)$$ the probability that a consumer will prefer product $i$ over product $j$.

The Bradley–Terry model can be used in the forward direction to predict outcomes, as described, but is more commonly used in reverse to infer the scores $p_i$ given an observed set of outcomes. In this type of application $p_i$ represents some measure of the strength or quality of item $i$, and the model lets us estimate the strengths from a series of pairwise comparisons. In a survey of wine preferences, for instance, it might be difficult for respondents to give a complete ranking of a large set of wines, but relatively easy for them to compare sample pairs of wines and say which they feel is better. Based on a set of such pairwise comparisons, the Bradley–Terry model can then be used to derive a full ranking of the wines.

Once the values of the scores $p_{i}$ have been calculated, the model can then also be used in the forward direction, for instance to predict the likely outcome of comparisons that have not yet actually occurred. In the wine survey example, for instance, one could calculate the probability that someone will prefer wine $$i$$ over wine $$j$$, even if no one in the survey directly compared that particular pair.
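
The forward-direction calculation is straightforward to express in code. A minimal sketch in Python, with purely illustrative scores (not drawn from any real survey):

```python
def bt_win_probability(p_i: float, p_j: float) -> float:
    """Bradley-Terry probability that item i is preferred to item j."""
    return p_i / (p_i + p_j)

# Illustrative scores for three hypothetical wines
scores = {"wine_a": 1.8, "wine_b": 0.9, "wine_c": 0.3}

# Probability a consumer prefers wine_a over wine_b: 1.8 / (1.8 + 0.9) = 2/3
print(bt_win_probability(scores["wine_a"], scores["wine_b"]))
```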

History and applications
The model is named after Ralph A. Bradley and Milton E. Terry, who presented it in 1952, although it had already been studied by Ernst Zermelo in the 1920s. Applications of the model include the ranking of competitors in sports, chess, and other competitions, the ranking of products in paired comparison surveys of consumer choice, analysis of dominance hierarchies within animal and human communities, ranking of journals, ranking of AI models, and estimation of the relevance of documents in machine-learned search engines.

Definition
The Bradley–Terry model can be parametrized in various ways. The form given in the definition above, $\Pr(i>j) = p_i/(p_i+p_j)$, is perhaps the most common, but there are a number of others. Bradley and Terry themselves defined exponential score functions $$p_i = e^{\beta_i}$$, so that


 * $$\Pr(i > j) = \frac{e^{\beta_i}}{e^{\beta_i} + e^{\beta_j}}.$$

Alternatively, one can use a logit, such that


 * $$\operatorname{logit} \Pr(i > j) = \log \frac{\Pr(i > j)}{1 - \Pr(i > j)} = \log \frac{\Pr(i > j)}{\Pr(j > i)} = \beta_i - \beta_j,$$

i.e. $ \operatorname{logit} p = \log\frac p{1-p} $ for $ 0 < p < 1 $. This formulation highlights the similarity between the Bradley–Terry model and logistic regression: both employ essentially the same model, but in different ways. In logistic regression one typically knows the parameters and attempts to infer the functional form of $\Pr(i>j)$; in ranking under the Bradley–Terry model one knows the functional form and attempts to infer the parameters.
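
The equivalence of the two parametrizations is easy to check numerically; a quick sketch in Python with illustrative values of $\beta_i$ and $\beta_j$:

```python
import math

def bt_prob_scores(p_i, p_j):
    # Score parametrization: Pr(i > j) = p_i / (p_i + p_j)
    return p_i / (p_i + p_j)

def bt_prob_logit(beta_i, beta_j):
    # Logit parametrization: Pr(i > j) is the logistic function of beta_i - beta_j
    return 1.0 / (1.0 + math.exp(-(beta_i - beta_j)))

beta_i, beta_j = 1.2, 0.4  # illustrative strength parameters
p = bt_prob_scores(math.exp(beta_i), math.exp(beta_j))

# Same probability either way, and the logit recovers beta_i - beta_j
assert abs(p - bt_prob_logit(beta_i, beta_j)) < 1e-12
assert abs(math.log(p / (1 - p)) - (beta_i - beta_j)) < 1e-12
```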

With a base of 10 and a scale factor of 400, this is equivalent to the Elo rating system for players with Elo ratings $R_i$ and $R_j$:
 * $$\Pr(i > j) = \frac{10^{R_i/400}}{10^{R_i/400} + 10^{R_j/400}} = \frac{1}{1 + 10^{(R_j-R_i)/400}}.$$
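
This correspondence can be verified directly; a small Python check with illustrative ratings, identifying $p_i = 10^{R_i/400}$:

```python
def elo_expected_score(r_i, r_j):
    # Standard Elo expected score of player i against player j
    return 1.0 / (1.0 + 10 ** ((r_j - r_i) / 400))

def bt_prob(p_i, p_j):
    # Bradley-Terry win probability
    return p_i / (p_i + p_j)

r_i, r_j = 1613, 1477  # illustrative Elo ratings
# Setting p = 10**(R/400) turns the Bradley-Terry formula into the Elo formula
assert abs(bt_prob(10 ** (r_i / 400), 10 ** (r_j / 400))
           - elo_expected_score(r_i, r_j)) < 1e-12
```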

Estimating the parameters
The most common application of the Bradley–Terry model is to infer the values of the parameters $$p_i$$ given an observed set of outcomes $$i>j$$, such as wins and losses in a competition. The simplest way to estimate the parameters is by maximum likelihood estimation, i.e., by maximizing the likelihood of the observed outcomes given the model and parameter values.

Suppose we know the outcomes of a set of pairwise competitions between a certain group of individuals, and let $w_{ij}$ be the number of times individual $i$ beats individual $j$. Then the likelihood of this set of outcomes within the Bradley–Terry model is $$\prod_{ij} [\Pr(i>j)]^{w_{ij}}$$ and the log-likelihood of the parameter vector $\mathbf{p} = [p_1, \ldots, p_n]$ is


 * $$\begin{align}

\mathcal{l}(\mathbf{p}) & = \ln \prod_{ij} {\bigl[ \Pr(i>j) \bigr]}^{w_{ij}} = \sum_{i=1}^n \sum_{j=1}^n \ln \biggl[ \left(\frac{p_i}{p_i+p_j}\right)^{w_{ij}} \biggr] \\[6pt] & = \sum_{ij} w_{ij} \ln \biggl( \frac{p_i}{p_i+p_j} \biggr) = \sum_{ij} \bigl[ w_{ij} \ln(p_i) - w_{ij} \ln(p_i + p_j) \bigr]. \end{align}$$
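
The log-likelihood is simple to compute directly from a matrix of win counts. A minimal Python sketch (the 2×2 win matrix here is hypothetical):

```python
import math

def bt_log_likelihood(w, p):
    """Bradley-Terry log-likelihood of scores p, where w[i][j] is the
    number of times individual i beat individual j."""
    n = len(p)
    return sum(
        w[i][j] * (math.log(p[i]) - math.log(p[i] + p[j]))
        for i in range(n) for j in range(n) if i != j
    )

# Two players: player 0 beat player 1 three times and lost once
w = [[0, 3], [1, 0]]
ll = bt_log_likelihood(w, [2.0, 1.0])  # 3*ln(2/3) + 1*ln(1/3) ≈ -2.315
```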

Zermelo showed that this expression has only a single maximum, which can be found by differentiating with respect to $$p_i$$ and setting the result to zero, which leads to

 * $$p_i = \frac{\sum_j w_{ij}}{\sum_j (w_{ij} + w_{ji}) / (p_i + p_j)}.$$

This equation has no known closed-form solution, but Zermelo suggested solving it by simple iteration. Starting from any convenient set of (positive) initial values for the $$p_i$$, one iteratively performs the update

 * $$p_i' = \frac{\sum_j w_{ij}}{\sum_j (w_{ij} + w_{ji}) / (p_i + p_j)}$$

for all $i$ in turn. The resulting parameters are arbitrary up to an overall multiplicative constant, so after computing all of the new values they should be normalized by dividing by their geometric mean:

 * $$p_i \leftarrow \frac{p_i'}{\bigl( \prod_{j=1}^n p_j' \bigr)^{1/n}}.$$

This estimation procedure improves the log-likelihood on every iteration, and is guaranteed to eventually reach the unique maximum. It is, however, slow to converge. More recently it has been pointed out that the maximum condition can also be rearranged as


 * $$p_i = \frac{\sum_{j} w_{ij} p_j/(p_i+p_j)}{\sum_{j} w_{ji}/(p_i+p_j)},$$

which can be solved by iterating

 * $$p_i' = \frac{\sum_{j} w_{ij} p_j/(p_i+p_j)}{\sum_{j} w_{ji}/(p_i+p_j)},$$

again normalizing after every round of updates by dividing by the geometric mean. This iteration gives identical results to Zermelo's original one but converges much faster, and hence is normally preferred.
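
The whole fitting procedure fits in a few lines of code. A sketch in Python of the faster, rearranged iteration, normalizing by the geometric mean after every round (the function name is ours):

```python
import math

def fit_bradley_terry(w, iterations=50):
    """Fit Bradley-Terry scores by the rearranged iteration, where w[i][j]
    is the number of times item i beat item j.  Returns scores normalized
    to geometric mean 1."""
    n = len(w)
    p = [1.0] * n
    for _ in range(iterations):
        for i in range(n):  # update each p_i in turn, using current values
            num = sum(w[i][j] * p[j] / (p[i] + p[j]) for j in range(n) if j != i)
            den = sum(w[j][i] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = num / den
        gm = math.prod(p) ** (1.0 / n)  # normalize by the geometric mean
        p = [x / gm for x in p]
    return p
```

Note that the sketch assumes every item has at least one win and one loss; if some item wins or loses all of its comparisons, a numerator or denominator above becomes zero and the maximum-likelihood estimate does not exist at finite scores.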

Worked example of solution procedure
Consider a sporting competition between four teams, who play a total of 22 games among themselves. Each team's wins are given in the rows of the table below, with the opponents given as the columns: for example, Team A has beaten Team B twice and lost to Team B three times; it has not played Team C at all; and it has won once and lost four times against Team D.

Wins | A | B | C | D
A    | – | 2 | 0 | 1
B    | 3 | – | 5 | 0
C    | 0 | 3 | – | 1
D    | 4 | 0 | 3 | –

We would like to estimate the relative strengths of the teams, which we do by calculating the parameters $$p_i$$, with higher parameters indicating greater prowess. To do this, we initialize the four entries in the parameter vector $p$ arbitrarily, for example assigning the value 1 to each team: $[1, 1, 1, 1]$. Then we apply the rearranged update equation to $$p_1$$, which gives

$$p_1 = \frac{\sum_{j(\ne 1)} w_{1j} p_j/(p_1+p_j)}{\sum_{j(\ne 1)} w_{j1}/(p_1+p_j)} = \frac{2\frac{1}{1+1} + 0\frac{1}{1+1} + 1\frac{1}{1+1}}{3\frac{1}{1+1}+0\frac{1}{1+1} + 4\frac{1}{1+1}} = 0.429.$$

Now, we apply the update again for $$p_2$$, making sure to use the new value of $$p_1$$ that we just calculated:

$$p_2 = \frac{\sum_{j(\ne 2)} w_{2j} p_j/(p_2+p_j)}{\sum_{j(\ne 2)} w_{j2}/(p_2+p_j)} = \frac{3\frac{0.429}{1+0.429} + 5\frac{1}{1+1} + 0\frac{1}{1+1}}{2\frac{1}{1+0.429}+3\frac{1}{1+1} + 0\frac{1}{1+1}} = 1.172$$

Similarly for $$p_3$$ and $$p_4$$ we get

$$p_3 = \frac{\sum_{j(\ne 3)} w_{3j} p_j/(p_3+p_j)}{\sum_{j(\ne 3)} w_{j3}/(p_3+p_j)} = \frac{0\frac{0.429}{1+0.429} + 3\frac{1.172}{1+1.172} + 1\frac{1}{1+1}}{0\frac{1}{1+0.429}+5\frac{1}{1+1.172} + 3\frac{1}{1+1}} = 0.557$$

$$p_4 = \frac{\sum_{j(\ne 4)} w_{4j} p_j/(p_4+p_j)}{\sum_{j(\ne 4)} w_{j4}/(p_4+p_j)} = \frac{4\frac{0.429}{1+0.429} + 0\frac{1.172}{1+1.172} + 3\frac{0.557}{1+0.557}}{1\frac{1}{1+0.429}+0\frac{1}{1+1.172} + 1\frac{1}{1+0.557}} = 1.694$$

Then we normalize all the parameters by dividing by their geometric mean $$(0.429\times1.172\times0.557\times1.694)^{1/4} = 0.830$$ to get the estimated parameters $p = [0.516, 1.413, 0.672, 2.041]$.

To improve the estimates further, we repeat the process, using the new $p$ values. For example,

$$ p_1 = \frac{2\cdot\frac{1.413}{0.516+1.413} + 0\cdot\frac{0.672}{0.516+0.672} + 1\cdot\frac{2.041}{0.516+2.041}}{3\cdot\frac{1}{0.516+1.413}+0\cdot\frac{1}{0.516+0.672} + 4\cdot\frac{1}{0.516+2.041}} = 0.725.$$

Repeating this process for the remaining parameters and normalizing, we get $p = [0.677, 1.034, 0.624, 2.287]$. Repeating a further 10 times gives rapid convergence toward a final solution of $p = [0.640, 1.043, 0.660, 2.270]$. This indicates that Team D is the strongest and Team B the second strongest, while Teams A and C are nearly equal in strength but below Teams B and D. In this way the Bradley–Terry model lets us infer the relationship between all four teams, even though not all teams have played each other.
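
With the fitted scores in hand, the model can be run in the forward direction again, as noted earlier. A short Python sketch using the final values above:

```python
p = {"A": 0.640, "B": 1.043, "C": 0.660, "D": 2.270}  # fitted scores from above

def win_prob(a, b):
    # Bradley-Terry probability that team a beats team b
    return p[a] / (p[a] + p[b])

print(round(win_prob("D", "A"), 3))  # 0.78: Team D is a heavy favorite over Team A
print(round(win_prob("A", "C"), 3))  # 0.492: A vs. C, a pairing never played, is nearly even
```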