Maximum score estimator

In statistics and econometrics, the maximum score estimator is a nonparametric estimator for discrete choice models developed by Charles Manski in 1975. Unlike the multinomial probit and multinomial logit estimators, it makes no assumptions about the distribution of the unobservable part of utility. However, its statistical properties (particularly its asymptotic distribution) are more complicated than those of the multinomial probit and logit models, making statistical inference difficult. To address these issues, Joel Horowitz proposed a variant, called the smoothed maximum score estimator.

Setting
When modelling discrete choice problems, it is assumed that the choice is determined by comparing the underlying latent utilities. Denote the population of agents as T and the common choice set for each agent as C. For agent $$ t \in T $$, denote her choice as $$ y_{t,i} $$, which is equal to 1 if choice i is chosen and 0 otherwise. Assume latent utility is linear in the explanatory variables, with an additive response error. Then for an agent $$ t \in T $$,


 * $$ y_{t,i} = 1 \leftrightarrow x_{t,i}\beta + \epsilon_{t,i} > x_{t,j}\beta + \epsilon_{t,j}, \forall j \neq i$$ and $$j \in C $$

where $$ x_{t,i} $$ and $$ x_{t,j} $$ are the q-dimensional observable covariates about the agent and the choice, and  $$\epsilon_{t,i} $$  and  $$ \epsilon_{t,j} $$ are the factors entering the agent's decision that are not observed by the econometrician. The construction of the observable covariates is very general. For instance, if C is a set of different brands of coffee, then $$ x_{t,i} $$ includes the characteristics both of the agent t, such as age, gender, income and ethnicity, and of the coffee i, such as price, taste and whether it is local or imported. All of the error terms are assumed to be i.i.d., and the goal is to estimate $$ \beta $$, which characterizes the effect of the different factors on the agent's choice.

Parametric estimators
Usually some specific distribution assumption is imposed on the error term, so that the parameter $$ \beta $$ is estimated parametrically. For instance, if the distribution of the error term is assumed to be normal, then the model is a multinomial probit model; if it is assumed to be a Gumbel distribution, then the model becomes a multinomial logit model. The parametric model is convenient for computation but may be inconsistent if the distribution of the error term is misspecified.

Binary response
For example, suppose that C only contains two items. This is the latent utility representation of a binary choice model. In this model, the choice is: $$Y_{t}=1[X_{1,t}\beta+\varepsilon_1>X_{2,t}\beta+\varepsilon_2]$$, where $$X_{1,t},X_{2,t}$$ are two vectors of the explanatory covariates, $$\varepsilon_1$$ and $$\varepsilon_2$$ are i.i.d. response errors,


 * $$X_{1,t}\beta+\varepsilon_1 \text{ and } X_{2,t}\beta+\varepsilon_2$$

are the latent utilities of choosing choice 1 and choice 2, respectively. The log likelihood function can then be given as:


 * $$Q=\sum_{t=1}^N Y_t \log(P[X_{1,t}\beta-X_{2,t}\beta>\varepsilon_2-\varepsilon_1])+(1-Y_t) \log(1-P[X_{1,t}\beta-X_{2,t}\beta>\varepsilon_2-\varepsilon_1])$$

If some distributional assumption about the response error is imposed, then the log likelihood function will have a closed-form representation. For instance, if the response error is assumed to be distributed as: $$N(0,\sigma^2)$$, then the likelihood function can be rewritten as:


 * $$Q=\sum_{t=1}^N Y_t \log\left(\Phi\left[\frac{X_{1,t}\beta-X_{2,t} \beta}{\sqrt{2}\,\sigma} \right]\right) + (1-Y_t) \log \left(\Phi \left[ \frac{X_{2,t}\beta-X_{1,t}\beta}{\sqrt{2}\,\sigma} \right] \right)$$

where $$\Phi$$ is the cumulative distribution function (CDF) for the standard normal distribution. Here, even if $$\Phi$$ doesn't have a closed-form representation, its derivative does. This is the probit model.

This model is based on a distributional assumption about the response error term. Adding a specific distribution assumption into the model can make the model computationally tractable due to the existence of the closed-form representation. But if the distribution of the error term is misspecified, the estimates based on the distribution assumption will be inconsistent.
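As a sketch, the probit log-likelihood above can be maximized numerically. The simulation below is illustrative: all variable names, sample sizes, and parameter values are assumptions rather than anything from the text, and σ is fixed at 1.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated binary-choice data (illustrative sizes and parameter values).
N, q = 500, 2
beta_true = np.array([1.0, -0.5])
x1 = rng.normal(size=(N, q))                  # X_{1,t}
x2 = rng.normal(size=(N, q))                  # X_{2,t}
eps = rng.normal(size=(N, 2))                 # i.i.d. N(0, 1) response errors
y = (x1 @ beta_true + eps[:, 0] > x2 @ beta_true + eps[:, 1]).astype(float)

def neg_log_likelihood(beta, sigma=1.0):
    # Q from the text: P[Y_t = 1] = Phi((X_{1,t} - X_{2,t}) beta / (sqrt(2) sigma)).
    p = norm.cdf((x1 - x2) @ beta / (np.sqrt(2) * sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)          # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=np.zeros(q), method="BFGS")
beta_hat = res.x
```

Because the errors are truly normal here, the probit estimate recovers the sign and rough magnitude of each coefficient; under a misspecified error distribution it would not, which is the motivation for the distribution-free approach below.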

The basic idea of the distribution-free model is to replace the two probability terms in the log-likelihood function with other weights. The general form of the log-likelihood function can be written as:


 * $$Q= \sum_{t=1}^N Y_t \cdot\log(W_1(X_{1,t}\beta,X_{2,t}\beta))+(1-Y_t)\log(W_0 (X_{1,t} \beta, X_{2,t} \beta))$$

Maximum score estimator
To make the estimator more robust to distributional assumptions, Manski (1975) proposed a nonparametric model to estimate the parameters. In this model, denote the number of elements of the choice set as J, the total number of agents as N, and let $$ W (J -1) > W (J - 2) > \dots > W (1) > W (0) $$ be a sequence of real numbers. The maximum score estimator is defined as:


 * $$ \hat{b}={\operatorname{arg\max}}_b \frac{1}{N} \sum_{t=1}^N \sum_{i=1}^J y_{t,i} W (\sum\nolimits_{j \in C, j \neq i} 1 [x_{t,i}b > x_{t,j}b])$$

Here, $$\textstyle \sum\nolimits_{j \in C, j \neq i} 1 [x_{t,i}b > x_{t,j}b]$$ is the rank of the certainty part of the underlying utility of choosing i. The intuition of this model is that the higher this rank, the more weight is assigned to the choice.
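A minimal sketch of this estimator on simulated data, assuming the particular weight sequence W(r) = r (any strictly increasing sequence satisfies the definition); since b is identified only up to scale, the maximization runs over unit-norm directions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation: N agents, J choices, q covariates (all values assumed).
N, J, q = 300, 3, 2
beta_true = np.array([1.0, 0.5])
x = rng.normal(size=(N, J, q))                 # x_{t,i}, stacked over t and i
eps = rng.gumbel(size=(N, J))                  # any i.i.d. errors will do
choice = (x @ beta_true + eps).argmax(axis=1)  # y_{t,i} = 1 for the max-utility i

def score(b):
    v = x @ b                                            # x_{t,i} b for every t, i
    rank = (v[:, :, None] > v[:, None, :]).sum(axis=2)   # sum_j 1[x_{t,i}b > x_{t,j}b]
    # Take W(r) = r, which satisfies W(J-1) > ... > W(0); average over agents.
    return rank[np.arange(N), choice].mean()

# b is identified only up to scale, so search over directions on the unit circle.
angles = np.linspace(0.0, np.pi, 360, endpoint=False)
cands = np.column_stack([np.cos(angles), np.sin(angles)])
b_hat = cands[np.argmax([score(b) for b in cands])]
```

The grid search over directions is deliberate: the objective is a step function of b, so gradient-based optimizers cannot be applied directly, which previews the non-smoothness issue discussed next.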

Under certain conditions, the maximum score estimator can be shown to be weakly consistent, but its asymptotic properties are very complicated: it converges at rate $$N^{1/3}$$ rather than the usual $$N^{1/2}$$, and its limiting distribution is non-normal (Kim and Pollard, 1990). This issue mainly comes from the non-smoothness of the objective function.

Binary example
In the binary context, the weighting scheme of the maximum score estimator can be represented as:


 * $$W_1(X_{1,t}\beta,X_{2,t}\beta)=w_1 1[X_{1,t}\beta-X_{2,t}\beta>0]+w_0 1[X_{1,t}\beta-X_{2,t}\beta<0],$$

where


 * $$W_0(X_{1,t}\beta,X_{2,t}\beta)=1-W_1(X_{1,t}\beta,X_{2,t}\beta)$$

and $$w_1$$ and $$w_0$$ are two constants in (0,1) with $$w_1>w_0$$. The intuition of this weighting scheme is that the probability of the choice depends on the relative order of the certainty parts of the utilities.
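This binary weighting scheme can be sketched as follows, with illustrative constants w_1 = 0.75 and w_0 = 0.25 and simulated data (all names, sizes, and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative binary-choice simulation.
N = 400
beta_true = np.array([1.0, -1.0])
dx = rng.normal(size=(N, 2))                 # X_{1,t} - X_{2,t}
u = rng.laplace(size=N)                      # eps_2 - eps_1: any median-zero error
y = (dx @ beta_true > u).astype(float)

def weighted_score(b, w1=0.75, w0=0.25):     # constants in (0, 1) with w1 > w0
    s = dx @ b
    W1 = w1 * (s > 0) + w0 * (s < 0)         # weight used when Y_t = 1
    W0 = 1.0 - W1                            # weight used when Y_t = 0
    return np.sum(y * np.log(W1) + (1 - y) * np.log(W0))

# Grid search over unit-norm directions (b is identified only up to scale).
angles = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
cands = np.column_stack([np.cos(angles), np.sin(angles)])
b_hat = cands[np.argmax([weighted_score(b) for b in cands])]
```

Since w_1 > w_0, the objective rewards agreement between the sign of $$X_{1,t}b-X_{2,t}b$$ and the observed choice, so maximizing it amounts to maximizing the number of correctly predicted choices.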

Smoothed maximum score estimator
Horowitz (1992) proposed a smoothed maximum score (SMS) estimator which has much better asymptotic properties. The basic idea is to replace the non-smooth weight function $$\textstyle W (\sum\nolimits_{j \in C, j \neq i} 1 [x_{t,i}b > x_{t,j}b]) $$ with a smooth one. Define a smooth kernel function K satisfying the following conditions:


 * $$|K(\cdot)|$$ is bounded over the real numbers
 * $$ \lim_{u\to -\infty} K (u) = 0$$  and $$\lim_{u\to +\infty} K (u) =1 $$
 * $$ \dot {K} (u) = \dot {K} (-u) $$

Here, the kernel function is analogous to a CDF whose PDF is symmetric around 0. Then, the SMS estimator is defined as:


 * $$ \hat {b}_{SMS} = {\operatorname{arg\max}}_b \frac {1}{N} \sum_{t=1}^N \sum_{i=1}^J y_{t,i}  \sum\nolimits_{j \in C, j \neq i} K \left( \frac{x_{t,i}b - x_{t,j}b}{h_N} \right)$$

where $$ (h_N, N = 1,2, \dots) $$ is a sequence of strictly positive numbers with $$ \lim_{N\to +\infty} h_N = 0 $$. Here, the intuition is the same as in the construction of the traditional maximum score estimator: the agent is more likely to choose the option with the higher observed part of latent utility. Under certain conditions, the smoothed maximum score estimator is consistent and, more importantly, it has an asymptotically normal distribution. Therefore, all the usual statistical testing and inference based on asymptotic normality can be implemented.
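A minimal binary-case sketch of the SMS estimator, with illustrative choices throughout: the standard normal CDF as the kernel K (it satisfies the conditions above), a bandwidth $$h_N = N^{-1/5}$$, simulated data, and scale normalized by fixing the first coefficient at 1. The objective is written in the form $$(2Y_t-1)K(\cdot)$$, which differs from the weighted log-likelihood only by constants when $$K(u)+K(-u)=1$$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Illustrative binary simulation (all names and values assumed).
N = 400
beta_true = np.array([1.0, 0.5])
dx = rng.normal(size=(N, 2))                 # x_{t,1} - x_{t,2}
y = (dx @ beta_true + rng.logistic(size=N) > 0).astype(float)
h = N ** (-1 / 5)                            # h_N > 0 with h_N -> 0 as N grows

def sms_objective(b):
    # The indicator 1[dx b > 0] is replaced by the smooth K(dx b / h_N),
    # with K taken to be the standard normal CDF.
    return np.mean((2 * y - 1) * norm.cdf(dx @ b / h))

# Scale normalization: fix the first coefficient at 1 and search over the second.
grid = np.linspace(-3.0, 3.0, 601)
b2_hat = grid[np.argmax([sms_objective(np.array([1.0, b2])) for b2 in grid])]
```

Because the smoothed objective is differentiable in b, a gradient-based optimizer could replace the grid search here, which is exactly the computational payoff of smoothing.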