Optimal instruments

In statistics and econometrics, optimal instruments are a technique for improving the efficiency of estimators in conditional moment models, a class of semiparametric models that generate conditional expectation functions. To estimate parameters of a conditional moment model, the statistician can derive an expectation function (defining "moment conditions") and use the generalized method of moments (GMM). However, there are infinitely many moment conditions that can be generated from a single model; optimal instruments provide the most efficient moment conditions.

As an example, consider the nonlinear regression model
 * $$y = f(x, \theta) + u$$
 * $$E[u\mid x]=0$$

where $y$ is a scalar (one-dimensional) random variable, $x$ is a random vector with dimension $k$, and $\theta$ is a $k$-dimensional parameter. The conditional moment restriction $$E[u\mid x]=0$$ is consistent with infinitely many moment conditions. For example:
 * $$E[ux] = E[u x^2] = E[u x^3] = \dots = 0$$

More generally, for any vector-valued function $z$ of $x$, it will be the case that
 * $$E[z(x) (y - f(x, \theta))] = 0$$.

That is, $z$ defines a finite set of orthogonality conditions.
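This multiplicity can be checked numerically. The following sketch simulates a hypothetical nonlinear model with $f(x, \theta) = \exp(\theta x)$ (chosen arbitrarily for illustration) and verifies that several different choices of $z$ each yield a sample moment close to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear regression: y = f(x, theta) + u with E[u | x] = 0.
theta = 0.5
f = lambda x, th: np.exp(th * x)

n = 1_000_000
x = rng.normal(size=n)
u = rng.normal(size=n) * (1 + 0.3 * np.abs(x))  # mean zero given x, heteroskedastic
y = f(x, theta) + u

# Any function z(x) yields a valid moment condition E[z(x) (y - f(x, theta))] = 0,
# so each sample analogue below is approximately zero.
for z in (lambda x: x, lambda x: x**2, lambda x: np.sin(x)):
    print(np.mean(z(x) * (y - f(x, theta))))
```

Each printed sample moment converges to zero as the sample grows, regardless of which $z$ is used.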

A natural question to ask, then, is whether an asymptotically efficient set of conditions is available, in the sense that no other set of conditions achieves lower asymptotic variance. Both econometricians and statisticians have extensively studied this subject.

The answer is generally yes: such an efficient set of conditions exists, and its existence has been proven for a wide range of estimators. Takeshi Amemiya was one of the first to work on this problem, deriving optimal instruments for nonlinear simultaneous equation models with homoskedastic and serially uncorrelated errors. The form of the optimal instruments was characterized by Lars Peter Hansen, and results for nonparametric estimation of optimal instruments were provided by Newey. A result for nearest neighbor estimators was provided by Robinson.

In linear regression
The technique of optimal instruments can be used to show that, in a conditional moment linear regression model with iid data, the optimal GMM estimator is generalized least squares. Consider the model
 * $$y = x^\mathrm T \theta + u$$
 * $$E[u \mid x] = 0$$

where $y$ is a scalar random variable, $x$ is a $k$-dimensional random vector, and $\theta$ is a $k$-dimensional parameter vector. As above, the moment conditions are
 * $$E[z(x) (y - x^\mathrm T \theta)] = 0$$

where $z = z(x)$ is an instrument set of dimension $p$ ($p \geq k$). The task is to choose $z$ to minimize the asymptotic variance of the resulting GMM estimator. If the data are iid, the asymptotic variance of the GMM estimator is
 * $$(E[x z^\mathrm T]^\mathrm T E[\sigma^2(x) z z^\mathrm T]^{-1} E[z x^\mathrm T])^{-1}$$

where $$\sigma^2(x) \equiv E[u^2 \mid x]$$.

The optimal instruments are given by
 * $$z^*(x) = \frac{x}{\sigma^2(x)}$$

which produces the asymptotic variance matrix
 * $$\left( E \left[ \frac{x x^\mathrm T}{\sigma^2(x)} \right] \right)^{-1}.$$

These are the optimal instruments because, for any other choice of $z$, the matrix
 * $$(E[x z^\mathrm T]^\mathrm T E[\sigma^2(x) z z^\mathrm T]^{-1} E[z x^\mathrm T])^{-1} - \left( E \left[ \frac{x x^\mathrm T}{\sigma^2(x)} \right] \right)^{-1}$$

is positive semidefinite; that is, no other instrument set yields a smaller asymptotic variance.
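The dominance of the optimal instruments can be illustrated numerically. For scalar $x$ the comparison reduces to a scalar inequality between the two asymptotic variances. A minimal sketch, assuming the skedastic function $\sigma^2(x) = 1 + x^2$ and the suboptimal instrument $z(x) = x$ (both chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)   # scalar regressor
sigma2 = 1 + x**2                # assumed conditional variance E[u^2 | x]

# Optimal instrument z*(x) = x / sigma^2(x):
# asymptotic variance (E[x^2 / sigma^2(x)])^{-1}
var_opt = 1 / np.mean(x**2 / sigma2)

# Suboptimal instrument z(x) = x:
# asymptotic variance E[sigma^2(x) x^2] / E[x^2]^2
var_alt = np.mean(sigma2 * x**2) / np.mean(x**2) ** 2

print(var_opt, var_alt)  # the optimal variance is smaller
```

Under homoskedasticity ($\sigma^2(x)$ constant) the two variances coincide; the gap here comes entirely from the heteroskedasticity in the assumed $\sigma^2(x)$.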

Given iid data $$(y_1, x_1), \dots, (y_N, x_N)$$, the GMM estimator corresponding to $$z^*(x)$$ is
 * $$\widetilde\theta = \left( \sum_{i=1}^N \frac{x_i x_i^\mathrm T}{\sigma^2(x_i)} \right)^{-1} \sum_{i=1}^N \frac{x_i y_i}{\sigma^2(x_i)}$$

which is the generalized least squares (GLS) estimator. (It is infeasible in practice because $\sigma^2(\cdot)$ is unknown; feasible GLS replaces it with an estimate.)
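The estimator above can be simulated directly when $\sigma^2(\cdot)$ is treated as known, as it is in a simulation. The sketch below uses a hypothetical design with a heteroskedastic skedastic function and compares the sampling variance of GLS against ordinary least squares across Monte Carlo replications:

```python
import numpy as np

rng = np.random.default_rng(2)

def gls(y, X, s2):
    """GLS with known conditional variances s2_i:
    (sum x_i x_i'/s2_i)^{-1} sum x_i y_i/s2_i."""
    W = X / s2[:, None]  # rows x_i / sigma^2(x_i)
    return np.linalg.solve(W.T @ X, W.T @ y)

theta = np.array([1.0, -2.0])
reps, n = 500, 400
ols_draws, gls_draws = [], []
for _ in range(reps):
    X = rng.normal(size=(n, 2))
    s2 = 0.1 + X[:, 0] ** 2                        # assumed heteroskedastic variance
    y = X @ theta + rng.normal(size=n) * np.sqrt(s2)
    ols_draws.append(np.linalg.lstsq(X, y, rcond=None)[0])
    gls_draws.append(gls(y, X, s2))

# GLS exploits the known variance structure and is more efficient:
# its sampling variance is smaller component-wise.
print(np.var(ols_draws, axis=0))
print(np.var(gls_draws, axis=0))
```

Both estimators are consistent for $\theta$; the efficiency gain appears only in the spread of the estimates across replications.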