Observed information

In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the log-likelihood (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.

Definition
Suppose we observe random variables $$X_1,\ldots,X_n$$, independent and identically distributed with density $$f(X | \theta)$$, where θ is a (possibly unknown) vector. Then the log-likelihood of the parameters $$\theta$$ given the data $$X_1,\ldots,X_n$$ is


 * $$\ell(\theta | X_1,\ldots,X_n) = \sum_{i=1}^n \log f(X_i| \theta) $$.

We define the observed information matrix at $$\theta^{*}$$ as


 * $$\mathcal{J}(\theta^*) = - \left. \nabla \nabla^{\top} \ell(\theta) \right|_{\theta=\theta^*} = - \left. \left( \begin{array}{cccc} \tfrac{\partial^2}{\partial \theta_1^2}  &  \tfrac{\partial^2}{\partial \theta_1 \partial \theta_2}  &  \cdots  &  \tfrac{\partial^2}{\partial \theta_1 \partial \theta_p} \\  \tfrac{\partial^2}{\partial \theta_2 \partial \theta_1}  &  \tfrac{\partial^2}{\partial \theta_2^2}  &  \cdots  &  \tfrac{\partial^2}{\partial \theta_2 \partial \theta_p} \\  \vdots &  \vdots &  \ddots &  \vdots \\  \tfrac{\partial^2}{\partial \theta_p \partial \theta_1}  &  \tfrac{\partial^2}{\partial \theta_p \partial \theta_2}  &  \cdots  &  \tfrac{\partial^2}{\partial \theta_p^2} \\ \end{array} \right) \ell(\theta) \right|_{\theta = \theta^*} $$

Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction. The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted.
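As a concrete sketch of the definition, the observed information for a one-parameter model can be obtained by differentiating the log-likelihood numerically and negating the result. The Poisson sample and finite-difference step below are illustrative assumptions, not part of the article; for a Poisson rate λ the closed form is $$\mathcal{J}(\hat\lambda) = \sum_i x_i / \hat\lambda^2 = n/\bar{x}$$ at the MLE $$\hat\lambda = \bar{x}$$.

```python
import math

# Hypothetical Poisson sample (illustrative values only).
x = [2, 4, 3, 5, 1, 3, 2, 4]
n = len(x)

def log_likelihood(lam):
    """Poisson log-likelihood: sum_i [x_i*log(lam) - lam - log(x_i!)]."""
    return sum(xi * math.log(lam) - lam - math.lgamma(xi + 1) for xi in x)

def observed_information(lam, h=1e-5):
    """-d^2/dlam^2 of the log-likelihood, via a central second difference."""
    return -(log_likelihood(lam + h) - 2 * log_likelihood(lam)
             + log_likelihood(lam - h)) / h**2

lam_hat = sum(x) / n              # the Poisson MLE is the sample mean
numeric = observed_information(lam_hat)
closed_form = n / lam_hat         # analytic observed information at the MLE
print(numeric, closed_form)       # both ≈ 2.667 for this sample
```

Here the closed form and the numerical Hessian agree to finite-difference accuracy; in multiparameter models the same idea yields the full p × p matrix.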

Alternative definition
Andrew Gelman, David Dunson and Donald Rubin define observed information instead in terms of the parameters' posterior probability, $$p(\theta|y)$$:

 * $$ I(\theta) = - \frac{d^2}{d\theta^2} \log p(\theta|y)$$.
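Because the log posterior is the log prior plus the log-likelihood (up to a constant), this version adds the prior's curvature to the likelihood's. A minimal sketch for a conjugate normal model, where the curvature is constant and equals prior precision plus data precision; all numeric values are assumed for illustration:

```python
# Hypothetical conjugate normal model: theta ~ N(m0, s0^2), y | theta ~ N(theta, s^2).
m0, s0 = 0.0, 2.0   # assumed prior mean and standard deviation
y, s = 1.5, 1.0     # assumed observation and known standard deviation

def log_posterior(theta):
    """Log posterior up to an additive constant; constants vanish on differentiation."""
    return -(theta - m0) ** 2 / (2 * s0 ** 2) - (y - theta) ** 2 / (2 * s ** 2)

def posterior_information(theta, h=1e-5):
    """-d^2/dtheta^2 log p(theta | y), via a central second difference."""
    return -(log_posterior(theta + h) - 2 * log_posterior(theta)
             + log_posterior(theta - h)) / h ** 2

# For this model the curvature is constant: 1/s0^2 + 1/s^2 = 1.25.
print(posterior_information(0.7), 1 / s0 ** 2 + 1 / s ** 2)
```

With a flat prior the extra term vanishes and this definition reduces to the likelihood-based observed information above.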

Fisher information
The Fisher information $$\mathcal{I}(\theta)$$ is the expected value of the observed information given a single observation $$X$$ distributed according to the hypothetical model with parameter $$\theta$$:


 * $$\mathcal{I}(\theta) = \mathrm{E}(\mathcal{J}(\theta))$$.
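This expectation can be checked directly for a single Bernoulli observation, where averaging the observed information over the two outcomes recovers the closed-form Fisher information $$1/(\theta(1-\theta))$$. The parameter value and finite-difference scheme below are illustrative assumptions:

```python
import math

theta = 0.3  # hypothetical Bernoulli parameter

def observed_info(x, th, h=1e-5):
    """Observed information from one Bernoulli draw x: -d^2/dth^2 log f(x | th)."""
    def ll(t):
        return x * math.log(t) + (1 - x) * math.log(1 - t)
    return -(ll(th + h) - 2 * ll(th) + ll(th - h)) / h ** 2

# Fisher information = E[J(theta)] over X ~ Bernoulli(theta): enumerate both outcomes.
fisher = theta * observed_info(1, theta) + (1 - theta) * observed_info(0, theta)
print(fisher, 1 / (theta * (1 - theta)))  # both ≈ 4.762
```

Unlike the observed information, which depends on the realized sample, the Fisher information is a function of the parameter alone.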

Comparison with the expected information
The comparison between the observed information and the expected information remains an active area of research and debate. Efron and Hinkley provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families, in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of $$O(n^{-3/2})$$ is ignored. In Lindsay and Li's setting, the expected information matrix still requires evaluation at the obtained ML estimates, which introduces randomness.

However, when the construction of confidence intervals is the primary focus, there are reported findings in favor of the expected information. Yuan and Spall showed that the expected information outperforms the observed counterpart for confidence-interval construction of scalar parameters in the mean squared error sense. This finding was later generalized to multiparameter cases, although with a weaker claim: the expected information matrix performs at least as well as the observed information matrix.