Hájek–Le Cam convolution theorem

In statistics, the Hájek–Le Cam convolution theorem states that the asymptotic distribution of any regular estimator in a parametric model is the convolution of two independent distributions: one of them normal, with asymptotic variance equal to the inverse of the Fisher information, and the other arbitrary.

An immediate corollary of this theorem is that the “best” among regular estimators are those whose second component is identically equal to zero. Such estimators are called efficient and are known to always exist for regular parametric models.

The theorem is named after Jaroslav Hájek and Lucien Le Cam.

Statement
Let ℘ = {Pθ | θ ∈ Θ ⊂ ℝk} be a regular parametric model, and let q: Θ → ℝm be a parameter in this model (typically a parameter is just one of the components of the vector θ). Assume that q is differentiable on Θ, with its m × k matrix of derivatives denoted q̇(θ). Define

 * $$ I_{q(\theta)}^{-1} = \dot{q}(\theta)I^{-1}(\theta)\dot{q}(\theta)'$$ — the information bound for q,
 * $$ \psi_{q(\theta)} = \dot{q}(\theta)I^{-1}(\theta)\dot\ell(\theta)$$ — the efficient influence function for q,

where I(θ) is the Fisher information matrix for the model ℘, $$\dot\ell(\theta)$$ is the score function, and ′ denotes matrix transpose.
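The information bound above is a plain matrix product, so it can be computed numerically once the Jacobian q̇(θ) and the Fisher information I(θ) are known. The following sketch (not from the source) uses a hypothetical two-parameter model with q(θ) = θ₁/θ₂ and an assumed diagonal Fisher information matrix, both chosen purely for illustration:

```python
import numpy as np

def information_bound(q_dot, fisher_info):
    """Return the information bound q̇(θ) I(θ)^{-1} q̇(θ)'.

    q_dot       : (m, k) Jacobian of q at θ
    fisher_info : (k, k) Fisher information matrix I(θ)
    """
    return q_dot @ np.linalg.inv(fisher_info) @ q_dot.T

theta = np.array([2.0, 4.0])
# Jacobian of q(θ) = θ₁/θ₂, i.e. (1/θ₂, -θ₁/θ₂²), as a 1 x 2 matrix
q_dot = np.array([[1.0 / theta[1], -theta[0] / theta[1] ** 2]])
# Assumed (hypothetical) Fisher information for the illustrative model
I_theta = np.diag([1.0, 0.5])

bound = information_bound(q_dot, I_theta)
print(bound)  # 1 x 1 matrix: the minimal asymptotic variance for estimating q
```

Since q is scalar here (m = 1), the bound is a 1 × 1 matrix: the smallest asymptotic variance any regular estimator of q can attain.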

Theorem. Suppose Tn is a uniformly (locally) regular estimator of the parameter q. Then:

(A) There exist independent random m-vectors $$Z_\theta\,\sim\,\mathcal{N}(0,\,I^{-1}_{q(\theta)})$$ and Δθ such that

$$ \sqrt{n}(T_n - q(\theta)) \ \xrightarrow{d}\ Z_\theta + \Delta_\theta, $$

where d denotes convergence in distribution. More specifically,

$$ \begin{pmatrix} \sqrt{n}(T_n - q(\theta)) - \tfrac{1}{\sqrt{n}} \sum_{i=1}^n \psi_{q(\theta)}(x_i) \\ \tfrac{1}{\sqrt{n}} \sum_{i=1}^n \psi_{q(\theta)}(x_i) \end{pmatrix} \ \xrightarrow{d}\ \begin{pmatrix} \Delta_\theta \\ Z_\theta \end{pmatrix}. $$

If the map θ → q̇(θ) is continuous, then the convergence in (A) holds uniformly on compact subsets of Θ. Moreover, in that case Δθ = 0 for all θ if and only if Tn is uniformly (locally) asymptotically linear with influence function ψq(θ).
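The Δθ = 0 case can be illustrated by simulation. In the N(θ, 1) model (an assumption chosen for simplicity, not taken from the source) the sample mean is an efficient regular estimator of q(θ) = θ with I(θ) = 1, so the theorem predicts √n(Tn − θ) → N(0, 1) with no extra noise component. A minimal Monte Carlo check of that prediction:

```python
import numpy as np

# Illustrative sketch: in the N(θ, 1) model the sample mean X̄ is an
# efficient regular estimator of q(θ) = θ, so Δ_θ vanishes and
# √n(T_n − q(θ)) should be approximately N(0, I^{-1}(θ)) = N(0, 1).
rng = np.random.default_rng(0)
theta, n, reps = 1.5, 400, 20_000  # hypothetical parameter and sample sizes

samples = rng.normal(theta, 1.0, size=(reps, n))
T_n = samples.mean(axis=1)              # the efficient estimator X̄
scaled = np.sqrt(n) * (T_n - theta)     # √n(T_n − q(θ))

print(scaled.mean())  # ≈ 0
print(scaled.var())   # ≈ 1, the information bound I^{-1}(θ)
```

An estimator with a nonzero Δθ component (for example, one carrying extra randomization noise) would show an empirical variance strictly above the bound of 1.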