User talk:Xappppp

A proof of consistency and asymptotic normality of the Maximum Likelihood Estimator
1. Proving Convergence in Probability

If we set $$S_n(\theta)=\frac{1}{n}\frac{\partial \log l(\theta)}{\partial \theta},$$ where $l(\theta)$ is the likelihood of a sample of size $n$, then under the usual regularity conditions we have:



$$ ES_n(\theta)=0, \qquad -S'_n(\theta)>0 $$ and we know
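To make the two conditions concrete, consider the Bernoulli model (an illustrative example of my own, not part of the original note), where $\log l(\theta)=\sum_i x_i\log\theta+(n-\sum_i x_i)\log(1-\theta)$. Then

$$ S_n(\theta)=\frac{\bar{x}}{\theta}-\frac{1-\bar{x}}{1-\theta}, \qquad ES_n(\theta)=\frac{\theta}{\theta}-\frac{1-\theta}{1-\theta}=0, \qquad -S'_n(\theta)=\frac{\bar{x}}{\theta^2}+\frac{1-\bar{x}}{(1-\theta)^2}>0, $$

so both conditions hold.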

$$ P(|\theta_{mle}-\theta| \ge \delta) = P(\theta_{mle}\ge \theta+\delta)+ P(\theta_{mle} \le \theta-\delta) $$

so, expanding $S_n(\theta+\delta)$ around $\theta$ by the mean value theorem (with $\theta^{*}$ between $\theta$ and $\theta+\delta$),

$$
\begin{align}
P(\theta_{mle} \ge \theta+\delta)  & = P(0 \le S_n(\theta+\delta))\\
& = P(0 \le S_n(\theta)+\delta S'_n(\theta^{*}))\\
& = P(S_n(\theta)\ge -\delta S'_n(\theta^{*}))\\
& \rightarrow 0 \qquad (\text{since } S_n(\theta)\rightarrow_{a.s.} 0 \text{ while } -\delta S'_n(\theta^{*})>0)
\end{align}
$$

Similarly,

$$ P(\theta_{mle} \le \theta-\delta) \rightarrow 0, $$ which completes the proof that

$$ \theta_{mle}\rightarrow_p \theta $$
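As a numerical sanity check of the consistency result (a sketch of my own, not part of the original proof), the simulation below uses the Bernoulli model, whose MLE is the sample mean; the value $\theta=0.3$, the seed, and the sample sizes are all illustrative assumptions:

```python
import random

# Bernoulli model: the MLE of the success probability theta is the sample mean.
# theta = 0.3, the seed, and the sample sizes are illustrative choices.
def bernoulli_mle(theta, n, rng):
    """Return the MLE (sample mean) from n Bernoulli(theta) draws."""
    return sum(rng.random() < theta for _ in range(n)) / n

theta = 0.3
rng = random.Random(42)
errors = {n: abs(bernoulli_mle(theta, n, rng) - theta)
          for n in (100, 10_000, 1_000_000)}
for n, err in sorted(errors.items()):
    print(f"n={n:>9}  |theta_mle - theta| = {err:.5f}")
```

The absolute error shrinks toward zero as $n$ grows, as convergence in probability predicts.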

2. Deriving the Limiting Distribution

Given $$ \theta_{mle}\rightarrow_p \theta, $$

we will have

$$ S_n(\theta_{mle})=0=S_n(\theta)+(\theta_{mle}-\theta)S'_n(\theta)+o_p(\theta_{mle}-\theta), $$ which indicates

$$ \sqrt{n}(\theta_{mle}-\theta)=\frac{\sqrt{n}\,S_n(\theta)}{-S'_n(\theta)+o_p(1)}\rightarrow_d \frac{N(0,I_1[\theta])}{I_1[\theta]}=N(0,I^{-1}_1[\theta]), $$

since $\sqrt{n}\,S_n(\theta)\rightarrow_d N(0,I_1[\theta])$ by the central limit theorem and $-S'_n(\theta)\rightarrow_p I_1[\theta]$ by the law of large numbers.
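The limiting distribution can also be checked by simulation (again a sketch under my own assumptions, reusing the Bernoulli model, where $I_1[\theta]=1/(\theta(1-\theta))$, so the predicted limiting variance is $\theta(1-\theta)$):

```python
import math
import random

rng = random.Random(123)
theta, n, reps = 0.3, 500, 4000  # illustrative choices

# For the Bernoulli model I_1[theta] = 1/(theta(1-theta)), so the theorem
# predicts Var[sqrt(n)(theta_mle - theta)] ~ theta(1-theta) = 0.21.
draws = []
for _ in range(reps):
    mle = sum(rng.random() < theta for _ in range(n)) / n
    draws.append(math.sqrt(n) * (mle - theta))

mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / (reps - 1)
print(f"empirical mean = {mean:.3f}, empirical var = {var:.3f}, "
      f"predicted var = {theta * (1 - theta):.3f}")
```

The empirical mean of $\sqrt{n}(\theta_{mle}-\theta)$ is near $0$ and its empirical variance is near $\theta(1-\theta)$, matching $N(0,I^{-1}_1[\theta])$.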