User talk:Aobha

I have some issues with this section that you added:

Maximum likelihood estimation: general case
The first-order condition for an MLE of a parameter &theta; is that the first derivative of the log-likelihood function be zero at &theta;<sub>MLE</sub>. Intuitively, the second derivative of the log-likelihood function indicates its curvature: the larger it is, the more sharply identified &theta;<sub>MLE</sub> is, since the likelihood function will be sharply peaked (inverse-V-shaped) around &theta;<sub>MLE</sub>. Formally, it can be proved that


 * $$\sqrt{T}(\theta_\text{MLE}-\theta) \rightarrow \mathcal{N}(0,\Omega) \, $$

where $$\Omega$$ can be estimated by


 * $$\left(-\frac{1}{T}\sum_{t=1}^T \frac{\partial^2 \ell_t}{\partial \theta \, \partial \theta '} (\theta_\text{MLE})\right)^{-1}.$$

(end of new section)
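To illustrate what the estimator in that section computes, here is a minimal numerical sketch. It is an assumption-laden example of my own, not part of the added section: it fits a simple univariate normal model by maximum likelihood, approximates the Hessian of the average negative log-likelihood by finite differences, and inverts it to estimate &Omega; and the asymptotic standard errors of &theta;<sub>MLE</sub>.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)  # T = 500 observations

# Average negative log-likelihood of a normal model; theta = (mu, log_sigma).
# Parameterizing by log(sigma) keeps the optimization unconstrained.
def neg_loglik(theta, x):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.mean(0.5 * np.log(2 * np.pi) + log_sigma
                   + (x - mu) ** 2 / (2 * sigma ** 2))

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]), args=(data,))
theta_mle = res.x

# Finite-difference Hessian of the average negative log-likelihood at
# theta_MLE; this approximates (-1/T) * sum_t d^2 ell_t / (d theta d theta'),
# the quantity whose inverse estimates Omega in the formula above.
def hessian(f, theta, x, eps=1e-4):
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i = np.zeros(k); e_i[i] = eps
            e_j = np.zeros(k); e_j[j] = eps
            H[i, j] = (f(theta + e_i + e_j, x) - f(theta + e_i, x)
                       - f(theta + e_j, x) + f(theta, x)) / eps ** 2
    return H

H = hessian(neg_loglik, theta_mle, data)
Omega = np.linalg.inv(H)                   # estimate of Omega
T = len(data)
std_errors = np.sqrt(np.diag(Omega) / T)   # asymptotic standard errors
```

Here `std_errors` is what one would report alongside `theta_mle` to check the precision of the estimates, which I take to be the point of the added section.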


First, notice that all the capital letters in the section heading shouldn't be there; I changed it to Maximum likelihood estimation, per Manual of Style. Also, in the link to the main article, the title is maximum likelihood, without all those capital letters.

I also made some improvements in both TeX and non-TeX mathematical notation. In particular, using \left and \right allows the parentheses to assume their proper sizes, and various other things were done.

But there's a bigger issue: your "general case" doesn't mention covariance matrices at all, yet as applied to the estimation of covariance matrices it seems to say that the sample covariance matrix has approximately a normal distribution with variance &Omega;. I'm not sure what is meant by the variance of a matrix-valued random variable; I may have seen it defined at some point in the past, but if you're going to write about such a thing here, you'd better explain what it is. Moreover, the parameter &theta; you're talking about would in this case be a positive-definite matrix. How do you compute those partial derivatives with respect to a positive-definite matrix-valued parameter? Again, if you're going to do that here, you should explain how it works.

Maybe the paragraph you added belongs in the main maximum likelihood article or one of the other related articles. Michael Hardy (talk) 20:19, 6 May 2009 (UTC)


 * Hi, actually I wrote this section to explain how to estimate covariance matrices not for raw data, but for the parameters you obtain through an MLE (to check the validity of the estimation); obviously I didn't make that clear enough. I wrote it on that page because that is where I went looking for it, but maybe it does indeed belong in the main maximum likelihood article. I'd prefer to leave the decision to a more senior Wikipedian (this is my first edit on an English-language page), and then I'll add the clarification above, if someone hasn't done it by then. --Aobha (talk) 17:37, 7 May 2009 (UTC)