Talk:Inverse-variance weighting

Confusing symbols in the Multivariate case section
The notation used in the section on the multivariate case is quite confusing, in that $$\Sigma$$ is used to indicate both a sum and a covariance matrix. Additionally, the symbol $$Var$$ is used there to denote a covariance matrix, whereas in the rest of the article it is used to mean a variance.

I have boldly edited the equations to use the more common symbol $$\mathbf{C}$$ for covariance matrices. The older formulae are retained below.

These gave the combined estimate in terms of the covariance matrices $$\Sigma_i$$ of the individual estimates $$x_i$$:


 * $$\hat{x} = \left(\sum_i \Sigma_i^{-1}\right)^{-1}\sum_i \Sigma_i^{-1} x_i $$


 * $$Var(\hat{x}) = \left(\sum_i \Sigma_i^{-1}\right)^{-1} $$ — Preceding unsigned comment added by Glopk (talk • contribs) 16:11, 8 September 2023 (UTC)
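For concreteness, the multivariate formulae above can be checked numerically; the following is a minimal sketch using NumPy, where the estimates and covariance matrices are made-up example data:

```python
import numpy as np

def combine(estimates, covariances):
    """Inverse-covariance-weighted combination of vector estimates."""
    precisions = [np.linalg.inv(C) for C in covariances]  # C_i^{-1}
    total_precision = sum(precisions)                     # sum_i C_i^{-1}
    C_hat = np.linalg.inv(total_precision)                # covariance of x_hat
    # x_hat = (sum_i C_i^{-1})^{-1} sum_i C_i^{-1} x_i
    x_hat = C_hat @ sum(P @ x for P, x in zip(precisions, estimates))
    return x_hat, C_hat

# Example: two 2-D estimates with diagonal covariances, so each component
# reduces to the familiar scalar inverse-variance weighting.
x1, C1 = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
x2, C2 = np.array([2.0, 1.0]), np.diag([4.0, 1.0])
x_hat, C_hat = combine([x1, x2], [C1, C2])
```

With diagonal covariances, each component of `x_hat` matches the scalar weighted mean, which is a quick way to convince oneself the matrix formula is consistent with the univariate one.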

Derivation from maximum likelihood?
Let there be a set of $$n$$ measurements $$x_i$$, each with uncertainty $$\sigma_i$$, of a variable $$X$$. A Gaussian probability density function of $$x$$ with respect to each measurement is:

$$p_i(x) \propto \frac{1}{\sigma_i}\mathrm{e}^{-\frac{\left(x-x_i\right)^2}{2\sigma_i^2}}$$

The negative log-likelihood of $$X=x$$ given the measurements, up to an additive constant (the overall sign of $$\mathcal{L}$$ is a matter of convention and does not affect the location of the extremum):

$$\mathcal{L}(x)=\sum_{i=1}^n\log(\sigma_i)+\sum_{i=1}^n\frac{\left(x-x_i\right)^2}{2\sigma_i^2}$$

Finding the $$x$$ that minimizes $$\mathcal{L}$$ (i.e. maximizes the likelihood) should give the "best" estimator of the weighted mean of the $$x_i$$ values, taking the uncertainties into account:

$$\frac{\partial\mathcal{L}(x)}{\partial x} = \sum_{i=1}^n \frac{x-x_i}{\sigma_i^2} = 0$$

Solving the above for $$x$$, the "best" estimate $$\hat{x}$$ is:

$$\hat{x}=\frac{\sum_{i=1}^n\frac{x_i}{\sigma_i^2}}{\sum_{i=1}^n\frac{1}{\sigma_i^2}}$$
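As a small numerical illustration of this estimator (the measurements and uncertainties below are made-up example values):

```python
import numpy as np

x = np.array([10.0, 11.0, 12.0])   # measurements x_i
sigma = np.array([1.0, 2.0, 2.0])  # uncertainties sigma_i

w = 1.0 / sigma**2                 # inverse-variance weights
# hat{x} = (sum_i x_i / sigma_i^2) / (sum_i 1 / sigma_i^2)
x_hat = np.sum(w * x) / np.sum(w)
```

Note that the most precise measurement (smallest $$\sigma_i$$) dominates the result, as the derivation predicts.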

Propagating the variances of the (independent) $$x_i$$ through $$\hat{x}$$, we get:

$$\mathrm{V}[\hat{x}]=\mathrm{V}\left[\frac{\sum_{i=1}^n\frac{x_i}{\sigma_i^2}}{\sum_{i=1}^n\frac{1}{\sigma_i^2}}\right]=\left(\sum_{i=1}^n\frac{1}{\sigma_i^2}\right)^{-2}\mathrm{V}\left[\sum_{i=1}^n\frac{x_i}{\sigma_i^2}\right]=\left(\sum_{i=1}^n\frac{1}{\sigma_i^2}\right)^{-2}\cdot\sum_{i=1}^n\frac{1}{\sigma_i^4}\mathrm{V}\left[x_i\right]$$ and, since $$\mathrm{V}[x_i]=\sigma_i^2$$, $$\mathrm{V}[\hat{x}]=\left(\sum_{i=1}^n\frac{1}{\sigma_i^2}\right)^{-2}\cdot\sum_{i=1}^n\frac{1}{\sigma_i^2}=\left(\sum_{i=1}^n\frac{1}{\sigma_i^2}\right)^{-1}$$ Blakut (talk) 08:52, 14 June 2023 (UTC)
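The variance formula $$\mathrm{V}[\hat{x}]=\left(\sum_i 1/\sigma_i^2\right)^{-1}$$ can also be sanity-checked by simulation; the following sketch (with made-up true value and uncertainties) draws many sets of measurements and compares the empirical variance of $$\hat{x}$$ against the prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
true_x = 5.0
sigma = np.array([1.0, 2.0, 0.5])  # per-measurement uncertainties
w = 1.0 / sigma**2
predicted_var = 1.0 / np.sum(w)    # (sum_i 1/sigma_i^2)^{-1}

# Simulate 200k repeated experiments, each yielding one x_hat.
samples = rng.normal(true_x, sigma, size=(200_000, sigma.size))
x_hats = (samples * w).sum(axis=1) / w.sum()
empirical_var = x_hats.var()
```

The empirical variance agrees with the predicted value to within Monte Carlo noise, supporting the derivation.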