
In machine learning and statistics, EM Algorithm is the abbreviation for the Expectation Maximization Algorithm, which is used to estimate models with latent variables. GMM stands for Gaussian Mixture Model. In this article, we'll introduce how to use the EM Algorithm to handle the GMM model.

Background
First let's warm up with a simple scenario. In the picture below, we have the Red Blood Cell Hemoglobin Concentration and the Red Blood Cell Volume data of two groups of people, the Anemia Group and the Control Group (i.e. the group of people without anemia). It's clear that people with anemia have lower red blood cell volume and lower red blood cell hemoglobin concentration than those without anemia.

To make it simple, let $$x$$ be a random vector: $$x := (\text{Red Blood Cell Volume}, \text{Red Blood Cell Hemoglobin Concentration})$$ and let $$z$$ denote the group to which $$x$$ belongs ($$z^{(i)} = 1$$ when $$x^{(i)}$$ belongs to the Anemia Group and $$z^{(i)} = 2$$ when $$x^{(i)}$$ belongs to the Control Group). From medical knowledge, we believe that $$x$$ is normally distributed within each group, i.e. $$x \,|\, z = j \sim \mathcal N(\mu_j, \Sigma_j)$$. Also $$z \sim \text{Multinomial}(\phi)$$, where $$\phi_j \geq 0$$ and $$\sum_{j=1}^k \phi_j = 1$$ (in this scenario, $$k = 2$$). Now we'd like to estimate $$\phi, \mu, \Sigma$$.

We can use maximum likelihood estimation on this question. The log likelihood function is shown below.

$$\ell(\phi,\mu,\Sigma)=\sum_{i=1}^m \log (p(x^{(i)};\phi,\mu,\Sigma)) =\sum_{i=1}^{m} \log \sum_{z^{(i)}=1}^{k} p\left(x^{(i)} | z^{(i)} ; \mu, \Sigma\right) p\left(z^{(i)} ; \phi\right) $$
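As a quick illustration, the marginal log likelihood above can be evaluated numerically. The sketch below assumes SciPy is available; the function name `gmm_log_likelihood` is my own choice, not anything from the text:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(X, phi, mus, Sigmas):
    """Evaluate sum_i log sum_j p(x_i | z_i = j) p(z_i = j).

    X: (m, n) data matrix; phi: (k,) mixing weights;
    mus: k mean vectors; Sigmas: k covariance matrices.
    """
    m = X.shape[0]
    k = len(phi)
    # per-point mixture density: sum the weighted component densities
    dens = np.zeros(m)
    for j in range(k):
        dens += phi[j] * multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
    return np.sum(np.log(dens))
```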

Since we know the $$z^{(i)}$$ for each $$x^{(i)}$$, the log likelihood function can be simplified as below:

$$\ell(\phi, \mu, \Sigma)=\sum_{i=1}^{m}\left[\log p\left(x^{(i)} | z^{(i)} ; \mu, \Sigma\right)+\log p\left(z^{(i)} ; \phi\right)\right]$$

Now we can maximize the likelihood function by setting the partial derivatives with respect to $$\mu, \Sigma, \phi$$ to zero. Since this step only involves some simple calculus, I'll directly show the result.

$$\phi_{j} =\frac{1}{m} \sum_{i=1}^{m} 1\{z^{(i)}=j\}$$

$$\mu_{j} =\frac{\sum_{i=1}^{m} 1\left\{z^{(i)}=j\right\} x^{(i)}}{\sum_{i=1}^{m} 1\left\{z^{(i)}=j\right\}}$$

$$\Sigma_{j} =\frac{\sum_{i=1}^{m} 1\left\{z^{(i)}=j\right\}\left(x^{(i)}-\mu_{j}\right)\left(x^{(i)}-\mu_{j}\right)^{T}}{\sum_{i=1}^{m} 1\left\{z^{(i)}=j\right\}}$$
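These closed-form estimates are straightforward to implement. Here is a minimal sketch; the helper name `labeled_mle` and the 0-indexed labels are my own conventions:

```python
import numpy as np

def labeled_mle(X, z, k):
    """Closed-form MLE when the group label z_i of every x_i is observed.

    X: (m, n) data matrix; z: (m,) integer labels in {0, ..., k-1}.
    Returns (phi, mus, Sigmas) matching the three formulas above.
    """
    phi = np.array([np.mean(z == j) for j in range(k)])  # fraction of points in group j
    mus, Sigmas = [], []
    for j in range(k):
        Xj = X[z == j]                      # points belonging to group j
        mu = Xj.mean(axis=0)                # group mean
        diff = Xj - mu
        Sigmas.append(diff.T @ diff / len(Xj))  # group covariance (MLE, divides by N_j)
        mus.append(mu)
    return phi, np.array(mus), np.array(Sigmas)
```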

In the example above, we can see that if $$z^{(i)}$$ is known to us, estimating the parameters is quite simple with maximum likelihood estimation. But what if $$z^{(i)}$$ is unknown? Then it becomes hard to estimate the parameters.

In this case, we call $$z$$ a latent variable (i.e. not observed). In this unlabeled scenario, we need the Expectation Maximization Algorithm to estimate $$z$$ as well as the other parameters. The problem setting above is generally called a Gaussian Mixture Model (i.e. GMM), since the data within each group is normally distributed.

In the general machine learning setting, we can view the latent variable $$z$$ as some hidden pattern underlying the data that we cannot observe directly, $$x^{(i)}$$ as our data, and $$\phi, \mu, \Sigma$$ as the parameters of the model. With the EM algorithm, we can uncover the underlying pattern $$z$$ in the data while also estimating the parameters. The wide applicability of this setting is what makes the EM algorithm so important in machine learning.

EM Algorithm in GMM
The Expectation Maximization Algorithm consists of two steps: the E-step and the M-step. First, we randomly initialize the values of our model parameters. In the E-step, the algorithm guesses the value of $$z^{(i)}$$ based on the current parameters. In the M-step, the algorithm updates the model parameters based on the guess of $$z^{(i)}$$ from the E-step. These two steps repeat until convergence. Let's see the algorithm in GMM first.

Repeat until convergence: {

1. (E-step) For each $$i, j$$, set

$$w_{j}^{(i)}:=p\left(z^{(i)}=j | x^{(i)} ; \phi, \mu, \Sigma\right)$$

2. (M-step) Update the parameters:

$$\phi_{j} :=\frac{1}{m} \sum_{i=1}^{m} w_{j}^{(i)}$$

$$\mu_{j} :=\frac{\sum_{i=1}^{m} w_{j}^{(i)} x^{(i)}}{\sum_{i=1}^{m} w_{j}^{(i)}}$$

$$\Sigma_{j} :=\frac{\sum_{i=1}^{m} w_{j}^{(i)}\left(x^{(i)}-\mu_{j}\right)\left(x^{(i)}-\mu_{j}\right)^{T}}{\sum_{i=1}^{m} w_{j}^{(i)}}$$

}

We can take a closer look at the E-step. In fact, with Bayes' rule, we can get the following result:

$$p\left(z^{(i)}=j | x^{(i)} ; \phi, \mu, \Sigma\right)=\frac{p\left(x^{(i)} | z^{(i)}=j ; \mu, \Sigma\right) p\left(z^{(i)}=j ; \phi\right)}{\sum_{l=1}^{k} p\left(x^{(i)} | z^{(i)}=l ; \mu, \Sigma\right) p\left(z^{(i)}=l ; \phi\right)}$$

According to the GMM setting, we have the following formulas:

$$p\left(x^{(i)} | z^{(i)}=j ; \mu, \Sigma\right)=\frac{1}{(2 \pi)^{n / 2}\left|\Sigma_{j}\right|^{1 / 2}} \exp \left(-\frac{1}{2}\left(x^{(i)}-\mu_{j}\right)^{T} \Sigma_{j}^{-1}\left(x^{(i)}-\mu_{j}\right)\right)$$

$$p\left(z^{(i)}=j ; \phi\right)=\phi_j$$

where $$n$$ is the dimension of $$x^{(i)}$$.
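Putting the two formulas together, the E-step weights can be computed as in the sketch below. The function name `responsibilities` is my own; SciPy supplies the Gaussian density, and labels are 0-indexed in code:

```python
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(X, phi, mus, Sigmas):
    """E-step: w[i, j] = p(z_i = j | x_i; phi, mu, Sigma) via Bayes' rule."""
    k = len(phi)
    # numerator of Bayes' rule: p(x_i | z_i = j) * p(z_i = j) for each component j
    num = np.column_stack([
        phi[j] * multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
        for j in range(k)
    ])
    # denominator: sum over all components, i.e. the marginal p(x_i)
    return num / num.sum(axis=1, keepdims=True)
```

Each row of the returned matrix sums to 1, since the weights for a point are a probability distribution over the components.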

With these formulas, we can carry out the E-step and M-step starting from randomly initialized parameters.
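The whole loop above can be sketched end to end as follows. This is a minimal illustration, not a production implementation: SciPy is assumed for the Gaussian density, and the initialization choices (uniform weights, random data points as initial means, the pooled covariance as the initial covariance) are my own:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iter=100, seed=0):
    """Fit a k-component GMM with EM: alternate E-step and M-step."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    phi = np.full(k, 1.0 / k)                          # uniform mixing weights
    mus = X[rng.choice(m, size=k, replace=False)]      # random data points as initial means
    Sigmas = np.array([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(k)])
    for _ in range(n_iter):
        # E-step: w[i, j] = p(z_i = j | x_i) via Bayes' rule
        w = np.column_stack([
            phi[j] * multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
            for j in range(k)
        ])
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted analogues of the labeled-case estimates
        Nj = w.sum(axis=0)                             # effective count per component
        phi = Nj / m
        mus = (w.T @ X) / Nj[:, None]
        for j in range(k):
            diff = X - mus[j]
            Sigmas[j] = (w[:, j, None] * diff).T @ diff / Nj[j]
    return phi, mus, Sigmas
```

On well-separated clusters this recovers the group structure without ever seeing the labels, which is exactly the point of treating $$z$$ as latent.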