User:Jheald/sandbox/PCA/new

Details
PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.

Consider a data matrix, X, with zero empirical mean (the empirical (sample) mean of the distribution has been subtracted from the data set), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of datum (say, the results from a particular probe).

Mathematically, the transformation is defined by a set of p-dimensional vectors of weights or loadings $$\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)} $$ that map each row vector $$\mathbf{x}_{(i)}$$ of X to a new vector of principal component scores $$\mathbf{t}_{(i)} = (t_1, \dots, t_p)_{(i)}$$, given by


 * $${t_{k}}_{(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}$$

in such a way that the individual variables of t considered over the data set successively inherit the maximum possible variance from x, with each loading vector w constrained to be a unit vector.
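As a concrete illustration, here is a minimal NumPy sketch of this mapping, using randomly generated data and an arbitrary orthonormal matrix as a stand-in for the loadings (the data and variable names are illustrative only, not taken from the text above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # n = 100 rows (observations), p = 5 columns (variables)
X = X - X.mean(axis=0)             # subtract the empirical mean of each column

# Stand-in for the loading matrix: any orthonormal p-by-p matrix
# (how the actual loadings are chosen is derived in the following sections)
W, _ = np.linalg.qr(rng.normal(size=(5, 5)))

T = X @ W                          # row i, column k of T is the score t_k(i) = x_(i) . w_(k)
print(T.shape)                     # (100, 5): one p-dimensional score vector per observation
```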

First component
The first loading vector w(1) thus has to satisfy
 * $$\mathbf{w}_{(1)} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\,\left\{ \sum_i \left(t_{1}\right)^2_{(i)} \right\} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\,\left\{ \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w} \right)^2 \right\}$$

Equivalently, writing this in matrix form gives
 * $$\mathbf{w}_{(1)} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\, \{ \Vert \mathbf{Xw} \Vert^2 \} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\, \{ \mathbf{w}^T \mathbf{X}^T \mathbf{X w} \}$$

Since w(1) has been defined to be a unit vector, it equivalently also satisfies
 * $$\mathbf{w}_{(1)} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\, \left\{ \frac{\mathbf{w}^T\mathbf{X}^T \mathbf{X w}}{\mathbf{w}^T \mathbf{w}} \right\}$$

The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a symmetric matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector.

With w(1) found, the first component of a data vector x(i) can then be given as a score t1(i) = x(i) ⋅ w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i) ⋅ w(1)} w(1).
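A short NumPy sketch of this result on synthetic data (illustrative only); it checks that the leading eigenvector of XTX maximises the projected sum of squares:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))   # synthetic data with correlated columns
X = X - X.mean(axis=0)

# Eigendecomposition of the symmetric matrix X^T X (eigh returns eigenvalues in ascending order)
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
w1 = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue: the first loading vector

t1 = X @ w1                         # first principal component scores
print(np.isclose(np.sum(t1**2), eigvals[-1]))   # maximised sum of squares equals the top eigenvalue
```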

Further components
The kth component can be found by subtracting the first k-1 principal components from X:
 * $$\mathbf{\hat{X}}_{k - 1} = \mathbf{X} - \sum_{s = 1}^{k - 1} \mathbf{X} \mathbf{w}_{(s)} \mathbf{w}_{(s)}^{\rm T} $$

and then finding the loading vector which extracts the maximum variance from this new data matrix:
 * $$\mathbf{w}_{(k)} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}} \left\{ \Vert \mathbf{\hat{X}}_{k - 1} \mathbf{w} \Vert^2 \right\} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}}\, \left\{ \tfrac{\mathbf{w}^T\mathbf{\hat{X}}_{k - 1}^T \mathbf{\hat{X}}_{k - 1} \mathbf{w}}{\mathbf{w}^T \mathbf{w}} \right\}$$

It turns out that this gives the remaining eigenvectors of XTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues.

The kth principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i) ⋅ w(k) in the transformed co-ordinates, or as the corresponding vector in the space of the original variables, {x(i) ⋅ w(k)} w(k), where w(k) is the kth eigenvector of XTX.

The full principal components decomposition of X can therefore be given as
 * $$\mathbf{T} = \mathbf{X} \mathbf{W}$$

where W is a p-by-p matrix whose columns are the eigenvectors of XTX.
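The deflation procedure above can be sketched directly in NumPy; the following illustrative code (synthetic data, hypothetical variable names) builds the loading vectors one at a time and verifies that each column of T = XW carries a sum of squares equal to an eigenvalue of XTX:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3)) @ rng.normal(size=(3, 3))   # synthetic data, p = 3
X = X - X.mean(axis=0)

p = X.shape[1]
W = np.zeros((p, p))
for k in range(p):
    # X_hat_{k-1}: the data with the first k-1 components subtracted off
    X_hat = X - X @ W[:, :k] @ W[:, :k].T
    # loading vector extracting the maximum variance from the deflated matrix
    _, eigvecs = np.linalg.eigh(X_hat.T @ X_hat)
    W[:, k] = eigvecs[:, -1]

T = X @ W                                            # full decomposition T = XW
lam = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]     # eigenvalues of X^T X, largest first
print(np.allclose((T**2).sum(axis=0), lam))          # column sums of squares match the eigenvalues
```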

Covariances
XTX itself can be recognised as proportional to the empirical sample covariance matrix of the dataset X.

The sample covariance Q between two of the different principal components over the dataset is given by
 * $$\begin{align}
Q(\mathrm{PC}_{(j)}, \mathrm{PC}_{(k)}) & \propto (\mathbf{X}\mathbf{w}_{(j)}) \cdot (\mathbf{X}\mathbf{w}_{(k)}) \\
 & = \mathbf{w}_{(j)}^T \mathbf{X}^T \mathbf{X} \mathbf{w}_{(k)} \\
 & = \mathbf{w}_{(j)}^T \lambda_{(k)} \mathbf{w}_{(k)} \\
 & = \lambda_{(k)} \mathbf{w}_{(j)}^T \mathbf{w}_{(k)}
\end{align}$$

where the eigenvector property of w(k) has been used to move from the second line to the third. However, eigenvectors w(j) and w(k) of a symmetric matrix corresponding to different eigenvalues are orthogonal, and those corresponding to a repeated eigenvalue can be chosen to be orthogonal. The product in the final line is therefore zero; different principal components have no sample covariance over the dataset.

Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.

In matrix form, the empirical covariance matrix for the original variables can be written
 * $$\mathbf{Q} \propto \mathbf{X}^T \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^T$$

The empirical covariance matrix between the principal components becomes
 * $$\mathbf{W}^T \mathbf{Q} \mathbf{W} \propto \mathbf{W}^T \mathbf{W} \, \mathbf{\Lambda} \, \mathbf{W}^T \mathbf{W} = \mathbf{\Lambda}$$

where Λ is the diagonal matrix of eigenvalues λ(k) of XTX. Each λ(k) is equal to the sum of the squares over the dataset associated with component k:
 * $$\lambda_{(k)} = \sum_i \left(t_{k}\right)^2_{(i)} = \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}\right)^2$$
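A small numerical check of this diagonalisation, using synthetic data and NumPy's eigendecomposition (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 4)) @ rng.normal(size=(4, 4))   # synthetic data
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)    # X^T X = W Lambda W^T
T = X @ W

# T^T T = W^T X^T X W is diagonal: distinct principal components have zero sample covariance,
# and the diagonal entries are the eigenvalues lambda_(k).
print(np.allclose(T.T @ T, np.diag(eigvals)))
```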

Dimensionality reduction
The full transformation T = X W maps a data vector x(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first L principal components, produced by using only the first L loading vectors, gives the truncated transformation


 * $$\mathbf{T}_L = \mathbf{X} \mathbf{W}_L$$

where the matrix TL now has n rows but only L columns. By construction, of all the transformed data matrices with only L columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared error $$\Vert \mathbf{X} - \mathbf{T}_L\mathbf{W}_L^T \Vert^2$$ in reconstructing the data from the truncated scores.
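A minimal sketch of the truncated transformation in NumPy, assuming synthetic data; the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))   # synthetic high-dimensional data
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)
W = W[:, ::-1]                    # reorder so the largest-variance directions come first

L = 2
W_L = W[:, :L]                    # keep only the first L loading vectors
T_L = X @ W_L                     # truncated score matrix: n rows, L columns

X_L = T_L @ W_L.T                 # reconstruction of the data from the truncated scores
err = np.linalg.norm(X - X_L)**2  # total squared reconstruction error
print(T_L.shape, round(err, 2))
```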

Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L=2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible when plotted in a two-dimensional diagram. If instead two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.

Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.
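A rough sketch of principal component regression under simple assumptions (synthetic correlated explanatory variables, ordinary least squares on the first L scores); this is only one basic way the method might be set up:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, L = 200, 8, 3
X = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p))     # strongly correlated explanatory variables
X = X - X.mean(axis=0)
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)     # illustrative response variable

# Reduce X to its first L principal component scores, then regress y on those scores
eigvals, W = np.linalg.eigh(X.T @ X)
W_L = W[:, ::-1][:, :L]
T_L = X @ W_L
gamma, *_ = np.linalg.lstsq(T_L, y - y.mean(), rcond=None)   # least-squares fit in PC space
beta = W_L @ gamma                          # coefficients mapped back to the p original variables
print(beta)
```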

Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less: the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction, while the later principal components may be dominated by noise and so can be discarded without great loss.
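The rotation-invariance of isotropic Gaussian noise mentioned above can be illustrated with a short simulation (synthetic noise and an arbitrary orthonormal W, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 5000, 6
noise = rng.normal(scale=0.5, size=(n, p))       # i.i.d. Gaussian noise in every column

W, _ = np.linalg.qr(rng.normal(size=(p, p)))     # an arbitrary orthonormal loading matrix (a rotation)
rotated = noise @ W

# The empirical covariance of the rotated noise stays close to (0.5 ** 2) * I:
print(np.round(rotated.T @ rotated / n, 2))
```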

Singular value decomposition
The principal components transformation can also be associated with another matrix factorisation, the singular value decomposition (SVD) of X,
 * $$\mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T$$

Here Σ is a p-by-p diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-p matrix whose columns are orthogonal unit vectors of length n, called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p, called the right singular vectors of X.

In terms of this factorisation, the matrix XTX can be written
 * $$\begin{align}
\mathbf{X}^T\mathbf{X} & = \mathbf{W}\mathbf{\Sigma}\mathbf{U}^T \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T \\
 & = \mathbf{W}\mathbf{\Sigma}^2\mathbf{W}^T
\end{align}$$

Comparison with the eigenvector factorisation of XTX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XTX, while the singular values σ(k) of X are equal to the square roots of the eigenvalues λ(k) of XTX.

Using the singular value decomposition the score matrix T can be written
 * $$\begin{align}
\mathbf{T} & = \mathbf{X} \mathbf{W} \\
 & = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T \mathbf{W} \\
 & = \mathbf{U}\mathbf{\Sigma}
\end{align}$$

so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value.

Efficient algorithms exist to calculate the SVD of X without having to form the matrix XTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix, unless only a handful of components are required.
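A sketch of the SVD route in NumPy, using np.linalg.svd on synthetic data; it checks the equivalences stated above (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))   # synthetic data
X = X - X.mean(axis=0)

# Economy-size SVD: U is n-by-p, s holds the singular values, Wt is W^T (p-by-p)
U, s, Wt = np.linalg.svd(X, full_matrices=False)

T = U * s                                   # scores T = U Sigma, equivalently X W
print(np.allclose(T, X @ Wt.T))             # True: the two routes agree
print(np.allclose(s**2, np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]))   # sigma^2 = lambda
```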

As with the eigendecomposition, a truncated n-by-L score matrix TL can be obtained by considering only the L largest singular values and their singular vectors:
 * $$\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L $$

Truncating a matrix in this way, using only the L largest singular values and their singular vectors, produces the nearest possible matrix of rank L to the original, in the sense that the difference between the two has the smallest possible Frobenius norm, a result known as the Eckart-Young theorem [1936].
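The following illustrative sketch forms the rank-L approximation from a truncated SVD of synthetic data and checks that its squared Frobenius error equals the sum of the squared discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 8))   # synthetic data
X = X - X.mean(axis=0)

U, s, Wt = np.linalg.svd(X, full_matrices=False)

L = 3
T_L = U[:, :L] * s[:L]                  # truncated n-by-L score matrix
X_L = T_L @ Wt[:L, :]                   # rank-L approximation of X

# Its squared Frobenius error is the sum of the squared discarded singular values,
# the smallest achievable by any rank-L matrix (Eckart-Young).
err = np.linalg.norm(X - X_L, 'fro')**2
print(np.isclose(err, np.sum(s[L:]**2)))
```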