Common spatial pattern



Common spatial pattern (CSP) is a mathematical procedure used in signal processing for separating a multivariate signal into additive subcomponents which have maximum differences in variance between two windows.

Details
Let $$\mathbf{X}_1$$ of size $$(n,t_1)$$ and $$\mathbf{X}_2$$ of size $$(n,t_2)$$ be two windows of a multivariate signal, where $$n$$ is the number of signals and $$t_1$$ and $$t_2$$ are the respective number of samples.

The CSP algorithm determines the spatial filter $$\mathbf{w}$$ (a row vector applied to the signal as $$\mathbf{wX}$$) that maximizes the ratio of variance (or second-order moment) between the two windows:
 * $$\mathbf{w}={\arg \max}_\mathbf{w} \frac{ \left\| \mathbf{wX}_1 \right\| ^2 } { \left\| \mathbf{wX}_2 \right\| ^2 }$$

The solution is given by computing the two covariance matrices:


 * $$\mathbf{R}_1=\frac{\mathbf{X}_1\mathbf{X}_1^\text{T}}{t_1}$$


 * $$\mathbf{R}_2=\frac{\mathbf{X}_2\mathbf{X}_2^\text{T}}{t_2}$$
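As a concrete illustration, the two covariance matrices can be computed with NumPy (the signal counts and window lengths below are arbitrary synthetic choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t1, t2 = 4, 500, 600            # n signals, windows of t1 and t2 samples
X1 = rng.standard_normal((n, t1))  # window 1, shape (n, t1)
X2 = rng.standard_normal((n, t2))  # window 2, shape (n, t2)

# Covariance (second-order moment) matrices, normalized by the sample counts
R1 = X1 @ X1.T / t1
R2 = X2 @ X2.T / t2
print(R1.shape, R2.shape)  # both (4, 4)
```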

Then, the simultaneous diagonalization of those two matrices (also called generalized eigenvalue decomposition) is performed. We find the matrix of eigenvectors $$\mathbf{P}=\begin{bmatrix} \mathbf{p}_1 & \cdots & \mathbf{p}_n \end{bmatrix}$$ and the diagonal matrix $$\mathbf{D}$$ of eigenvalues $$\{\lambda_1, \cdots, \lambda_n \}$$ sorted in decreasing order such that:


 * $$\mathbf{P}^{\mathrm{T}} \mathbf{R}_1 \mathbf{P} = \mathbf{D}$$

and


 * $$\mathbf{P}^{\mathrm{T}} \mathbf{R}_2 \mathbf{P} = \mathbf{I}_n$$

with $$\mathbf{I}_n$$ the identity matrix.

This is equivalent to the eigendecomposition of $$\mathbf{R}_2^{-1} \mathbf{R}_1$$:


 * $$\mathbf{R}_2^{-1} \mathbf{R}_1=\mathbf{PDP}^{-1}$$


The spatial filter $$\mathbf{w}$$ is then the transpose of the first column of $$\mathbf{P}$$, the eigenvector with the largest eigenvalue:


 * $$\mathbf{w}=\mathbf{p}_1^\text{T}$$
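In practice the simultaneous diagonalization can be obtained with a generalized symmetric eigensolver. A minimal sketch on synthetic data, assuming SciPy is available (`scipy.linalg.eigh` solves $$\mathbf{R}_1 \mathbf{p} = \lambda \mathbf{R}_2 \mathbf{p}$$ and normalizes the eigenvectors so that $$\mathbf{P}^\text{T} \mathbf{R}_2 \mathbf{P} = \mathbf{I}$$):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n, t = 4, 1000
X1 = rng.standard_normal((n, t))
X2 = rng.standard_normal((n, t))
R1 = X1 @ X1.T / t
R2 = X2 @ X2.T / t

# eigh returns eigenvalues in ascending order; flip to get decreasing order
evals, evecs = eigh(R1, R2)
D = evals[::-1]          # eigenvalues sorted in decreasing order
P = evecs[:, ::-1]       # matching eigenvector columns

w = P[:, 0]              # spatial filter maximizing the variance ratio

# Check the two diagonalization properties stated above
assert np.allclose(P.T @ R2 @ P, np.eye(n), atol=1e-8)
assert np.allclose(P.T @ R1 @ P, np.diag(D), atol=1e-8)
```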

Relation between variance ratio and eigenvalue
The eigenvectors composing $$\mathbf{P}$$ are components whose variance ratio between the two windows (with each variance normalized by its number of samples, as in the covariance matrices) equals the corresponding eigenvalue:


 * $$ \lambda_i = \frac{ \left\| \mathbf{p}_i^\text{T} \mathbf{X}_1 \right\| ^2 / t_1 }{ \left\| \mathbf{p}_i^\text{T} \mathbf{X}_2 \right\| ^2 / t_2 } $$
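This identity can be checked numerically. A sketch assuming SciPy, with each variance normalized by its window's sample count to match the covariance definitions:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n, t1, t2 = 4, 500, 800
X1 = rng.standard_normal((n, t1))
X2 = rng.standard_normal((n, t2))
R1, R2 = X1 @ X1.T / t1, X2 @ X2.T / t2

evals, evecs = eigh(R1, R2)  # ascending order
for lam, p in zip(evals, evecs.T):
    # p @ X1 is the component p^T X1; its squared norm over t gives its variance
    ratio = (np.linalg.norm(p @ X1) ** 2 / t1) / (np.linalg.norm(p @ X2) ** 2 / t2)
    assert np.isclose(ratio, lam)
```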

Other components
The subspace $$E_i$$ spanned by the first $$i$$ eigenvectors $$\begin{bmatrix} \mathbf{p}_1 & \cdots & \mathbf{p}_i \end{bmatrix}$$ is the subspace maximizing the variance ratio of all components belonging to it:


 * $$E_i={\arg \max}_{E} \left(\min_{p \in E} \frac{ \left\| \mathbf{p^\text{T} X}_1 \right\| ^2 }{ \left\| \mathbf{p^\text{T} X}_2 \right\| ^2}\right)$$

In the same way, the subspace $$F_j$$ spanned by the last $$j$$ eigenvectors $$\begin{bmatrix} \mathbf{p}_{n-j+1} & \cdots & \mathbf{p}_n \end{bmatrix}$$ is the subspace minimizing the variance ratio of all components belonging to it:


 * $$ F_j = {\arg \min}_{F} \left(\max_{p \in F} \frac{ \left\| \mathbf{p^\text{T} X}_1 \right\| ^2 }{ \left\| \mathbf{p^\text{T} X}_2 \right\| ^2} \right) $$

Variance or second-order moment
CSP can be applied after mean subtraction (a.k.a. "mean centering") of the signals, in which case it optimizes a ratio of variances. Otherwise, CSP optimizes a ratio of second-order moments.
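Mean centering is a one-line step per window (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 500)) + 5.0   # signals with a nonzero mean

# Subtract each signal's mean so the second-order moment becomes a variance
Xc = X - X.mean(axis=1, keepdims=True)
print(np.abs(Xc.mean(axis=1)).max())  # ~0
```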

Choice of windows X1 and X2

 * The standard use consists in choosing the windows to correspond to two periods of time with different activation of sources (e.g. during rest and during a specific task).
 * It is also possible to choose the two windows to correspond to two different frequency bands in order to find components with a specific frequency pattern. The band selection can be performed in the time domain (by band-pass filtering) or in the frequency domain. Since the matrix $$\mathbf{P}$$ depends only on the covariance matrices, the same results can be obtained if the processing is applied to the Fourier transform of the signals.
 * Y. Wang has proposed a particular choice for the first window $$\mathbf{X}_1$$ in order to extract components which have a specific period. $$\mathbf{X}_1$$ was the mean of the different periods for the examined signals.
 * If there is only one window, $$\mathbf{R}_2$$ can be taken to be the identity matrix, and CSP then reduces to principal component analysis (PCA).
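The last point can be illustrated directly: with $$\mathbf{R}_2 = \mathbf{I}_n$$, the generalized problem collapses to the ordinary eigendecomposition of $$\mathbf{R}_1$$, i.e. PCA on the centered window (a sketch assuming SciPy):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
X1 = rng.standard_normal((4, 500))
X1 -= X1.mean(axis=1, keepdims=True)      # center so R1 is a true covariance
R1 = X1 @ X1.T / X1.shape[1]

# CSP with R2 = I is the ordinary symmetric eigenproblem of R1 (PCA)
evals_csp, evecs_csp = eigh(R1, np.eye(4))
evals_pca, evecs_pca = eigh(R1)
assert np.allclose(evals_csp, evals_pca)
# eigenvectors agree up to sign (eigenvalues are distinct here)
assert np.allclose(np.abs(evecs_csp), np.abs(evecs_pca))
```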

Relation between LDA and CSP
Linear discriminant analysis (LDA) and CSP apply in different circumstances. LDA separates data that have different means, by finding a rotation that maximizes the (normalized) distance between the centers of the two sets of data. CSP, on the other hand, ignores the means. This makes CSP well suited, for example, to separating signal from noise in an event-related potential (ERP) experiment: both distributions have zero mean, so there is no mean difference for LDA to exploit. CSP instead finds a projection that makes the variance of the components of the average ERP as large as possible, so that the signal stands out above the noise.

Applications
The CSP method can be applied to multivariate signals in general, but it is most commonly applied to electroencephalographic (EEG) signals. In particular, the method is often used in brain–computer interfaces to retrieve the component signals which best transduce the cerebral activity for a specific task (e.g. hand movement). It can also be used to separate artifacts from EEG signals.

CSP can also be adapted for the analysis of event-related potentials.