User:To stats or not to stats/SPCA sandbox

Sparse principal component analysis (SPCA or sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables.

A particular disadvantage of ordinary PCA is that the principal components are usually linear combinations of all input variables. SPCA overcomes this disadvantage by finding sparse components (SPCs), that is, components that are linear combinations of just a few input variables. This means that some of the coefficients of the linear combinations defining the SPCs, called loadings, are equal to zero. The number of nonzero loadings is called the cardinality of the SPC.

There exist two different approaches to SPCA. Conventional SPCA is based on Harold Hotelling's definition of PCA, by which the PCs are the linear combinations of the variables with unit-norm, orthogonal loadings that sequentially have the largest variance. The second approach is Least Squares SPCA (LS SPCA), which is based on Karl Pearson's definition of PCA, by which the SPCs are orthogonal and sequentially best approximate the data matrix in a least-squares sense. While the two definitions lead to the same PCA solution, this is no longer true when sparsity constraints are added.

Contemporary datasets often have the number of input variables ($$p$$) comparable with or even much larger than the number of samples ($$n$$). It has been shown that if $$p/n$$ does not converge to zero, the classical PCA is not consistent. But conventional SPCA can retain consistency even if $$p\gg n.$$

LS SPCA Variants
In optimal LS SPCA (USPCA, uncorrelated SPCA) the orthogonality constraints require that the cardinality of the solutions be not smaller than the order of the component. These constraints may also create numerical problems when computing components of order larger than two.

Correlated SPCA (CSPCA) is a variant of LS SPCA in which the orthogonality constraints are relaxed and the solutions are obtained iteratively by minimizing the norm of the approximation error of the residuals orthogonal to the previously computed SPCs. Even though the resulting components are correlated (usually only mildly), they have lower cardinality and in many cases explain more variance than the corresponding USPCA solutions.

The computation of the USPCA and CSPCA solutions is demanding when the data matrix is large. With Projection SPCA (PSPCA), approximate CSPCA solutions can be computed much more efficiently by simply projecting the current first PC onto subsets of the variables. This means that the solutions can be computed with efficient linear regression routines. PSPCA is fast and can be shown to explain a proportion of the variance of the dataset comparable with that explained by CSPCA.

Mathematical formulation
Consider a data matrix, $$X$$, where each of the $$p$$ columns represents an input variable, and each of the $$n$$ rows represents an independent sample from the data population. One assumes each column of $$X$$ has mean zero; otherwise, one can subtract the column-wise mean from each element of $$X$$. Let $$\Sigma=\frac{1}{n-1} X^\top X$$ be the empirical covariance matrix of $$X$$, which has dimension $$p \times p$$.
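This setup can be sketched in a few lines of NumPy (the data here are synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.normal(size=(n, p))          # raw data: n samples, p variables

Xc = X - X.mean(axis=0)              # subtract the column-wise mean
Sigma = Xc.T @ Xc / (n - 1)          # empirical covariance matrix, p x p

# Sigma agrees with NumPy's built-in estimator
# (rowvar=False because variables are in columns)
assert np.allclose(Sigma, np.cov(X, rowvar=False))
```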

Conventional SPCA
Given an integer $$k$$ with $$1\le k \le p$$, the sparse PCA problem can be formulated as maximizing the variance along a direction represented by vector $$ v \in \mathbb{R}^p$$ while constraining its cardinality:
$$\begin{align} \max \quad & v^{T} \Sigma v\\ \text{subject to} \quad & \left\Vert v\right\Vert _{2}=1\\ & \left\Vert v \right\Vert_{0}\leq k. \end{align}$$

The first constraint specifies that $$v$$ is a unit vector. In the second constraint, $$\left\Vert v \right\Vert_{0}$$ represents the $$\ell_0$$ pseudo-norm of $$v$$, which is defined as the number of its non-zero components. So the second constraint specifies that the number of non-zero components of $$v$$ is less than or equal to $$k$$, which is typically an integer much smaller than the dimension $$p$$. The optimal value of this problem is known as the k-sparse largest eigenvalue.

If one takes k=p, the problem reduces to the ordinary PCA, and the optimal value becomes the largest eigenvalue of covariance matrix Σ.
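Since the cardinality constraint only restricts the support of $$v$$, for a fixed support the problem reduces to an ordinary eigenvalue problem on the corresponding principal submatrix of $$\Sigma$$. A brute-force sketch of the k-sparse largest eigenvalue based on this observation, feasible only for small $$p$$ (illustrative code, not a published algorithm):

```python
from itertools import combinations
import numpy as np

def k_sparse_largest_eigenvalue(Sigma, k):
    """Exact k-sparse largest eigenvalue by enumerating all k-subsets.

    For a fixed support S, maximizing v' Sigma v over unit vectors
    supported on S gives the largest eigenvalue of Sigma[S, S].
    """
    p = Sigma.shape[0]
    best = -np.inf
    for S in combinations(range(p), k):
        best = max(best, np.linalg.eigvalsh(Sigma[np.ix_(S, S)])[-1])
    return best

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
Sigma = A @ A.T                      # a random covariance matrix

# With k = p the problem reduces to ordinary PCA: the optimal value
# is the largest eigenvalue of Sigma.
assert np.isclose(k_sparse_largest_eigenvalue(Sigma, 6),
                  np.linalg.eigvalsh(Sigma)[-1])
```

The enumeration over $$\binom{p}{k}$$ subsets makes the intractability of the problem for large $$p$$ concrete.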

After finding the optimal solution v, one deflates Σ to obtain a new matrix

$$\Sigma_1 = \Sigma - (v^T \Sigma v)v v^T,$$ and iterates this process to obtain further principal components. However, unlike PCA, sparse PCA cannot guarantee that different principal components are orthogonal. In order to achieve orthogonality, additional constraints must be enforced.
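A minimal sketch of this deflation step on a synthetic covariance matrix (illustrative only):

```python
import numpy as np

def deflate(Sigma, v):
    """Deflation: remove the variance captured along the unit vector v."""
    var = v @ Sigma @ v
    return Sigma - var * np.outer(v, v)

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T                       # random covariance matrix
w, U = np.linalg.eigh(Sigma)          # eigenvalues in ascending order
v = U[:, -1]                          # leading (dense) eigenvector

Sigma1 = deflate(Sigma, v)

# Deflating with the exact leading eigenvector zeroes its eigenvalue,
# so the top eigenvalue of Sigma1 is the second-largest of Sigma.
assert np.isclose(np.linalg.eigvalsh(Sigma1)[-1], w[-2])
```

With a sparse, non-eigenvector $$v$$, the deflated matrix is no longer an exact spectral truncation, which is why orthogonality of successive sparse components is not guaranteed.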

The following equivalent definition is in matrix form. Let $$V$$ be a $$p \times p$$ symmetric matrix; then one can rewrite the sparse PCA problem as

$$\begin{align} \max \quad & Tr(\Sigma V)\\ \text{subject to}\quad & Tr(V)=1\\ & \Vert V \Vert_{0}\leq k^{2}\\ & Rank(V)=1, V\succeq 0. \end{align}$$

Here Tr is the matrix trace, and $$\Vert V \Vert_{0}$$ represents the number of non-zero elements of the matrix $$V$$. The last line specifies that $$V$$ has matrix rank one and is positive semidefinite, which means that $$V=vv^T$$ for some unit vector $$v$$, so this matrix formulation is equivalent to the vector formulation above.

Moreover, the rank constraint in this formulation is actually redundant, and therefore sparse PCA can be cast as the following mixed-integer semidefinite program:

$$\begin{align} \max \quad & Tr(\Sigma V)\\ \text{subject to}\quad & Tr(V)=1\\ & \vert V_{i,i}\vert \leq z_i \ \forall i \in \{1, \ldots, p\},\\ & \vert V_{i,j}\vert \leq \tfrac{1}{2} z_i \ \forall i,j \in \{1, \ldots, p\}: i \neq j,\\ & V\succeq 0,\ z \in \{0, 1\}^p,\ \textstyle\sum_i z_i \leq k. \end{align}$$

Because of the cardinality constraint, the maximization problem is hard to solve exactly, especially when the dimension $$p$$ is high. In fact, the sparse PCA problem above is NP-hard in the strong sense.

Least Squares SPCA
In this section the generic SPCs will be defined as $$t_j = Xv_j,\, j = 1, \ldots, d$$, where $$v_j$$ denotes the $$j$$th $$p$$-vector of loadings.

Denoting by $$V$$ the matrix with columns $$v_j$$ and letting $$T = XV$$, the USPCs are derived as the solutions to the minimization problem:
$$\begin{align} V = \text{arg}\min\quad & \vert\vert X - \Pi_{T}X \vert\vert^2 = \text{arg}\max \vert\vert \Pi_{T}X\vert\vert ^2\\ \text{subject to}\quad & card\{v_j\}\leq p\\ &T'T = diag{\{s_j\}}, \end{align}$$

where $$\Pi_{T}$$ is the orthogonal projector onto the column space of $$T$$, and the maximization problem follows from Pythagoras's theorem.

Because of the orthogonality constraints, the problem above can be solved sequentially for each $$v_j,\, j=1,\ldots, d$$, as:
$$\begin{align} v_j = \text{arg}\min\quad & \vert\vert X - \Pi_{t_j}X \vert\vert^2 = \text{arg}\max \vert\vert \Pi_{t_j}X\vert\vert ^2\\ \text{subject to}\quad & card\{v_j\}\leq p\\ &T_{[j-1]}'t_j = 0, \end{align}$$

where $$T_{[j-1]}$$ denotes the first $$j-1$$ columns of $$T$$. Closed-form solutions to this problem for given $$p$$ are given in Merola.

In CSPCA the loadings are obtained iteratively by minimizing the norm of the approximation error of the residuals orthogonal to the previously computed SPCs. That is, letting $$X^\perp_{j-1} = X - \Pi_{T_{[j-1]}}X$$, the $$j^\text{th}$$ vector of loadings is obtained as the solution to:
$$\begin{align} v_j = \text{arg}\min\quad & \vert\vert X^\perp_{j-1} - \Pi_{t_j}X \vert\vert^2\\ \text{subject to}\quad & card\{v_j\} \leq p. \end{align}$$

Without the sparsity constraints, these problems are equivalent to Pearson's definition of PCA.

The PSPCA sparse loadings are obtained iteratively by projecting the first PC of the residual matrix $$X^\perp_{j-1}$$ onto subsets of the variables, that is, by solving the problems:
$$\begin{align} v_j = \text{arg}\min\quad & \vert\vert r_{j} - \Pi_{t_j}r_{j} \vert\vert^2\\ \text{subject to}\quad & card\{v_j\} \leq p, \end{align}$$

where $$r_{j}$$ is the first principal component of the residual matrix $$X^\perp_{j-1}$$.
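A minimal sketch of one such projection step, assuming the variable subset is already given (selecting the subset is the combinatorial part of the problem; the names below are illustrative, not from any published implementation):

```python
import numpy as np

def pspca_step(Xr, S):
    """One Projection SPCA step, sketched: project the first PC of the
    residual matrix Xr onto the variable subset S by least squares.

    Returns a sparse loading vector v (nonzero only on S) and the
    resulting component t = Xr @ v.
    """
    U, s, Vt = np.linalg.svd(Xr, full_matrices=False)
    r = U[:, 0] * s[0]                     # first PC (scores) of the residual
    coef, *_ = np.linalg.lstsq(Xr[:, S], r, rcond=None)
    v = np.zeros(Xr.shape[1])
    v[list(S)] = coef                      # loadings are zero outside S
    return v, Xr @ v

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 6))
X -= X.mean(axis=0)

v, t = pspca_step(X, [0, 2, 5])            # hypothetical subset of 3 variables
assert np.count_nonzero(v) <= 3            # cardinality-3 loadings
```

Because each step is an ordinary least-squares regression, standard regression routines suffice, which is the source of PSPCA's efficiency noted above.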

Computational considerations
As with most sparse problems, variable selection in SPCA is a computationally intractable, nonconvex NP-hard problem; therefore, greedy sub-optimal algorithms are required to find solutions.

Algorithms for conventional SPCA
The methodological and theoretical developments of sparse PCA, as well as its applications in scientific studies, have recently been reviewed in a survey paper. Several alternative approaches to the conventional SPCA problem have been proposed, including:
 * a regression framework,
 * a penalized matrix decomposition framework,
 * a convex relaxation/semidefinite programming framework,
 * a generalized power method framework,
 * an alternating maximization framework,
 * forward-backward greedy search and exact methods using branch-and-bound techniques,
 * a certifiably optimal branch-and-bound approach,
 * a Bayesian formulation framework,
 * a certifiably optimal mixed-integer semidefinite branch-and-cut approach.
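As an illustration of the thresholded power iteration idea behind one of the approaches listed above, here is a generic sketch (a simple heuristic in that spirit, not any specific published algorithm):

```python
import numpy as np

def truncated_power_iteration(Sigma, k, iters=200):
    """Greedy heuristic for a k-sparse leading eigenvector: alternate a
    power-method step with hard-thresholding to the k largest entries."""
    p = Sigma.shape[0]
    v = np.ones(p) / np.sqrt(p)            # dense starting vector
    for _ in range(iters):
        w = Sigma @ v                      # power-method step
        keep = np.argsort(np.abs(w))[-k:]  # retain the k largest coordinates
        v = np.zeros(p)
        v[keep] = w[keep]                  # hard-threshold the rest to zero
        v /= np.linalg.norm(v)             # renormalize to a unit vector
    return v

rng = np.random.default_rng(4)
A = rng.normal(size=(8, 8))
Sigma = A @ A.T                            # random covariance matrix

v = truncated_power_iteration(Sigma, 3)
assert np.count_nonzero(v) <= 3 and np.isclose(np.linalg.norm(v), 1.0)
```

The resulting value $$v^T \Sigma v$$ is a lower bound on the k-sparse largest eigenvalue; like all greedy heuristics in this list, it carries no optimality guarantee.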

Notes on Semidefinite Programming Relaxation
It has been proposed that sparse PCA can be approximated by semidefinite programming (SDP). If one drops the rank constraint and relaxes the cardinality constraint by a 1-norm convex constraint, one gets a semidefinite programming relaxation, which can be solved efficiently in polynomial time:

$$\begin{align} \max \quad & Tr(\Sigma V)\\ \text{subject to}\quad & Tr(V)=1\\ & \mathbf{1}^T |V|\mathbf{1} \leq k\\ & V\succeq 0. \end{align}$$

In the second constraint, $$\mathbf{1}$$ is a p×1 vector of ones, and $$|V|$$ is the matrix whose elements are the absolute values of the elements of $$V$$.

The optimal solution $$V$$ to the relaxed problem above is not guaranteed to have rank one. In that case, $$V$$ can be truncated to retain only its dominant eigenvector.
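A minimal sketch of this truncation step, applied to a toy positive semidefinite matrix standing in for a relaxed solution (illustrative only):

```python
import numpy as np

def rank_one_truncation(V):
    """Keep only the dominant eigenvector of a (possibly higher-rank)
    relaxed solution V, as when the SDP optimum is not rank one."""
    w, U = np.linalg.eigh(V)               # eigenvalues in ascending order
    v = U[:, -1]                           # eigenvector of the largest eigenvalue
    return v, w[-1] * np.outer(v, v)       # unit vector and rank-one surrogate

# Toy PSD matrix that is feasible-looking but not rank one:
V = np.diag([0.7, 0.2, 0.1])
v, V1 = rank_one_truncation(V)

assert np.isclose(abs(v[0]), 1.0)          # dominant direction is the first axis
```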

While the semidefinite program does not scale beyond n=300 covariates, it has been shown that a second-order cone relaxation of the semidefinite relaxation is almost as tight and successfully solves problems with thousands of covariates.

Computation of the LS SPCA solutions
LS SPCA requires that the optimal cardinality and subsets of variables forming each SPC are determined by some optimization criterion. In LS SPCA, a reasonable approach is to choose the lowest cardinality for which an SPC explains a given percentage of the cumulative variance explained by the corresponding standard PC.

Merola proposed a backward elimination algorithm for variable selection in USPCA and CSPCA, which can be slow with large datasets. With PSPCA the optimal subsets can be computed efficiently by using standard algorithms for regression feature selection. One possible strategy for computing the LS SPCs is to use PSPCA for variable selection and then compute the SPCs with USPCA or CSPCA.

Financial Data Analysis
If ordinary PCA is applied to a dataset where each input variable represents a different asset, it may generate principal components that are weighted combinations of all the assets. In contrast, sparse PCA produces principal components that are weighted combinations of only a few input assets, so their meaning is easy to interpret. Furthermore, a trading strategy based on these principal components involves fewer assets and hence lower transaction costs.

Biology
Consider a dataset where each input variable corresponds to a specific gene. Sparse PCA can produce a principal component that involves only a few genes, so researchers can focus on these specific genes for further analysis.

High-dimensional Hypothesis Testing
Contemporary datasets often have the number of input variables ($$p$$) comparable with or even much larger than the number of samples ($$n$$). It has been shown that if $$p/n$$ does not converge to zero, classical PCA is not consistent. In other words, if we let $$k=p$$ in the sparse PCA problem above, then the optimal value does not converge to the largest eigenvalue of the population covariance matrix as the sample size $$n\rightarrow\infty$$, and the optimal solution does not converge to the direction of maximum variance. But sparse PCA can retain consistency even if $$p\gg n.$$

The k-sparse largest eigenvalue (the optimal value of the sparse PCA problem above) can be used to discriminate an isometric model, where every direction has the same variance, from a spiked covariance model in a high-dimensional setting. Consider a hypothesis test where the null hypothesis specifies that the data $$X$$ are generated from a multivariate normal distribution with mean 0 and covariance equal to the identity matrix, and the alternative hypothesis specifies that the data $$X$$ are generated from a spiked model with signal strength $$\theta$$:

$$H_0: X \sim N(0, I_p), \quad H_1: X \sim N(0, I_p + \theta v v^T),$$ where $$v \in \mathbb{R}^p$$ has only $$k$$ non-zero coordinates. The largest k-sparse eigenvalue can discriminate between the two hypotheses if and only if $$\theta > \Theta(\sqrt{k \log(p)/n})$$.
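A sketch of how data under the two hypotheses might be simulated (the parameter values are illustrative, chosen only to make the spike visible):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k, theta = 200, 50, 5, 2.0

# H0: isotropic model, covariance I_p.
X0 = rng.normal(size=(n, p))

# H1: spiked model, covariance I_p + theta * v v^T with k-sparse unit v.
v = np.zeros(p)
v[:k] = 1.0 / np.sqrt(k)
cov = np.eye(p) + theta * np.outer(v, v)
X1 = rng.multivariate_normal(np.zeros(p), cov, size=n)

# The sample variance along the spike direction concentrates near
# 1 + theta under H1 but stays near 1 under H0.
s0 = (X0 @ v).var()
s1 = (X1 @ v).var()
assert s1 > s0
```

In practice $$v$$ is unknown, which is why the test statistic is the k-sparse largest eigenvalue rather than the variance along a known direction.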

Since computing the k-sparse eigenvalue is NP-hard, one can approximate it by the optimal value of the semidefinite programming relaxation above. In that case, we can discriminate between the two hypotheses if $$\theta > \Theta(\sqrt{k^2 \log(p)/n})$$. The additional $$\sqrt{k}$$ term cannot be improved by any other polynomial-time algorithm if the planted clique conjecture holds.

Conventional SPCA

 * amanpg - R package for Sparse PCA using the Alternating Manifold Proximal Gradient Method
 * elasticnet – R package for Sparse Estimation and Sparse PCA using Elastic-Nets
 * nsprcomp - R package for sparse and/or non-negative PCA based on thresholded power iterations
 * scikit-learn – Python library for machine learning which contains Sparse PCA and other techniques in the decomposition module.

LS SPCA
An R package with routines in C++ is available on GitHub.

Conventional SPCA computed on rank deficient matrices
The different objective functions used in the two approaches lead to very different solutions.

Conventional SPCs are the PCs of subsets of highly correlated variables. Instead, LS SPCs are combinations of variables that are unlikely to be highly correlated. Furthermore, conventional methods fail to identify the lowest-cardinality representation of the PCs of column-rank-deficient matrices, a task for which they were created.

An example of this shortcoming of conventional SPCA is given in Merola and Chen (2019). Consider a matrix with 100 observations on five perfectly collinear variables defined as

$$x_{ij} = (-1)^{i}\sqrt{j},\quad i = 1, \ldots, 100;\; j = 1,\ldots, 5.$$ The covariance matrix of these variables has rank one, and the only nonzero eigenvalue is equal to 15.2. The first PC explains all of the variance and can be written in terms of any one of the variables as $$x_j \sqrt{15.2/s_{jj}},\, j = 1, \ldots, 5$$, that is, as a cardinality-one component with a loading larger than one. Conventional SPCA, instead, finds that the variables with larger variance explain more variance than the other collinear variables, and that only the full-cardinality PC explains the maximum variance. This is illustrated in Table 1, which shows the optimal conventional SPCA results.
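The rank-one structure of this example is easy to verify numerically (here 15.2 is the rounded value of the exact eigenvalue $$\tfrac{100}{99}(1+2+3+4+5) = 1500/99 \approx 15.15$$ under the $$1/(n-1)$$ scaling):

```python
import numpy as np

# The perfectly collinear example: x_ij = (-1)^i * sqrt(j),
# 100 observations on 5 variables.
n, p = 100, 5
i = np.arange(1, n + 1)[:, None]
j = np.arange(1, p + 1)[None, :]
X = (-1.0) ** i * np.sqrt(j)

Sigma = np.cov(X, rowvar=False)       # 5 x 5 covariance matrix
w = np.linalg.eigvalsh(Sigma)         # eigenvalues in ascending order

# All eigenvalues but the largest vanish (rank one), and the nonzero
# eigenvalue equals 1500/99, i.e. about 15.2.
assert np.all(np.abs(w[:-1]) < 1e-10)
assert np.isclose(w[-1], 1500 / 99)
```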

The results of conventional SPCA are even more bizarre if it is applied to the correlation matrix of the above dataset. The correlation matrix is a 5 × 5 matrix of ones, and the unit-norm scaled variables are identical columns. Nevertheless, in conventional SPCA linear combinations of a larger number of these identical variables are deemed to explain more variance than combinations of fewer of them, as shown in Table 2.

Comparison of LS and conventional SPCA results
With due differences, the preference of conventional SPCA for correlated variables can also be observed on datasets with groups of correlated variables. As a general rule, conventional SPCA seeks subsets of highly correlated variables. This is not the case for LS SPCA, because the data reconstruction approach discourages the presence of correlated variables in the subsets forming the SPCs.

As an example, Table 3 shows the results of USPCA, CSPCA and PSPCA applied to four row-rank-deficient matrices. For each of the first three SPCs, the table shows the cardinality and the cumulative variance explained, as a percentage of the cumulative variance explained by the corresponding PCs. The reduction in cardinality is striking considering that over 96% of the variance is explained, and in some cases 100% of it. The results for the three variants of LS SPCA are quite similar. Figure 1 shows the norm, relative cumulative variance explained, and correlation with the first PC of the first SPCs obtained with conventional SPCA and PSPCA on four datasets. The first two datasets are full column-rank and the last two are not. More details on these datasets can be found in Merola (2019). It is easy to see how the PSPCs explain more variance and reach a correlation close to one with the PC at much smaller cardinality, notwithstanding the much larger norm of the conventional SPCA components. As shown in Table 3, the datasets Khanh and Rama have many fewer rows than columns. For this reason, the PSPCs are identical to the PCs when their cardinality is equal to the rank of the matrices. Instead, the conventional SPCs are computed with a much larger cardinality than the matrices' rank (hence including perfectly correlated variables) without reaching the performance of the LS SPCs.