
Least Squares Sparse PCA (LS SPCA)

LS SPCA is an approach to Sparse PCA which aims to create sparse principal components (SPCs) with the same optimality property that Karl Pearson's original definition of Principal Component Analysis assigns to the principal components (PCs). Hence, the LS SPCs are orthogonal and sequentially best approximate the data matrix in a least-squares sense.
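Pearson's definition can be illustrated with a short numerical sketch (this is not the LS SPCA algorithm itself, only the dense-PCA property it is built on): the first $k$ PCs, obtained here via the SVD, give the rank-$k$ linear reconstruction of the centered data matrix with the smallest least-squares error, and their scores are mutually orthogonal.

```python
import numpy as np

# Illustrative sketch of Pearson's view of PCA (assumed setup, not LS SPCA itself):
# the first k PCs yield the best rank-k least-squares approximation of the
# centered data matrix X, computable from the SVD.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
X = X - X.mean(axis=0)            # center the columns

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
scores = U[:, :k] * s[:k]         # the first k principal components
loadings = Vt[:k]                 # their (dense) coefficient vectors
X_hat = scores @ loadings         # best rank-k approximation of X

# The PC scores are mutually orthogonal ...
assert np.allclose(scores.T @ scores, np.diag(s[:k] ** 2))
# ... and the reconstruction error is the sum of the discarded squared
# singular values, the smallest achievable by any rank-k approximation.
err = np.linalg.norm(X - X_hat) ** 2
assert np.isclose(err, np.sum(s[k:] ** 2))
```

LS SPCA retains this approximation criterion while constraining the loadings to be sparse, which is what distinguishes it from the variance-based methods described next.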

Conventional SPCA methods are derived from Harold Hotelling's definition of PCA, by which the PCs are the linear combinations of the variables with unit $L_2$-norm, mutually orthogonal coefficients which sequentially have the largest variance (measured by the $L_2$ norm of the PCs). While the two definitions of PCA lead to the same solution, this is no longer true once sparsity constraints are added. Conventional SPCs are the PCs of subsets of variables chosen so as to be as highly correlated as possible (and hence to have maximal variance). These methods were created to be applied to very large matrices and have a number of drawbacks which make them unappealing for data exploration.
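Hotelling's definition can likewise be sketched numerically (an assumed illustration of the dense case, not of any particular SPCA method): the PC coefficient vectors are the unit-norm, mutually orthogonal directions that sequentially maximize the variance of the resulting linear combinations, and they coincide with the eigenvectors of the sample covariance matrix.

```python
import numpy as np

# Illustrative sketch of Hotelling's definition: the first PC coefficient
# vector is the unit-L2-norm direction maximizing the variance of the
# linear combination; it is the leading eigenvector of the covariance matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
X = X - X.mean(axis=0)

S = X.T @ X / (X.shape[0] - 1)     # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]  # sort eigenpairs by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

a1 = eigvecs[:, 0]                 # first coefficient vector: unit L2 norm
pc1 = X @ a1                       # first PC; its variance is the top eigenvalue
assert np.isclose(a1 @ a1, 1.0)
assert np.isclose(pc1.var(ddof=1), eigvals[0])

# Any other unit-norm direction gives a combination with smaller variance.
b = rng.standard_normal(4)
b /= np.linalg.norm(b)
assert (X @ b).var(ddof=1) <= eigvals[0] + 1e-12
```

In the dense case this eigenvector solution coincides with the least-squares solution above; the point made in this section is that the two criteria diverge as soon as sparsity constraints are imposed on the coefficients.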

A large number of other SPCA definitions and algorithms have also been suggested, which suffer from similar drawbacks.