Whitening transformation

A whitening transformation or sphering transformation is a linear transformation that transforms a vector of random variables with a known covariance matrix into a set of new variables whose covariance is the identity matrix, meaning that they are uncorrelated and each have variance 1. The transformation is called "whitening" because it changes the input vector into a white noise vector.

Several other transformations are closely related to whitening:

 * the decorrelation transform removes only the correlations but leaves variances intact,
 * the standardization transform sets variances to 1 but leaves correlations intact,
 * a coloring transformation transforms a vector of white random variables into a random vector with a specified covariance matrix.
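Two of these related transforms can be illustrated numerically. The sketch below, which assumes numpy and an arbitrary example covariance matrix, standardizes sampled data (unit variances, correlations preserved) and colors white noise with the Cholesky factor of a target covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example covariance matrix (hypothetical values) with unequal
# variances and nonzero correlation.
sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# Sample correlated data (rows = observations).
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=sigma, size=100_000)

# Standardization: divide each variable by its standard deviation.
# Variances become 1, but the correlation structure is unchanged.
z = x / x.std(axis=0, ddof=1)

# Coloring: map white noise w to a vector with covariance sigma using
# the Cholesky factor L of sigma, so that Cov(L w) = L I L^T = sigma.
w = rng.standard_normal((100_000, 2))
colored = w @ np.linalg.cholesky(sigma).T

print(np.cov(z, rowvar=False))        # unit variances, correlation intact
print(np.cov(colored, rowvar=False))  # approximately sigma
```

The standardized data still shows the original correlation (here 1.2/√4 = 0.6), while the colored white noise acquires approximately the specified covariance.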

Definition
Suppose $$X$$ is a random (column) vector with non-singular covariance matrix $$\Sigma$$ and mean $$0$$. Then the transformation $$Y = W X$$ with a whitening matrix $$W$$ satisfying the condition $$W^\mathrm{T} W = \Sigma^{-1}$$ yields the whitened random vector $$Y$$ whose covariance is the identity matrix, i.e. with unit variances and zero correlations.
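The defining condition can be verified numerically. A minimal sketch with numpy, using a hypothetical 2×2 covariance matrix and the symmetric inverse square root as the whitening matrix:

```python
import numpy as np

# Hypothetical covariance matrix (any symmetric positive-definite matrix works).
sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# One valid whitening matrix: the inverse symmetric square root of sigma,
# built from its eigendecomposition.
evals, evecs = np.linalg.eigh(sigma)
w_mat = evecs @ np.diag(evals ** -0.5) @ evecs.T   # W = sigma^{-1/2}

# Check the defining condition W^T W = sigma^{-1} ...
assert np.allclose(w_mat.T @ w_mat, np.linalg.inv(sigma))

# ... which implies Cov(Y) = W sigma W^T = I for Y = W X.
assert np.allclose(w_mat @ sigma @ w_mat.T, np.eye(2))
```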

There are infinitely many possible whitening matrices $$W$$ that all satisfy the above condition. Commonly used choices are $$W = \Sigma^{-1/2}$$ (Mahalanobis or ZCA whitening), $$W = L^\mathrm{T}$$ where $$L$$ is the Cholesky factor of $$\Sigma^{-1}$$ (Cholesky whitening), or $$W = \Lambda^{-1/2} U^\mathrm{T}$$ obtained from the eigendecomposition $$\Sigma = U \Lambda U^\mathrm{T}$$ (PCA whitening).
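The three common choices can be constructed side by side. A sketch with numpy, assuming the same hypothetical covariance matrix, showing that all three satisfy the whitening condition:

```python
import numpy as np

# Hypothetical example covariance matrix.
sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
evals, evecs = np.linalg.eigh(sigma)

# ZCA / Mahalanobis whitening: W = sigma^{-1/2} (symmetric inverse square root).
w_zca = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Cholesky whitening: W = L^T, where sigma^{-1} = L L^T.
l_factor = np.linalg.cholesky(np.linalg.inv(sigma))
w_chol = l_factor.T

# PCA whitening: W = Lambda^{-1/2} U^T from sigma = U Lambda U^T.
w_pca = np.diag(evals ** -0.5) @ evecs.T

# All three satisfy W^T W = sigma^{-1}, hence W sigma W^T = I.
for w in (w_zca, w_chol, w_pca):
    assert np.allclose(w @ sigma @ w.T, np.eye(2))
```

The matrices differ (e.g. only ZCA's is symmetric), but each maps the covariance to the identity.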

Optimal whitening transforms can be singled out by investigating the cross-covariance and cross-correlation of $$X$$ and $$Y$$. For example, the unique optimal whitening transformation achieving maximal component-wise correlation between original $$X$$ and whitened $$Y$$ is produced by the whitening matrix $$W = P^{-1/2} V^{-1/2}$$, where $$P$$ is the correlation matrix and $$V$$ the diagonal variance matrix.
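This correlation-based construction can be sketched directly. Assuming numpy and the same hypothetical covariance matrix, the code splits $$\Sigma$$ into its variance and correlation parts and checks that $$W = P^{-1/2} V^{-1/2}$$ is indeed a valid whitening matrix:

```python
import numpy as np

# Hypothetical example covariance matrix.
sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# Split sigma into the diagonal variance matrix V and correlation matrix P:
# sigma = V^{1/2} P V^{1/2}.
v_inv_sqrt = np.diag(np.diag(sigma) ** -0.5)   # V^{-1/2}
p = v_inv_sqrt @ sigma @ v_inv_sqrt            # correlation matrix P

# P^{-1/2}: inverse symmetric square root of the correlation matrix.
evals, evecs = np.linalg.eigh(p)
p_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Whitening matrix W = P^{-1/2} V^{-1/2}.
w = p_inv_sqrt @ v_inv_sqrt

# W is a valid whitening matrix: W sigma W^T = I.
assert np.allclose(w @ sigma @ w.T, np.eye(2))
```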

Whitening a data matrix
Whitening a data matrix follows the same transformation as for random variables. An empirical whitening transform is obtained by estimating the covariance (e.g. by maximum likelihood) and subsequently constructing a corresponding estimated whitening matrix (e.g. by Cholesky decomposition).
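The empirical procedure can be sketched end to end. Assuming numpy and simulated data with a hypothetical true covariance, the code estimates the covariance from the sample, builds a Cholesky whitening matrix from the estimate, and applies it to the centered data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data matrix: n observations (rows) of a correlated 2-vector,
# drawn from a hypothetical true covariance.
true_sigma = np.array([[4.0, 1.2],
                       [1.2, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], true_sigma, size=50_000)

# Estimate the mean and covariance from the data, center the data,
# then whiten with the Cholesky factor of the estimated inverse covariance.
x_centered = x - x.mean(axis=0)
sigma_hat = np.cov(x_centered, rowvar=False)
w_hat = np.linalg.cholesky(np.linalg.inv(sigma_hat)).T

y = x_centered @ w_hat.T   # empirically whitened data matrix

print(np.cov(y, rowvar=False))  # the 2x2 identity, up to rounding
```

Because the whitening matrix is built from the same covariance estimate used to measure the result, the sample covariance of the whitened data equals the identity up to floating-point error.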

High-dimensional whitening
High-dimensional whitening generalizes the whitening procedure to more general spaces, where $$X$$ is usually assumed to be a random function or another random object in a Hilbert space $$H$$. One of the main issues in extending whitening to infinite dimensions is that the covariance operator has an unbounded inverse in $$H$$. Nevertheless, if one assumes that the Picard condition holds for $$X$$ in the range space of the covariance operator, whitening becomes possible. A whitening operator can then be defined from a factorization of the Moore–Penrose inverse of the covariance operator, which acts effectively on Karhunen–Loève-type expansions of $$X$$. The advantage of these whitening transformations is that they can be optimized according to the underlying topological properties of the data (smoothness, continuity and contiguity), thus producing more robust whitening representations. High-dimensional features of the data can be exploited through kernel regressors or basis function systems.

R implementation
An implementation of several whitening procedures in R, including ZCA, PCA and CCA whitening, is available in the "whitening" R package published on CRAN. The R package "pfica" allows the computation of high-dimensional whitening representations using basis function systems (B-splines, Fourier basis, etc.).