Singular value decomposition



In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix into a rotation, followed by a rescaling, followed by another rotation. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any $m \times n$ matrix. It is related to the polar decomposition.

Specifically, the singular value decomposition of an $m \times n$ complex matrix $\mathbf M$ is a factorization of the form $$\mathbf{M} = \mathbf{U\Sigma V^*},$$ where $\mathbf U$ is an $m \times m$ complex unitary matrix, $\mathbf \Sigma$ is an $m \times n$ rectangular diagonal matrix with non-negative real numbers on the diagonal, $\mathbf V$ is an $n \times n$ complex unitary matrix, and $\mathbf V^*$ is the conjugate transpose of $\mathbf V$. Such a decomposition always exists for any complex matrix. If $\mathbf M$ is real, then $\mathbf U$ and $\mathbf V$ can be guaranteed to be real orthogonal matrices; in such contexts, the SVD is often denoted $\mathbf U \mathbf \Sigma \mathbf V^\mathrm{T}.$

The diagonal entries $\sigma_i = \Sigma_{i i}$ of $\mathbf \Sigma$ are uniquely determined by $\mathbf M$ and are known as the singular values of $\mathbf M$. The number of non-zero singular values is equal to the rank of $\mathbf M$. The columns of $\mathbf U$ and the columns of $\mathbf V$ are called left-singular vectors and right-singular vectors of $\mathbf M$, respectively. They form two sets of orthonormal bases $\mathbf u_1, \ldots, \mathbf u_m$ and $\mathbf v_1, \ldots, \mathbf v_n,$ and if they are sorted so that the singular values $\sigma_i$ with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as

$$ \mathbf{M} = \sum_{i=1}^{r}\sigma_i\mathbf{u}_i\mathbf{v}_i^{*}, $$

where $r \leq \min\{m,n\}$ is the rank of $\mathbf M.$

The SVD is not unique; however, it is always possible to choose the decomposition such that the singular values $\Sigma_{i i}$ are in descending order. In this case, $\mathbf \Sigma$ (but not $\mathbf U$ and $\mathbf V$) is uniquely determined by $\mathbf M.$

The term sometimes refers to the compact SVD, a similar decomposition $\mathbf M = \mathbf{U\Sigma V}^*$ in which $\mathbf \Sigma$ is square diagonal of size $r \times r,$ where $r \leq \min\{m,n\}$ is the rank of $\mathbf M,$ and has only the non-zero singular values. In this variant, $\mathbf U$ is an $m \times r$ semi-unitary matrix and $\mathbf{V}$ is an $n \times r$ semi-unitary matrix, such that $\mathbf U^* \mathbf U = \mathbf V^* \mathbf V = \mathbf I_r.$

Mathematical applications of the SVD include computing the pseudoinverse, matrix approximation, and determining the rank, range, and null space of a matrix. The SVD is also extremely useful in all areas of science, engineering, and statistics, such as signal processing, least squares fitting of data, and process control.
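As an illustration, the factorization can be computed numerically with standard libraries. The following minimal Python/NumPy sketch (an illustrative addition; the random example matrix is not from the article) computes a full SVD and checks the defining properties.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 5))                   # an arbitrary real 4 x 5 matrix

U, s, Vh = np.linalg.svd(M, full_matrices=True)   # U: 4x4, s: (4,), Vh: 5x5
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)              # rectangular diagonal matrix

assert np.allclose(M, U @ Sigma @ Vh)             # M = U Sigma V*
assert np.allclose(U.T @ U, np.eye(4))            # U is orthogonal (real case)
assert np.allclose(Vh @ Vh.T, np.eye(5))          # V is orthogonal (real case)
```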

Rotation, coordinate scaling, and reflection
In the special case when $\mathbf M$ is an $m \times m$ real square matrix, the matrices $\mathbf U$ and $\mathbf V^*$ can be chosen to be real $m \times m$ matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as $\mathbf A,$ as a linear transformation $\mathbf x \mapsto \mathbf{Ax}$ of the space $\mathbf R^m,$ the matrices $\mathbf U$ and $\mathbf V^*$ represent rotations or reflections of the space, while $\mathbf \Sigma$ represents the scaling of each coordinate $\mathbf x_i$ by the factor $\sigma_i.$ Thus the SVD decomposition breaks down any linear transformation of $\mathbf R^m$ into a composition of three geometrical transformations: a rotation or reflection ($\mathbf V^*$), followed by a coordinate-by-coordinate scaling ($\mathbf \Sigma$), followed by another rotation or reflection ($\mathbf U$).

In particular, if $\mathbf M$ has a positive determinant, then $\mathbf U$ and $\mathbf V^*$ can be chosen to be both rotations with reflections, or both rotations without reflections. If the determinant is negative, exactly one of them will have a reflection. If the determinant is zero, each can be independently chosen to be of either type.

If the matrix $\mathbf M$ is real but not square, namely $m\times n$ with $m \neq n,$ it can be interpreted as a linear transformation from $\mathbf R^n$ to $\mathbf R^m.$ Then $\mathbf U$ and $\mathbf V^*$ can be chosen to be rotations/reflections of $\mathbf R^m$ and $\mathbf R^n,$ respectively; and $\mathbf \Sigma,$ besides scaling the first $\min\{m,n\}$ coordinates, also extends the vector with zeros, i.e. removes trailing coordinates, so as to turn $\mathbf R^n$ into $\mathbf R^m.$

Singular values as semiaxes of an ellipse or ellipsoid
The singular values can be interpreted as the magnitudes of the semiaxes of an ellipse in 2D. This concept can be generalized to $n$-dimensional Euclidean space, with the singular values of any $n \times n$ square matrix being viewed as the magnitudes of the semiaxes of an $n$-dimensional ellipsoid. Similarly, the singular values of any $m \times n$ matrix can be viewed as the magnitudes of the semiaxes of an $n$-dimensional ellipsoid in $m$-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode the magnitudes of the semiaxes, while singular vectors encode direction. See below for further details.

The columns of $\mathbf U$ and $\mathbf V$ are orthonormal bases
Since $\mathbf U$ and $\mathbf V^*$ are unitary, the columns of each of them form a set of orthonormal vectors, which can be regarded as basis vectors. The matrix $\mathbf M$ maps the basis vector $\mathbf V_i$ to the stretched unit vector $\sigma_i \mathbf U_i.$ By the definition of a unitary matrix, the same is true for their conjugate transposes $\mathbf U^*$ and $\mathbf V,$ except the geometric interpretation of the singular values as stretches is lost. In short, the columns of $\mathbf U,$ $\mathbf U^*,$ $\mathbf V,$ and $\mathbf V^*$ are orthonormal bases. When $\mathbf M$ is a positive-semidefinite Hermitian matrix, $\mathbf U$ and $\mathbf V$ are both equal to the unitary matrix used to diagonalize $\mathbf M.$ However, when $\mathbf M$ is not positive-semidefinite and Hermitian but still diagonalizable, its eigendecomposition and singular value decomposition are distinct.

Relation to the four fundamental subspaces

 * The first $r$ columns of $\mathbf U$ are a basis of the column space of $\mathbf M$.
 * The last $m-r$ columns of $\mathbf U$ are a basis of the null space of $\mathbf M^*$.
 * The first $r$ columns of $\mathbf V$ are a basis of the column space of $\mathbf M^*$ (the row space of $\mathbf M$ in the real case).
 * The last $n-r$ columns of $\mathbf V$ are a basis of the null space of $\mathbf M$.

Geometric meaning
Because $\mathbf U$ and $\mathbf V$ are unitary, we know that the columns $\mathbf U_1, \ldots, \mathbf U_m$ of $\mathbf U$ yield an orthonormal basis of $K^m$ and the columns $\mathbf V_1, \ldots, \mathbf V_n$ of $\mathbf V$ yield an orthonormal basis of $K^n$ (with respect to the standard scalar products on these spaces).

The linear transformation

$$ T : \left\{\begin{aligned} K^n &\to K^m \\ x &\mapsto \mathbf{M}x \end{aligned}\right. $$

has a particularly simple description with respect to these orthonormal bases: we have

$$ T(\mathbf{V}_i) = \sigma_i \mathbf{U}_i, \qquad i = 1, \ldots, \min(m, n), $$

where $\sigma_i$ is the $i$-th diagonal entry of $\mathbf \Sigma,$ and $T(\mathbf V_i) = 0$ for $i > \min(m,n).$

The geometric content of the SVD theorem can thus be summarized as follows: for every linear map $T : K^n \to K^m$ one can find orthonormal bases of $K^n$ and $K^m$ such that $T$ maps the $i$-th basis vector of $K^n$ to a non-negative multiple of the $i$-th basis vector of $K^m,$ and sends the leftover basis vectors to zero. With respect to these bases, the map $T$ is therefore represented by a diagonal matrix with non-negative real diagonal entries.

To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphere $S$ of radius one in $\mathbf R^n.$ The linear map $T$ maps this sphere onto an ellipsoid in $\mathbf R^m.$ Non-zero singular values are simply the lengths of the semi-axes of this ellipsoid. Especially when $n = m,$ and all the singular values are distinct and non-zero, the SVD of the linear map $T$ can be easily analyzed as a succession of three consecutive moves: consider the ellipsoid $T(S)$ and specifically its axes; then consider the directions in $\mathbf R^n$ sent by $T$ onto these axes. These directions happen to be mutually orthogonal. Apply first an isometry $\mathbf V^*$ sending these directions to the coordinate axes of $\mathbf R^n.$ On a second move, apply an endomorphism $\mathbf D$ diagonalized along the coordinate axes and stretching or shrinking in each direction, using the semi-axes lengths of $T(S)$ as stretching coefficients. The composition $\mathbf D \circ \mathbf V^*$ then sends the unit-sphere onto an ellipsoid isometric to $T(S).$ To define the third and last move, apply an isometry $\mathbf U$ to this ellipsoid to obtain $T(S).$ As can be easily checked, the composition $\mathbf U \circ \mathbf D \circ \mathbf V^*$ coincides with $T.$
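To make the three-move picture concrete, the following Python/NumPy sketch (illustrative; the example matrix is arbitrary and not from the article) maps points of the unit circle through $\mathbf V^*$, then the diagonal scaling, then $\mathbf U$, and checks that the composition agrees with applying $\mathbf M$ directly.

```python
import numpy as np

M = np.array([[3.0, 1.0],
              [1.0, 2.0]])                           # an arbitrary 2 x 2 example

U, s, Vh = np.linalg.svd(M)

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # points on the unit circle S

step1 = Vh @ circle                 # isometry V*: rotate/reflect the circle
step2 = np.diag(s) @ step1          # scale each coordinate by a singular value
step3 = U @ step2                   # isometry U: rotate/reflect into final position

assert np.allclose(step3, M @ circle)   # U ∘ D ∘ V* coincides with M
# The image M @ circle is an ellipse whose semi-axis lengths are s[0] and s[1].
```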

Example
Consider the $4 \times 5$ matrix

$$ \mathbf{M} = \begin{bmatrix} 1 & 0 & 0 & 0 & 2 \\ 0 & 0 & 3 & 0 & 0 \\  0 & 0 & 0 & 0 & 0 \\  0 & 2 & 0 & 0 & 0 \end{bmatrix} $$

A singular value decomposition of this matrix is given by $\mathbf U \mathbf \Sigma \mathbf V^*$

$$\begin{align} \mathbf{U} &= \begin{bmatrix} \color{Green}0 & \color{Blue}-1 & \color{Cyan}0 & \color{Emerald}0 \\ \color{Green}-1 & \color{Blue}0 & \color{Cyan}0 & \color{Emerald}0 \\ \color{Green}0 & \color{Blue}0 & \color{Cyan}0 & \color{Emerald}-1 \\ \color{Green}0 & \color{Blue}0 & \color{Cyan}-1 & \color{Emerald}0 \end{bmatrix} \\[6pt]

\mathbf \Sigma &= \begin{bmatrix} 3 &      0  &   0 &                    0  & \color{Gray}\mathit{0} \\ 0 & \sqrt{5} &  0 &                    0  & \color{Gray}\mathit{0} \\ 0 &      0  &   2 &                    0  & \color{Gray}\mathit{0} \\ 0 &      0  &   0 & \color{Red}\mathbf{0} & \color{Gray}\mathit{0} \end{bmatrix} \\[6pt]

\mathbf{V}^* &= \begin{bmatrix} \color{Violet}0       & \color{Violet}0    & \color{Violet}-1  & \color{Violet}0  &\color{Violet}0 \\ \color{Plum}-\sqrt{0.2}& \color{Plum}0     & \color{Plum}0    & \color{Plum}0    &\color{Plum}-\sqrt{0.8} \\ \color{Magenta}0      & \color{Magenta}-1  & \color{Magenta}0 & \color{Magenta}0 &\color{Magenta}0 \\ \color{Orchid}0          & \color{Orchid}0  & \color{Orchid}0  & \color{Orchid}1  &\color{Orchid}0 \\ \color{Purple} - \sqrt{0.8} & \color{Purple}0 & \color{Purple}0 & \color{Purple}0 & \color{Purple}\sqrt{0.2} \end{bmatrix} \end{align}$$

The scaling matrix $\mathbf \Sigma$ is zero outside of the diagonal (grey italics) and one diagonal element is zero (red bold, light blue bold in dark mode). Furthermore, because the matrices $\mathbf U$ and $\mathbf V^*$ are unitary, multiplying by their respective conjugate transposes yields identity matrices, as shown below. In this case, because $\mathbf U$ and $\mathbf V^*$ are real valued, each is an orthogonal matrix.

$$\begin{align} \mathbf{U} \mathbf{U}^* &= \begin{bmatrix} 1 & 0 & 0 & 0 \\   0 & 1 & 0 & 0 \\    0 & 0 & 1 & 0 \\    0 & 0 & 0 & 1  \end{bmatrix} = \mathbf{I}_4 \\[6pt] \mathbf{V} \mathbf{V}^* &= \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\   0 & 1 & 0 & 0 & 0 \\    0 & 0 & 1 & 0 & 0 \\    0 & 0 & 0 & 1 & 0 \\    0 & 0 & 0 & 0 & 1  \end{bmatrix} = \mathbf{I}_5 \end{align}$$

This particular singular value decomposition is not unique. Choosing $\mathbf V^*$ such that

$$\mathbf{V}^* = \begin{bmatrix} \color{Violet}0   &  \color{Violet}1 & \color{Violet}0  &       \color{Violet}0    &        \color{Violet}0    \\ \color{Plum}0   &    \color{Plum}0 & \color{Plum}1    &         \color{Plum}0    &          \color{Plum}0    \\ \color{Magenta}\sqrt{0.2} & \color{Magenta}0 & \color{Magenta}0 &     \color{Magenta}0    & \color{Magenta}\sqrt{0.8} \\ \color{Orchid}\sqrt{0.4} & \color{Orchid}0 & \color{Orchid}0  & \color{Orchid}\sqrt{0.5} & \color{Orchid}-\sqrt{0.1}  \\ \color{Purple}-\sqrt{0.4} & \color{Purple}0 & \color{Purple}0  & \color{Purple}\sqrt{0.5} &  \color{Purple}\sqrt{0.1} \end{bmatrix}$$

is also a valid singular value decomposition.
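The example above can be checked numerically; the sketch below (Python/NumPy, illustrative) recovers the singular values 3, √5, 2, 0 of this matrix.

```python
import numpy as np

M = np.array([[1, 0, 0, 0, 2],
              [0, 0, 3, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

U, s, Vh = np.linalg.svd(M)
print(s)            # [3.  2.23606798  2.  0.]  i.e. 3, sqrt(5), 2, 0
# The U and Vh returned here need not match the matrices printed above:
# the decomposition is unique only up to the sign/phase freedoms discussed below.
```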

Singular values, singular vectors, and their relation to the SVD
A non-negative real number $\sigma$ is a singular value for $\mathbf M$ if and only if there exist unit-length vectors $\mathbf u$ in $K^m$ and $\mathbf v$ in $K^n$ such that

$$\begin{align} \mathbf{M v} &= \sigma \mathbf{u}, \\[3mu] \mathbf M^*\mathbf u &= \sigma \mathbf{v}. \end{align}$$

The vectors $\mathbf u$ and $\mathbf v$ are called left-singular and right-singular vectors for $\sigma,$ respectively.

In any singular value decomposition

$$ \mathbf M = \mathbf U \mathbf \Sigma \mathbf V^* $$

the diagonal entries of $\mathbf \Sigma$ are equal to the singular values of $\mathbf M.$ The first $p = \min(m,n)$ columns of $\mathbf U$ and $\mathbf V$ are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that:
 * An $m \times n$ matrix $\mathbf M$ has at most $p$ distinct singular values.
 * It is always possible to find a unitary basis $\mathbf U$ for $K^m$ with a subset of basis vectors spanning the left-singular vectors of each singular value of $\mathbf M.$
 * It is always possible to find a unitary basis $\mathbf V$ for $K^n$ with a subset of basis vectors spanning the right-singular vectors of each singular value of $\mathbf M.$

A singular value for which we can find two left (or right) singular vectors that are linearly independent is called degenerate. If $\mathbf u_1$ and $\mathbf u_2$ are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. The similar statement is true for right-singular vectors. The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns of $\mathbf U$ and $\mathbf V$ corresponding to diagonal elements of $\mathbf \Sigma$ all with the same value $\sigma.$

As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in the cokernel and kernel, respectively, of $\mathbf M,$ which by the rank–nullity theorem cannot be the same dimension if $m \neq n.$ Even if all singular values are nonzero, if $m > n$ then the cokernel is nontrivial, in which case $\mathbf U$ is padded with $m - n$ orthogonal vectors from the cokernel. Conversely, if $m < n,$ then $\mathbf V$ is padded by $n - m$ orthogonal vectors from the kernel. However, if the singular value of $0$ exists, the extra columns of $\mathbf U$ or $\mathbf V$ already appear as left or right-singular vectors.

Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factor $e^{i\varphi}$ (for the real case up to a sign). Consequently, if all singular values of a square matrix $\mathbf M$ are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column of $\mathbf U$ by a unit-phase factor and simultaneous multiplication of the corresponding column of $\mathbf V$ by the same unit-phase factor. In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of both $\mathbf U$ and $\mathbf V$ spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors of $\mathbf U$ and $\mathbf V$ spanning the kernel and cokernel, respectively, of $\mathbf M.$

Relation to eigenvalue decomposition
The singular value decomposition is very general in the sense that it can be applied to any $m \times n$ matrix, whereas eigenvalue decomposition can only be applied to square diagonalizable matrices. Nevertheless, the two decompositions are related.

If $\mathbf M$ has SVD $\mathbf M = \mathbf U \mathbf \Sigma \mathbf V^*,$ the following two relations hold:

$$\begin{align} \mathbf{M}^* \mathbf{M} &= \mathbf{V} \mathbf \Sigma^* \mathbf{U}^*\, \mathbf{U} \mathbf \Sigma \mathbf{V}^* = \mathbf{V} (\mathbf \Sigma^* \mathbf \Sigma) \mathbf{V}^*, \\[3mu] \mathbf{M} \mathbf{M}^* &= \mathbf{U} \mathbf \Sigma \mathbf{V}^*\, \mathbf{V} \mathbf \Sigma^* \mathbf{U}^* = \mathbf{U} (\mathbf \Sigma \mathbf \Sigma^*) \mathbf{U}^*. \end{align}$$

The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently:


 * The columns of $\mathbf V$ (referred to as right-singular vectors) are eigenvectors of $\mathbf M^* \mathbf M.$
 * The columns of $\mathbf U$ (referred to as left-singular vectors) are eigenvectors of $\mathbf M \mathbf M^*.$
 * The non-zero elements of $\mathbf \Sigma$ (non-zero singular values) are the square roots of the non-zero eigenvalues of $\mathbf M^* \mathbf M$ or $\mathbf M \mathbf M^*.$
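These relations are easy to verify numerically; the following Python/NumPy sketch (illustrative, with an arbitrary random complex matrix) compares the eigenvalues of $\mathbf M^* \mathbf M$ with the squared singular values of $\mathbf M$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

s = np.linalg.svd(M, compute_uv=False)           # singular values of M
evals = np.linalg.eigvalsh(M.conj().T @ M)       # eigenvalues of M* M (Hermitian)

# The non-zero eigenvalues of M* M are the squares of the singular values.
assert np.allclose(np.sort(evals)[::-1], s**2)
```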

In the special case of $\mathbf M$ being a normal matrix, and thus also square, the spectral theorem ensures that it can be unitarily diagonalized using a basis of eigenvectors, and thus decomposed as $\mathbf M = \mathbf U\mathbf D\mathbf U^*$ for some unitary matrix $\mathbf U$ and diagonal matrix $\mathbf D$ with complex elements $\sigma_i$ along the diagonal. When $\mathbf M$ is positive semi-definite, $\sigma_i$ will be non-negative real numbers so that the decomposition $\mathbf M = \mathbf U \mathbf D \mathbf U^*$ is also a singular value decomposition. Otherwise, it can be recast as an SVD by moving the phase $e^{i\varphi}$ of each $\sigma_i$ to either its corresponding $\mathbf V_i$ or $\mathbf U_i.$ The natural connection of the SVD to non-normal matrices is through the polar decomposition theorem: $\mathbf M = \mathbf S \mathbf R,$ where $\mathbf S = \mathbf U \mathbf\Sigma \mathbf U^*$ is positive semidefinite and normal, and $\mathbf R = \mathbf U \mathbf V^*$ is unitary.

Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD of $\mathbf M,$ while related, differ: the eigenvalue decomposition is $\mathbf M = \mathbf U \mathbf D \mathbf U^{-1},$ where $\mathbf U$ is not necessarily unitary and $\mathbf D$ is not necessarily positive semi-definite, while the SVD is $\mathbf M = \mathbf U \mathbf \Sigma \mathbf V^*,$ where $\mathbf \Sigma$ is diagonal and positive semi-definite, and $\mathbf U$ and $\mathbf V$ are unitary matrices that are not necessarily related except through the matrix $\mathbf M.$ While only non-defective square matrices have an eigenvalue decomposition, any $m \times n$ matrix has an SVD.

Pseudoinverse
The singular value decomposition can be used for computing the pseudoinverse of a matrix. The pseudoinverse of the matrix $\mathbf M$ with singular value decomposition $\mathbf M = \mathbf U \mathbf \Sigma \mathbf V^*$ is

$$ \mathbf M^+ = \mathbf V \boldsymbol \Sigma^+ \mathbf U^\ast, $$

where $$\boldsymbol \Sigma^+$$ is the pseudoinverse of $$\boldsymbol \Sigma$$, which is formed by replacing every non-zero diagonal entry by its reciprocal and transposing the resulting matrix. The pseudoinverse is one way to solve linear least squares problems.
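A minimal sketch of this construction in Python/NumPy (illustrative; the helper name and tolerance are not from the article, and in practice one would call np.linalg.pinv, which uses the same idea):

```python
import numpy as np

def pinv_via_svd(M, rtol=1e-12):
    """Moore-Penrose pseudoinverse built from the SVD: M+ = V Sigma+ U*."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    # Invert only the singular values that are numerically non-zero.
    s_inv = np.where(s > rtol * s.max(), 1.0 / s, 0.0)
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert np.allclose(pinv_via_svd(M), np.linalg.pinv(M))
```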

Solving homogeneous linear equations
A set of homogeneous linear equations can be written as $\mathbf A \mathbf x = \mathbf 0$ for a matrix $\mathbf A$ and vector $\mathbf x.$ A typical situation is that $\mathbf A$ is known and a non-zero $\mathbf x$ is to be determined which satisfies the equation. Such an $\mathbf x$ belongs to $\mathbf A$'s null space and is sometimes called a (right) null vector of $\mathbf A.$ The vector $\mathbf x$ can be characterized as a right-singular vector corresponding to a singular value of $\mathbf A$ that is zero. This observation means that if $\mathbf A$ is a square matrix and has no vanishing singular value, the equation has no non-zero $\mathbf x$ as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. Analogously to the definition of a (right) null vector, a non-zero $\mathbf x$ satisfying $\mathbf x^* \mathbf A = \mathbf 0,$ with $\mathbf x^*$ denoting the conjugate transpose of $\mathbf x,$ is called a left null vector of $\mathbf A.$

Total least squares minimization
A total least squares problem seeks the vector $\mathbf x$ that minimizes the 2-norm of a vector $\mathbf A \mathbf x$ under the constraint $\| \mathbf x \| = 1.$ The solution turns out to be the right-singular vector of $\mathbf A$ corresponding to the smallest singular value.
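Both observations translate directly into code: the right-singular vector belonging to the smallest singular value minimizes $\|\mathbf A \mathbf x\|$ over unit vectors, and right-singular vectors of zero singular values span the null space. A Python/NumPy sketch (illustrative; the random matrix is an assumption made for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, s, Vh = np.linalg.svd(A)
x = Vh[-1]              # right-singular vector of the smallest singular value

# x has unit norm and minimizes ||A x|| over all unit vectors:
assert np.isclose(np.linalg.norm(x), 1.0)
assert np.isclose(np.linalg.norm(A @ x), s[-1])
```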

Range, null space and rank
Another application of the SVD is that it provides an explicit representation of the range and null space of a matrix $\mathbf M.$ The right-singular vectors corresponding to vanishing singular values of $\mathbf M$ span the null space of $\mathbf M$ and the left-singular vectors corresponding to the non-zero singular values of $\mathbf M$ span the range of $\mathbf M.$ For example, in the above example the null space is spanned by the last two rows of $\mathbf V^*$ and the range is spanned by the first three columns of $\mathbf U.$

As a consequence, the rank of $\mathbf M$ equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in $\mathbf \Sigma$. In numerical linear algebra the singular values can be used to determine the effective rank of a matrix, as rounding error may lead to small but non-zero singular values in a rank deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero.
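In floating-point arithmetic this leads to the standard numerical-rank computation: count the singular values above a tolerance tied to the largest one. A Python/NumPy sketch (illustrative; the helper name and default tolerance are assumptions, though np.linalg.matrix_rank follows the same approach):

```python
import numpy as np

def numerical_rank(M, rtol=None):
    s = np.linalg.svd(M, compute_uv=False)
    if rtol is None:
        # A common default: machine epsilon scaled by the largest dimension.
        rtol = max(M.shape) * np.finfo(s.dtype).eps
    return int(np.sum(s > rtol * s[0]))

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],       # a multiple of the first row
              [0.0, 1.0, 1.0]])
print(numerical_rank(M))             # 2
```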

Low-rank matrix approximation
Some practical applications need to solve the problem of approximating a matrix $\mathbf M$ with another matrix $\tilde{\mathbf{M}}$, said to be truncated, which has a specific rank $r$. In the case that the approximation is based on minimizing the Frobenius norm of the difference between $\mathbf M$ and $\tilde{\mathbf M}$ under the constraint that $\operatorname{rank}\bigl(\tilde{\mathbf{M}}\bigr) = r,$ it turns out that the solution is given by the SVD of $\mathbf M,$ namely

$$ \tilde{\mathbf{M}} = \mathbf{U} \tilde{\mathbf \Sigma} \mathbf{V}^*, $$

where $\tilde{\mathbf \Sigma}$ is the same matrix as $\mathbf \Sigma$ except that it contains only the $r$ largest singular values (the other singular values are replaced by zero). This is known as the Eckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors).
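A Python/NumPy sketch of the truncation (illustrative; the helper name and random example are assumptions), which keeps the $r$ largest singular values and zeroes the rest:

```python
import numpy as np

def best_rank_r(M, r):
    """Best rank-r approximation in the Frobenius norm (Eckart-Young)."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 6))
M2 = best_rank_r(M, 2)

s = np.linalg.svd(M, compute_uv=False)
# The approximation error equals the discarded singular values (in Frobenius norm):
assert np.isclose(np.linalg.norm(M - M2, 'fro'), np.sqrt(np.sum(s[2:]**2)))
```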

Separable models
The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrix $\mathbf A$ can be written as an outer product of two vectors $\mathbf A = \mathbf u \otimes \mathbf v,$ or, in coordinates, $A_{ij} = u_i v_j.$ Specifically, the matrix $\mathbf M$ can be decomposed as

$$ \mathbf{M} = \sum_i \mathbf{A}_i = \sum_i \sigma_i \mathbf U_i \otimes \mathbf V_i. $$

Here $\mathbf U_i$ and $\mathbf V_i$ are the $i$-th columns of the corresponding SVD matrices, $\sigma_i$ are the ordered singular values, and each $\mathbf A_i$ is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero $\sigma_i$ is exactly the rank of the matrix. Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described by a Gabor filter in the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example, reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of $\mathbf U$ in the SVD factorization is then a Gabor while the first column of $\mathbf V$ represents the time modulation (or vice versa). One may then define an index of separability

$$ \alpha = \frac{\sigma_1^2}{\sum_i \sigma_i^2}, $$

which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition.
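A Python/NumPy sketch (illustrative; the helper name is an assumption) of the separability index $\alpha = \sigma_1^2 / \sum_i \sigma_i^2$; it equals 1 exactly when the matrix is an outer product of two vectors:

```python
import numpy as np

def separability_index(M):
    s = np.linalg.svd(M, compute_uv=False)
    return s[0]**2 / np.sum(s**2)

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
print(separability_index(np.outer(u, v)))                 # 1.0 (rank one, fully separable)

rng = np.random.default_rng(4)
print(separability_index(rng.standard_normal((3, 2))))    # < 1 in general
```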

Nearest orthogonal matrix
It is possible to use the SVD of a square matrix $\mathbf A$ to determine the orthogonal matrix $\mathbf O$ closest to $\mathbf A.$ The closeness of fit is measured by the Frobenius norm of $\mathbf O - \mathbf A.$ The solution is the product $\mathbf U \mathbf V^*.$ This intuitively makes sense because an orthogonal matrix would have the decomposition $\mathbf U \mathbf I \mathbf V^*$ where $\mathbf I$ is the identity matrix, so that if $\mathbf A = \mathbf U \mathbf \Sigma \mathbf V^*$ then the product $\mathbf O = \mathbf U \mathbf V^*$ amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix $\mathbf R = \mathbf U \mathbf V^*$ of the polar decomposition $\mathbf M = \mathbf R \mathbf P = \mathbf P' \mathbf R$ in either order of stretch and rotation, as described above.
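A Python/NumPy sketch (illustrative; the helper name and random example are assumptions) of the nearest orthogonal matrix $\mathbf O = \mathbf U \mathbf V^*$:

```python
import numpy as np

def nearest_orthogonal(A):
    """Orthogonal matrix closest to A in the Frobenius norm: O = U V*."""
    U, _, Vh = np.linalg.svd(A)
    return U @ Vh

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
O = nearest_orthogonal(A)

assert np.allclose(O.T @ O, np.eye(3))    # O is orthogonal
```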

A similar problem, with interesting applications in shape analysis, is the orthogonal Procrustes problem, which consists of finding an orthogonal matrix $\mathbf O$ which most closely maps $\mathbf A$ to $\mathbf B.$ Specifically,

$$ \mathbf{O} = \underset\Omega\operatorname{argmin} \|\mathbf{A}\boldsymbol{\Omega} - \mathbf{B}\|_F \quad\text{subject to}\quad \boldsymbol{\Omega}^\operatorname{T}\boldsymbol{\Omega} = \mathbf{I}, $$

where $$\| \cdot \|_F$$ denotes the Frobenius norm.

This problem is equivalent to finding the nearest orthogonal matrix to a given matrix $$\mathbf M = \mathbf A^\operatorname{T} \mathbf B$$.

The Kabsch algorithm
The Kabsch algorithm (called Wahba's problem in other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules.

Signal processing
The SVD and pseudoinverse have been successfully applied to signal processing, image processing and big data (e.g., in genomic signal processing).

Other examples
The SVD is also applied extensively to the study of linear inverse problems and is useful in the analysis of regularization methods such as that of Tikhonov. It is widely used in statistics, where it is related to principal component analysis and to correspondence analysis, and in signal processing and pattern recognition. It is also used in output-only modal analysis, where the non-scaled mode shapes can be determined from the singular vectors. Yet another usage is latent semantic indexing in natural-language text processing.

In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number" $$\kappa := \sigma_\text{max} / \sigma_\text{min}$$. It often controls the error rate or convergence rate of a given computational scheme on such systems.

The SVD also plays a crucial role in the field of quantum information, in a form often referred to as the Schmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to be entangled: if the rank of the $$\mathbf \Sigma$$ matrix is larger than one.

One application of SVD to rather large matrices is in numerical weather prediction, where Lanczos methods are used to estimate the most linearly quickly growing few perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate an ensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction.

SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled with radial basis functions to interpolate solutions to three-dimensional unsteady flow problems.

SVD has also been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO. SVD can help to increase the accuracy and speed of waveform generation to support gravitational-wave searches and update two different waveform models.

Singular value decomposition is used in recommender systems to predict people's item ratings. Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines.

Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to disease outbreak detection. A combination of SVD and higher-order SVD also has been applied for real time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance.

In astrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design and orbital station-keeping.

Proof of existence
An eigenvalue $\lambda$ of a matrix $\mathbf M$ is characterized by the algebraic relation $\mathbf M \mathbf u = \lambda \mathbf u.$ When $\mathbf M$ is Hermitian, a variational characterization is also available. Let $\mathbf M$ be a real $n \times n$ symmetric matrix. Define

$$ f : \left\{ \begin{align} \R^n &\to \R \\ \mathbf{x} &\mapsto \mathbf{x}^\operatorname{T} \mathbf{M} \mathbf{x} \end{align}\right.$$

By the extreme value theorem, this continuous function attains a maximum at some $\mathbf u$ when restricted to the unit sphere $\{\|\mathbf x\| = 1\}.$ By the Lagrange multipliers theorem, $\mathbf u$ necessarily satisfies

$$\nabla \mathbf{u}^\operatorname{T} \mathbf{M} \mathbf{u} - \lambda \cdot \nabla \mathbf{u}^\operatorname{T} \mathbf{u} = 0$$

for some real number $\lambda.$ The nabla symbol, $\nabla$, is the del operator (differentiation with respect to $\mathbf x$). Using the symmetry of $\mathbf M$ we obtain

$$\nabla \mathbf{x}^\operatorname{T} \mathbf{M} \mathbf{x} - \lambda \cdot \nabla \mathbf{x}^\operatorname{T} \mathbf{x} = 2(\mathbf{M}-\lambda \mathbf{I})\mathbf{x}.$$

Therefore $\mathbf M \mathbf u = \lambda \mathbf u,$ so $\mathbf u$ is a unit length eigenvector of $\mathbf M.$ For every unit length eigenvector $\mathbf v$ of $\mathbf M$ its eigenvalue is $f(\mathbf v),$ so $\lambda$ is the largest eigenvalue of $\mathbf M.$ The same calculation performed on the orthogonal complement of $\mathbf u$ gives the next largest eigenvalue and so on. The complex Hermitian case is similar; there $f(\mathbf x) = \mathbf x^* \mathbf M \mathbf x$ is a real-valued function of $2n$ real variables.

Singular values are similar in that they can be described algebraically or from variational principles. However, unlike the eigenvalue case, Hermiticity, or symmetry, of $\mathbf M$ is no longer required.

This section gives these two arguments for existence of singular value decomposition.

Based on the spectral theorem
Let $$\mathbf{M}$$ be an $m \times n$ complex matrix. Since $$\mathbf{M}^* \mathbf{M}$$ is positive semi-definite and Hermitian, by the spectral theorem, there exists an $n \times n$ unitary matrix $$\mathbf{V}$$ such that

$$ \mathbf V^* \mathbf M^* \mathbf M \mathbf V = \bar{\mathbf{D}} = \begin{bmatrix} \mathbf{D} & 0 \\ 0 & 0\end{bmatrix}, $$

where $$\mathbf{D}$$ is diagonal and positive definite, of dimension $$\ell\times \ell$$, with $$\ell$$ the number of non-zero eigenvalues of $$\mathbf{M}^* \mathbf{M}$$ (which can be shown to verify $$\ell\le\min(n,m)$$). Note that $$\mathbf{V}$$ is here by definition a matrix whose $$i$$-th column is the $$i$$-th eigenvector of $$\mathbf{M}^* \mathbf{M}$$, corresponding to the eigenvalue $$\bar{\mathbf{D}}_{ii}$$. Moreover, the $$j$$-th column of $$\mathbf{V}$$, for $$j>\ell$$, is an eigenvector of $$\mathbf{M}^* \mathbf{M}$$ with eigenvalue $$\bar{\mathbf{D}}_{jj}=0$$. This can be expressed by writing $$\mathbf{V}$$ as $$\mathbf{V}=\begin{bmatrix}\mathbf{V}_1 &\mathbf{V}_2\end{bmatrix}$$, where the columns of $$\mathbf{V}_1$$ and $$\mathbf{V}_2$$ therefore contain the eigenvectors of $$\mathbf{M}^* \mathbf{M}$$ corresponding to non-zero and zero eigenvalues, respectively. Using this rewriting of $$\mathbf{V}$$, the equation becomes:

$$ \begin{bmatrix} \mathbf{V}_1^* \\ \mathbf{V}_2^* \end{bmatrix} \mathbf{M}^* \mathbf{M}\, \begin{bmatrix} \mathbf{V}_1 & \!\! \mathbf{V}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{V}_1^* \mathbf{M}^* \mathbf{M} \mathbf{V}_1 & \mathbf{V}_1^* \mathbf{M}^* \mathbf{M} \mathbf{V}_2 \\ \mathbf{V}_2^* \mathbf{M}^* \mathbf{M} \mathbf{V}_1 & \mathbf{V}_2^* \mathbf{M}^* \mathbf{M} \mathbf{V}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{D} & 0 \\ 0 & 0 \end{bmatrix}.$$

This implies that

$$ \mathbf{V}_1^* \mathbf{M}^* \mathbf{M} \mathbf{V}_1 = \mathbf{D}, \quad \mathbf{V}_2^* \mathbf{M}^* \mathbf{M} \mathbf{V}_2 = \mathbf{0}. $$

Moreover, the second equation implies $$\mathbf{M}\mathbf{V}_2 = \mathbf{0}$$. Finally, the unitary-ness of $$\mathbf{V}$$ translates, in terms of $$\mathbf{V}_1$$ and $$\mathbf{V}_2$$, into the following conditions:

$$\begin{align} \mathbf{V}_1^* \mathbf{V}_1 &= \mathbf{I}_1, \\ \mathbf{V}_2^* \mathbf{V}_2 &= \mathbf{I}_2, \\ \mathbf{V}_1 \mathbf{V}_1^* + \mathbf{V}_2 \mathbf{V}_2^* &= \mathbf{I}_{12}, \end{align}$$

where the subscripts on the identity matrices are used to remark that they are of different dimensions.

Let us now define

$$ \mathbf{U}_1 = \mathbf{M} \mathbf{V}_1 \mathbf{D}^{-\frac{1}{2}}. $$

Then,

$$ \mathbf{U}_1 \mathbf{D}^\frac{1}{2} \mathbf{V}_1^* = \mathbf{M} \mathbf{V}_1 \mathbf{D}^{-\frac{1}{2}} \mathbf{D}^\frac{1}{2} \mathbf{V}_1^* = \mathbf{M} (\mathbf{I} - \mathbf{V}_2\mathbf{V}_2^*) = \mathbf{M} - (\mathbf{M}\mathbf{V}_2)\mathbf{V}_2^* = \mathbf{M}, $$

since $$\mathbf{M}\mathbf{V}_2 = \mathbf{0}. $$ This can be also seen as immediate consequence of the fact that $$\mathbf{M}\mathbf{V}_1\mathbf{V}_1^* = \mathbf{M}$$. This is equivalent to the observation that if $$\{\boldsymbol v_i\}_{i=1}^\ell$$ is the set of eigenvectors of $$\mathbf{M}^* \mathbf{M}$$ corresponding to non-vanishing eigenvalues $$\{\lambda_i\}_{i=1}^\ell$$, then $$\{\mathbf M \boldsymbol v_i\}_{i=1}^\ell$$ is a set of orthogonal vectors, and $$\bigl\{\lambda_i^{-1/2}\mathbf M \boldsymbol v_i\bigr\}\vphantom|_{i=1}^\ell$$ is a (generally not complete) set of orthonormal vectors. This matches with the matrix formalism used above denoting with $$\mathbf{V}_1$$ the matrix whose columns are $$\{\boldsymbol v_i\}_{i=1}^\ell$$, with $$\mathbf{V}_2$$ the matrix whose columns are the eigenvectors of $$\mathbf{M}^* \mathbf{M}$$ with vanishing eigenvalue, and $$\mathbf{U}_1$$ the matrix whose columns are the vectors $$\bigl\{\lambda_i^{-1/2}\mathbf M \boldsymbol v_i\bigr\}\vphantom|_{i=1}^\ell$$.

We see that this is almost the desired result, except that $$\mathbf{U}_1$$ and $$\mathbf{V}_1$$ are in general not unitary, since they might not be square. However, we do know that the number of rows of $$\mathbf{U}_1$$ is no smaller than the number of columns, since the dimension of $$\mathbf{D}$$ is no greater than $$m$$ and $$n$$. Also, since

$$ \mathbf{U}_1^*\mathbf{U}_1 = \mathbf{D}^{-\frac{1}{2}}\mathbf{V}_1^*\mathbf{M}^*\mathbf{M} \mathbf{V}_1 \mathbf{D}^{-\frac{1}{2}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{D}\mathbf{D}^{-\frac{1}{2}} = \mathbf{I_1}, $$

the columns in $$\mathbf{U}_1$$ are orthonormal and can be extended to an orthonormal basis. This means that we can choose $$\mathbf{U}_2$$ such that $$\mathbf{U} = \begin{bmatrix} \mathbf{U}_1 & \mathbf{U}_2 \end{bmatrix}$$ is unitary.

For $\mathbf V$ we already have $\mathbf V_2$ to make it unitary. Now, define

$$ \mathbf \Sigma = \begin{bmatrix} \begin{bmatrix} \mathbf{D}^\frac{1}{2} & 0 \\ 0 & 0 \end{bmatrix} \\ 0 \end{bmatrix}, $$

where extra zero rows are added or removed to make the number of zero rows equal the number of columns of $\mathbf U_2,$ and hence the overall dimensions of $\mathbf \Sigma$ equal to $m\times n$. Then

$$ \begin{bmatrix} \mathbf{U}_1 & \mathbf{U}_2 \end{bmatrix} \begin{bmatrix} \begin{bmatrix} \mathbf{D}^\frac{1}{2} & 0 \\ 0 & 0 \end{bmatrix} \\ 0 \end{bmatrix} \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix}^* = \begin{bmatrix} \mathbf{U}_1 & \mathbf{U}_2 \end{bmatrix} \begin{bmatrix} \mathbf{D}^\frac{1}{2} \mathbf{V}_1^* \\ 0 \end{bmatrix} = \mathbf{U}_1 \mathbf{D}^\frac{1}{2} \mathbf{V}_1^* = \mathbf{M}, $$

which is the desired result:

$$ \mathbf{M} = \mathbf{U} \mathbf \Sigma \mathbf{V}^*. $$

Notice the argument could begin with diagonalizing $\mathbf M \mathbf M^*$ rather than $\mathbf M^* \mathbf M$ (this shows directly that $\mathbf M \mathbf M^*$ and $\mathbf M^* \mathbf M$ have the same non-zero eigenvalues).

Based on variational characterization
The singular values can also be characterized as the maxima of $\mathbf u^\mathrm{T} \mathbf M \mathbf v,$ considered as a function of $\mathbf u$ and $\mathbf v,$ over particular subspaces. The singular vectors are the values of $\mathbf u$ and $\mathbf v$ where these maxima are attained.

Let $\mathbf M$ denote an $m \times n$ matrix with real entries. Let $S^{k-1}$ be the unit $(k-1)$-sphere in $\mathbb{R}^k$, and define $$\sigma(\mathbf{u}, \mathbf{v}) = \mathbf{u}^\operatorname{T} \mathbf{M} \mathbf{v},$$ $$\mathbf{u} \in S^{m-1},$$ $$\mathbf{v} \in S^{n-1}.$$

Consider the function $\sigma$ restricted to $S^{m-1} \times S^{n-1}.$ Since both $S^{m-1}$ and $S^{n-1}$ are compact sets, their product is also compact. Furthermore, since $\sigma$ is continuous, it attains a largest value for at least one pair of vectors $\mathbf u$ in $S^{m-1}$ and $\mathbf v$ in $S^{n-1}.$ This largest value is denoted $\sigma_1$ and the corresponding vectors are denoted $\mathbf u_1$ and $\mathbf v_1.$ Since $\sigma_1$ is the largest value of $\sigma(\mathbf u, \mathbf v)$ it must be non-negative. If it were negative, changing the sign of either $\mathbf u_1$ or $\mathbf v_1$ would make it positive and therefore larger.

Statement. $\mathbf u_1$ and $\mathbf v_1$ are left and right-singular vectors of $\mathbf M$ with corresponding singular value $\sigma_1.$

Proof. Similar to the eigenvalues case, by assumption the two vectors satisfy the Lagrange multiplier equation:

$$ \nabla \sigma = \nabla \mathbf{u}^\operatorname{T} \mathbf{M} \mathbf{v} - \lambda_1 \cdot \nabla \mathbf{u}^\operatorname{T} \mathbf{u} - \lambda_2 \cdot \nabla \mathbf{v}^\operatorname{T} \mathbf{v} $$

After some algebra, this becomes

$$ \begin{align} \mathbf{M} \mathbf{v}_1 &= 2 \lambda_1 \mathbf{u}_1 + 0, \\ \mathbf{M}^\operatorname{T} \mathbf{u}_1 &= 0 + 2 \lambda_2 \mathbf{v}_1. \end{align}$$

Multiplying the first equation from left by $\mathbf u_1^\textrm{T}$ and the second equation from left by $\mathbf v_1^\textrm{T}$ and taking $\| \mathbf u \| = \| \mathbf v \| = 1$ into account gives

$$ \sigma_1 = 2\lambda_1 = 2\lambda_2. $$

Plugging this into the pair of equations above, we have

$$\begin{align} \mathbf{M} \mathbf{v}_1 &= \sigma_1 \mathbf{u}_1, \\ \mathbf{M}^\operatorname{T} \mathbf{u}_1 &= \sigma_1 \mathbf{v}_1. \end{align}$$

This proves the statement.

More singular vectors and singular values can be found by maximizing $\sigma(\mathbf u, \mathbf v)$ over normalized $\mathbf u$ and $\mathbf v$ which are orthogonal to $\mathbf u_1$ and $\mathbf v_1,$ respectively.

The passage from real to complex is similar to the eigenvalue case.

One-sided Jacobi algorithm
The one-sided Jacobi algorithm is an iterative algorithm in which a matrix is iteratively transformed into a matrix with orthogonal columns. The elementary iteration is given as a Jacobi rotation,

$$ M\leftarrow MJ(p, q, \theta), $$

where the angle $$\theta$$ of the Jacobi rotation matrix $$J(p,q,\theta)$$ is chosen such that after the rotation the columns with numbers $$p$$ and $$q$$ become orthogonal. The indices $$(p,q)$$ are swept cyclically, $$(p=1\dots m,q=p+1\dots m)$$, where $$m$$ is the number of columns.

After the algorithm has converged, the singular value decomposition $$M=USV^T$$ is recovered as follows: the matrix $$V$$ is the accumulation of Jacobi rotation matrices, the matrix $$U$$ is given by normalising the columns of the transformed matrix $$M$$, and the singular values are given as the norms of the columns of the transformed matrix $$M$$.
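A compact Python/NumPy sketch of a sweep-based one-sided Jacobi SVD (illustrative and unoptimized; the helper name, sweep count, and tolerance are assumptions, and production implementations add further refinements):

```python
import numpy as np

def one_sided_jacobi_svd(M, sweeps=30, tol=1e-12):
    """One-sided Jacobi SVD sketch: returns U, s, Vh with M ~ U @ diag(s) @ Vh."""
    A = M.astype(float).copy()           # its columns are rotated in place
    n = A.shape[1]
    V = np.eye(n)                        # accumulates the Jacobi rotations
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue             # columns p and q are already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.hypot(1.0, zeta))  # tangent of the angle
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                J = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ J              # rotate the two columns
                V[:, [p, q]] = V[:, [p, q]] @ J
        if converged:
            break
    sing = np.linalg.norm(A, axis=0)     # singular values are the column norms
    order = np.argsort(sing)[::-1]
    sing = sing[order]
    U = A[:, order] / np.where(sing > 0, sing, 1.0)          # normalize the columns
    return U, sing, V[:, order].T

M = np.random.default_rng(6).standard_normal((5, 4))
U, s, Vh = one_sided_jacobi_svd(M)
assert np.allclose(U @ np.diag(s) @ Vh, M)
```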

Two-sided Jacobi algorithm
Two-sided Jacobi SVD algorithm—a generalization of the Jacobi eigenvalue algorithm—is an iterative algorithm where a square matrix is iteratively transformed into a diagonal matrix. If the matrix is not square the QR decomposition is performed first and then the algorithm is applied to the $$R$$ matrix. The elementary iteration zeroes a pair of off-diagonal elements by first applying a Givens rotation to symmetrize the pair of elements and then applying a Jacobi transformation to zero them,

$$ M \leftarrow J^TGMJ $$

where $$G$$ is the Givens rotation matrix with the angle chosen such that the given pair of off-diagonal elements become equal after the rotation, and where $$J$$ is the Jacobi transformation matrix that zeroes these off-diagonal elements. The iteration proceeds exactly as in the Jacobi eigenvalue algorithm: by cyclic sweeps over all off-diagonal elements.

After the algorithm has converged the resulting diagonal matrix contains the singular values. The matrices $$U$$ and $$V$$ are accumulated as follows: $$U\leftarrow UG^TJ$$, $$V\leftarrow VJ$$.

Numerical approach
The singular value decomposition can be computed using the following observations:
 * The left-singular vectors of $\mathbf M$ are a set of orthonormal eigenvectors of $\mathbf M \mathbf M^*$.
 * The right-singular vectors of $\mathbf M$ are a set of orthonormal eigenvectors of $\mathbf M^* \mathbf M$.
 * The non-zero singular values of $\mathbf M$ (found on the diagonal entries of $\mathbf \Sigma$) are the square roots of the non-zero eigenvalues of both $\mathbf M^* \mathbf M$ and $\mathbf M \mathbf M^*$.

The SVD of a matrix $\mathbf M$ is typically computed by a two-step procedure. In the first step, the matrix is reduced to a bidiagonal matrix. This takes order $O(mn^2)$ floating-point operations (flop), assuming that $m \geq n.$ The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with an iterative method (as with eigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like the machine epsilon. If this precision is considered constant, then the second step takes $O(n)$ iterations, each costing $O(n)$ flops. Thus, the first step is more expensive, and the overall cost is $O(mn^2)$ flops.

The first step can be done using Householder reflections for a cost of $4mn^2 - 4n^3/3$ flops, assuming that only the singular values are needed and not the singular vectors. If $m$ is much larger than $n$ then it is advantageous to first reduce the matrix $\mathbf M$ to a triangular matrix with the QR decomposition and then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is $2mn^2 + 2n^3$ flops.

The second step can be done by a variant of the QR algorithm for the computation of eigenvalues, which was first described by Golub and Kahan. The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small. Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition.

The same algorithm is implemented in the GNU Scientific Library (GSL). The GSL also offers an alternative method that uses a one-sided Jacobi orthogonalization in step 2. This method computes the SVD of the bidiagonal matrix by solving a sequence of $2 \times 2$ SVD problems, similar to how the Jacobi eigenvalue algorithm solves a sequence of $2 \times 2$ eigenvalue problems. Yet another method for step 2 uses the idea of divide-and-conquer eigenvalue algorithms.

There is an alternative way that does not explicitly use the eigenvalue decomposition. Usually the singular value problem of a matrix $\mathbf M$ is converted into an equivalent symmetric eigenvalue problem such as $\mathbf M^* \mathbf M,$ $\mathbf M \mathbf M^*,$ or

$$ \begin{bmatrix} \mathbf{0} & \mathbf{M} \\ \mathbf{M}^* & \mathbf{0} \end{bmatrix}. $$

The approaches that use eigenvalue decompositions are based on the QR algorithm, which is well-developed to be stable and fast. Note that the singular values are real and right- and left-singular vectors are not required to form similarity transformations. One can iteratively alternate between the QR decomposition and the LQ decomposition to find the real diagonal Hermitian matrices. The QR decomposition gives $\mathbf M \Rightarrow \mathbf Q \mathbf R$ and the LQ decomposition of $\mathbf R$ gives $\mathbf R \Rightarrow \mathbf L \mathbf P^*.$ Thus, at every iteration, we have $\mathbf M \Rightarrow \mathbf Q \mathbf L \mathbf P^*,$ update $\mathbf M \Leftarrow \mathbf L,$ and repeat the orthogonalizations. Eventually, this iteration between QR decomposition and LQ decomposition produces left- and right-unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD.
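A Python/NumPy sketch of the QR/LQ alternation described above (illustrative, for a square real matrix; the helper name, iteration count, and the random example are assumptions, and the plain iteration can converge slowly when singular values are close):

```python
import numpy as np

def svd_by_qr_lq(M, iters=500):
    """Alternate QR and LQ decompositions so that M -> U (diagonal) V^T."""
    A = M.astype(float).copy()
    U = np.eye(M.shape[0])
    V = np.eye(M.shape[1])
    for _ in range(iters):
        Q, R = np.linalg.qr(A)        # QR step: A = Q R
        U = U @ Q
        P, Lt = np.linalg.qr(R.T)     # LQ step via QR of R^T: R = Lt^T P^T
        V = V @ P
        A = Lt.T                      # continue the iteration with L
    s = np.abs(np.diag(A))            # A is (nearly) diagonal after convergence
    return U, s, V.T

M = np.random.default_rng(7).standard_normal((4, 4))
U, s, Vh = svd_by_qr_lq(M)
print(np.sort(s)[::-1])
print(np.linalg.svd(M, compute_uv=False))   # the two should agree after convergence
```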

Analytic result of 2 × 2 SVD
The singular values of a $2 \times 2$ matrix can be found analytically. Let the matrix be $$\mathbf{M} = z_0\mathbf{I} + z_1\sigma_1 + z_2\sigma_2 + z_3\sigma_3$$

where $z_i \in \mathbb{C}$ are complex numbers that parameterize the matrix, $\mathbf{I}$ is the identity matrix, and $\sigma_i$ denote the Pauli matrices. Then its two singular values are given by

$$\begin{align} \sigma_\pm &= \sqrt{|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2 \pm \sqrt{\bigl(|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2\bigr)^2 - |z_0^2 - z_1^2 - z_2^2 - z_3^2|^2}} \\ &= \sqrt{|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2 \pm 2\sqrt{(\operatorname{Re}z_0z_1^*)^2 + (\operatorname{Re}z_0z_2^*)^2 + (\operatorname{Re}z_0z_3^*)^2 + (\operatorname{Im}z_1z_2^*)^2 + (\operatorname{Im}z_2z_3^*)^2 + (\operatorname{Im}z_3z_1^*)^2}} \end{align}$$
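A Python/NumPy sketch (illustrative; the random parameters are an assumption) that checks this closed form against a numerical SVD for a random complex $2 \times 2$ matrix:

```python
import numpy as np

rng = np.random.default_rng(8)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # z0, z1, z2, z3

I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
M = z[0] * I2 + sum(zi * si for zi, si in zip(z[1:], pauli))

a = np.sum(np.abs(z)**2)                        # |z0|^2 + |z1|^2 + |z2|^2 + |z3|^2
b = np.abs(z[0]**2 - z[1]**2 - z[2]**2 - z[3]**2)
sigma_plus = np.sqrt(a + np.sqrt(a**2 - b**2))
sigma_minus = np.sqrt(a - np.sqrt(a**2 - b**2))

assert np.allclose([sigma_plus, sigma_minus], np.linalg.svd(M, compute_uv=False))
```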

Reduced SVDs
[[File:Reduced Singular Value Decompositions.svg|thumb|Visualization of Reduced SVD variants. From top to bottom: 1: Full SVD,

2: Thin SVD (remove columns of $U$ not corresponding to rows of $V^{*}$),

3: Compact SVD (remove vanishing singular values and corresponding columns/rows in $U$ and $V^{*}$),

4: Truncated SVD (keep only largest t singular values and corresponding columns/rows in $U$ and $V^{*}$)]]

In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. The following can be distinguished for an $m \times n$ matrix $\mathbf M$ of rank $r$:

Thin SVD
The thin, or economy-sized, SVD of a matrix $\mathbf M$ is given by

$$ \mathbf{M} = \mathbf{U}_k \mathbf \Sigma_k \mathbf{V}^*_k, $$

where $k = \min(m, n),$ the matrices $\mathbf U_k$ and $\mathbf V_k$ contain only the first $k$ columns of $\mathbf U$ and $\mathbf V,$ and $\mathbf \Sigma_k$ contains only the first $k$ singular values from $\mathbf \Sigma.$ The matrix $\mathbf U_k$ is thus $m \times k,$ $\mathbf \Sigma_k$ is $k \times k$ diagonal, and $\mathbf V_k^*$ is $k \times n.$

The thin SVD uses significantly less space and computation time if $k \ll \max(m, n).$ The first stage in its calculation will usually be a QR decomposition of $\mathbf M,$ which can make for a significantly quicker calculation in this case.

Compact SVD
The compact SVD of a matrix $\mathbf M$ is given by

$$ \mathbf{M} = \mathbf U_r \mathbf \Sigma_r \mathbf V_r^*. $$

Only the $r$ column vectors of $\mathbf U$ and $r$ row vectors of $\mathbf V^*$ corresponding to the non-zero singular values $\mathbf \Sigma_r$ are calculated. The remaining vectors of $\mathbf U$ and $\mathbf V^*$ are not calculated. This is quicker and more economical than the thin SVD if $r \ll \min(m,n).$ The matrix $\mathbf U_r$ is thus $m \times r,$ $\mathbf \Sigma_r$ is $r \times r$ diagonal, and $\mathbf V_r^*$ is $r \times n.$

Truncated SVD
In many applications the number $r$ of the non-zero singular values is large, making even the compact SVD impractical to compute. In such cases, the smallest singular values may need to be truncated to compute only $t \ll r$ non-zero singular values. The truncated SVD is no longer an exact decomposition of the original matrix $\mathbf M,$ but rather provides the optimal low-rank matrix approximation $\tilde{\mathbf M}$ by any matrix of a fixed rank $t$

$$ \tilde{\mathbf{M}} = \mathbf{U}_t \mathbf \Sigma_t \mathbf{V}_t^*, $$

where matrix $\mathbf U_t$ is $m \times t,$ $\mathbf \Sigma_t$ is $t \times t$ diagonal, and $\mathbf V_t^*$ is $t \times n.$ Only the $t$ column vectors of $\mathbf U$ and $t$ row vectors of $\mathbf V^*$ corresponding to the $t$ largest singular values $\mathbf \Sigma_t$ are calculated. This can be much quicker and more economical than the compact SVD if $t \ll r,$ but requires a completely different toolset of numerical solvers.

In applications that require an approximation to the Moore–Penrose inverse of the matrix $\mathbf M,$ the smallest singular values of $\mathbf M$ are of interest, which are more challenging to compute compared to the largest ones.

Truncated SVD is employed in latent semantic indexing.
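For large sparse matrices, truncated SVDs are typically computed with iterative solvers rather than by forming the full decomposition. The sketch below is illustrative and assumes SciPy is available; it uses scipy.sparse.linalg.svds to compute only the $t$ largest singular triplets, and compares them against a dense SVD (feasible only because this example is small).

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(1000, 500, density=0.01, random_state=0)   # a sparse test matrix

t = 5
U_t, s_t, Vh_t = svds(A, k=t)        # only the t largest singular triplets
order = np.argsort(s_t)[::-1]        # svds does not guarantee descending order
s_t = s_t[order]

s_dense = np.linalg.svd(A.toarray(), compute_uv=False)[:t]
assert np.allclose(s_t, s_dense, rtol=1e-6, atol=1e-8)
```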

Ky Fan norms
The sum of the $k$ largest singular values of $\mathbf M$ is a matrix norm, the Ky Fan $k$-norm of $\mathbf M.$

The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of $\mathbf M$ as a linear operator with respect to the Euclidean norms of $K^m$ and $K^n.$ In other words, the Ky Fan 1-norm is the operator norm induced by the standard $\ell^2$ Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operator $\mathbf M$ on (possibly infinite-dimensional) Hilbert spaces, that

$$ \| \mathbf M \| = \| \mathbf M^* \mathbf M \|^\frac{1}{2} $$

But, in the matrix case, $(\mathbf M^* \mathbf M)^{1/2}$ is a normal matrix, so $\|\mathbf M^* \mathbf M\|^{1/2}$ is the largest eigenvalue of $(\mathbf M^* \mathbf M)^{1/2},$ i.e. the largest singular value of $\mathbf M.$

The last of the Ky Fan norms, the sum of all singular values, is the trace norm (also known as the 'nuclear norm'), defined by $\| \mathbf M \| = \operatorname{Tr}(\mathbf M^* \mathbf M)^{1/2}$ (the eigenvalues of $\mathbf M^* \mathbf M$ are the squares of the singular values).

Hilbert–Schmidt norm
The singular values are related to another norm on the space of operators. Consider the Hilbert–Schmidt inner product on the $n \times n$ matrices, defined by

$$ \langle \mathbf{M}, \mathbf{N} \rangle = \operatorname{tr} \left( \mathbf{N}^*\mathbf{M} \right). $$

So the induced norm is

$$ \| \mathbf{M} \| = \sqrt{\langle \mathbf{M}, \mathbf{M} \rangle} = \sqrt{\operatorname{tr} \left( \mathbf{M}^*\mathbf{M} \right)}. $$

Since the trace is invariant under unitary equivalence, this shows

$$ \| \mathbf{M} \| = \sqrt{\vphantom\bigg|\sum_i \sigma_i ^2} $$

where $\sigma_i$ are the singular values of $\mathbf M.$ This is called the Frobenius norm, Schatten 2-norm, or Hilbert–Schmidt norm of $\mathbf M.$ Direct calculation shows that the Frobenius norm of $\mathbf M = (m_{ij})$ coincides with:

$$ \sqrt{\vphantom\bigg|\sum_{ij} | m_{ij} |^2}. $$

In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of the Schatten norm.
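All of these norms are simple functions of the singular values; a Python/NumPy sketch (illustrative, using a random example matrix):

```python
import numpy as np

rng = np.random.default_rng(9)
M = rng.standard_normal((5, 3))
s = np.linalg.svd(M, compute_uv=False)

spectral = s[0]                      # operator 2-norm (Ky Fan 1-norm)
nuclear = np.sum(s)                  # trace / nuclear norm (sum of singular values)
frobenius = np.sqrt(np.sum(s**2))    # Hilbert-Schmidt / Schatten 2-norm

assert np.isclose(spectral, np.linalg.norm(M, 2))
assert np.isclose(nuclear, np.linalg.norm(M, 'nuc'))
assert np.isclose(frobenius, np.linalg.norm(M, 'fro'))
```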

Scale-invariant SVD
The singular values of a matrix $\mathbf A$ are uniquely defined and are invariant with respect to left and/or right unitary transformations of $\mathbf A.$ In other words, the singular values of $\mathbf U \mathbf A \mathbf V,$ for unitary matrices $\mathbf U$ and $\mathbf V,$ are equal to the singular values of $\mathbf A.$ This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations.

The Scale-Invariant SVD, or SI-SVD, is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations of $\mathbf A.$ In other words, the singular values of $\mathbf D \mathbf A \mathbf E,$ for invertible diagonal matrices $\mathbf D$ and $\mathbf E,$ are equal to the singular values of $\mathbf A.$ This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed.

Bounded operators on Hilbert spaces
The factorization $\mathbf M = \mathbf U \mathbf \Sigma \mathbf V^*$ can be extended to a bounded operator $\mathbf M$ on a separable Hilbert space $H.$ Namely, for any bounded operator $\mathbf M,$ there exist a partial isometry $\mathbf U,$ a unitary $\mathbf V,$ a measure space $(X, \mu),$ and a non-negative measurable $f$ such that

$$ \mathbf{M} = \mathbf{U} T_f \mathbf{V}^* $$

where $T_f$ is the multiplication by $f$ on $L^2(X, \mu).$

This can be shown by mimicking the linear algebraic argument for the matrix case above. $\mathbf V T_f \mathbf V^*$ is the unique positive square root of $\mathbf M^* \mathbf M,$ as given by the Borel functional calculus for self-adjoint operators. The reason why $\mathbf U$ need not be unitary is that, unlike the finite-dimensional case, given an isometry $U_1$ with nontrivial kernel, a suitable $U_2$ may not be found such that

$$ \begin{bmatrix} U_1 \\ U_2 \end{bmatrix} $$

is a unitary operator.

As for matrices, the singular value factorization is equivalent to the polar decomposition for operators: we can simply write

$$ \mathbf M = \mathbf U \mathbf V^* \cdot \mathbf V T_f \mathbf V^* $$

and notice that $\mathbf U \mathbf V^*$ is still a partial isometry while $\mathbf V T_f \mathbf V^*$ is positive.

Singular values and compact operators
The notion of singular values and left/right-singular vectors can be extended to compact operators on Hilbert space as they have a discrete spectrum. If $T$ is compact, every non-zero $\lambda$ in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If $\mathbf M$ is compact, so is $\mathbf M^* \mathbf M$. Applying the diagonalization result, the unitary image of its positive square root $T_f$ has a set of orthonormal eigenvectors $\{e_i\}$ corresponding to strictly positive eigenvalues $\{\sigma_i\}$. For any $\psi$ in $H,$

$$ \mathbf{M} \psi = \mathbf{U} T_f \mathbf{V}^* \psi = \sum_i \left \langle \mathbf{U} T_f \mathbf{V}^* \psi, \mathbf{U} e_i \right \rangle \mathbf{U} e_i = \sum_i \sigma_i \left \langle \psi, \mathbf{V} e_i \right \rangle \mathbf{U} e_i, $$

where the series converges in the norm topology on $H.$ Notice how this resembles the expression from the finite-dimensional case. $\sigma_i$ are called the singular values of $\mathbf M.$ $\{\mathbf U e_i\}$ (resp. $\{\mathbf V e_i\}$) can be considered the left-singular (resp. right-singular) vectors of $\mathbf M.$

Compact operators on a Hilbert space are the closure of finite-rank operators in the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is:


 * Theorem. $\mathbf M$ is compact if and only if $\mathbf M^* \mathbf M$ is compact.

History
The singular value decomposition was originally developed by differential geometers, who wished to determine whether a real bilinear form could be made equal to another by independent orthogonal transformations of the two spaces it acts on. Eugenio Beltrami and Camille Jordan discovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form a complete set of invariants for bilinear forms under orthogonal substitutions. James Joseph Sylvester also arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan. Sylvester called the singular values the canonical multipliers of the matrix. The fourth mathematician to discover the singular value decomposition independently was Autonne in 1915, who arrived at it via the polar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be by Carl Eckart and Gale J. Young in 1936; they saw it as a generalization of the principal axis transformation for Hermitian matrices.

In 1907, Erhard Schmidt defined an analog of singular values for integral operators (which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed by Émile Picard in 1910, who is the first to call the numbers $$\sigma_k$$ singular values (or in French, valeurs singulières).

Practical methods for computing the SVD date back to Kogbetliantz in 1954–1955 and Hestenes in 1958, resembling closely the Jacobi eigenvalue algorithm, which uses plane rotations or Givens rotations. However, these were replaced by the method of Gene Golub and William Kahan published in 1965, which uses Householder transformations or reflections. In 1970, Golub and Christian Reinsch published a variant of the Golub/Kahan algorithm that is still the one most-used today.