User:Jim.belk/Draft:Rank (linear algebra)

In linear algebra, the rank of a matrix is the maximum number of linearly independent rows (or columns) that can be chosen from the matrix. If we row reduce the matrix, the rank is equal to the number of pivots in the resulting row echelon form. Geometrically, the rank of a matrix is the dimension of the image of the associated linear transformation.

Definition
The rank of a matrix is the maximum number of linearly independent rows that can be chosen from the matrix. Equivalently, the rank of a matrix is the dimension of the row space (the subspace of Rn spanned by the rows of the matrix).

Sometimes the quantity above is called the row rank of the matrix, and the column rank is defined to be the maximum number of linearly independent columns that can be chosen from the matrix (i.e. the dimension of the column space). It is an important theorem of linear algebra that the row rank and column rank of a matrix are equal.

The column space of a matrix is the same as the range (or image) of the associated linear transformation, so the rank is the same as the dimension of the range.
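The equality of row rank and column rank can be checked computationally: row-reducing a matrix and its transpose must produce the same number of pivots. Below is a minimal sketch in Python using exact rational arithmetic; the `rank` helper is illustrative, not from any particular library.

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots in a row echelon form, computed by Gaussian
    elimination over exact rationals (so no rounding issues arise)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # pivots found so far
    for c in range(len(m[0]) if m else 0):
        # find a pivot row for column c at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        # eliminate column c from all rows below the pivot
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 3, 4, 5],
     [2, 4, 6, 8, 9]]
At = [list(col) for col in zip(*A)]  # transpose: rows become columns
print(rank(A), rank(At))  # row rank and column rank agree: 2 2
```

Row-reducing the transpose computes the column rank of A, since the columns of A are the rows of its transpose.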

Rank as a dimension
The span of the row vectors of a matrix A is called the row space of A, and the span of the column vectors is the column space of A. The row and column spaces of A always have the same dimension, and this is equal to the rank of A. This definition of rank is essentially equivalent to the one given above.

The column space of A is the same as the image of the corresponding linear transformation. The dimension of this image is equal to the rank of A.

Submatrices
A submatrix of a matrix A is any matrix formed by deleting certain rows and columns from A. For example,
 * $$\begin{bmatrix} 1 & 2 & 4 \\ 9 & 10 & 12 \\ 13 & 14 & 16 \end{bmatrix}\text{ is a submatrix of }\begin{bmatrix} \mathbf{1} & \mathbf{2} & 3 & \mathbf{4} \\ 5 & 6 & 7 & 8 \\ \mathbf{9} & \mathbf{10 } & 11 & \mathbf{12} \\ \mathbf{13} & \mathbf{14} & 15 & \mathbf{16} \end{bmatrix}\text{.}$$

The rank of A is greater than or equal to the rank of any submatrix of A.

A square submatrix is nonsingular if it has an inverse (i.e. if it has nonzero determinant). The rank of A is equal to the size of the largest nonsingular square submatrix of A. For example, the matrix
 * $$\begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 4 & 6 & 8 & 9 \end{bmatrix}$$

has rank two because the 2 &times; 2 submatrix $$\begin{bmatrix} 1 & 5 \\ 2 & 9\end{bmatrix}$$ is nonsingular, and the rank cannot exceed two because the matrix has only two rows.

The determinant of a k &times; k submatrix of A is sometimes called a minor of A. Using this terminology, the rank of A is the largest k for which A has a nonzero k &times; k minor.
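This characterization can be turned directly into a (highly inefficient) rank algorithm: search for the largest k for which some k &times; k minor is nonzero. A sketch in Python using exact integer determinants via cofactor expansion; the function names are illustrative.

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def rank_by_minors(A):
    """Largest k such that A has a nonzero k-by-k minor."""
    nrows, ncols = len(A), len(A[0])
    for k in range(min(nrows, ncols), 0, -1):
        for rows in combinations(range(nrows), k):
            for cols in combinations(range(ncols), k):
                # determinant of the k-by-k submatrix picked out by rows, cols
                if det([[A[i][j] for j in cols] for i in rows]) != 0:
                    return k
    return 0  # only the zero matrix reaches this point

A = [[1, 2, 3, 4, 5],
     [2, 4, 6, 8, 9]]
print(rank_by_minors(A))  # 2, witnessed by the nonsingular submatrix [[1, 5], [2, 9]]
```

The search examines every square submatrix, so this is only practical for very small matrices; it is stated here to illustrate the definition, not as a serious algorithm.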

Rank-nullity theorem
The nullity of a matrix A is the dimension of its null space. Because the row space and null space of A are orthogonal complements, the rank and nullity are related by the equation
 * $$\text{rank}(A) + \text{nullity}(A) = n\text{,}\,$$

where n is the number of columns of A. This is known as the rank-nullity theorem.
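As a worked instance of this identity, the 2-by-5 matrix from the previous section has rank two, so its null space must be three-dimensional:
 * $$\text{rank}(A) + \text{nullity}(A) = 2 + 3 = 5 = n\text{.}$$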

Alternative definitions
The maximal number of linearly independent columns of the m-by-n matrix A with entries in the field F is equal to the dimension of the column space of A (the column space being the subspace of Fm generated by the columns of A). Since the column rank and the row rank are the same, we can also define the rank of A as the dimension of the row space of A.

If one considers the matrix A as a linear map
 * f : Fn &rarr; Fm

with the rule
 * f(x) = Ax

then the rank of A can also be defined as the dimension of the image of f (see linear map for a discussion of image and kernel). This definition has the advantage that it can be applied to any linear map without reference to a specific matrix. The rank can also be defined as n minus the dimension of the kernel of f; the rank-nullity theorem states that this is the same as the dimension of the image of f.

Another equivalent definition of the rank of a matrix is the order of the greatest non-vanishing minor in the matrix.

Properties
We assume that A is an m-by-n matrix over the field F and describes a linear map f as above.


 * only the zero matrix has rank 0
 * the rank of A is at most min(m,n)
 * f is injective if and only if A has rank n (in this case, we say that A has full column rank).
 * f is surjective if and only if A has rank m (in this case, we say that A has full row rank).
 * If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
 * If B is any n-by-k matrix, then the rank of AB is at most the minimum of the rank of A and the rank of B.
 * As an example of the "<" case, consider the product
 * $$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\text{.}$$
 * Both factors have rank 1, but the product has rank 0.


 * If B is an n-by-k matrix with rank n, then AB has the same rank as A.
 * If C is an l-by-m matrix with rank m, then CA has the same rank as A.
 * The rank of A is equal to r if and only if there exists an invertible m-by-m matrix X and an invertible n-by-n matrix Y such that
 * $$XAY = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}\text{,}$$
 * where Ir denotes the r-by-r identity matrix.


 * The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix (this is the "rank theorem" or the "rank-nullity theorem").
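The strict-inequality case in the product example above can be checked directly. A small Python sketch, using the fact that a 2-by-2 matrix has rank 2 when its determinant is nonzero, rank 0 when it is the zero matrix, and rank 1 otherwise (the helper name is illustrative):

```python
def rank2x2(m):
    """Rank of a 2-by-2 matrix: 2 if invertible, 0 if zero, else 1."""
    (a, b), (c, d) = m
    if a * d - b * c != 0:
        return 2
    return 0 if a == b == c == d == 0 else 1

B = [[0, 0], [1, 0]]
C = [[0, 0], [0, 1]]
# matrix product BC, computed entrywise
BC = [[sum(B[i][k] * C[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(rank2x2(B), rank2x2(C), rank2x2(BC))  # 1 1 0
```

The product BC is the zero matrix even though neither factor is zero, so rank(BC) = 0 < min(rank(B), rank(C)) = 1.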

Computation
The easiest way to compute the rank of a matrix A is by Gaussian elimination. The row echelon form of A produced by Gaussian elimination has the same rank as A, and that rank can be read off as the number of non-zero rows.

Consider for example the 4-by-4 matrix



 * $$A = \begin{bmatrix} 2 & 4 & 1 & 3 \\ -1 & -2 & 1 & 0 \\ 0 & 0 & 2 & 2 \\ 3 & 6 & 2 & 5 \end{bmatrix}\text{.}$$

We see that the second column is twice the first column, and that the fourth column equals the sum of the first and the third. The first and third columns are linearly independent, so the rank of A is two. This can be confirmed with Gaussian elimination, which produces the following row echelon form of A:



 * $$\begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{,}$$

which has two non-zero rows.
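This computation can be reproduced in a few lines of Python. The `rank` helper below is a bare-bones Gaussian elimination over exact rationals, written for illustration:

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots found by Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # pivots found so far
    for c in range(len(m[0]) if m else 0):
        # find a pivot row for column c at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # eliminate column c from all rows below the pivot
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[ 2,  4, 1, 3],
     [-1, -2, 1, 0],
     [ 0,  0, 2, 2],
     [ 3,  6, 2, 5]]
print(rank(A))  # 2, matching the two non-zero rows of the echelon form
```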

When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting, which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
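The role of the zero criterion can be sketched with a naive floating-point elimination: whether a tiny entry counts as a pivot decides the computed rank. This toy example is not a substitute for a rank-revealing decomposition such as the SVD; the partial pivoting strategy and the absolute cutoff `tol` are both chosen purely for illustration.

```python
def float_rank(A, tol=1e-10):
    """Rank by Gaussian elimination with partial pivoting in floating point.
    Entries of magnitude <= tol are treated as zero; choosing tol is the
    delicate, application-dependent decision discussed above."""
    m = [row[:] for row in A]
    nrows, ncols = len(m), len(m[0])
    r = 0
    for c in range(ncols):
        # partial pivoting: pick the largest remaining entry in column c
        piv = max(range(r, nrows), key=lambda i: abs(m[i][c]), default=None)
        if piv is None or abs(m[piv][c]) <= tol:
            continue  # column has no usable pivot
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, nrows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Exactly rank 2, but rank 1 up to a perturbation of size 1e-12:
A = [[1.0, 2.0], [3.0, 6.0 + 1e-12]]
print(float_rank(A, tol=0.0))    # 2: an exact comparison keeps the tiny pivot
print(float_rank(A, tol=1e-10))  # 1: with a tolerance, A is numerically rank-deficient
```

Whether the answer 1 or 2 is "right" depends on whether the perturbation is meaningful data or accumulated rounding error, which is exactly why the criterion depends on the application.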

Applications
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. The system has at least one solution if and only if the rank of the coefficient matrix equals the rank of the augmented matrix; if the rank of the augmented matrix is greater, the system is inconsistent. In the consistent case, the system has precisely one solution if the rank equals the number of variables; otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank.
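This criterion translates directly into code: compare the rank of the coefficient matrix with the rank of the augmented matrix. A sketch in Python, with an illustrative exact-rational `rank` helper and hypothetical function names:

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots found by Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def classify(A, b):
    """Classify the system Ax = b by comparing ranks."""
    aug = [row + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    r, r_aug, nvars = rank(A), rank(aug), len(A[0])
    if r < r_aug:
        return "inconsistent"
    if r == nvars:
        return "unique solution"
    return f"{nvars - r} free parameter(s)"

A = [[1, 1], [2, 2]]
print(classify(A, [3, 6]))  # 1 free parameter(s): x + y = 3 counted twice
print(classify(A, [3, 7]))  # inconsistent: x + y cannot be both 3 and 3.5
```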

In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.

Generalization
There are different generalisations of the concept of rank to matrices over arbitrary rings. In those generalisations, the column rank, row rank, dimension of the column space, and dimension of the row space of a matrix may differ from one another, or may fail to exist.

There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.