User:Jim.belk/Draft:Matrix (mathematics)


 * For more detail on square matrices, see square matrix.

In mathematics, a matrix (plural matrices) is a rectangular table of elements (or entries), which may be numbers or, more generally, any abstract quantities that can be added and multiplied. Matrices are used to describe linear equations, to keep track of the coefficients of linear transformations, and to record data that depend on multiple parameters. Matrices can be added, multiplied, and decomposed in various ways, making them a key concept in linear algebra and matrix theory.

In this article, the entries of a matrix are real or complex numbers unless otherwise noted.



Definitions and notations
An m × n matrix (pronounced "m by n matrix") is a matrix with m rows and n columns. For example,
 * $$A = \begin{bmatrix} 9 & 8 & 6 \\ 1 & 2 & 7 \\ 4 & 9 & 2 \\ 6 & 0 & 5 \end{bmatrix}$$ is a 4 × 3 matrix. The numbers 4 and 3 are called the dimensions (or order) of the matrix. Matrices may be written with either square brackets or parentheses. Matrix variables are usually denoted by capital letters (A, B, M, etc.).

The number in the i-th row and j-th column of a matrix (counting from the top and the left) is called the i,j-th entry. For example, the 2,3 entry of the matrix above is 7. As with the dimensions of a matrix, the row of an entry is always listed first. The i,j-th entry of a matrix A is usually denoted $$a_{i,j}$$ (so $$a_{2,3} = 7$$ in the matrix above).

Many sources have their own notation for defining an m &times; n matrix, such as $$A:=(a_{i,j})_{i=1,\ldots,m;j=1,\ldots,n}$$ or $$A:=(a_{i,j})_{m \times n}$$. Using the first notation, $$A := (i+j)_{i=1,\ldots,m;\,j=1,\ldots,n}$$ would define a matrix whose i,j-th entry is the sum i + j.

Linear transformations
In linear algebra, matrices are used to represent linear transformations between sets of variables. For example, the matrix
 * $$\begin{bmatrix}3 & 2 & 5 \\ -1 & 4 & 6\end{bmatrix}$$

represents the transformation
 * $$\begin{alignat}{7} y_1 &&\; = \;&& 3x_1 &&\; + \;&& 2x_2 &&\; + \;&& 5x_3 & \\ y_2 &&\; = \;&& -x_1 &&\; + \;&& 4x_2 &&\; + \;&& 6x_3 & \end{alignat}\text{.}$$

Such a transformation can be interpreted as a linear function between Euclidean spaces. For example, the transformation above can be viewed as the function $$T: \mathbb{R}^3 \to \mathbb{R}^2$$ defined by
 * $$T(x_1,x_2,x_3) = (3x_1+2x_2+5x_3,\;-x_1+4x_2+6x_3)\,$$

In abstract linear algebra, matrices can be used to represent arbitrary linear transformations between vector spaces.
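
For a concrete check, the transformation above can be evaluated either coordinate-wise or as a matrix-vector product; the short NumPy sketch below (the function name `T` is chosen only for this example) shows that both give the same result.

```python
import numpy as np

# The matrix of the transformation above.
A = np.array([[3, 2, 5],
              [-1, 4, 6]])

# The same map written out coordinate-wise.
def T(x1, x2, x3):
    return (3*x1 + 2*x2 + 5*x3, -x1 + 4*x2 + 6*x3)

x = np.array([1, 2, 3])
print(A @ x)    # [22 25]
print(T(*x))    # (22, 25) -- the same values
```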

Sum
Two or more matrices of identical dimensions $$m$$ and $$n$$ can be added. Given $$m$$-by-$$n$$ matrices $$A$$ and $$B$$, their sum $$A+B$$ is the $$m$$-by-$$n$$ matrix computed by adding corresponding elements (i.e. $$A+B= (a_{i,j})_{1\le i \le m; 1\le j \le n} + (b_{i,j})_{1\le i \le m; 1\le j \le n} = (a_{i,j}+b_{i,j})_{1\le i \le m; 1\le j \le n}$$ ). For example:



 * $$\begin{bmatrix} 1 & 3 & 2 \\ 1 & 0 & 0 \\ 1 & 2 & 2 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 5 \\ 7 & 5 & 0 \\ 2 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1+0 & 3+0 & 2+5 \\ 1+7 & 0+5 & 0+0 \\ 1+2 & 2+1 & 2+1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 7 \\ 8 & 5 & 0 \\ 3 & 3 & 3 \end{bmatrix}$$

Another, much less often used notion of matrix addition is the direct sum.

Scalar multiplication
Given a matrix $$A$$ and a number $$c$$, the scalar multiple $$cA$$ is computed by multiplying every element of $$A$$ by the scalar $$c$$ (i.e. $$(cA)_{i,j} = c \cdot a_{i,j}$$). For example:


 * $$2 \cdot \begin{bmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{bmatrix} = \begin{bmatrix} 2 \cdot 1 & 2 \cdot 8 & 2 \cdot (-3) \\ 2 \cdot 4 & 2 \cdot (-2) & 2 \cdot 5 \end{bmatrix} = \begin{bmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{bmatrix}$$

Matrix addition and scalar multiplication turn the set $$\text{M}(m,n,\mathbb{R})$$ of all $$m$$-by-$$n$$ matrices with real entries into a real vector space of dimension $$m\cdot n$$.
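
Both entrywise operations are straightforward to reproduce, for example with NumPy arrays (a minimal sketch reusing the matrices from the examples above):

```python
import numpy as np

A = np.array([[1, 3, 2],
              [1, 0, 0],
              [1, 2, 2]])
B = np.array([[0, 0, 5],
              [7, 5, 0],
              [2, 1, 1]])

# Entrywise sum, as in the addition example above.
print(A + B)

# Scalar multiplication: every entry is multiplied by the scalar.
C = np.array([[1, 8, -3],
              [4, -2, 5]])
print(2 * C)
```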

Matrix multiplication
Multiplication of two matrices is well-defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If $$A$$ is an $$m$$-by-$$n$$ matrix and $$B$$ is an $$n$$-by-$$p$$ matrix, then their matrix product $$AB$$ is the $$m$$-by-$$p$$ matrix given by:



 * $$(AB)_{i,j} = a_{i,1} b_{1,j} + a_{i,2} b_{2,j} + \cdots + a_{i,n} b_{n,j}$$ for each pair $$(i,j)$$.

For example:



 * $$\begin{bmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \end{bmatrix} \times \begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} (1 \times 3 + 0 \times 2 + 2 \times 1) & (1 \times 1 + 0 \times 1 + 2 \times 0) \\ (-1 \times 3 + 3 \times 2 + 1 \times 1) & (-1 \times 1 + 3 \times 1 + 1 \times 0) \end{bmatrix} = \begin{bmatrix} 5 & 1 \\ 4 & 2 \end{bmatrix}.$$
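
The defining formula can also be implemented directly in a few lines; the sketch below (plain Python, written only for illustration) computes each entry $$(AB)_{i,j}$$ as in the formula above and reproduces the example.

```python
def matmul(A, B):
    """Multiply an m-by-n matrix A by an n-by-p matrix B (given as lists of rows)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    # (AB)_{i,j} = a_{i,1} b_{1,j} + ... + a_{i,n} b_{n,j}
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 0, 2],
     [-1, 3, 1]]
B = [[3, 1],
     [2, 1],
     [1, 0]]
print(matmul(A, B))   # [[5, 1], [4, 2]]
```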

Matrix multiplication has the following properties:
 * $$(AB)C=A(BC)$$ for all $$k$$-by-$$m$$ matrices $$A$$, $$m$$-by-$$n$$ matrices $$B$$ and $$n$$-by-$$p$$ matrices $$C$$ ("associativity").
 * $$(A+B)C = AC+BC$$ for all $$m$$-by-$$n$$ matrices $$A$$ and $$B$$ and $$n$$-by-$$k$$ matrices $$C$$ ("right distributivity").
 * $$C(A+B)=CA+CB$$ for all $$m$$-by-$$n$$ matrices $$A$$ and $$B$$ and $$k$$-by-$$m$$ matrices $$C$$ ("left distributivity").

It is important to note that commutativity does not generally hold: even when both products $$AB$$ and $$BA$$ are defined, in general $$AB \ne BA$$.

Linear transformations, ranks and transpose
Matrices can conveniently represent linear transformations because matrix multiplication neatly corresponds to the composition of maps, as will be described next. This same property makes them powerful data structures in high-level programming languages.

Here and in the sequel we identify $$\mathbb{R}^n$$ with the set of "columns", or n-by-1 matrices. For every linear map $$f : \mathbb{R}^n \to \mathbb{R}^m$$ there exists a unique m-by-n matrix A such that f(x) = Ax for all x in $$\mathbb{R}^n$$. We say that the matrix A "represents" the linear map f. Now if the k-by-m matrix B represents another linear map $$g : \mathbb{R}^m \to \mathbb{R}^k$$, then the linear map $$g \circ f$$ is represented by BA. This follows from the above-mentioned associativity of matrix multiplication.

More generally, a linear map from an n-dimensional vector space to an m-dimensional vector space is represented by an m-by-n matrix, provided that bases have been chosen for each.

The rank of a matrix A is the dimension of the image of the linear map represented by A; this is the same as the dimension of the space generated by the rows of A, and also the same as the dimension of the space generated by the columns of A.
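
For instance, the rank of an example matrix can be computed with NumPy, and the equality of row rank and column rank appears as invariance under transposition (the matrix below is chosen only for illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],    # a multiple of the first row
              [0, 1, 1]])

print(np.linalg.matrix_rank(A))      # 2
print(np.linalg.matrix_rank(A.T))    # 2: row rank equals column rank
```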

The transpose of an m-by-n matrix A is the n-by-m matrix $$A^{\mathrm{tr}}$$ (also sometimes written as $$A^{\mathrm{T}}$$ or $$^tA$$) formed by turning rows into columns and columns into rows, i.e. $$A^{\mathrm{tr}}[i, j] = A[j, i]$$ for all indices i and j. If A describes a linear map with respect to two bases, then the matrix $$A^{\mathrm{tr}}$$ describes the transpose of the linear map with respect to the dual bases; see dual space.

We have $$(A + B)^{\mathrm{tr}} = A^{\mathrm{tr}} + B^{\mathrm{tr}}$$ and $$(AB)^{\mathrm{tr}} = B^{\mathrm{tr}} A^{\mathrm{tr}}$$.
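
Both identities can be verified numerically on example matrices, for instance:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])          # 3-by-2
B = np.array([[0, 1],
              [2, 3],
              [4, 5]])          # 3-by-2
C = np.array([[1, 0, 2],
              [0, 1, 1]])       # 2-by-3

print(np.array_equal((A + B).T, A.T + B.T))   # True
print(np.array_equal((C @ A).T, A.T @ C.T))   # True
```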

Square matrices and related definitions
A square matrix is a matrix which has the same number of rows and columns. The set of all square n-by-n matrices, together with matrix addition and matrix multiplication, is a ring. Unless n = 1, this ring is not commutative.

M(n, R), the ring of real square matrices, is a real unitary associative algebra. M(n, C), the ring of complex square matrices, is a complex associative algebra.

The unit matrix or identity matrix $$I_n$$, with elements on the main diagonal set to 1 and all other elements set to 0, satisfies $$MI_n = M$$ and $$I_nN = N$$ for any m-by-n matrix M and n-by-k matrix N. For example, if n = 3:

 * $$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

The identity matrix is the identity element in the ring of square matrices.

Invertible elements in this ring are called invertible matrices or non-singular matrices. An n-by-n matrix A is invertible if and only if there exists a matrix B such that
 * $$AB = I_n \;(= BA)$$.

In this case, B is the inverse matrix of A, denoted by $$A^{-1}$$. The set of all invertible n-by-n matrices forms a group (specifically a Lie group) under matrix multiplication, the general linear group.
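
As a numerical illustration (using floating-point arithmetic, so the check is only up to rounding error):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # invertible: its determinant is 1

I2 = np.eye(2)                  # the 2-by-2 identity matrix
B = np.linalg.inv(A)            # the inverse of A

print(np.allclose(A @ B, I2))   # True
print(np.allclose(B @ A, I2))   # True
```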

If λ is a number and v is a non-zero vector such that Av = λv, then v is called an eigenvector of A and λ the associated eigenvalue. (Eigen means "own" in German and in Dutch.) The number λ is an eigenvalue of A if and only if $$A - \lambda I_n$$ is not invertible, which happens if and only if $$p_A(\lambda) = 0$$, where $$p_A(x)$$ is the characteristic polynomial of A. This is a polynomial of degree n and therefore has n complex roots (counting multiple roots according to their multiplicity). In this sense, every square matrix has n complex eigenvalues.

The determinant of a square matrix A is the product of its n eigenvalues, but it can also be defined by the Leibniz formula. Invertible matrices are precisely those matrices with nonzero determinant.
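
Numerically, the two descriptions can be compared on an example; in the sketch below the determinant computed directly agrees, up to rounding, with the product of the eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.linalg.det(A))          # 5.0
print(np.prod(eigenvalues))      # approximately 5.0
print(np.isclose(np.linalg.det(A), np.prod(eigenvalues)))  # True
```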

The Gaussian elimination algorithm is of central importance: it can be used to compute determinants, ranks and inverses of matrices and to solve systems of linear equations.
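
A minimal illustration: solving a small linear system numerically. NumPy's `linalg.solve` is used here as a stand-in; it relies on an LU factorisation, which is Gaussian elimination expressed in matrix form.

```python
import numpy as np

# Solve the system  2x + y = 5,  x + 3y = 10.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)     # LU-based solver, i.e. Gaussian elimination
print(x)                      # [1. 3.]
print(np.allclose(A @ x, b))  # True
```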

The trace of a square matrix is the sum of its diagonal entries, which equals the sum of its n eigenvalues.

The matrix exponential is defined for square matrices using power series.
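
A minimal numerical sketch: truncating the power series $$e^A = \sum_{k \ge 0} A^k / k!$$ after finitely many terms gives an approximation; for the nilpotent matrix below the series terminates, so the result is exact.

```python
import numpy as np
from math import factorial

def expm_series(A, terms=20):
    """Approximate the matrix exponential by a truncated power series."""
    result = np.zeros_like(A, dtype=float)
    power = np.eye(A.shape[0])       # A^0 = identity
    for k in range(terms):
        result += power / factorial(k)
        power = power @ A            # next power A^(k+1)
    return result

# For this nilpotent matrix, exp(A) = I + A exactly.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(expm_series(A))                # [[1. 1.] [0. 1.]]
```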

Special types of matrices
In many areas in mathematics, matrices with certain structure arise. A few important examples are
 * Symmetric matrices are such that elements symmetric about the main diagonal (from the upper left to the lower right) are equal, that is, $$a_{i,j}=a_{j,i} \Leftrightarrow A^\mathrm{T} = A$$.
 * Skew-symmetric matrices are such that elements symmetric about the main diagonal are the negative of each other, that is, $$a_{i,j}=-a_{j,i} \Leftrightarrow A^\mathrm{T}=-A$$. In a skew-symmetric matrix, all diagonal elements are zero, that is, $$a_{i,i}=-a_{i,i}\Rightarrow a_{i,i}=0$$.
 * Hermitian (or self-adjoint) matrices are such that elements symmetric about the diagonal are each other's complex conjugates, that is, $$a_{i,j}=\overline{a}_{j,i} \Leftrightarrow A^\mathrm{H} = A$$, where $$\overline{z}$$ signifies the complex conjugate of a complex number $$z$$ and $$\,\! A^\mathrm{H}$$ the conjugate transpose of $$A$$.
 * Toeplitz matrices have constant entries along each descending diagonal, that is, $$\,\! a_{i,j}=a_{i+1,j+1}$$.
 * Stochastic matrices are square matrices whose rows are probability vectors; they are used to define Markov chains.
 * A square matrix $$A$$ is called idempotent if $$A^2=AA=A$$.

For a more extensive list see list of matrices.
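
Several of the defining conditions listed above are easy to test numerically; the sketch below checks a symmetric, a skew-symmetric and an idempotent example (the matrices are chosen only for illustration):

```python
import numpy as np

S = np.array([[1, 2],
              [2, 3]])           # symmetric: S^T = S
K = np.array([[0, 2],
              [-2, 0]])          # skew-symmetric: K^T = -K, zero diagonal
P = np.array([[1, 0],
              [0, 0]])           # idempotent: P @ P = P

print(np.array_equal(S.T, S))    # True
print(np.array_equal(K.T, -K))   # True
print(np.array_equal(P @ P, P))  # True
```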

Matrices in abstract algebra
If we start with a ring R, we can consider the set M(m,n, R) of all m by n matrices with entries in R. Addition and multiplication of these matrices can be defined as in the case of real or complex matrices (see above). The set M(n, R) of all square n by n matrices over R is a ring in its own right, isomorphic to the endomorphism ring of the left R-module Rn.

Similarly, if the entries are taken from a semiring S, matrix addition and multiplication can still be defined as usual. The set of all square n×n matrices over S is itself a semiring. Note that fast matrix multiplication algorithms such as the Strassen algorithm generally only apply to matrices over rings and will not work for matrices over semirings that are not rings.

If R is a commutative ring, then M(n, R) is a unitary associative algebra over R. It is then also meaningful to define the determinant of square matrices using the Leibniz formula; a matrix is invertible if and only if its determinant is invertible in R.

All statements mentioned in this article for real or complex matrices remain correct for matrices over an arbitrary field.

Matrices over a polynomial ring are important in the study of control theory.

History
The study of matrices is quite old. A 3-by-3 magic square appears in Chinese literature dating from as early as 650 BC.

Matrices have a long history of application in solving linear equations. An important Chinese text from between 300 BC and AD 200, The Nine Chapters on the Mathematical Art (Jiu Zhang Suan Shu), is the first example of the use of matrix methods to solve simultaneous equations. In the seventh chapter, "Too much and not enough," the concept of a determinant first appears almost 2000 years before its publication by the Japanese mathematician Seki Kowa in 1683 and the German mathematician Gottfried Leibniz in 1693.

Magic squares were known to Arab mathematicians, possibly as early as the 7th century, when the Arabs conquered northwestern parts of the Indian subcontinent and learned Indian mathematics and astronomy, including other aspects of combinatorial mathematics. It has also been suggested that the idea came via China. The first magic squares of order 5 and 6 appear in an encyclopedia from Baghdad circa 983 AD, the Encyclopedia of the Brethren of Purity (Rasa'il Ihkwan al-Safa); simpler magic squares were known to several earlier Arab mathematicians.

After the development of the theory of determinants by Seki Kowa and Leibniz in the late 17th century, Cramer developed the theory further in the 18th century, presenting Cramer's rule in 1750. Carl Friedrich Gauss and Wilhelm Jordan developed Gauss-Jordan elimination in the 1800s.

The term "matrix" was coined in 1848 by J. J. Sylvester. Cayley, Hamilton, Grassmann, Frobenius and von Neumann are among the famous mathematicians who have worked on matrix theory.

Olga Taussky-Todd (1906-1995) used matrix theory to investigate flutter, an aeroelastic phenomenon, during World War II.

Encryption
Matrices can be used to encrypt numerical data. Encryption is done by multiplying the data matrix by an invertible key matrix; decryption is done simply by multiplying the encrypted matrix by the inverse of the key.
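
A toy sketch of this scheme (for illustration only; it is not a secure cipher): the data are arranged as columns of a matrix, multiplied by an invertible key matrix, and recovered with the inverse key.

```python
import numpy as np

key = np.array([[2.0, 3.0],
                [1.0, 2.0]])       # invertible: its determinant is 1
data = np.array([[7.0, 4.0],
                 [2.0, 9.0]])      # each column is a block of data

encrypted = key @ data                        # encryption: multiply by the key
decrypted = np.linalg.inv(key) @ encrypted    # decryption: multiply by the inverse key

print(np.allclose(decrypted, data))   # True
```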

Computer graphics
4×4 transformation matrices are commonly used in computer graphics. The upper left 3×3 portion of a transformation matrix is composed of the new X, Y, and Z axes of the post-transformation coordinate space.
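
A small sketch under the column-vector convention (graphics APIs differ in their conventions): the columns of the upper left 3×3 block are the images of the X, Y, and Z axes, here a rotation about Z, and the fourth column carries a translation, which is the usual reason for working with 4×4 rather than 3×3 matrices.

```python
import numpy as np

theta = np.pi / 2                        # rotate 90 degrees about the Z axis
c, s = np.cos(theta), np.sin(theta)

# Column-vector convention: the columns of the 3x3 block are the new X, Y, Z axes;
# the fourth column holds a translation (here by 5 along X).
M = np.array([[c,   -s,  0.0, 5.0],
              [s,    c,  0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 0.0, 1.0])       # the point (1, 0, 0) in homogeneous coordinates
print(M @ p)                             # approximately [5. 1. 0. 1.]
```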