User:SirMeowMeow/sandbox/Matrices

Definition
For natural numbers $$r$$ and $$c$$, a matrix $$\Phi$$ of size $$r \times c$$ over a field $$\mathsf{F}$$ is a collection of elements $$\Phi_{ij} \in \mathsf{F}$$ indexed by $$(i, j) \in [r] \times [c]$$. Unless specified, the elements of a matrix are assumed to be scalars, but may also be elements from a ring, or something more general. The set of all matrices with elements from $$\mathsf{F}$$ and with indices in $$[r] \times [c]$$ is denoted $$\mathcal{M}(r, c : \mathsf{F})$$ or $$\mathsf{F}^{r, c}$$.

Notation
Let $$\Phi$$ be an $$r \times c$$ matrix whose elements are from $$\mathsf{F}$$. Any individual entry may be referenced as $$\Phi_{i j}$$ for the $$i$$-th row and the $$j$$-th column.

Rows
The $$i$$-th row vector of a matrix $$\Phi$$ is the tuple of elements which share the row index $$i$$, ordered by the column index $$j$$.

The column index can be omitted for brevity, writing $$\Phi_i$$ for the $$i$$-th row.

$$\operatorname{row} (\Phi) = [\Phi_{1*} \cdots \Phi_{r*}] $$

Columns
Let $$\operatorname{col}$$ be the function which maps a matrix to the tuple of its column vectors, where the $$j$$-th column $$\Phi_{*j}$$ is the tuple of elements which share the column index $$j$$, ordered by the row index.

$$\operatorname{col} (\Phi) = [\Phi_{*1} \cdots \Phi_{*c}] $$
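As a small illustration (not part of the original text), rows and columns can be extracted in NumPy; note NumPy indexes from 0 while the text indexes from 1.

```python
import numpy as np

# A 2 x 3 matrix over the reals.
Phi = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])

row_1 = Phi[0, :]   # the 1st row,    Phi_{1*}
col_2 = Phi[:, 1]   # the 2nd column, Phi_{*2}
```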

Addition of Matrices
Let $$\Phi, \Psi$$ be matrices from $$\mathsf{F}^{r \times c}$$. Then the sum of matrices is defined as entry-wise field addition:

$$(\Phi + \Psi)_{ij} = \Phi_{ij} + \Psi_{ij}$$
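A minimal NumPy sketch of entry-wise addition:

```python
import numpy as np

Phi = np.array([[1.0, 2.0],
                [3.0, 4.0]])
Psi = np.array([[5.0, 6.0],
                [7.0, 8.0]])

# Entry-wise field addition: (Phi + Psi)_{ij} = Phi_{ij} + Psi_{ij}
Sigma = Phi + Psi
```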

Scaling of Matrices
Let $$\Phi$$ be a matrix from $$\mathsf{F}^{r \times c}$$, and let $$\lambda \in \mathsf{F}$$. The scalar multiplication of matrices is defined entry-wise:

$$(\lambda \Phi)_{ij} = \lambda \, \Phi_{ij}$$
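And a corresponding sketch of entry-wise scaling:

```python
import numpy as np

Phi = np.array([[1.0, 2.0],
                [3.0, 4.0]])
lam = 2.0

# (lam * Phi)_{ij} = lam * Phi_{ij}
Scaled = lam * Phi
```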

Transposition
As a matrix is a collection of double-indexed scalars $$\Phi_{i j} \in \mathsf{F}$$, the transposition is a function $$\mathsf{F}^{r, c} \to \mathsf{F}^{c, r}$$ of the form $$(\Phi) \mapsto \Phi^\intercal$$, defined by swapping the positions of the two indices:

$$(\Phi^\intercal)_{ij} = \Phi_{ji}$$

Observations
The transposition of a product $$\Psi \Phi$$ is equal to the product of their transpositions, but in reverse order:

$$(\Psi \Phi)^\intercal = \Phi^\intercal \Psi^\intercal$$
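This identity can be checked numerically, as a sketch:

```python
import numpy as np

Psi = np.array([[1.0, 2.0, 0.0],
                [0.0, 1.0, 3.0]])      # 2 x 3
Phi = np.array([[1.0, 0.0],
                [2.0, 1.0],
                [0.0, 4.0]])           # 3 x 2

# (Psi @ Phi).T should equal Phi.T @ Psi.T
lhs = (Psi @ Phi).T
rhs = Phi.T @ Psi.T
```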

Matrix-Vector Product
Let $$ \Phi : \mathsf{F}^c \to \mathsf{F}^r $$ be a matrix, let $$ \vec{\alpha} = [\alpha_1 \cdots \alpha_c] \in \mathsf{F}^c $$, and let $$ \vec{\beta} = [\beta_1 \cdots \beta_r] \in \mathsf{F}^r $$.

A matrix-vector product is a mapping $$ \mathsf{F}^{r \times c} \times \mathsf{F}^c \to \mathsf{F}^r $$, such that:

$$\beta_i = \sum_{j = 1}^{c} \Phi_{ij} \, \alpha_j$$

Column Perspective
Let $$ \Phi : \mathsf{F}^c \to \mathsf{F}^r $$, and let $$ \vec{\alpha} = [\alpha_1 \cdots \alpha_c] \in \mathsf{F}^c $$. Then the matrix-vector product can be defined as the linear combination which pairs the scalar coefficients in $$ \vec{\alpha} $$ with the vectors in $$\operatorname{col} (\Phi)$$:

$$\Phi \vec{\alpha} = \sum_{j = 1}^{c} \alpha_j \, \Phi_{*j}$$
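The two perspectives, entry-wise sums and a linear combination of columns, give the same vector; a NumPy sketch:

```python
import numpy as np

Phi = np.array([[1.0, 2.0, 0.0],
                [0.0, 1.0, 3.0]])      # maps F^3 -> F^2
alpha = np.array([2.0, 1.0, 1.0])

# Entry-wise: beta_i = sum_j Phi_{ij} * alpha_j
beta = Phi @ alpha

# Column perspective: a linear combination of the columns of Phi
beta_cols = sum(alpha[j] * Phi[:, j] for j in range(Phi.shape[1]))
```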

Product of Matrices
Let $$\Phi : \mathsf{F}^m \to \mathsf{F}^n$$ and $$\Psi : \mathsf{F}^n \to \mathsf{F}^p$$ be matrices. Then the product $$\Psi \Phi : \mathsf{F}^m \to \mathsf{F}^p$$ is defined:

$$(\Psi \Phi)_{ij} = \sum_{k = 1}^{n} \Psi_{ik} \, \Phi_{kj}$$

for all natural pairs $$(i, j) \in [p] \times [m] $$.

Column Perspective
For the product $$\Psi \Phi$$, the $$i$$-th column is the image under $$\Psi$$ of the $$i$$-th column of $$\Phi$$:

$$(\Psi \Phi)_{*i} = \Psi \, \Phi_{*i}$$
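A quick numerical check of the column perspective:

```python
import numpy as np

Psi = np.array([[1.0, 0.0],
                [2.0, 1.0]])           # F^2 -> F^2
Phi = np.array([[1.0, 3.0, 0.0],
                [0.0, 1.0, 2.0]])      # F^3 -> F^2

P = Psi @ Phi
# The 1st column of Psi @ Phi equals Psi applied to the 1st column of Phi.
col0 = Psi @ Phi[:, 0]
```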

Rank and Image
The rank of a matrix $$ \Phi^{r \times c} $$ is the number of linearly independent column vectors. The image of a matrix is the span of its columns.

An injective matrix is any matrix of full column rank, i.e. $$\operatorname{rank}(\Phi) = c$$.

A surjective matrix is any matrix of full row rank, i.e. $$\operatorname{rank}(\Phi) = r$$.
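These conditions can be tested numerically, as a sketch:

```python
import numpy as np

Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])           # 3 x 2

rank = np.linalg.matrix_rank(Phi)
injective = (rank == Phi.shape[1])   # full column rank
surjective = (rank == Phi.shape[0])  # full row rank
```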

Kernel and Nullity
The kernel of a matrix $$ \Phi : \mathsf{F}^c \to \mathsf{F}^r $$ is the set of vectors in $$\mathsf{F}^c$$ which map to $$\vec{0}$$. The nullity is the dimension of the kernel; by the rank-nullity theorem, $$\operatorname{rank}(\Phi) + \operatorname{nullity}(\Phi) = c$$.
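The nullity can be computed from the rank via rank-nullity; a sketch:

```python
import numpy as np

Phi = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0]])      # rank 1, maps F^3 -> F^2

rank = np.linalg.matrix_rank(Phi)
nullity = Phi.shape[1] - rank        # rank-nullity theorem
```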

Identity Matrix
For any matrix $$\Phi: \mathsf{F}^c \to \mathsf{F}^r$$ there also exist matrices $$\mathrm{I}_r, \mathrm{I}_c$$ which act as the unique left and right identity elements under the product of maps: $$\mathrm{I}_r \Phi = \Phi \mathrm{I}_c = \Phi$$. Any matrix which fulfills this condition is known as an identity matrix, denoted $$\mathrm{I}$$ or with a subscript $$\mathrm{I}_n$$ for some dimension $$n$$. All identity matrices are square matrices whose values are defined for any index $$(i, j) \in [n] \times [n]$$:

$$(\mathrm{I}_n)_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
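A minimal check of the left and right identity behavior:

```python
import numpy as np

I3 = np.eye(3)                        # (I_n)_{ij} = 1 if i == j else 0
Phi = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])     # 2 x 3

left = np.eye(2) @ Phi                # I_r Phi = Phi
right = Phi @ I3                      # Phi I_c = Phi
```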

Inverse Matrix
A matrix $$\Phi: \mathsf{F}^n \to \mathsf{F}^n$$ is invertible if there exists a matrix $$\Phi^{-1} : \mathsf{F}^n \to \mathsf{F}^n$$ such that:

$$\Phi^{-1} \Phi = \Phi \Phi^{-1} = \mathrm{I}_n$$


 * An invertible matrix may also be known as a non-singular matrix, or as a linear isomorphism (a bijective linear map).
 * The set of all invertible $$n \times n$$ matrices over $$\mathsf{F}$$ is known as the general linear group $$\operatorname{GL}(n, \mathsf{F})$$.
 * All invertible matrices are full-rank square matrices, and thus the kernel is trivial.


 * For endomorphisms of finite-dimensional vector spaces, surjectivity, injectivity, and bijectivity are equivalent conditions.
 * The determinant of an invertible matrix is non-zero.
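A sketch of computing an inverse and checking the defining condition:

```python
import numpy as np

Phi = np.array([[2.0, 1.0],
                [1.0, 1.0]])          # det = 1, so invertible

Phi_inv = np.linalg.inv(Phi)
```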

Left Inverse
Although only square matrices are strictly invertible, an injective matrix has a left inverse. For a real matrix $$\Phi$$ of full column rank, $$\Phi^\intercal \Phi$$ is invertible, and $$(\Phi^\intercal \Phi)^{-1} \Phi^\intercal$$ is one such left inverse.
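A sketch of this construction over the reals:

```python
import numpy as np

Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])          # injective: full column rank

# One left inverse of an injective real matrix: (Phi^T Phi)^{-1} Phi^T
L = np.linalg.inv(Phi.T @ Phi) @ Phi.T
```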

Orthonormal Matrix
An orthonormal matrix $$\Phi : \mathsf{F}^n \to \mathsf{F}^n$$ is an invertible matrix which preserves inner products, and hence norms. Equivalently, it is a matrix whose transpose is its multiplicative inverse: $$\Phi^\intercal = \Phi^{-1}$$. The set of all $$n \times n$$ orthogonal matrices over $$\mathsf{F}$$ forms the orthogonal group $$\mathrm{O}(n, \mathsf{F})$$. The subset of $$\mathrm{O}(n, \mathsf{F})$$ with determinant $$+1$$ is known as the special orthogonal group $$\mathrm{S}\mathrm{O}(n, \mathsf{F})$$, and all matrices from this group are rotation matrices.

Observations
 * Let $$\vec{v}_1, \vec{v}_2 \in \mathsf{F}^n$$. If $$\Phi : \mathsf{F}^n \to \mathsf{F}^n$$ is orthonormal then $$\langle \vec{v}_1, \vec{v}_2 \rangle = \langle \Phi \vec{v}_1, \Phi \vec{v}_2 \rangle$$.
 * The determinant of $$\Phi$$ is either $$+1$$ or $$-1$$.
 * If $$\Phi$$ is orthonormal then so is its transpose.
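These observations can be checked with a rotation matrix, as a sketch:

```python
import numpy as np

theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation, in SO(2)

v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 1.0])
```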

Gram-Schmidt Process
Given the columns of a full-rank matrix, the Gram-Schmidt process generates an orthonormal basis with the same span.
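A minimal implementation sketch of the classical Gram-Schmidt process:

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of a full column-rank matrix A."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for k in range(j):
            # Subtract the projection onto each earlier orthonormal column.
            v -= (Q[:, k] @ A[:, j]) * Q[:, k]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

Q = gram_schmidt(np.array([[1.0, 1.0],
                           [0.0, 1.0],
                           [1.0, 0.0]]))
```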

Trace of a Square Matrix
Let $$\Phi, \Psi : \mathsf{F}^n \to \mathsf{F}^n$$. The trace of a square matrix is the sum of its diagonal entries, $$\operatorname{tr}(\Phi) = \sum_{i=1}^{n} \Phi_{ii}$$. The trace of a sum of matrices is the sum of their individual traces, and the trace of a product is invariant under cyclic permutation: $$\operatorname{tr}(\Psi \Phi) = \operatorname{tr}(\Phi \Psi)$$.
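A quick numerical check of the cyclic property:

```python
import numpy as np

Phi = np.array([[1.0, 2.0],
                [3.0, 4.0]])
Psi = np.array([[0.0, 1.0],
                [5.0, 2.0]])

# tr(Psi Phi) = tr(Phi Psi), even though Psi @ Phi != Phi @ Psi in general
t1 = np.trace(Psi @ Phi)
t2 = np.trace(Phi @ Psi)
```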

Rank Factorization (CR)
Rank factorization, or the column-row (CR) form of a matrix $$\Phi: \mathsf{F}^c \to \mathsf{F}^r$$, decomposes $$\Phi = \mathrm{C} \mathrm{R}$$, where $$\mathrm{C}$$ consists of linearly independent columns of $$\Phi$$, and $$\mathrm{R}$$ consists of linearly independent rows, so that both factors have the rank of $$\Phi$$. This factorization is motivated mostly by pedagogy and demonstrates basic properties of matrix multiplication.
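As a sketch, a rank-1 matrix factors into a single independent column times a single independent row:

```python
import numpy as np

# A rank-1 example: every column is a multiple of the first.
Phi = np.array([[1.0, 2.0],
                [2.0, 4.0],
                [3.0, 6.0]])

C = Phi[:, [0]]             # one independent column (3 x 1)
R = np.array([[1.0, 2.0]])  # one independent row    (1 x 2)
```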

Singular Value Decomposition (SVD)
Any real or complex matrix of size $$r \times c$$ may be decomposed into the triple product $$\mathrm{U} \mathrm{\Sigma} \mathrm{V}^*$$, where $$\mathrm{U}$$ is $$r \times r$$ and orthonormal (unitary in the complex case), $$\mathrm{\Sigma}$$ is an $$r \times c$$ diagonal matrix with non-negative real entries (the singular values), and $$\mathrm{V}^*$$ is $$c \times c$$ and orthonormal.

$$\Phi = \begin{bmatrix} 10 & 8 & 0 & 0 & 0 & 0 \\ 7 & 6 & 8 & 0 & 0 & 0 \\ 9 & 8 & 7 & 4 & 3 & 0 \\ 7 & 6 & 8 & 0 & 0 & 0 \\ 0 & 0 & 0 & 8 & 9 & 8 \\ \end{bmatrix}$$

For $$\mathrm{U}$$ we have a relationship between rows and "concepts." For $$\mathrm{V}^*$$ we have a relationship between columns and concepts. For $$\mathrm{\Sigma}$$ we have a diagonal matrix of singular values, representing the strength of each concept.
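The decomposition of the example matrix above can be computed and verified numerically; a sketch using the reduced (thin) SVD:

```python
import numpy as np

Phi = np.array([[10, 8, 0, 0, 0, 0],
                [ 7, 6, 8, 0, 0, 0],
                [ 9, 8, 7, 4, 3, 0],
                [ 7, 6, 8, 0, 0, 0],
                [ 0, 0, 0, 8, 9, 8]], dtype=float)

# s holds the singular values in decreasing order;
# U @ diag(s) @ Vt reconstructs Phi.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
recon = U @ np.diag(s) @ Vt
```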

__NOINDEX__