User:Laoer22/Invertible matrix

Lead
In the Wikipedia article Invertible matrix, many technical terms lack explanation and examples, which makes the text too difficult for general readers to understand. The examples section does not connect the examples to the properties, the properties section is disorganized, and the theorems are presented without a clear order. In addition, the standard way of computing the inverse of a matrix is Gaussian elimination, but the article's treatment of it does not convey its importance; a worked example should be added so that not only mathematical audiences but also laypeople can compute an inverse using Gaussian elimination.

The order and explanations in Invertible Matrix theorems
The theorems should be ordered so that the audience can understand them more easily: each theorem should be explained in terms of an earlier one, making the connections between them clear. The first theorem should follow directly from the definition, "There is an n-by-n matrix B such that AB = In = BA," together with its expansions in terms of left and right inverses, nonsingularity, and row- and column-equivalence to the identity matrix. Then the criteria for finding whether a matrix is invertible should be taught from easy to hard, so the rank proposition should come first: if the rank of A equals n, then A is invertible; conversely, when the rank of A is less than n, no matrix multiplied with A can produce the identity I. From the rank proposition, we can then obtain the propositions about linear independence, span, column space, and basis. Finally, we present the second way to test whether a matrix is invertible: if the determinant of a matrix is not 0, then it is invertible. From the determinant criterion it follows that an invertible matrix has no eigenvalue equal to zero.
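The chain of equivalent conditions described above (full rank, nonzero determinant, no zero eigenvalue) can be checked numerically. A minimal sketch using NumPy; the matrix A here is an arbitrary illustrative choice, not one from the article:

```python
import numpy as np

# An arbitrary 2-by-2 matrix, chosen to be invertible (det = 5).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = A.shape[0]

# The three equivalent invertibility criteria discussed above:
full_rank = np.linalg.matrix_rank(A) == n                        # rank(A) = n
nonzero_det = not np.isclose(np.linalg.det(A), 0)                # det(A) != 0
nonzero_eigs = not np.any(np.isclose(np.linalg.eigvals(A), 0))   # no eigenvalue is 0

print(full_rank, nonzero_det, nonzero_eigs)  # all three agree: True True True
```

For any square matrix, these three tests always agree, which is exactly why the theorems can be ordered as a chain.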

Examples of theorems
The current example only shows an invertible matrix whose determinant is not zero. Although we know that if the rank of A is less than n, then its determinant is zero, the article should also show some examples using the rank property and some counterexamples of non-invertible matrices. We do not need examples for the other theorems, because all of them rest on the rank and determinant properties. An example of a matrix with rank n − 1, which is therefore non-invertible:
 * $$\mathbf{A} = \begin{pmatrix} 2 & 4\\ 2 & 4 \end{pmatrix} .$$

We can easily see that the rank of this 2×2 matrix is one, which is n − 1 ≠ n, so it is a non-invertible matrix. As an example of a non-invertible, or singular, matrix detected by the determinant criterion, consider the matrix
 * $$\mathbf{C} = \begin{pmatrix} -1 & \tfrac{3}{2} \\ \tfrac{2}{3} & -1 \end{pmatrix} .$$

The determinant of $$\mathbf{C}$$ is 0; a zero determinant is a necessary and sufficient condition for a matrix to be non-invertible.
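Both examples can be verified numerically. A short NumPy sketch using exactly the matrices A and C given above:

```python
import numpy as np

# A has two identical rows, so its rank is 1 = n - 1.
A = np.array([[2.0, 4.0],
              [2.0, 4.0]])

# C is the singular matrix from the determinant example.
C = np.array([[-1.0, 1.5],
              [2.0 / 3.0, -1.0]])

print(np.linalg.matrix_rank(A))         # 1, less than n = 2, so A is singular
print(np.isclose(np.linalg.det(C), 0))  # True: det(C) = 0, so C is singular
```

This shows the two tests in action: A fails the rank criterion, while C fails the determinant criterion.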

Gaussian Elimination
Gaussian elimination is the most useful and easiest way to obtain the inverse of a matrix, so it should be explained carefully with details and examples. Gaussian elimination applies operations to the rows (or columns) of a matrix, changing its entries in the same way one solves a system of linear equations in two unknowns. We augment the matrix with the identity, use row operations to turn the left side into the identity, and the right side then becomes the inverse of the matrix.

Take the matrix $$\mathbf{A} = \begin{pmatrix} -1 & \tfrac{3}{2} \\ 1 & -1 \end{pmatrix} .$$ We first form the augmented matrix $$\left(\begin{array}{cc|cc} -1 & \tfrac{3}{2} & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array}\right),$$ with the original matrix on the left side and the identity matrix on the right side. Adding row 1 to row 2 gives $$\left(\begin{array}{cc|cc} -1 & \tfrac{3}{2} & 1 & 0 \\ 0 & \tfrac{1}{2} & 1 & 1 \end{array}\right).$$ Subtracting 3 times row 2 from row 1 gives $$\left(\begin{array}{cc|cc} -1 & 0 & -2 & -3 \\ 0 & \tfrac{1}{2} & 1 & 1 \end{array}\right).$$ Finally, multiplying row 1 by −1 and row 2 by 2 produces the identity matrix on the left, so the right side is the inverse: $$\mathbf{A}^{-1} = \begin{pmatrix} 2 & 3 \\ 2 & 2 \end{pmatrix} .$$
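The elimination steps above can be sketched in code. A small Gauss–Jordan routine in plain Python (written for this draft, not taken from the article), applied to the matrix A from the example:

```python
def invert(matrix):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(matrix)
    # Build the augmented matrix [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate this column's entries in all other rows.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in aug]

A = [[-1.0, 1.5],
     [1.0, -1.0]]
print(invert(A))  # [[2.0, 3.0], [2.0, 2.0]]
```

The routine performs the same three kinds of row operations used in the worked example (swap, scale, add a multiple of one row to another), and on A it reproduces the inverse found by hand.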

Regression/least squares
Regression by least squares can be used for data prediction in finance, physics, biology, and environmental science. We first analyze the data to separate the input variables from the output value, conventionally written x and y. Then we build a matrix A whose columns hold the variables x1, x2, ..., xn, and a vector b holding the outputs y. Multiplying the transpose of A by A gives a square matrix C = AᵀA, and multiplying the inverse of C by Aᵀb gives the coefficient vector $$x = (A^\mathsf{T} A)^{-1} A^\mathsf{T} b,$$ which describes the relation between the variables and the output. Thus, if we want to predict an output, we simply substitute new variables into A'x = b' to obtain the predicted output b'.
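The normal-equation recipe above can be sketched in NumPy. The data here are made up for illustration: four noise-free points generated by the line y = 2x + 1, so the recovered coefficients should be intercept 1 and slope 2:

```python
import numpy as np

# Illustrative data: outputs generated by y = 2*x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Design matrix A: a column of ones (intercept) and the variable x.
A = np.column_stack([np.ones_like(x), x])

# Normal equations: coefficients = (A^T A)^(-1) A^T y.
C = A.T @ A
coeffs = np.linalg.inv(C) @ A.T @ y
print(coeffs)  # approximately [1., 2.]: intercept 1, slope 2

# Prediction for a new input x' = 4: b' = A' @ coeffs.
print(np.array([1.0, 4.0]) @ coeffs)  # approximately 9.0
```

In practice `np.linalg.lstsq(A, y, rcond=None)` is preferred over forming (AᵀA)⁻¹ explicitly, since it is more numerically stable; the explicit inverse is shown here only because it mirrors the description in the text.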