User:Burningbrand/sandbox

Title: Matrix (mathematics)

A matrix (plural: matrices) is a table of numbers or other objects that can be added and multiplied and that satisfies elementary rules of arithmetic. By collecting numbers in a matrix and then adding and multiplying matrices, along with other operations, one can create surprising new mathematical structures. Mathematics is, at its core, about structure and new possibilities.

A matrix (plural: matrices) is, in mathematics, a very useful way of arranging a collection of "numbers" in a table. The entries can come from an arbitrary ring R, but they are typically real numbers (or possibly complex numbers). Where a vector v in R^n can be viewed as a list v = (v_1, v_2, ..., v_n) of numbers in R, a matrix is simply a rectangular table of numbers. An example is the matrix:

H = \begin{pmatrix}3&8&2\\4&9&7\end{pmatrix}. Where a vector has a dimension, corresponding to the length of its list of numbers, a matrix has both a number of rows and a number of columns. One speaks of an m×n matrix, where m is the number of rows and n is the number of columns. Our matrix H above is thus an example of a 2×3 matrix. It is read "m-times-n matrix" or "m-by-n matrix".
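As a minimal sketch (plain Python, no libraries), the 2×3 matrix H above can be represented as a nested list, with the shape read off as the number of rows and the length of one row:

```python
# The 2x3 matrix H from the text, one inner list per row.
H = [
    [3, 8, 2],
    [4, 9, 7],
]

rows = len(H)     # m = number of rows    -> 2
cols = len(H[0])  # n = number of columns -> 3
print(f"{rows}x{cols} matrix")
```

This convention (a list of rows) is also what libraries such as NumPy use when a nested list is passed to their array constructors.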

Just as one can "add vectors together" and "multiply vectors by numbers", matrix addition and scalar multiplication are defined on matrices. On matrices, however, there is also a way to "multiply" two matrices together, called matrix multiplication. All of these concepts are described below.
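The three operations can be sketched directly in plain Python (a pedagogical sketch; a real program would use a linear algebra library):

```python
def mat_add(A, B):
    # Element-wise sum; A and B must have the same shape.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    # Multiply every entry of A by the scalar c.
    return [[c * a for a in row] for row in A]

def mat_mul(A, B):
    # An (m x n) matrix times an (n x p) matrix gives an (m x p) matrix;
    # entry (i, j) is the dot product of row i of A and column j of B.
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
print(mat_mul(A, B))     # [[19, 22], [43, 50]]
```

Note how `mat_mul` only makes sense when the number of columns of A equals the number of rows of B, which is the compatibility rule discussed further below.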

In Danish, the word matrix is inflected: en matrix, matricen, flere matricer, alle matricerne. It should not be confused with the word matrice, which is inflected the same way in the plural: en matrice, matricen, flere matricer, alle matricerne.

The individual items in a matrix are called its elements or entries.[4] Provided that they are the same size (have the same number of rows and the same number of columns), two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after the rotation. The product of two transformation matrices is a matrix that represents the composition of the two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable (along with other information) from the matrix's eigenvalues and eigenvectors.
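A small sketch of the rotation example (in two dimensions, for exact integer arithmetic): the matrix R below rotates the plane 90° counterclockwise, and its nonzero determinant confirms that the rotation is invertible.

```python
def mat_vec(M, v):
    # Apply the linear transformation M to the column vector v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Rotation by 90 degrees counterclockwise in the plane: (x, y) -> (-y, x).
R = [[0, -1],
     [1,  0]]

v = [1, 0]                # a point on the x-axis
w = mat_vec(R, v)         # [0, 1]: rotated onto the y-axis
print(w)

# det(R) = 0*0 - (-1)*1 = 1, which is nonzero, so R has an inverse
# (here: rotation by 90 degrees clockwise).
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
print(det)
```

Applying R twice composes the two rotations into a rotation by 180°, matching the statement that a product of transformation matrices represents the composition of the transformations.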

Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies. In computer graphics, they are used to project a 3-dimensional image onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[5] Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
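As a toy illustration of a stochastic matrix (this is only a two-state sketch, not the PageRank algorithm itself): each row of P below holds transition probabilities and sums to 1, and repeatedly multiplying a probability distribution by P drives it toward the chain's stationary distribution.

```python
def step(pi, P):
    # One step of the chain: the new distribution is the row vector pi times P.
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# A row-stochastic matrix: entry (i, j) is the probability of moving
# from state i to state j, so every row sums to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [1.0, 0.0]           # start with certainty in state 0
for _ in range(100):      # repeated multiplication converges
    pi = step(pi, P)
print(pi)                 # close to the stationary distribution (5/6, 1/6)
```

The same power-iteration idea, applied to a much larger stochastic matrix built from the web's link structure, underlies the PageRank computation mentioned above.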

A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other applications. Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function.
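The derivative-operator example can be made concrete by truncating the infinite matrix to polynomials of bounded degree (a finite sketch of the infinite object): differentiation sends the coefficient of x^(k+1) to (k+1) times that coefficient in position k.

```python
def derivative_matrix(n):
    # Truncated matrix of d/dx on coefficient vectors (c_0, ..., c_{n-1})
    # of polynomials of degree < n: entry (k, k+1) is k+1, all else 0.
    D = [[0] * n for _ in range(n)]
    for k in range(n - 1):
        D[k][k + 1] = k + 1
    return D

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Coefficients of p(x) = 1 + 2x + 3x^2.
coeffs = [1, 2, 3]
D = derivative_matrix(3)
print(mat_vec(D, coeffs))   # [2, 6, 0]: the coefficients of p'(x) = 2 + 6x
```

Note that D is near-diagonal (all nonzero entries sit just above the diagonal), so it is exactly the kind of sparse structure the paragraph above says specialized algorithms exploit.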