User:JasonWiki/Drafts/201710/Vector Space

Independent set of vectors
The vectors in a finite subset $S = \{v_1, v_2, \dots, v_k\}$ of a vector space $V$ are said to be linearly independent if the equation

 * $$a_1v_1 + a_2v_2 + \dots + a_kv_k = 0$$

has only the trivial solution $a_1 = a_2 = \dots = a_k = 0$.
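This criterion can be sketched numerically (the vectors below are made up for illustration, not from the text): a finite set of vectors is linearly independent exactly when the matrix having them as columns has full column rank, since a rank deficiency corresponds to a nontrivial solution of the equation above.

```python
import numpy as np

# Illustrative vectors in R^3 (chosen for this example).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])  # v3 = v1 + v2, so {v1, v2, v3} is dependent

# Stack the vectors as columns: a1*v1 + a2*v2 + a3*v3 = 0 has only the
# trivial solution iff this matrix has full column rank.
A = np.column_stack([v1, v2, v3])
independent = bool(np.linalg.matrix_rank(A) == A.shape[1])  # False here

# Dropping the redundant vector restores independence.
independent_pair = bool(np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2)
```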

Bases allow one to represent vectors by a sequence of scalars called coordinates or components. A basis is a (finite or infinite) set $B = \{b_{i}\}_{i \in I}$ of vectors $b_{i}$, for convenience often indexed by some index set $I$, that spans the whole space and is linearly independent. "Spanning the whole space" means that any vector $v$ can be expressed as a finite sum (called a linear combination) of the basis elements:

 * $$v = a_1 b_{i_1} + a_2 b_{i_2} + \dots + a_k b_{i_k}$$

where the $a_{k}$ are scalars, called the coordinates (or the components) of the vector $v$ with respect to the basis $B$, and $b_{i_{k}}$ $(k = 1, \dots, n)$ are elements of $B$. Linear independence means that the coordinates $a_{k}$ are uniquely determined for any vector in the vector space.
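The uniqueness of the coordinates can be seen concretely: in a finite-dimensional space, finding the coordinates of $v$ with respect to a basis amounts to solving an invertible linear system. A NumPy sketch with an illustrative (non-standard) basis of $R^{2}$:

```python
import numpy as np

# An illustrative basis of R^2 (not the standard basis).
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])  # basis vectors as columns

v = np.array([3.0, 1.0])

# The coordinates a satisfy B @ a = v. Because B is a basis, the matrix
# is invertible and the solution (the coordinate vector of v) is unique.
a = np.linalg.solve(B, v)  # a = [2.0, 1.0], i.e. v = 2*b1 + 1*b2
```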

For example, the coordinate vectors $e_{1} = (1, 0, \dots, 0)$, $e_{2} = (0, 1, 0, \dots, 0)$, to $e_{n} = (0, 0, \dots, 0, 1)$ form a basis of $F^{n}$, called the standard basis, since any vector $(x_{1}, x_{2}, \dots, x_{n})$ can be uniquely expressed as a linear combination of these vectors:

 * $$(x_{1}, x_{2}, \dots, x_{n}) = x_{1}(1, 0, \dots, 0) + x_{2}(0, 1, 0, \dots, 0) + \dots + x_{n}(0, \dots, 0, 1) = x_{1}e_{1} + x_{2}e_{2} + \dots + x_{n}e_{n}.$$

The corresponding coordinates $x_{1}, x_{2}, \dots, x_{n}$ are just the Cartesian coordinates of the vector.
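For the standard basis this reduces to reading off the entries, as a small NumPy sketch shows (with $n = 4$ and the sample vector chosen arbitrarily):

```python
import numpy as np

n = 4          # arbitrary illustrative dimension
E = np.eye(n)  # columns are the standard basis vectors e_1, ..., e_n

x = np.array([2.0, -1.0, 0.5, 3.0])

# x = x_1*e_1 + ... + x_n*e_n: the coordinates with respect to the
# standard basis are the Cartesian coordinates themselves.
recombined = sum(x[k] * E[:, k] for k in range(n))
```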

Every vector space has a basis. This follows from Zorn's lemma, an equivalent formulation of the Axiom of Choice. Given the other axioms of Zermelo–Fraenkel set theory, the existence of bases is equivalent to the axiom of choice. The ultrafilter lemma, which is weaker than the axiom of choice, implies that all bases of a given vector space have the same number of elements, or cardinality (cf. Dimension theorem for vector spaces). It is called the dimension of the vector space, denoted by dim V. If the space is spanned by finitely many vectors, the above statements can be proven without such fundamental input from set theory.

The dimension of the coordinate space $F^{n}$ is $n$, by the basis exhibited above. The dimension of the polynomial ring $F[x]$ introduced above is countably infinite; a basis is given by $1$, $x$, $x^{2}$, $\dots$ A fortiori, the dimension of more general function spaces, such as the space of functions on some (bounded or unbounded) interval, is infinite. Under suitable regularity assumptions on the coefficients involved, the dimension of the solution space of a homogeneous ordinary differential equation equals the degree of the equation. For example, the solution space for the above equation is generated by $e^{-x}$ and $xe^{-x}$. These two functions are linearly independent over $R$, so the dimension of this space is two, as is the degree of the equation.
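The equation referred to above does not appear in this draft; the stated solutions $e^{-x}$ and $xe^{-x}$ are consistent with $f'' + 2f' + f = 0$, which is what the following numerical sketch assumes. It checks that both functions satisfy that equation and that their sampled values span a rank-two space (linear independence):

```python
import numpy as np

xs = np.linspace(0.0, 2.0, 50)

# f1 = e^{-x} and its first and second derivatives.
f1 = np.exp(-xs)
d1 = -np.exp(-xs)
dd1 = np.exp(-xs)

# f2 = x e^{-x} and its first and second derivatives.
f2 = xs * np.exp(-xs)
d2 = np.exp(-xs) - xs * np.exp(-xs)
dd2 = -2.0 * np.exp(-xs) + xs * np.exp(-xs)

# Residuals of f'' + 2 f' + f (the assumed equation) vanish identically.
residual1 = dd1 + 2.0 * d1 + f1
residual2 = dd2 + 2.0 * d2 + f2

# Linear independence over R: the two sampled functions have rank 2.
rank = int(np.linalg.matrix_rank(np.column_stack([f1, f2])))
```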

A field extension over the rationals $Q$, such as $Q(\alpha)$, can be thought of as a vector space over $Q$ (by defining vector addition as field addition, defining scalar multiplication as field multiplication by elements of $Q$, and otherwise ignoring the field multiplication). The dimension (or degree) of the field extension $Q(\alpha)$ over $Q$ depends on $\alpha$. If $\alpha$ satisfies some polynomial equation $$q_n \alpha^n + q_{n - 1} \alpha^{n - 1} + \ldots + q_0 = 0$$ with rational coefficients $q_n, \dots, q_0$ (in other words, if $\alpha$ is algebraic), the dimension is finite. More precisely, it equals the degree of the minimal polynomial having $\alpha$ as a root. For example, the complex numbers C are a two-dimensional real vector space, generated by $1$ and the imaginary unit $i$. The latter satisfies $i^{2} + 1 = 0$, an equation of degree two. Thus, C is a two-dimensional R-vector space (and, as any field, one-dimensional as a vector space over itself, C). If $\alpha$ is not algebraic, the dimension of $Q(\alpha)$ over $Q$ is infinite. For instance, for $\alpha = \pi$ there is no such equation; in other words, $\pi$ is transcendental.
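The C-over-R example can be made concrete by identifying $a + bi$ with the coordinate pair $(a, b)$ with respect to the basis $\{1, i\}$. A small Python sketch (the sample numbers are arbitrary):

```python
import numpy as np

def to_coords(z: complex) -> np.ndarray:
    """Coordinates of z = a + b*i with respect to the real basis {1, i}."""
    return np.array([z.real, z.imag])

z, w = 2 + 3j, 1 - 1j  # arbitrary sample elements of C
s = 0.5                # a real scalar

# Addition and real scalar multiplication act coordinatewise, so the
# identification respects the real vector-space structure of C.
add_ok = np.allclose(to_coords(z + w), to_coords(z) + to_coords(w))
scale_ok = np.allclose(to_coords(s * z), s * to_coords(z))

# The basis element i satisfies the degree-two equation i^2 + 1 = 0.
i_equation = 1j * 1j + 1
```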