Real coordinate space



In mathematics, the real coordinate space or real coordinate n-space, of dimension $n$, denoted $R^{n}$ or $\mathbb{R}^n$, is the set of all ordered $n$-tuples of real numbers, that is, the set of all sequences of $n$ real numbers, also known as coordinate vectors. Special cases are called the real line $R^{1}$, the real coordinate plane $R^{2}$, and the real coordinate three-dimensional space $R^{3}$. With component-wise addition and scalar multiplication, it is a real vector space.

The coordinates over any basis of the elements of a real vector space form a real coordinate space of the same dimension as that of the vector space. Similarly, the Cartesian coordinates of the points of a Euclidean space of dimension $n$, $E^{n}$ (Euclidean line, $E$; Euclidean plane, $E^{2}$; Euclidean three-dimensional space, $E^{3}$) form a real coordinate space of dimension $n$.

These one-to-one correspondences between vectors, points and coordinate vectors explain the names coordinate space and coordinate vector. They allow using geometric terms and methods for studying real coordinate spaces, and, conversely, using methods of calculus in geometry. This approach to geometry was introduced by René Descartes in the 17th century. It is widely used, as it allows locating points in Euclidean spaces and computing with them.

Definition and structures
For any natural number $n$, the set $R^{n}$ consists of all $n$-tuples of real numbers ($R$). It is called the "$n$-dimensional real space" or the "real $n$-space".

An element of $R^{n}$ is thus an $n$-tuple, and is written $$(x_1, x_2, \ldots, x_n)$$ where each $x_{i}$ is a real number. So, in multivariable calculus, the domain of a function of several real variables and the codomain of a real vector-valued function are subsets of $R^{n}$ for some $n$.

The real $n$-space has several further properties, notably:
 * With componentwise addition and scalar multiplication, it is a real vector space. Every $n$-dimensional real vector space is isomorphic to it.
 * With the dot product (sum of the term by term product of the components), it is an inner product space. Every $n$-dimensional real inner product space is isomorphic to it.
 * Like every inner product space, it is a topological space, and a topological vector space.
 * It is a Euclidean space and a real affine space, and every Euclidean or affine space is isomorphic to it.
 * It is an analytic manifold, and can be considered as the prototype of all manifolds, as, by definition, a manifold is, near each point, isomorphic to an open subset of $R^{n}$.
 * It is an algebraic variety, and every real algebraic variety is a subset of $R^{n}$.

These properties and structures of $R^{n}$ make it fundamental in almost all areas of mathematics and their application domains, such as statistics, probability theory, and many parts of physics.

The domain of a function of several variables
Any function $f(x_{1}, x_{2}, ..., x_{n})$ of $n$ real variables can be considered as a function on $R^{n}$ (that is, with $R^{n}$ as its domain). The use of the real $n$-space, instead of several variables considered separately, can simplify notation and suggest reasonable definitions. Consider, for $n = 2$, a function composition of the following form: $$ F(t) = f(g_1(t),g_2(t)),$$ where functions $g_{1}$ and $g_{2}$ are continuous. If
 * $∀x_{1} ∈ R : x_{2} ↦ f(x_{1}, x_{2})$ is continuous (in $x_{2}$)
 * $∀x_{2} ∈ R : x_{1} ↦ f(x_{1}, x_{2})$ is continuous (in $x_{1}$)
then $F$ is not necessarily continuous. Continuity is a stronger condition: the continuity of $f$ in the natural $R^{2}$ topology (discussed below), also called multivariable continuity, is sufficient for continuity of the composition $F$.
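A standard illustration of this gap (the specific function is an illustrative assumption, not taken from the text) is $f(x, y) = xy/(x^2 + y^2)$ with $f(0, 0) = 0$: each partial function is continuous, yet the composition along the diagonal is not.

```python
# Separate continuity does not imply joint continuity on R^2.
# f(x, y) = x*y / (x**2 + y**2), with f(0, 0) = 0, is the classic example.

def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x**2 + y**2)

# Along either axis the partial functions are identically zero,
# hence continuous at 0:
print([f(t, 0.0) for t in (1.0, 0.1, 0.01)])   # [0.0, 0.0, 0.0]
print([f(0.0, t) for t in (1.0, 0.1, 0.01)])   # [0.0, 0.0, 0.0]

# But along the diagonal g1(t) = g2(t) = t, the composition
# F(t) = f(t, t) is constantly 1/2 for t != 0, so F is not
# continuous at t = 0 even though g1 and g2 are continuous.
print([f(t, t) for t in (1.0, 0.1, 0.01)])     # [0.5, 0.5, 0.5]
```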

Vector space
The coordinate space $R^{n}$ forms an $n$-dimensional vector space over the field of real numbers with the addition of the structure of linearity, and is often still denoted $R^{n}$. The operations on $R^{n}$ as a vector space are typically defined by $$\mathbf x + \mathbf y = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n)$$ $$\alpha \mathbf x = (\alpha x_1, \alpha x_2, \ldots, \alpha x_n).$$ The zero vector is given by $$\mathbf 0 = (0, 0, \ldots, 0)$$ and the additive inverse of the vector $x$ is given by $$-\mathbf x = (-x_1, -x_2, \ldots, -x_n).$$
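The componentwise operations above can be sketched with plain tuples (a minimal illustration; the helper names are assumptions, not from the text):

```python
def add(x, y):
    """Componentwise addition in R^n."""
    return tuple(a + b for a, b in zip(x, y))

def scale(alpha, x):
    """Scalar multiplication in R^n."""
    return tuple(alpha * a for a in x)

def neg(x):
    """Additive inverse -x."""
    return scale(-1, x)

x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)
print(add(x, y))        # (5.0, 7.0, 9.0)
print(scale(2.0, x))    # (2.0, 4.0, 6.0)
print(add(x, neg(x)))   # (0.0, 0.0, 0.0) -- the zero vector
```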

This structure is important because any $n$-dimensional real vector space is isomorphic to the vector space $R^{n}$.

Matrix notation
In standard matrix notation, each element of $R^{n}$ is typically written as a column vector $$\mathbf x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$ and sometimes as a row vector: $$\mathbf x = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}.$$

The coordinate space $R^{n}$ may then be interpreted as the space of all $n × 1$ column vectors, or all $1 × n$ row vectors with the ordinary matrix operations of addition and scalar multiplication.

Linear transformations from $R^{n}$ to $R^{m}$ may then be written as $m × n$ matrices which act on the elements of $R^{n}$ via left multiplication (when the elements of $R^{n}$ are column vectors) and on elements of $R^{m}$ via right multiplication (when they are row vectors). The formula for left multiplication, a special case of matrix multiplication, is: $$(A{\mathbf x})_k = \sum_{l=1}^n A_{kl} x_l$$
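The left-multiplication formula transcribes directly into code; the sketch below (illustrative names) applies an $m × n$ matrix to a column vector in $R^{n}$:

```python
def matvec(A, x):
    """(A x)_k = sum_l A[k][l] * x[l]: an m-by-n matrix A acting on
    x in R^n, producing a vector in R^m."""
    return tuple(sum(row[l] * x[l] for l in range(len(x))) for row in A)

# A 2x3 matrix maps R^3 to R^2.
A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, -1.0]]
x = (3.0, 4.0, 5.0)
print(matvec(A, x))  # (13.0, -1.0)
```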

Any linear transformation is a continuous function (see below). Also, a matrix defines an open map from $R^{n}$ to $R^{m}$ if and only if the rank of the matrix equals $m$.

Standard basis
The coordinate space $R^{n}$ comes with a standard basis: $$\begin{align} \mathbf e_1 & = (1, 0, \ldots, 0) \\ \mathbf e_2 & = (0, 1, \ldots, 0) \\ & {}\;\; \vdots \\ \mathbf e_n & = (0, 0, \ldots, 1) \end{align}$$

To see that this is a basis, note that an arbitrary vector in $R^{n}$ can be written uniquely in the form $$\mathbf x = \sum_{i=1}^n x_i \mathbf{e}_i.$$
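A quick sketch of the standard basis and the decomposition $\mathbf x = \sum_i x_i \mathbf e_i$ (the helper names are illustrative):

```python
def standard_basis(n):
    """The vectors e_1, ..., e_n of R^n."""
    return [tuple(1.0 if j == i else 0.0 for j in range(n)) for i in range(n)]

def from_coordinates(x):
    """Reassemble x = sum_i x_i * e_i componentwise."""
    es = standard_basis(len(x))
    out = [0.0] * len(x)
    for xi, e in zip(x, es):
        for j, ej in enumerate(e):
            out[j] += xi * ej
    return tuple(out)

x = (2.0, -3.0, 7.0)
print(standard_basis(3)[0])   # (1.0, 0.0, 0.0)
print(from_coordinates(x))    # (2.0, -3.0, 7.0)
```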

Orientation
The fact that real numbers, unlike many other fields, constitute an ordered field yields an orientation structure on $R^{n}$. Any full-rank linear map of $R^{n}$ to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix. If one permutes coordinates (or, in other words, elements of the basis), the resulting orientation will depend on the parity of the permutation.
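A minimal $2 × 2$ sketch of this classification by determinant sign (the example matrices, a rotation and a coordinate transposition, are illustrative):

```python
# Orientation of a full-rank linear map is decided by the sign of its
# determinant. For 2x2 matrices: det [[a, b], [c, d]] = a*d - b*c.

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

rotation = [[0.0, -1.0],
            [1.0,  0.0]]   # quarter turn: det = +1, preserves orientation
swap = [[0.0, 1.0],
        [1.0, 0.0]]        # transposition of coordinates (an odd
                           # permutation): det = -1, reverses orientation

print(det2(rotation))  # 1.0
print(det2(swap))      # -1.0
```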

Diffeomorphisms of $R^{n}$, or of domains in it, since their Jacobian determinant is nonzero everywhere, are likewise classified as orientation-preserving or orientation-reversing. This has important consequences for the theory of differential forms, whose applications include electrodynamics.

Another manifestation of this structure is that the point reflection in $R^{n}$ has different properties depending on the evenness of $n$. For even $n$ it preserves orientation, while for odd $n$ it reverses it (see also improper rotation).

Affine space
$R^{n}$ understood as an affine space is the same space, where $R^{n}$ as a vector space acts by translations. Conversely, a vector has to be understood as a "difference between two points", usually illustrated by a directed line segment connecting two points. The distinction is that there is no canonical choice of where the origin should go in an affine $n$-space, because it can be translated anywhere.

Convexity


In a real vector space, such as $R^{n}$, one can define a convex cone, which contains all non-negative linear combinations of its vectors. The corresponding concept in an affine space is a convex set, which allows only convex combinations (non-negative linear combinations that sum to 1).

In the language of universal algebra, a vector space is an algebra over the universal vector space $R^{∞}$ of finite sequences of coefficients, corresponding to finite sums of vectors, while an affine space is an algebra over the universal affine hyperplane in this space (of finite sequences summing to 1), a cone is an algebra over the universal orthant (of finite sequences of nonnegative numbers), and a convex set is an algebra over the universal simplex (of finite sequences of nonnegative numbers summing to 1). This geometrizes the axioms in terms of "sums with (possible) restrictions on the coordinates".

Another concept from convex analysis is a convex function from $R^{n}$ to real numbers, which is defined through an inequality between its value on a convex combination of points and the sum of its values at those points with the same coefficients.

Euclidean space
The dot product $$\mathbf{x}\cdot\mathbf{y} = \sum_{i=1}^n x_iy_i = x_1y_1+x_2y_2+\cdots+x_ny_n$$ defines the norm $|\mathbf{x}| = \sqrt{\mathbf{x} \cdot \mathbf{x}}$ on the vector space $R^{n}$. If every vector has its Euclidean norm, then for any pair of points the distance $$d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\| = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}$$ is defined, providing a metric space structure on $R^{n}$ in addition to its affine structure.
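These formulas translate line by line into code; a small sketch (the function names are illustrative):

```python
import math

def dot(x, y):
    """Dot product: sum of the term-by-term products of components."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm |x| = sqrt(x . x)."""
    return math.sqrt(dot(x, x))

def dist(x, y):
    """Euclidean distance d(x, y) = |x - y|."""
    return norm(tuple(a - b for a, b in zip(x, y)))

x, y = (1.0, 2.0, 2.0), (1.0, 0.0, 0.0)
print(dot(x, y))   # 1.0
print(norm(x))     # 3.0
print(dist(x, y))  # sqrt(0 + 4 + 4) = 2.8284271247461903
```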

As for vector space structure, the dot product and Euclidean distance usually are assumed to exist in $R^{n}$ without special explanations. However, the real $n$-space and a Euclidean $n$-space are distinct objects, strictly speaking. Any Euclidean $n$-space has a coordinate system where the dot product and Euclidean distance have the form shown above, called Cartesian. But there are many Cartesian coordinate systems on a Euclidean space.

Conversely, the above formula for the Euclidean metric defines the standard Euclidean structure on $R^{n}$, but it is not the only possible one. Actually, any positive-definite quadratic form $q$ defines its own "distance" $√q(x − y)$, but it is not very different from the Euclidean one in the sense that $$\exists C_1 > 0,\ \exists C_2 > 0,\ \forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^n: C_1 d(\mathbf{x}, \mathbf{y}) \le \sqrt{q(\mathbf{x} - \mathbf{y})} \le C_2 d(\mathbf{x}, \mathbf{y}). $$ Such a change of the metric preserves some of its properties, for example the property of being a complete metric space. This also implies that any full-rank linear transformation of $R^{n}$, or its affine transformation, does not magnify distances more than by some fixed $C_2$, and does not make distances smaller than $1/C_1$ times, a fixed finite number times smaller.

The aforementioned equivalence of metric functions remains valid if $√q(x − y)$ is replaced with $M(x − y)$, where $M$ is any convex positive homogeneous function of degree 1, i.e. a vector norm (see Minkowski distance for useful examples). Because of this fact that any "natural" metric on $R^{n}$ is not especially different from the Euclidean metric, $R^{n}$ is not always distinguished from a Euclidean $n$-space even in professional mathematical works.

In algebraic and differential geometry
Although the definition of a manifold does not require that its model space should be $R^{n}$, this choice is the most common, and almost exclusive one in differential geometry.

On the other hand, Whitney embedding theorems state that any real differentiable $m$-dimensional manifold can be embedded into $R^{2m}$.

Other appearances
Other structures considered on $R^{n}$ include the one of a pseudo-Euclidean space, symplectic structure (even $n$), and contact structure (odd $n$). All these structures, although they can be defined in a coordinate-free manner, admit standard (and reasonably simple) forms in coordinates.

$R^{n}$ is also a real vector subspace of $C^{n}$ which is invariant under complex conjugation; see also complexification.

Polytopes in $R^{n}$
There are three families of polytopes which have simple representations in $R^{n}$ spaces, for any $n$, and can be used to visualize any affine coordinate system in a real $n$-space. Vertices of a hypercube have coordinates $(x_{1}, x_{2}, ..., x_{n})$ where each $x_{k}$ takes on one of only two values, typically 0 or 1. However, any two numbers can be chosen instead of 0 and 1, for example −1 and 1. An $n$-hypercube can be thought of as the Cartesian product of $n$ identical intervals (such as the unit interval $[0,1]$) on the real line. As an $n$-dimensional subset it can be described with a system of $2n$ inequalities: $$\begin{matrix} 0 \le x_1 \le 1 \\ \vdots \\ 0 \le x_n \le 1 \end{matrix}$$ for $[0,1]$, and $$\begin{matrix} |x_1| \le 1 \\ \vdots \\ |x_n| \le 1 \end{matrix}$$ for $[−1,1]$.

Each vertex of the cross-polytope has, for some $k$, the $x_{k}$ coordinate equal to ±1 and all other coordinates equal to 0 (such that it is the $k$th standard basis vector up to sign). This is the dual polytope of the hypercube. As an $n$-dimensional subset it can be described with a single inequality which uses the absolute value operation: $$\sum_{k=1}^n |x_k| \le 1\,,$$ but this can be expressed with a system of $2^{n}$ linear inequalities as well.

The third polytope with simply enumerable coordinates is the standard simplex, whose vertices are the $n$ standard basis vectors and the origin $(0, 0, \ldots, 0)$. As an $n$-dimensional subset it is described with a system of $n + 1$ linear inequalities: $$\begin{matrix} 0 \le x_1 \\ \vdots \\ 0 \le x_n \\ \sum\limits_{k=1}^n x_k \le 1 \end{matrix}$$ Replacement of all "≤" with "<" gives interiors of these polytopes.
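The three systems of inequalities above can be checked pointwise; a sketch with illustrative membership tests:

```python
def in_hypercube(x):
    """Unit hypercube: 0 <= x_k <= 1 for each k (2n inequalities)."""
    return all(0.0 <= xk <= 1.0 for xk in x)

def in_cross_polytope(x):
    """Cross-polytope: sum of |x_k| <= 1 (a single inequality)."""
    return sum(abs(xk) for xk in x) <= 1.0

def in_standard_simplex(x):
    """Standard simplex: x_k >= 0 and sum x_k <= 1 (n + 1 inequalities)."""
    return all(xk >= 0.0 for xk in x) and sum(x) <= 1.0

p = (0.2, 0.3, 0.1)
print(in_hypercube(p), in_cross_polytope(p), in_standard_simplex(p))
# True True True
q = (0.9, 0.9, 0.9)
print(in_hypercube(q), in_cross_polytope(q), in_standard_simplex(q))
# True False False
```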

Topological properties
The topological structure of $R^{n}$ (called standard topology, Euclidean topology, or usual topology) can be obtained not only from the Cartesian product. It is also identical to the natural topology induced by the Euclidean metric discussed above: a set is open in the Euclidean topology if and only if it contains an open ball around each of its points. Also, $R^{n}$ is a linear topological space (see continuity of linear maps above), and there is only one possible (non-trivial) topology compatible with its linear structure. As there are many open linear maps from $R^{n}$ to itself which are not isometries, there can be many Euclidean structures on $R^{n}$ which correspond to the same topology. Actually, it does not depend much even on the linear structure: there are many non-linear diffeomorphisms (and other homeomorphisms) of $R^{n}$ onto itself, or onto its parts such as a Euclidean open ball or the interior of a hypercube.

$R^{n}$ has the topological dimension $n$.

An important result on the topology of $R^{n}$, that is far from superficial, is Brouwer's invariance of domain. Any subset of $R^{n}$ (with its subspace topology) that is homeomorphic to another open subset of $R^{n}$ is itself open. An immediate consequence of this is that $R^{m}$ is not homeomorphic to $R^{n}$ if $m ≠ n$ – an intuitively "obvious" result which is nonetheless difficult to prove.

Despite the difference in topological dimension, and contrary to a naïve perception, it is possible to map a lesser-dimensional real space continuously and surjectively onto $R^{n}$. A continuous (although not smooth) space-filling curve (an image of $R^{1}$) is possible.

n ≤ 1
Cases of $0 ≤ n ≤ 1$ do not offer anything new: $R^{1}$ is the real line, whereas $R^{0}$ (the space containing the empty column vector) is a singleton, understood as a zero vector space. However, it is useful to include these as trivial cases of theories that describe different $n$.

n = 2


The case of (x,y) where x and y are real numbers has been developed as the Cartesian plane P. Further structure has been attached with Euclidean vectors representing directed line segments in P. The plane has also been developed as the field extension $$\mathbf{C}$$ by appending roots of $X^2 + 1 = 0$ to the real field $$\mathbf{R}.$$ The root i acts on P as a quarter turn with counterclockwise orientation. This root generates the group $$\{i, -1, -i, +1\} \equiv \mathbf{Z}/4\mathbf{Z}$$. When (x,y) is written x + y i it is a complex number.
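Python's built-in complex type makes the quarter-turn action of i easy to observe (an illustration, not part of the text):

```python
# Multiplying x + y*i by i sends the point (x, y) to (-y, x):
# a counterclockwise quarter turn of the plane.
z = complex(3.0, 1.0)        # the point (3, 1)
print(z * 1j)                # (-1+3j): the point (-1, 3)

# i generates the cyclic group {i, -1, -i, +1} of order 4:
print([1j**k for k in (1, 2, 3, 4)])  # [1j, (-1+0j), (-0-1j), (1+0j)]
```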

Another group action by $$\mathbf{Z}/2\mathbf{Z}$$, where the actor has been expressed as j, uses the line y=x for the involution of flipping the plane (x,y) ↦ (y,x), an exchange of coordinates. In this case points of P are written x + y j and called split-complex numbers. These numbers, with the coordinate-wise addition and multiplication according to jj=+1, form a ring that is not a field.

Another ring structure on P uses a nilpotent e to write x + y e for (x,y). The action of e on P reduces the plane to a line: It can be decomposed into the projection into the x-coordinate, then quarter-turning the result to the y-axis: e (x + y e) = x e since e2 = 0. A number x + y e is a dual number. The dual numbers form a ring, but, since e has no multiplicative inverse, it does not generate a group so the action is not a group action.
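A minimal sketch of dual-number arithmetic (the class name and methods are illustrative). Since $e^2 = 0$, the $e$-coefficient of $f(x + e)$ behaves like a derivative for polynomials, e.g. $(3 + e)^2 = 9 + 6e$:

```python
class Dual:
    """Numbers x + y*e with e*e = 0 (a ring, not a field: e has no inverse)."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Dual(self.x + other.x, self.y + other.y)
    def __mul__(self, other):
        # (x1 + y1 e)(x2 + y2 e) = x1 x2 + (x1 y2 + y1 x2) e, since e^2 = 0
        return Dual(self.x * other.x, self.x * other.y + self.y * other.x)
    def __repr__(self):
        return f"{self.x} + {self.y}e"

e = Dual(0.0, 1.0)
print(e * e)                   # 0.0 + 0.0e -- e is nilpotent

# f(t) = t^2 evaluated at 3 + e yields 9 + 6e: the e-coefficient is f'(3).
t = Dual(3.0, 1.0)
print(t * t)                   # 9.0 + 6.0e
```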

Excluding (0,0) from P makes [x : y] projective coordinates which describe the real projective line, a one-dimensional space. Since the origin is excluded, at least one of the ratios x/y and y/x exists. Then [x : y] = [x/y : 1] or [x : y] = [1 : y/x]. The projective line P1(R) is a topological manifold covered by two coordinate charts, [z : 1] → z or [1 : z] → z, which form an atlas. For points covered by both charts the transition function is multiplicative inversion on an open neighborhood of the point, which provides a homeomorphism as required in a manifold. One application of the real projective line is found in Cayley–Klein metric geometry.

n = 4


$R^{4}$ can be imagined using the fact that 16 points $(x_{1}, x_{2}, x_{3}, x_{4})$, where each $x_{k}$ is either 0 or 1, are vertices of a tesseract, the 4-hypercube (see above).

The first major use of $R^{4}$ is a spacetime model: three spatial coordinates plus one temporal. This is usually associated with the theory of relativity, although four dimensions were used for such models since Galileo. The choice of theory leads to different structure, though: in Galilean relativity the $t$ coordinate is privileged, but in Einsteinian relativity it is not. Special relativity is set in Minkowski space. General relativity uses curved spaces, which may be thought of as $R^{4}$ with a curved metric for most practical purposes. None of these structures provide a (positive-definite) metric on $R^{4}$.

Euclidean $R^{4}$ also attracts the attention of mathematicians, for example due to its relation to quaternions, which form a 4-dimensional real algebra. See rotations in 4-dimensional Euclidean space for some information.

In differential geometry, $n = 4$ is the only case where $R^{n}$ admits a non-standard differential structure: see exotic R4.

Norms on $R^{n}$
One could define many norms on the vector space $R^{n}$. Some common examples are


 * the p-norm, defined by $\|\mathbf{x}\|_p := \sqrt[p]{\sum_{i=1}^n|x_i|^p}$ for all $$\mathbf{x} \in \mathbf{R}^n$$ where $$p$$ is a positive integer. The case $$p = 2$$ is very important, because it is exactly the Euclidean norm.
 * the $$\infty$$-norm or maximum norm, defined by $$\|\mathbf{x}\|_\infty := \max \{|x_1|,\dots,|x_n|\}$$ for all $$\mathbf{x} \in \mathbf{R}^n$$. This is the limit of all the p-norms: $\|\mathbf{x}\|_\infty = \lim_{p \to \infty} \sqrt[p]{\sum_{i=1}^n|x_i|^p}$.
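Both norms are one-liners; the sketch below (illustrative names) also shows a large-$p$ norm approaching the maximum norm:

```python
def p_norm(x, p):
    """||x||_p = (sum of |x_i|^p)^(1/p)."""
    return sum(abs(xi)**p for xi in x) ** (1.0 / p)

def max_norm(x):
    """||x||_inf = max of |x_i|."""
    return max(abs(xi) for xi in x)

x = (3.0, -4.0)
print(p_norm(x, 1))    # 7.0
print(p_norm(x, 2))    # 5.0 -- exactly the Euclidean norm
print(max_norm(x))     # 4.0
# The p-norms approach the maximum norm as p grows:
print(p_norm(x, 100))  # very close to 4.0
```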

A surprising and helpful result is that every norm defined on $R^{n}$ is equivalent to every other. This means that for two arbitrary norms $$\|\cdot\|$$ and $$\|\cdot\|'$$ on $R^{n}$ one can always find positive real numbers $$\alpha,\beta > 0$$, such that $$\alpha \cdot \|\mathbf{x}\| \leq \|\mathbf{x}\|' \leq \beta\cdot\|\mathbf{x}\|$$ for all $$\mathbf{x} \in \mathbb{R}^n$$.

This defines an equivalence relation on the set of all norms on $R^{n}$. With this result you can check that a sequence of vectors in $R^{n}$ converges with $$\|\cdot\|$$ if and only if it converges with $$\|\cdot\|'$$.
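A numerical sanity check of this equivalence (using the standard pair of constants $\alpha = 1$, $\beta = \sqrt{n}$ between $\|\cdot\|_\infty$ and $\|\cdot\|_2$, chosen here for illustration):

```python
import math

def norm2(x):
    """Euclidean norm."""
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    """Maximum norm."""
    return max(abs(xi) for xi in x)

# In R^2: ||x||_inf <= ||x||_2 <= sqrt(2) * ||x||_inf.
# So a sequence converging to (1, 1) in one norm converges in the other.
seq = [(1 + 1/k, 1 - 1/k) for k in range(1, 6)]
for v in seq:
    d = (v[0] - 1.0, v[1] - 1.0)
    # small epsilon guards against floating-point rounding at equality
    assert norm_inf(d) <= norm2(d) <= math.sqrt(2) * norm_inf(d) + 1e-12

print("bounds hold for every term of the sequence")
```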

Here is a sketch of what a proof of this result may look like:

Because of the equivalence relation it is enough to show that every norm on $R^{n}$ is equivalent to the Euclidean norm $$\|\cdot\|_2$$. Let $$\|\cdot\|$$ be an arbitrary norm on $R^{n}$. The proof is divided into two steps:

 * We show that there exists a $$\beta > 0$$, such that $$\|\mathbf{x}\| \leq \beta \cdot \|\mathbf{x}\|_2$$ for all $$\mathbf{x} \in \mathbf{R}^n$$. In this step one uses the fact that every $$\mathbf{x} = (x_1, \dots, x_n) \in \mathbf{R}^n$$ can be represented as a linear combination of the standard basis: $\mathbf{x} = \sum_{i=1}^n e_i \cdot x_i$. Then with the Cauchy–Schwarz inequality $$\|\mathbf{x}\| = \left\|\sum_{i=1}^n e_i \cdot x_i \right\| \leq \sum_{i=1}^n \|e_i\| \cdot |x_i| \leq \sqrt{\sum_{i=1}^n \|e_i\|^2} \cdot \sqrt{\sum_{i=1}^n |x_i|^2} = \beta \cdot \|\mathbf{x}\|_2,$$ where $\beta := \sqrt{\sum_{i=1}^n \|e_i\|^2}$.
 * Now we have to find an $$\alpha > 0$$, such that $$\alpha\cdot\|\mathbf{x}\|_2 \leq \|\mathbf{x}\|$$ for all $$\mathbf{x} \in \mathbf{R}^n$$. Assume there is no such $$\alpha$$. Then for every $$k \in \mathbf{N}$$ there exists an $$\mathbf{x}_k \in \mathbf{R}^n$$ such that $$\|\mathbf{x}_k\|_2 > k \cdot \|\mathbf{x}_k\|$$. Define a second sequence $$(\tilde{\mathbf{x}}_k)_{k \in \mathbf{N}}$$ by $\tilde{\mathbf{x}}_k := \frac{\mathbf{x}_k}{\|\mathbf{x}_k\|_2}$. This sequence is bounded because $$\|\tilde{\mathbf{x}}_k\|_2 = 1$$. So by the Bolzano–Weierstrass theorem there exists a convergent subsequence $$(\tilde{\mathbf{x}}_{k_j})_{j\in\mathbf{N}}$$ with limit $$\mathbf{a} \in$$ $R^{n}$. Now we show that $$\|\mathbf{a}\|_2 = 1$$ but $$\mathbf{a} = \mathbf{0}$$, which is a contradiction. Indeed, $$\|\mathbf{a}\| \leq \left\|\mathbf{a} - \tilde{\mathbf{x}}_{k_j}\right\| + \left\|\tilde{\mathbf{x}}_{k_j}\right\| \leq \beta \cdot \left\|\mathbf{a} - \tilde{\mathbf{x}}_{k_j}\right\|_2 + \frac{\|\mathbf{x}_{k_j}\|}{\|\mathbf{x}_{k_j}\|_2} \ \overset{j \to \infty}{\longrightarrow} \ 0,$$ because $$\|\mathbf{a}-\tilde{\mathbf{x}}_{k_j}\|_2 \to 0$$ and $$0 \leq \frac{\|\mathbf{x}_{k_j}\|}{\|\mathbf{x}_{k_j}\|_2} < \frac{1}{k_j}$$, so $$\frac{\|\mathbf{x}_{k_j}\|}{\|\mathbf{x}_{k_j}\|_2} \to 0$$. This implies $$\|\mathbf{a}\| = 0$$, so $$\mathbf{a} = \mathbf{0}$$. On the other hand, $$\|\mathbf{a}\|_2 = 1$$, because $$\|\mathbf{a}\|_2 = \left\| \lim_{j \to \infty}\tilde{\mathbf{x}}_{k_j} \right\|_2 = \lim_{j \to \infty} \left\| \tilde{\mathbf{x}}_{k_j} \right\|_2 = 1$$. These cannot both be true, so the assumption was false and such an $$\alpha > 0$$ exists.