Norm (mathematics)

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space.

The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm". A pseudonorm may satisfy the same axioms as a norm, with the equality replaced by an inequality "$$\,\leq\,$$" in the homogeneity axiom. It can also refer to a norm that can take infinite values, or to certain functions parametrised by a directed set.

Definition
Given a vector space $$X$$ over a subfield $$F$$ of the complex numbers $$\Complex,$$ a norm on $$X$$ is a real-valued function $$p : X \to \Reals$$ with the following properties, where $$|s|$$ denotes the usual absolute value of a scalar $$s$$:


 * 1) Subadditivity/Triangle inequality: $$p(x + y) \leq p(x) + p(y)$$ for all $$x, y \in X.$$
 * 2) Absolute homogeneity: $$p(s x) = |s| p(x)$$ for all $$x \in X$$ and all scalars $$s.$$
 * 3) Positive definiteness/positiveness/point-separating: for all $$x \in X,$$ if $$p(x) = 0$$ then $$x = 0.$$

Because property (2.) implies $$p(0) = 0,$$ some authors replace property (3.) with the equivalent condition: for every $$x \in X,$$ $$p(x) = 0$$ if and only if $$x = 0.$$

A seminorm on $$X$$ is a function $$p : X \to \Reals$$ that has properties (1.) and (2.) so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if $$p$$ is a norm (or more generally, a seminorm) then $$p(0) = 0$$ and that $$p$$ also has the following property:


 * 1) Non-negativity: $$p(x) \geq 0$$ for all $$x \in X.$$

Some authors include non-negativity as part of the definition of "norm", although this is not necessary. Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative"; these definitions are not equivalent.
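As a concrete check, the three axioms can be verified numerically for the Euclidean norm on $$\R^2$$; a minimal Python sketch (the helper name `p` mirrors the notation above and is not standard):

```python
import math

def p(x):
    """Euclidean norm on R^n, used here to illustrate the norm axioms."""
    return math.sqrt(sum(t * t for t in x))

x, y, s = [3.0, -4.0], [1.0, 2.0], -2.5

# (1) Triangle inequality: p(x + y) <= p(x) + p(y)
assert p([a + b for a, b in zip(x, y)]) <= p(x) + p(y)
# (2) Absolute homogeneity: p(s x) = |s| p(x)
assert math.isclose(p([s * a for a in x]), abs(s) * p(x))
# (3) Positive definiteness: p(x) = 0 only at the origin
assert p([0.0, 0.0]) == 0.0 and p(x) > 0.0
```

The non-negativity property noted below follows automatically: each summand $$t^2$$ is non-negative, so the square root is defined and non-negative.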

Equivalent norms
Suppose that $$p$$ and $$q$$ are two norms (or seminorms) on a vector space $$X.$$ Then $$p$$ and $$q$$ are called equivalent if there exist positive real constants $$c$$ and $$C$$ such that for every vector $$x \in X,$$ $$c q(x) \leq p(x) \leq C q(x).$$ The relation "$$p$$ is equivalent to $$q$$" is reflexive, symmetric ($$c q \leq p \leq C q$$ implies $$\tfrac{1}{C} p \leq q \leq \tfrac{1}{c} p$$), and transitive, and thus defines an equivalence relation on the set of all norms on $$X.$$ The norms $$p$$ and $$q$$ are equivalent if and only if they induce the same topology on $$X.$$ Any two norms on a finite-dimensional space are equivalent, but this does not extend to infinite-dimensional spaces.
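For a concrete instance, the taxicab norm $$p = \|\cdot\|_1$$ and the Euclidean norm $$q = \|\cdot\|_2$$ on $$\R^n$$ are equivalent with constants $$c = 1$$ and $$C = \sqrt{n}$$; a small Python sketch spot-checks the inequality on random vectors (the helper names are ours):

```python
import math
import random

n = 5

def norm1(x):
    """Taxicab norm: sum of absolute values of the components."""
    return sum(abs(t) for t in x)

def norm2(x):
    """Euclidean norm: square root of the sum of squares."""
    return math.sqrt(sum(t * t for t in x))

# On R^n: norm2(x) <= norm1(x) <= sqrt(n) * norm2(x), so the two norms
# are equivalent with c = 1 and C = sqrt(n) (taking p = norm1, q = norm2).
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    assert norm2(x) <= norm1(x) <= math.sqrt(n) * norm2(x) + 1e-12
```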

Notation
If a norm $$p : X \to \R$$ is given on a vector space $$X,$$ then the norm of a vector $$z \in X$$ is usually denoted by enclosing it within double vertical lines: $$\|z\| = p(z).$$ Such notation is also sometimes used if $$p$$ is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation $$|x|$$ with single vertical lines is also widespread.

Examples
Every (real or complex) vector space admits a norm: If $$x_{\bull} = \left(x_i\right)_{i \in I}$$ is a Hamel basis for a vector space $$X$$ then the real-valued map that sends $$x = \sum_{i \in I} s_i x_i \in X$$ (where all but finitely many of the scalars $$s_i$$ are $$0$$) to $$\sum_{i \in I} \left|s_i\right|$$ is a norm on $$X.$$ There are also a large number of norms that exhibit additional properties that make them useful for specific problems.

Absolute-value norm
The absolute value $$|x|$$ is a norm on the vector space formed by the real or complex numbers. The complex numbers form a one-dimensional vector space over themselves and a two-dimensional vector space over the reals; the absolute value is a norm for these two structures.

Any norm $$p$$ on a one-dimensional vector space $$X$$ is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces $$f : \mathbb{F} \to X,$$ where $$\mathbb{F}$$ is either $$\R$$ or $$\Complex,$$ and norm-preserving means that $$|x| = p(f(x)).$$ This isomorphism is given by sending $$1 \in \mathbb{F}$$ to a vector of norm $$1,$$ which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm.

Euclidean norm
On the $$n$$-dimensional Euclidean space $$\R^n,$$ the intuitive notion of length of the vector $$\boldsymbol{x} = \left(x_1, x_2, \ldots, x_n\right)$$ is captured by the formula $$\|\boldsymbol{x}\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}.$$

This is the Euclidean norm, which gives the ordinary distance from the origin to the point $$\boldsymbol{x}$$—a consequence of the Pythagorean theorem. This operation may also be referred to as "SRSS", which is an acronym for the square root of the sum of squares.
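In code, the square root of the sum of squares is a one-liner; a minimal Python sketch (the standard-library function `math.hypot` computes the same quantity with better numerical behavior for extreme values):

```python
import math

def euclidean_norm(x):
    """Square root of the sum of squares ("SRSS"), i.e. the 2-norm."""
    return math.sqrt(sum(t * t for t in x))

# The distance from the origin to (3, 4) is 5, by the Pythagorean theorem.
assert euclidean_norm([3.0, 4.0]) == 5.0
assert math.isclose(euclidean_norm([3.0, 4.0]), math.hypot(3.0, 4.0))
```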

The Euclidean norm is by far the most commonly used norm on $$\R^n,$$ but there are other norms on this vector space as will be shown below. However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces.

The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis. Hence, the Euclidean norm can be written in a coordinate-free way as $$\|\boldsymbol{x}\| := \sqrt{\boldsymbol{x} \cdot \boldsymbol{x}}.$$

The Euclidean norm is also called the quadratic norm, $$L^2$$ norm, $$\ell^2$$ norm, 2-norm, or square norm; see $L^p$ space. It defines a distance function called the Euclidean length, $$L^2$$ distance, or $$\ell^2$$ distance.

The set of vectors in $$\R^{n+1}$$ whose Euclidean norm is a given positive constant forms an $n$-sphere.

Euclidean norm of complex numbers
The Euclidean norm of a complex number is its absolute value (also called the modulus), if the complex plane is identified with the Euclidean plane $$\R^2.$$ This identification of the complex number $$x + i y$$ with the vector $$(x, y)$$ in the Euclidean plane makes the quantity $\sqrt{x^2 + y^2}$ (as first suggested by Euler) the Euclidean norm associated with the complex number. For $$z = x + iy$$, the norm can also be written as $$\sqrt{\bar z z},$$ where $$\bar z$$ is the complex conjugate of $$z.$$

Quaternions and octonions
There are exactly four Euclidean Hurwitz algebras over the real numbers. These are the real numbers $$\R,$$ the complex numbers $$\Complex,$$ the quaternions $$\mathbb{H},$$ and lastly the octonions $$\mathbb{O},$$ where the dimensions of these spaces over the real numbers are $$1, 2, 4, \text{ and } 8,$$ respectively. The canonical norms on $$\R$$ and $$\Complex$$ are their absolute value functions, as discussed previously.

The canonical norm on $$\mathbb{H}$$ of quaternions is defined by $$\lVert q \rVert = \sqrt{\,qq^*~} = \sqrt{\,q^*q~} = \sqrt{\, a^2 + b^2 + c^2 + d^2 ~}$$ for every quaternion $$q = a + b\,\mathbf i + c\,\mathbf j + d\,\mathbf k$$ in $$\mathbb{H}.$$ This is the same as the Euclidean norm on $$\mathbb{H}$$ considered as the vector space $$\R^4.$$ Similarly, the canonical norm on the octonions is just the Euclidean norm on $$\R^8.$$
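The quaternion norm formula above translates directly into code; a minimal sketch (the function name is ours):

```python
import math

def quaternion_norm(a, b, c, d):
    """Norm of the quaternion q = a + b i + c j + d k, i.e. sqrt(q q*)."""
    return math.sqrt(a * a + b * b + c * c + d * d)

# Agrees with the Euclidean norm of (a, b, c, d) viewed as a vector in R^4:
# 1 + 4 + 4 + 16 = 25, whose square root is 5.
assert quaternion_norm(1.0, 2.0, 2.0, 4.0) == 5.0
```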

Finite-dimensional complex normed spaces
On an $$n$$-dimensional complex space $$\Complex^n,$$ the most common norm is $$\|\boldsymbol{z}\| := \sqrt{\left|z_1\right|^2 + \cdots + \left|z_n\right|^2} = \sqrt{z_1 \bar z_1 + \cdots + z_n \bar z_n}.$$

In this case, the norm can be expressed as the square root of the inner product of the vector and itself: $$\|\boldsymbol{x}\| := \sqrt{\boldsymbol{x}^H ~ \boldsymbol{x}},$$ where $$\boldsymbol{x}$$ is represented as a column vector $$\begin{bmatrix} x_1 \; x_2 \; \dots \; x_n \end{bmatrix}^{\rm T}$$ and $$\boldsymbol{x}^H$$ denotes its conjugate transpose.

This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence the formula in this case can also be written using the following notation: $$\|\boldsymbol{x}\| := \sqrt{\boldsymbol{x} \cdot \boldsymbol{x}}.$$
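Using Python's built-in complex numbers, the norm on $$\Complex^n$$ via conjugation can be sketched as follows (the helper name is ours):

```python
def complex_norm(z):
    """Norm on C^n: sqrt(sum of z_k * conj(z_k)); each summand is real
    and non-negative, so the square root is well defined."""
    return sum((zk * zk.conjugate()).real for zk in z) ** 0.5

# |3 + 4i| = 5, and |1 + i|^2 + |1 - i|^2 = 2 + 2 = 4, so the norm is 2.
assert abs(complex_norm([3 + 4j]) - 5.0) < 1e-12
assert abs(complex_norm([1 + 1j, 1 - 1j]) - 2.0) < 1e-12
```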

Taxicab norm or Manhattan norm
The taxicab norm is defined by $$\|\boldsymbol{x}\|_1 := \sum_{i=1}^n \left|x_i\right|.$$ The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York borough of Manhattan) to get from the origin to the point $$\boldsymbol{x}.$$

The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1. The Taxicab norm is also called the $$\ell^1$$ norm. The distance derived from this norm is called the Manhattan distance or $$\ell^1$$ distance.

The 1-norm is simply the sum of the absolute values of the components of the vector.

In contrast, $$\sum_{i=1}^n x_i$$ is not a norm because it may yield negative results.
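The contrast can be seen directly in code; a minimal Python sketch (helper names are ours):

```python
def taxicab(x):
    """The 1-norm: sum of absolute values of the components."""
    return sum(abs(t) for t in x)

def plain_sum(x):
    """Sum of the components without absolute values; NOT a norm."""
    return sum(x)

x = [3.0, -4.0]
assert taxicab(x) == 7.0     # a genuine norm: always non-negative
assert plain_sum(x) == -1.0  # negative, so it violates non-negativity
```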

p-norm
Let $$p \geq 1$$ be a real number. The $$p$$-norm (also called $$\ell^p$$-norm) of vector $$\mathbf{x} = (x_1, \ldots, x_n)$$ is $$\|\mathbf{x}\|_p := \left(\sum_{i=1}^n \left|x_i\right|^p\right)^{1/p}.$$ For $$p = 1,$$ we get the taxicab norm, for $$p = 2$$ we get the Euclidean norm, and as $$p$$ approaches $$\infty$$ the $$p$$-norm approaches the infinity norm or maximum norm: $$\|\mathbf{x}\|_\infty := \max_i \left|x_i\right|.$$ The $$p$$-norm is related to the generalized mean or power mean.
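The behavior of the $$p$$-norm across values of $$p$$ can be observed numerically; a minimal Python sketch (the helper name is ours):

```python
def p_norm(x, p):
    """The p-norm: (sum of |x_i|^p)^(1/p) for p >= 1."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [1.0, -3.0, 2.0]
# p = 1 gives the taxicab norm, p = 2 the Euclidean norm, and large p
# approaches the maximum norm max_i |x_i| = 3.
assert p_norm(x, 1) == 6.0
assert abs(p_norm(x, 2) - 14 ** 0.5) < 1e-12
assert abs(p_norm(x, 100) - 3.0) < 1e-2
```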

For $$p = 2,$$ the $$\|\,\cdot\,\|_2$$-norm is even induced by a canonical inner product $$\langle \,\cdot,\,\cdot\rangle,$$ meaning that $\|\mathbf{x}\|_2 = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}$ for all vectors $$\mathbf{x}.$$ This inner product can be expressed in terms of the norm by using the polarization identity. On $$\ell^2,$$ this inner product is the Euclidean inner product defined by $$\langle \left(x_n\right)_{n}, \left(y_n\right)_{n} \rangle_{\ell^2} ~=~ \sum_n \overline{x_n} y_n$$ while for the space $$L^2(X, \mu)$$ associated with a measure space $$(X, \Sigma, \mu),$$ which consists of all square-integrable functions, this inner product is $$\langle f, g \rangle_{L^2} = \int_X \overline{f(x)} g(x)\, \mathrm d\mu(x).$$

This definition is still of some interest for $$0 < p < 1,$$ but the resulting function does not define a norm, because it violates the triangle inequality. What is true for this case of $$0 < p < 1,$$ even in the measurable analog, is that the corresponding $$L^p$$ class is a vector space, and it is also true that the function $$\int_X |f(x) - g(x)|^p ~ \mathrm d \mu$$ (without $$p$$th root) defines a distance that makes $$L^p(X)$$ into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory and harmonic analysis. However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional.

The partial derivative of the $$p$$-norm is given by $$\frac{\partial}{\partial x_k} \|\mathbf{x}\|_p = \frac{x_k \left|x_k\right|^{p-2}} { \|\mathbf{x}\|_p^{p-1}}.$$

The derivative with respect to $$x,$$ therefore, is $$\frac{\partial \|\mathbf{x}\|_p}{\partial \mathbf{x}} =\frac{\mathbf{x} \circ |\mathbf{x}|^{p-2}} {\|\mathbf{x}\|^{p-1}_p}.$$ where $$\circ$$ denotes Hadamard product and $$|\cdot|$$ is used for absolute value of each component of the vector.

For the special case of $$p = 2,$$ this becomes $$\frac{\partial}{\partial x_k} \|\mathbf{x}\|_2 = \frac{x_k}{\|\mathbf{x}\|_2},$$ or $$\frac{\partial}{\partial \mathbf{x}} \|\mathbf{x}\|_2 = \frac{\mathbf{x}}{ \|\mathbf{x}\|_2}.$$
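The analytic gradient for $$p = 2$$ can be validated against a finite-difference approximation; a minimal Python sketch (helper names are ours):

```python
import math

def norm2(x):
    """Euclidean norm."""
    return math.sqrt(sum(t * t for t in x))

def grad_norm2(x):
    """Analytic gradient: d||x||_2 / dx_k = x_k / ||x||_2 (x nonzero)."""
    n = norm2(x)
    return [t / n for t in x]

x = [3.0, -4.0]
g = grad_norm2(x)  # [0.6, -0.8]

# Check each component against a central finite difference.
h = 1e-6
for k in range(len(x)):
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    fd = (norm2(xp) - norm2(xm)) / (2 * h)
    assert abs(fd - g[k]) < 1e-6
```

Note that the gradient is undefined at the origin, where the norm is not differentiable.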

Maximum norm (special case of: infinity norm, uniform norm, or supremum norm)


If $$\mathbf{x}$$ is some vector such that $$\mathbf{x} = (x_1, x_2, \ldots ,x_n),$$ then: $$\|\mathbf{x}\|_\infty := \max \left(\left|x_1\right|, \ldots , \left|x_n\right|\right).$$

The set of vectors whose infinity norm is a given constant, $$c,$$ forms the surface of a hypercube with edge length $$2 c.$$

Zero norm
In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F–norm $(x_n) \mapsto \sum_n 2^{-n} |x_n| / (1 + |x_n|).$ Here we mean by F-norm some real-valued function $$\lVert \cdot \rVert$$ on an F-space with distance $$d,$$ such that $$\lVert x \rVert = d(x, 0).$$ The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.

Hamming distance of a vector from zero
In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory. In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero. However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness. When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.

In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks. Following Donoho's notation, the zero "norm" of $$x$$ is simply the number of non-zero coordinates of $$x,$$ or the Hamming distance of the vector from zero. When this "norm" is localized to a bounded set, it is the limit of $$p$$-norms as $$p$$ approaches 0. Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous. Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument. Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the $$L^0$$ norm, echoing the notation for the Lebesgue space of measurable functions.
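The number-of-non-zeros function and its relation to small $$p$$ can be illustrated numerically; a minimal Python sketch (helper names are ours):

```python
def zero_norm(x):
    """Donoho's zero "norm": the number of non-zero coordinates of x."""
    return sum(1 for t in x if t != 0)

x = [0.0, 2.0, 0.0, -0.5, 1.0]
assert zero_norm(x) == 3

# On a bounded set it arises as the limit of sum |x_i|^p (no p-th root)
# as p -> 0: each non-zero |x_i|^p tends to 1, each zero term stays 0.
p = 1e-6
approx = sum(abs(t) ** p for t in x)
assert abs(approx - zero_norm(x)) < 1e-3
```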

Infinite dimensions
The generalization of the above norms to an infinite number of components leads to $\ell^p$ and $L^p$ spaces for $$p \ge 1\,,$$ with norms

$$\|x\|_p = \bigg(\sum_{i \in \N} \left|x_i\right|^p\bigg)^{1/p} \text{ and }\ \|f\|_{p,X} = \bigg(\int_X |f(x)|^p ~ \mathrm d x\bigg)^{1/p}$$

for complex-valued sequences and functions on $$X \subseteq \R^n$$ respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as $$p \rightarrow +\infty$$, giving a supremum norm, and are called $$\ell^\infty$$ and $$L^\infty\,.$$

Any inner product induces in a natural way the norm $\|x\| := \sqrt{\langle x, x\rangle}.$

Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article.

Generally, these norms do not give the same topologies. For example, an infinite-dimensional $$\ell^p$$ space gives a strictly finer topology than an infinite-dimensional $$\ell^q$$ space when $$p < q\,.$$

Composite norms
Other norms on $$\R^n$$ can be constructed by combining the above; for example $$\|x\| := 2 \left|x_1\right| + \sqrt{3 \left|x_2\right|^2 + \max (\left|x_3\right|, 2 \left|x_4\right|)^2}$$ is a norm on $$\R^4.$$

For any norm and any injective linear transformation $$A$$ we can define a new norm of $$x,$$ equal to $$\|A x\|.$$ In 2D, with $$A$$ a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each $$A$$ applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation.
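The 2D example can be made explicit: with $$A$$ the rotation by 45° scaled by $$1/\sqrt{2}$$, the taxicab norm of $$A x$$ equals the maximum norm of $$x$$. A minimal Python sketch (helper names are ours):

```python
def taxicab(x):
    """The 1-norm on R^2."""
    return sum(abs(t) for t in x)

def A(x):
    """Rotation by 45 degrees combined with scaling by 1/sqrt(2):
    A = [[1/2, -1/2], [1/2, 1/2]]."""
    return [0.5 * x[0] - 0.5 * x[1], 0.5 * x[0] + 0.5 * x[1]]

# ||A x||_1 = (|x1 - x2| + |x1 + x2|) / 2 = max(|x1|, |x2|) = ||x||_inf.
for x in ([3.0, -4.0], [1.0, 1.0], [-2.0, 0.5]):
    assert abs(taxicab(A(x)) - max(abs(t) for t in x)) < 1e-12
```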

In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).

There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally-symmetric convex body in $$\R^n$$ (centered at zero) defines a norm on $$\R^n$$ (see below).

All the above formulas also yield norms on $$\Complex^n$$ without modification.

There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.

In abstract algebra
Let $$E$$ be a finite extension of a field $$k$$ of inseparable degree $$p^{\mu},$$ and let $$k$$ have algebraic closure $$K.$$ If the distinct embeddings of $$E$$ are $$\left\{\sigma_j\right\}_j,$$ then the Galois-theoretic norm of an element $$\alpha \in E$$ is the value $\left(\prod_j \sigma_j(\alpha)\right)^{p^{\mu}}.$ As that function is homogeneous of degree $[E : k]$, the Galois-theoretic norm is not a norm in the sense of this article. However, the $$[E : k]$$-th root of the norm (assuming that concept makes sense) is a norm.

Composition algebras
The concept of norm $$N(z)$$ in composition algebras does not share the usual properties of a norm, since null vectors are allowed. A composition algebra $$(A, {}^*, N)$$ consists of an algebra $$A$$ over a field, an involution $${}^*,$$ and a quadratic form $N(z) = z z^*$ called the "norm".

The characteristic feature of composition algebras is the homomorphism property of $$N$$: for the product $$w z$$ of two elements $$w$$ and $$z$$ of the composition algebra, its norm satisfies $$N(wz) = N(w) N(z).$$ In the case of division algebras $$\R,$$ $$\Complex,$$ $$\mathbb{H},$$ and $$\mathbb{O}$$ the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.
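The homomorphism property $$N(wz) = N(w) N(z)$$ can be checked numerically for the quaternions, where $$N(q) = a^2 + b^2 + c^2 + d^2$$; a minimal Python sketch (the tuple encoding and helper names are ours):

```python
def qmul(p, q):
    """Hamilton product of quaternions encoded as (a, b, c, d) ~ a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

def N(q):
    """Composition-algebra norm N(q) = q q* = a^2 + b^2 + c^2 + d^2."""
    return sum(t * t for t in q)

# The norm is multiplicative: N(w z) = N(w) N(z).
w, z = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 2.0, 0.0)
assert abs(N(qmul(w, z)) - N(w) * N(z)) < 1e-9
```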

Properties
For any norm $$p : X \to \R$$ on a vector space $$X,$$ the reverse triangle inequality holds: $$p(x \pm y) \geq |p(x) - p(y)| \text{ for all } x, y \in X.$$ If $$u : X \to Y$$ is a continuous linear map between normed spaces, then the norm of $$u$$ and the norm of the transpose of $$u$$ are equal.

For the $L^p$ norms, we have Hölder's inequality $$|\langle x, y \rangle| \leq \|x\|_p \|y\|_q \qquad \frac{1}{p} + \frac{1}{q} = 1.$$ A special case of this is the Cauchy–Schwarz inequality: $$\left|\langle x, y \rangle\right| \leq \|x\|_2 \|y\|_2.$$

Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.

Equivalence
The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any $$p$$-norm, it is a superellipse with congruent axes (see the accompanying illustration). Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and $$p \geq 1$$ for a $$p$$-norm).

In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors $$\{v_n\}$$ is said to converge in norm to $$v,$$ if $$\left\|v_n - v\right\| \to 0$$ as $$n \to \infty.$$ Equivalently, the topology consists of all sets that can be represented as a union of open balls. If $$(X, \|\cdot\|)$$ is a normed space then $$\|x - y\| = \|x - z\| + \|z - y\| \text{ for all } x, y \in X \text{ and } z \in [x, y].$$

Two norms $$\|\cdot\|_\alpha$$ and $$\|\cdot\|_\beta$$ on a vector space $$X$$ are called equivalent if they induce the same topology, which happens if and only if there exist positive real numbers $$C$$ and $$D$$ such that for all $$x \in X$$ $$C \|x\|_\alpha \leq \|x\|_\beta \leq D \|x\|_\alpha.$$ For instance, if $$p > r \geq 1$$ on $$\Complex^n,$$ then $$\|x\|_p \leq \|x\|_r \leq n^{(1/r-1/p)} \|x\|_p.$$

In particular, $$\|x\|_2 \leq \|x\|_1 \leq \sqrt{n} \|x\|_2,$$ $$\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n} \|x\|_\infty,$$ and $$\|x\|_\infty \leq \|x\|_1 \leq n \|x\|_\infty.$$ That is, $$\|x\|_\infty \leq \|x\|_2 \leq \|x\|_1 \leq \sqrt{n} \|x\|_2 \leq n \|x\|_\infty.$$ If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent.
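These inequalities can be spot-checked on random vectors; a minimal Python sketch (helper names are ours):

```python
import math
import random

def norm1(x): return sum(abs(t) for t in x)
def norm2(x): return math.sqrt(sum(t * t for t in x))
def norm_inf(x): return max(abs(t) for t in x)

# Check ||x||_inf <= ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2 <= n ||x||_inf.
random.seed(1)
n, eps = 7, 1e-9
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(n)]
    assert norm_inf(x) <= norm2(x) + eps
    assert norm2(x) <= norm1(x) + eps
    assert norm1(x) <= math.sqrt(n) * norm2(x) + eps
    assert math.sqrt(n) * norm2(x) <= n * norm_inf(x) + eps
```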

Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise, the uniform structures defined by equivalent norms on the vector space are uniformly isomorphic.

Classification of seminorms: absolutely convex absorbing sets
All seminorms on a vector space $$X$$ can be classified in terms of absolutely convex absorbing subsets $$A$$ of $$X.$$ To each such subset corresponds a seminorm $$p_A$$ called the gauge of $$A,$$ defined as $$p_A(x) := \inf \{r \in \R : r > 0, x \in r A\}$$ where $$\inf_{}$$ is the infimum, with the property that $$\left\{x \in X : p_A(x) < 1\right\} ~\subseteq~ A ~\subseteq~ \left\{x \in X : p_A(x) \leq 1\right\}.$$ Conversely:

Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family $$(p)$$ of seminorms $$p$$ that separates points: the collection of all finite intersections of sets $$\{p < 1/n\}$$ turns the space into a locally convex topological vector space so that every $$p$$ is continuous.

Such a method is used to design weak and weak* topologies.
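The gauge $$p_A$$ defined above can be approximated numerically for a concrete absolutely convex absorbing set, such as the Euclidean unit ball, by bisecting on the scale factor $$r$$; a minimal Python sketch (helper names and the bisection parameters are ours):

```python
def in_ball(x):
    """Membership test for A = the closed Euclidean unit ball in R^2."""
    return x[0] ** 2 + x[1] ** 2 <= 1.0

def gauge(x, member, hi=1e6, iters=80):
    """Minkowski functional p_A(x) = inf { r > 0 : x in r A },
    approximated by bisection on r; `member` tests membership in A.
    Note x is in r A exactly when x / r is in A."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if member([t / mid for t in x]):
            hi = mid
        else:
            lo = mid
    return hi

# For the Euclidean unit ball, the gauge recovers the Euclidean norm:
# p_A((3, 4)) = 5.
assert abs(gauge([3.0, 4.0], in_ball) - 5.0) < 1e-6
```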

In the norm case:
 * Suppose now that $$(p)$$ contains a single $$p:$$ since $$(p)$$ is separating, $$p$$ is a norm, and $$A = \{p < 1\}$$ is its open unit ball. Then $$A$$ is an absolutely convex bounded neighbourhood of 0, and $$p = p_A$$ is continuous.


 * The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely:
 * If $$X$$ is an absolutely convex bounded neighbourhood of 0, the gauge $$g_X$$ (so that $$X = \{g_X < 1\}$$) is a norm.