Differential form

In mathematics, differential forms provide a unified approach to define integrands over curves, surfaces, solids, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics.

For instance, the expression $f(x) dx$ is an example of a $1$-form, and can be integrated over an interval $[a, b]$ contained in the domain of $f$:
 * $$\int_a^b f(x)\,dx.$$

Similarly, the expression $f(x, y, z) dx ∧ dy + g(x, y, z) dz ∧ dx + h(x, y, z) dy ∧ dz$ is a $2$-form that can be integrated over a surface $S$:
 * $$\int_S (f(x,y,z)\,dx\wedge dy + g(x,y,z)\,dz\wedge dx + h(x,y,z)\,dy\wedge dz).$$

The symbol $∧$ denotes the exterior product, sometimes called the wedge product, of two differential forms. Likewise, a $3$-form $f(x, y, z) dx ∧ dy ∧ dz$ represents a volume element that can be integrated over a region of space. In general, a $k$-form is an object that may be integrated over a $k$-dimensional manifold, and is homogeneous of degree $k$ in the coordinate differentials $$dx, dy, \ldots.$$ On an $n$-dimensional manifold, the top-dimensional form ($n$-form) is called a volume form.

The differential forms form an alternating algebra. This implies that $$dy\wedge dx = -dx\wedge dy$$ and $$dx\wedge dx=0.$$ This alternating property reflects the orientation of the domain of integration.
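The sign bookkeeping of the alternating property can be sketched in a few lines of Python. The helper `wedge` below is an illustrative (hypothetical) representation of an elementary form $dx^{i_1}\wedge\cdots\wedge dx^{i_m}$ by its index tuple; the sign is the parity of the permutation that sorts the indices:

```python
def wedge(I, J):
    """Wedge two elementary forms dx^I and dx^J, given as index tuples.

    Returns (sign, sorted_indices); the sign is the parity of the
    permutation that sorts the concatenated indices, and a repeated
    index gives (0, None), reflecting dx^i ∧ dx^i = 0.
    """
    idx = list(I) + list(J)
    if len(set(idx)) < len(idx):
        return 0, None
    sign = 1
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if idx[a] > idx[b]:
                sign = -sign   # each inversion flips the orientation
    return sign, tuple(sorted(idx))

# dy ∧ dx = -dx ∧ dy (encoding x = 1, y = 2), and dx ∧ dx = 0:
print(wedge((2,), (1,)))  # (-1, (1, 2))
print(wedge((1,), (1,)))  # (0, None)
```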

The exterior derivative is an operation on differential forms that, given a $k$-form $$\varphi$$, produces a $(k+1)$-form $$d\varphi.$$ This operation extends the differential of a function (a function can be considered as a $0$-form, and its differential is $$df(x)=f'(x)dx$$). This allows expressing the fundamental theorem of calculus, the divergence theorem, Green's theorem, and Stokes' theorem as special cases of a single general result, the generalized Stokes theorem.

Differential $1$-forms are naturally dual to vector fields on a differentiable manifold, and the pairing between vector fields and $1$-forms is extended to arbitrary differential forms by the interior product. The algebra of differential forms along with the exterior derivative defined on it is preserved by the pullback under smooth functions between two manifolds. This feature allows geometrically invariant information to be moved from one space to another via the pullback, provided that the information is expressed in terms of differential forms. As an example, the change of variables formula for integration becomes a simple statement that an integral is preserved under pullback.

History
Differential forms are part of the field of differential geometry, influenced by linear algebra. Although the notion of a differential is quite old, the initial attempt at an algebraic organization of differential forms is usually credited to Élie Cartan with reference to his 1899 paper. Some aspects of the exterior algebra of differential forms appear in Hermann Grassmann's 1844 work, Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics).

Concept
Differential forms provide an approach to multivariable calculus that is independent of coordinates.

Integration and orientation
A differential $k$-form can be integrated over an oriented manifold of dimension $k$. A differential $1$-form can be thought of as measuring an infinitesimal oriented length, or 1-dimensional oriented density. A differential $2$-form can be thought of as measuring an infinitesimal oriented area, or 2-dimensional oriented density. And so on.

Integration of differential forms is well-defined only on oriented manifolds. An example of a 1-dimensional manifold is an interval $[a, b]$, and intervals can be given an orientation: they are positively oriented if $a < b$, and negatively oriented otherwise. If $a < b$ then the integral of the differential $1$-form $f(x) dx$ over the interval $[a, b]$ (with its natural positive orientation) is
 * $$\int_a^b f(x) \,dx$$

which is the negative of the integral of the same differential form over the same interval, when equipped with the opposite orientation. That is:
 * $$\int_b^a f(x)\,dx = -\int_a^b f(x)\,dx.$$

This gives a geometrical context to the conventions for one-dimensional integrals, that the sign changes when the orientation of the interval is reversed. A standard explanation of this in one-variable integration theory is that, when the limits of integration are in the opposite order ($b < a$), the increment $dx$ is negative in the direction of integration.
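This sign convention is easy to confirm symbolically. A minimal SymPy sketch, with $f(x) = x^2$ on $[0, 2]$ as an arbitrary illustrative choice:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2
a, b = 0, 2

forward = sp.integrate(f, (x, a, b))   # integral over [a, b] with positive orientation
backward = sp.integrate(f, (x, b, a))  # same interval, opposite orientation

assert backward == -forward            # reversing orientation flips the sign
print(forward, backward)               # 8/3 -8/3
```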

More generally, an $m$-form is an oriented density that can be integrated over an $m$-dimensional oriented manifold. (For example, a $1$-form can be integrated over an oriented curve, a $2$-form can be integrated over an oriented surface, etc.) If $M$ is an oriented $m$-dimensional manifold, $M'$ is the same manifold with the opposite orientation, and $ω$ is an $m$-form, then one has:
 * $$\int_M \omega = - \int_{M'} \omega \,.$$

These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. In measure theory, by contrast, one interprets the integrand as a function $f$ with respect to a measure $μ$ and integrates over a subset $A$, without any notion of orientation; one writes $\int_A f\,d\mu$ to indicate integration over a subset $A$. This is a minor distinction in one dimension, but becomes subtler on higher-dimensional manifolds; see below for details.

Making the notion of an oriented density precise, and thus of a differential form, involves the exterior algebra. The differentials of a set of coordinates, $dx^1$, ..., $dx^n$, can be used as a basis for all $1$-forms. Each of these represents a covector at each point on the manifold that may be thought of as measuring a small displacement in the corresponding coordinate direction. A general $1$-form is a linear combination of these differentials at every point on the manifold:
 * $$f_1\,dx^1+\cdots+f_n\,dx^n ,$$

where the $f_k = f_k(x^1, \ldots, x^n)$ are functions of all the coordinates. A differential $1$-form is integrated along an oriented curve as a line integral.

The expressions $dx^i \wedge dx^j$, where $i < j$, can be used as a basis at every point on the manifold for all $2$-forms. This may be thought of as an infinitesimal oriented square parallel to the $x^i$–$x^j$-plane. A general $2$-form is a linear combination of these at every point on the manifold: $\sum_{1 \leq i<j \leq n} f_{i,j} \, dx^i \wedge dx^j$, and it is integrated just like a surface integral.

A fundamental operation defined on differential forms is the exterior product (the symbol is the wedge $∧$). This is similar to the cross product from vector calculus, in that it is an alternating product. For instance,
 * $$dx^1\wedge dx^2=-dx^2\wedge dx^1$$

because the square whose first side is $dx^1$ and second side is $dx^2$ is to be regarded as having the opposite orientation as the square whose first side is $dx^2$ and whose second side is $dx^1$. This is why we only need to sum over expressions $dx^i \wedge dx^j$, with $i < j$; for example: $a(dx^1 \wedge dx^2) + b(dx^2 \wedge dx^1) = (a - b)\, dx^1 \wedge dx^2$. The exterior product allows higher-degree differential forms to be built out of lower-degree ones, in much the same way that the cross product in vector calculus allows one to compute the area vector of a parallelogram from vectors pointing up the two sides. Alternating also implies that $dx^i \wedge dx^i = 0$, in the same way that the cross product of parallel vectors, whose magnitude is the area of the parallelogram spanned by those vectors, is zero. In higher dimensions, $dx^{i_1} \wedge \cdots \wedge dx^{i_m} = 0$ if any two of the indices $i_1$, ..., $i_m$ are equal, in the same way that the "volume" enclosed by a parallelotope whose edge vectors are linearly dependent is zero.

Multi-index notation
A common notation for the wedge product of elementary $k$-forms is so-called multi-index notation: in an $n$-dimensional context, for $I = (i_1, i_2,\ldots, i_k)$ with $1 \leq i_1 < i_2 < \cdots < i_k \leq n$, we define $dx^I := dx^{i_1} \wedge \cdots \wedge dx^{i_k} = \bigwedge_{i\in I} dx^i$. Another useful notation is obtained by defining the set of all strictly increasing multi-indices of length $k$, in a space of dimension $n$, denoted $\mathcal{J}_{k,n} := \{I=(i_1,\ldots,i_k):1\leq i_1<i_2<\cdots<i_k\leq n\}$. Then locally (wherever the coordinates apply), $\{dx^I\}_{I \in \mathcal{J}_{k,n}}$ spans the space of differential $k$-forms in a manifold $M$ of dimension $n$, when viewed as a module over the ring $C^\infty(M)$ of smooth functions on $M$. By calculating the size of $\mathcal{J}_{k,n}$ combinatorially, the module of $k$-forms on an $n$-dimensional manifold, and more generally the space of $k$-covectors on an $n$-dimensional vector space, has dimension $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$. This also demonstrates that there are no nonzero differential forms of degree greater than the dimension of the underlying manifold.
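This count can be verified directly by enumerating $\mathcal{J}_{k,n}$; the helper `basis_k_forms` below is an illustrative name, and the enumeration uses `itertools.combinations`:

```python
from itertools import combinations
from math import comb

def basis_k_forms(n, k):
    """Strictly increasing multi-indices I in J_{k,n}, one per basis k-form dx^I."""
    return list(combinations(range(1, n + 1), k))

n = 4
for k in range(n + 2):
    B = basis_k_forms(n, k)
    assert len(B) == comb(n, k)   # dimension is n choose k; zero once k > n

print(basis_k_forms(3, 2))  # [(1, 2), (1, 3), (2, 3)]
```

The empty result for $k > n$ matches the observation that there are no nonzero forms of degree greater than the dimension.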

The exterior derivative
In addition to the exterior product, there is also the exterior derivative operator $d$. The exterior derivative of a differential form is a generalization of the differential of a function, in the sense that the exterior derivative of $f ∈ C^∞(M) = Ω^0(M)$ is exactly the differential of $f$. When generalized to higher forms, if $ω = f\,dx^I$ is a simple $k$-form, then its exterior derivative $dω$ is a $(k + 1)$-form defined by taking the differential of the coefficient functions:
 * $$d\omega = \sum_{i=1}^n \frac{\partial f}{\partial x^i} \, dx^i \wedge dx^I,$$

with extension to general $k$-forms through linearity: if $\tau = \sum_{I \in \mathcal{J}_{k,n}} a_I \, dx^I \in \Omega^k(M)$, then its exterior derivative is
 * $$d\tau = \sum_{I \in \mathcal{J}_{k,n}}\left(\sum_{j=1}^n \frac{\partial a_I}{\partial x^j} \, dx^j\right)\wedge dx^I \in \Omega^{k+1}(M)$$

In $R^{3}$, with the Hodge star operator, the exterior derivative corresponds to gradient, curl, and divergence, although this correspondence, like the cross product, does not generalize to higher dimensions, and should be treated with some caution.
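The curl case of this correspondence can be checked with SymPy. The helper `d_of_1form` below is an illustrative implementation that returns the coefficients of $d\omega$ in the basis $dy\wedge dz$, $dz\wedge dx$, $dx\wedge dy$ — exactly the components of the curl:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def d_of_1form(F):
    """Exterior derivative of the 1-form F[0] dx + F[1] dy + F[2] dz on R^3.

    Returns the coefficients of dy∧dz, dz∧dx, dx∧dy, i.e. the curl of F.
    """
    F1, F2, F3 = F
    return [sp.diff(F3, y) - sp.diff(F2, z),
            sp.diff(F1, z) - sp.diff(F3, x),
            sp.diff(F2, x) - sp.diff(F1, y)]

# For the rotational field alpha = -y dx + x dy, d(alpha) = 2 dx∧dy:
print(d_of_1form([-y, x, 0]))  # [0, 0, 2]
```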

The exterior derivative itself applies in an arbitrary finite number of dimensions, and is a flexible and powerful tool with wide application in differential geometry, differential topology, and many areas in physics. Of note, although the above definition of the exterior derivative was defined with respect to local coordinates, it can be defined in an entirely coordinate-free manner, as an antiderivation of degree 1 on the exterior algebra of differential forms. The benefit of this more general approach is that it allows for a natural coordinate-free approach to integrate on manifolds. It also allows for a natural generalization of the fundamental theorem of calculus, called the (generalized) Stokes' theorem, which is a central result in the theory of integration on manifolds.

Differential calculus
Let $U$ be an open set in $R^{n}$. A differential $0$-form ("zero-form") is defined to be a smooth function $f$ on $U$ – the set of which is denoted $C∞(U)$. If $v$ is any vector in $R^{n}$, then $f$ has a directional derivative $∂_{v} f$, which is another function on $U$ whose value at a point $p ∈ U$ is the rate of change (at $p$) of $f$ in the $v$ direction:
 * $$ (\partial_\mathbf{v} f)(p) = \left. \frac{d}{dt} f(p+t\mathbf{v})\right|_{t=0} .$$

(This notion can be extended pointwise to the case that $v$ is a vector field on $U$ by evaluating $v$ at the point $p$ in the definition.)
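The defining limit can be verified symbolically against the usual gradient pairing. In the SymPy sketch below, the function `f`, the point `p`, and the direction `v` are arbitrary illustrative choices:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y        # an arbitrary smooth function on R^2
p = (1, 2)          # base point
v = (3, -1)         # direction vector

# (∂_v f)(p) = d/dt f(p + t v) |_{t=0}
curve = f.subs({x: p[0] + t*v[0], y: p[1] + t*v[1]})
lhs = sp.diff(curve, t).subs(t, 0)

# This equals the pairing df_p(v) = Σ_i (∂f/∂x^i)(p) v^i
grad = [sp.diff(f, x).subs({x: p[0], y: p[1]}),
        sp.diff(f, y).subs({x: p[0], y: p[1]})]
rhs = grad[0]*v[0] + grad[1]*v[1]

assert lhs == rhs
print(lhs)  # 11
```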

In particular, if $v = e_j$ is the $j$th coordinate vector then $∂_v f$ is the partial derivative of $f$ with respect to the $j$th coordinate, i.e., $∂f / ∂x^j$, where $x^1$, $x^2$, ..., $x^n$ are the coordinate functions on $U$. By their very definition, partial derivatives depend upon the choice of coordinates: if new coordinates $y^1$, $y^2$, ..., $y^n$ are introduced, then
 * $$\frac{\partial f}{\partial x^j} = \sum_{i=1}^n\frac{\partial y^i}{\partial x^j}\frac{\partial f}{\partial y^i} .$$

The first idea leading to differential forms is the observation that $∂_{v} f (p)$ is a linear function of $v$:


 * $$\begin{align} (\partial_{\mathbf{v} + \mathbf{w}} f)(p) &= (\partial_\mathbf{v} f)(p) + (\partial_\mathbf{w} f)(p) \\ (\partial_{c \mathbf{v}} f)(p) &= c (\partial_\mathbf{v} f)(p) \end{align}$$

for any vectors $v$, $w$ and any real number $c$. At each point $p$, this linear map from $R^{n}$ to $R$ is denoted $df_{p}$ and called the derivative or differential of $f$ at $p$. Thus $df_{p}(v) = ∂_{v} f (p)$. Extended over the whole set, the object $df$ can be viewed as a function that takes a vector field on $U$, and returns a real-valued function whose value at each point is the derivative along the vector field of the function $f$. Note that at each $p$, the differential $df_{p}$ is not a real number, but a linear functional on tangent vectors, and a prototypical example of a differential $1$-form.

Since any vector $v$ is a linear combination $\sum_j v^j e_j$ of its components, $df$ is uniquely determined by $df_p(e_j)$ for each $j$ and each $p ∈ U$, which are just the partial derivatives of $f$ on $U$. Thus $df$ provides a way of encoding the partial derivatives of $f$. It can be decoded by noticing that the coordinates $x^1$, $x^2$, ..., $x^n$ are themselves functions on $U$, and so define differential $1$-forms $dx^1$, $dx^2$, ..., $dx^n$. Let $f = x^i$. Since $∂x^i / ∂x^j = δ_{ij}$, the Kronecker delta function, it follows that
 * $$df = \sum_{i=1}^n \frac{\partial f}{\partial x^i} \, dx^i .$$

The meaning of this expression is given by evaluating both sides at an arbitrary point $p$: on the right hand side, the sum is defined "pointwise", so that
 * $$df_p = \sum_{i=1}^n \frac{\partial f}{\partial x^i}(p) (dx^i)_p .$$

Applying both sides to $e_j$, the result on each side is the $j$th partial derivative of $f$ at $p$. Since $p$ and $j$ were arbitrary, this proves the formula $df = \sum_{i=1}^n \frac{\partial f}{\partial x^i}\,dx^i$.

More generally, for any smooth functions $g_{i}$ and $h_{i}$ on $U$, we define the differential $1$-form $α = Σ_{i} g_{i} dh_{i}$ pointwise by
 * $$\alpha_p = \sum_i g_i(p) (dh_i)_p$$

for each $p ∈ U$. Any differential $1$-form arises this way, and by expanding each $dh_i$ in the basis $dx^1$, ..., $dx^n$ it follows that any differential $1$-form $α$ on $U$ may be expressed in coordinates as
 * $$ \alpha = \sum_{i=1}^n f_i\, dx^i$$

for some smooth functions $f_{i}$ on $U$.

The second idea leading to differential forms arises from the following question: given a differential $1$-form $α$ on $U$, when does there exist a function $f$ on $U$ such that $α = df$? The above expansion reduces this question to the search for a function $f$ whose partial derivatives $∂f / ∂x^i$ are equal to $n$ given functions $f_i$. For $n > 1$, such a function does not always exist: any smooth function $f$ satisfies
 * $$ \frac{\partial^2 f}{\partial x^i \, \partial x^j} = \frac{\partial^2 f}{\partial x^j \, \partial x^i} ,$$

so it will be impossible to find such an $f$ unless
 * $$ \frac{\partial f_j}{\partial x^i} - \frac{\partial f_i}{\partial x^j} = 0$$

for all $i$ and $j$.

The skew-symmetry of the left hand side in $j$ and $i$ suggests introducing an antisymmetric product $∧$ on differential $1$-forms, the exterior product, so that these equations can be combined into a single condition
 * $$ \sum_{i,j=1}^n \frac{\partial f_j}{\partial x^i} \, dx^i \wedge dx^j = 0 ,$$

where $∧$ is defined so that:
 * $$ dx^i \wedge dx^j = - dx^j \wedge dx^i. $$

This is an example of a differential $2$-form. This $2$-form is called the exterior derivative $dα$ of $α = \sum_{j=1}^n f_j\,dx^j$. It is given by
 * $$ d\alpha = \sum_{j=1}^n df_j \wedge dx^j = \sum_{i,j=1}^n \frac{\partial f_j}{\partial x^i} \, dx^i \wedge dx^j .$$

To summarize: $dα = 0$ is a necessary condition for the existence of a function $f$ with $α = df$.
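This necessary condition can be tested symbolically. In the SymPy sketch below (with `d_of_1form` an illustrative helper for $1$-forms on $\mathbf{R}^2$), an exact form passes the test and a non-exact one fails it:

```python
import sympy as sp

x, y = sp.symbols('x y')

def d_of_1form(f1, f2):
    """Coefficient of dx∧dy in d(f1 dx + f2 dy) on R^2."""
    return sp.simplify(sp.diff(f2, x) - sp.diff(f1, y))

# Exact form: alpha = df for f = x^2 y, so d(alpha) = 0 by equality of mixed partials.
f = x**2 * y
assert d_of_1form(sp.diff(f, x), sp.diff(f, y)) == 0

# Non-exact form: alpha = -y dx + x dy has d(alpha) = 2 dx∧dy ≠ 0.
print(d_of_1form(-y, x))  # 2
```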

Differential $0$-forms, $1$-forms, and $2$-forms are special cases of differential forms. For each $k$, there is a space of differential $k$-forms, which can be expressed in terms of the coordinates as
 * $$ \sum_{i_1,i_2\ldots i_k=1}^n f_{i_1i_2\ldots i_k} \, dx^{i_1} \wedge dx^{i_2} \wedge\cdots \wedge dx^{i_k}$$

for a collection of functions $f_{i_1 i_2 \cdots i_k}$. Antisymmetry, which was already present for $2$-forms, makes it possible to restrict the sum to those sets of indices for which $i_1 < i_2 < \cdots < i_{k-1} < i_k$.

Differential forms can be multiplied together using the exterior product, and for any differential $k$-form $α$, there is a differential $(k + 1)$-form $dα$ called the exterior derivative of $α$.

Differential forms, the exterior product and the exterior derivative are independent of a choice of coordinates. Consequently, they may be defined on any smooth manifold $M$. One way to do this is to cover $M$ with coordinate charts and define a differential $k$-form on $M$ to be a family of differential $k$-forms on each chart which agree on the overlaps. However, there are more intrinsic definitions which make the independence of coordinates manifest.

Intrinsic definitions
Let $M$ be a smooth manifold. A smooth differential form of degree $k$ is a smooth section of the $k$th exterior power of the cotangent bundle of $M$. The set of all differential $k$-forms on a manifold $M$ is a vector space, often denoted $Ω^{k}(M)$.

The definition of a differential form may be restated as follows. At any point $p ∈ M$, a $k$-form $β$ defines an element
 * $$ \beta_p \in {\textstyle\bigwedge}^k T_p^* M,$$

where $T_{p}M$ is the tangent space to $M$ at $p$ and $T_{p}^{*}M$ is its dual space. This space is naturally isomorphic to the fiber at $p$ of the dual bundle of the $k$th exterior power of the tangent bundle of $M$. That is, $β$ is also a linear functional $\beta_p \colon {\textstyle\bigwedge}^k T_pM \to \mathbf{R}$, i.e. the dual of the $k$th exterior power is isomorphic to the $k$th exterior power of the dual:
 * $${\textstyle\bigwedge}^k T^*_p M \cong \Big({\textstyle\bigwedge}^k T_p M\Big)^*$$

By the universal property of exterior powers, this is equivalently an alternating multilinear map:
 * $$\beta_p\colon \bigoplus_{i=1}^k T_p M \to \mathbf{R}.$$

Consequently, a differential $k$-form may be evaluated against any $k$-tuple of tangent vectors to the same point $p$ of $M$. For example, a differential $1$-form $α$ assigns to each point $p ∈ M$ a linear functional $α_{p}$ on $T_{p}M$. In the presence of an inner product on $T_{p}M$ (induced by a Riemannian metric on $M$), $α_{p}$ may be represented as the inner product with a tangent vector $X_{p}$. Differential $1$-forms are sometimes called covariant vector fields, covector fields, or "dual vector fields", particularly within physics.

The exterior algebra may be embedded in the tensor algebra by means of the alternation map. The alternation map is defined as a mapping
 * $$\operatorname{Alt} \colon {\bigotimes}^k T^*M \to {\bigotimes}^k T^*M.$$

For a tensor $$\tau$$ at a point $p$,
 * $$\operatorname{Alt}(\tau_p)(x_1, \dots, x_k) = \frac{1}{k!}\sum_{\sigma \in S_k} \operatorname{sgn}(\sigma) \tau_p(x_{\sigma(1)}, \dots, x_{\sigma(k)}),$$

where $S_{k}$ is the symmetric group on $k$ elements. The alternation map is constant on the cosets of the ideal in the tensor algebra generated by the symmetric 2-forms, and therefore descends to an embedding
 * $$\operatorname{Alt} \colon {\textstyle\bigwedge}^k T^*M \to {\bigotimes}^k T^*M.$$

This map exhibits $β$ as a totally antisymmetric covariant tensor field of rank $k$. The differential forms on $M$ are in one-to-one correspondence with such tensor fields.
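The alternation formula translates directly into code. The function `alt` below is an illustrative pure-Python sketch for multilinear maps given as Python functions:

```python
from itertools import permutations
from math import factorial

def sign(perm):
    """Parity of a permutation given as a tuple of positions."""
    s, p = 1, list(perm)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T, k):
    """Alternation of a k-multilinear map T: average of signed permutations."""
    def A(*vecs):
        total = 0
        for perm in permutations(range(k)):
            total += sign(perm) * T(*(vecs[i] for i in perm))
        return total / factorial(k)
    return A

# T(v, w) = v[0]*w[1] is bilinear but not alternating; Alt(T) is.
T = lambda v, w: v[0] * w[1]
A = alt(T, 2)
v, w = (1, 2), (3, 4)
assert A(v, w) == -A(w, v)   # alternating, as claimed
print(A(v, w))               # (1*4 - 3*2)/2 = -1.0
```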

Operations
As well as the addition and multiplication by scalar operations which arise from the vector space structure, there are several other standard operations defined on differential forms. The most important operations are the exterior product of two differential forms, the exterior derivative of a single differential form, the interior product of a differential form and a vector field, the Lie derivative of a differential form with respect to a vector field and the covariant derivative of a differential form with respect to a vector field on a manifold with a defined connection.

Exterior product
The exterior product of a $k$-form $α$ and an $ℓ$-form $β$, denoted $α ∧ β$, is a ($k + ℓ$)-form. At each point $p$ of the manifold $M$, the forms $α$ and $β$ are elements of an exterior power of the cotangent space at $p$. When the exterior algebra is viewed as a quotient of the tensor algebra, the exterior product corresponds to the tensor product (modulo the equivalence relation defining the exterior algebra).

The antisymmetry inherent in the exterior algebra means that when $α ∧ β$ is viewed as a multilinear functional, it is alternating. However, when the exterior algebra is embedded as a subspace of the tensor algebra by means of the alternation map, the tensor product $α ⊗ β$ is not alternating. There is an explicit formula which describes the exterior product in this situation. The exterior product is
 * $$\alpha \wedge \beta = \operatorname{Alt}(\alpha \otimes \beta).$$

If the embedding of $${\textstyle\bigwedge}^n T^*M$$ into $${\bigotimes}^n T^*M$$ is done via the map $$n!\operatorname{Alt}$$ instead of $$\operatorname{Alt}$$, the exterior product is
 * $$\alpha \wedge \beta = \frac{(k + \ell)!}{k!\ell!}\operatorname{Alt}(\alpha \otimes \beta).$$

This description is useful for explicit computations. For example, if $k = ℓ = 1$, then $α ∧ β$ is the $2$-form whose value at a point $p$ is the alternating bilinear form defined by
 * $$ (\alpha\wedge\beta)_p(v,w)=\alpha_p(v)\beta_p(w) - \alpha_p(w)\beta_p(v)$$

for $v, w ∈ T_{p}M$.
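For $k = ℓ = 1$ this formula is short enough to compute directly. The sketch below represents a covector at a point by its coefficient tuple, an illustrative convention:

```python
# A 1-form at a point is a linear functional; represent it by its
# coefficients, so evaluation is a dot product.
def ev(coeffs, v):
    return sum(c * vi for c, vi in zip(coeffs, v))

def wedge2(alpha, beta):
    """(alpha ∧ beta)(v, w) = alpha(v) beta(w) - alpha(w) beta(v)."""
    return lambda v, w: ev(alpha, v) * ev(beta, w) - ev(alpha, w) * ev(beta, v)

alpha, beta = (1, 0, 0), (0, 1, 0)   # dx and dy in R^3
form = wedge2(alpha, beta)
v, w = (1, 2, 0), (3, 4, 0)
print(form(v, w))                # 1*4 - 3*2 = -2: signed area of the projected parallelogram
assert form(v, w) == -form(w, v)  # the resulting 2-form is alternating
```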

The exterior product is bilinear: If $α$, $β$, and $γ$ are any differential forms, and if $f$ is any smooth function, then
 * $$\alpha \wedge (\beta + \gamma) = \alpha \wedge \beta + \alpha \wedge \gamma,$$
 * $$\alpha \wedge (f \cdot \beta) = f \cdot (\alpha \wedge \beta).$$

It is skew commutative (also known as graded commutative), meaning that it satisfies a variant of anticommutativity that depends on the degrees of the forms: if $α$ is a $k$-form and $β$ is an $ℓ$-form, then
 * $$\alpha \wedge \beta = (-1)^{k\ell} \beta \wedge \alpha .$$

One also has the graded Leibniz rule: if $α$ is a $k$-form, then
 * $$d(\alpha\wedge\beta)=d\alpha\wedge\beta + (-1)^{k}\alpha\wedge d\beta.$$

Riemannian manifold
On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, the metric defines a fibre-wise isomorphism of the tangent and cotangent bundles. This makes it possible to convert vector fields to covector fields and vice versa. It also enables the definition of additional operations such as the Hodge star operator $$\star \colon \Omega^k(M)\ \stackrel{\sim}{\to}\ \Omega^{n-k}(M)$$ and the codifferential $$\delta\colon \Omega^k(M)\rightarrow \Omega^{k-1}(M)$$, which has degree $−1$ and is adjoint to the exterior differential $d$.

Vector field structures
On a pseudo-Riemannian manifold, $1$-forms can be identified with vector fields; vector fields have additional distinct algebraic structures, which are listed here for context and to avoid confusion.

Firstly, each (co)tangent space generates a Clifford algebra, where the product of a (co)vector with itself is given by the value of a quadratic form – in this case, the natural one induced by the metric. This algebra is distinct from the exterior algebra of differential forms, which can be viewed as a Clifford algebra where the quadratic form vanishes (since the exterior product of any vector with itself is zero). Clifford algebras are thus non-anticommutative ("quantum") deformations of the exterior algebra. They are studied in geometric algebra.

Another alternative is to consider vector fields as derivations. The (noncommutative) algebra of differential operators they generate is the Weyl algebra and is a noncommutative ("quantum") deformation of the symmetric algebra in the vector fields.

Exterior differential complex
One important property of the exterior derivative is that $d^2 = 0$; that is, applying the exterior derivative twice always yields zero. This means that the exterior derivative defines a cochain complex:
 * $$0\ \to\ \Omega^0(M)\ \stackrel{d}{\to}\ \Omega^1(M)\ \stackrel{d}{\to}\ \Omega^2(M)\ \stackrel{d}{\to}\ \Omega^3(M)\ \to\ \cdots \ \to\ \Omega^n(M)\ \to \ 0.$$

This complex is called the de Rham complex, and its cohomology is by definition the de Rham cohomology of $M$. By the Poincaré lemma, the de Rham complex is locally exact except at $Ω^{0}(M)$. The kernel at $Ω^{0}(M)$ is the space of locally constant functions on $M$. Therefore, the complex is a resolution of the constant sheaf $R$, which in turn implies a form of de Rham's theorem: de Rham cohomology computes the sheaf cohomology of $R$.
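The identity $d^2 = 0$ can be checked in coordinates on $\mathbf{R}^3$, where the first two steps of the complex amount to $\operatorname{curl}(\operatorname{grad} f) = 0$. A SymPy sketch with an arbitrary smooth function for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.sin(x*y) + z**3          # an arbitrary smooth 0-form

# df = f_x dx + f_y dy + f_z dz
F = [sp.diff(f, c) for c in (x, y, z)]

# d(df): coefficients of dy∧dz, dz∧dx, dx∧dy (the curl of grad f)
ddf = [sp.simplify(sp.diff(F[2], y) - sp.diff(F[1], z)),
       sp.simplify(sp.diff(F[0], z) - sp.diff(F[2], x)),
       sp.simplify(sp.diff(F[1], x) - sp.diff(F[0], y))]

assert ddf == [0, 0, 0]         # d∘d = 0, by equality of mixed partials
print(ddf)
```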

Pullback
Suppose that $f : M → N$ is smooth. The differential of $f$ is a smooth map $df : TM → TN$ between the tangent bundles of $M$ and $N$. This map is also denoted $f_{∗}$ and called the pushforward. For any point $p ∈ M$ and any tangent vector $v ∈ T_{p}M$, there is a well-defined pushforward vector $f_{∗}(v)$ in $T_{f(p)}N$. However, the same is not true of a vector field. If $f$ is not injective, say because $q ∈ N$ has two or more preimages, then the vector field may determine two or more distinct vectors in $T_{q}N$. If $f$ is not surjective, then there will be a point $q ∈ N$ at which $f_{∗}$ does not determine any tangent vector at all. Since a vector field on $N$ determines, by definition, a unique tangent vector at every point of $N$, the pushforward of a vector field does not always exist.

By contrast, it is always possible to pull back a differential form. A differential form on $N$ may be viewed as a linear functional on each tangent space. Precomposing this functional with the differential $df : TM → TN$ defines a linear functional on each tangent space of $M$ and therefore a differential form on $M$. The existence of pullbacks is one of the key features of the theory of differential forms. It leads to the existence of pullback maps in other situations, such as pullback homomorphisms in de Rham cohomology.

Formally, let $f : M → N$ be smooth, and let $ω$ be a smooth $k$-form on $N$. Then there is a differential form $f^{∗}ω$ on $M$, called the pullback of $ω$, which captures the behavior of $ω$ as seen relative to $f$. To define the pullback, fix a point $p$ of $M$ and tangent vectors $v_{1}$, ..., $v_{k}$ to $M$ at $p$. The pullback of $ω$ is defined by the formula
 * $$(f^*\omega)_p(v_1, \ldots, v_k) = \omega_{f(p)}(f_*v_1, \ldots, f_*v_k).$$

There are several more abstract ways to view this definition. If $ω$ is a $1$-form on $N$, then it may be viewed as a section of the cotangent bundle $T^{*}N$ of $N$. The dual of the differential of $f$ is the map $(df)^{*} : T^{*}N → T^{*}M$. The pullback of $ω$ may be defined to be the composite
 * $$M\ \stackrel{f}{\to}\ N\ \stackrel{\omega}{\to}\ T^*N\ \stackrel{(df)^*}{\longrightarrow}\ T^*M.$$

This is a section of the cotangent bundle of $M$ and hence a differential $1$-form on $M$. In full generality, let $\bigwedge^k (df)^*$ denote the $k$th exterior power of the dual map to the differential. Then the pullback of a $k$-form $ω$ is the composite
 * $$M\ \stackrel{f}{\to}\ N\ \stackrel{\omega}{\to}\ {\textstyle\bigwedge}^k T^*N\ \stackrel{{\bigwedge}^k (df)^*}{\longrightarrow}\ {\textstyle\bigwedge}^k T^*M.$$

Another abstract way to view the pullback comes from viewing a $k$-form $ω$ as a linear functional on tangent spaces. From this point of view, $ω$ is a morphism of vector bundles
 * $${\textstyle\bigwedge}^k TN\ \stackrel{\omega}{\to}\ N \times \mathbf{R},$$

where $N × R$ is the trivial rank one bundle on $N$. The composite map
 * $${\textstyle\bigwedge}^k TM\ \stackrel{{\bigwedge}^k df}{\longrightarrow}\ {\textstyle\bigwedge}^k TN\ \stackrel{\omega}{\to}\ N \times \mathbf{R}$$

defines a linear functional on each tangent space of $M$, and therefore it factors through the trivial bundle $M × R$. The vector bundle morphism ${\textstyle\bigwedge}^k TM \to M \times \mathbf{R}$ defined in this way is $f^{∗}ω$.

Pullback respects all of the basic operations on forms. If $ω$ and $η$ are forms and $c$ is a real number, then
 * $$\begin{align}

f^*(c\omega) &= c(f^*\omega), \\ f^*(\omega + \eta) &= f^*\omega + f^*\eta, \\ f^*(\omega \wedge \eta) &= f^*\omega \wedge f^*\eta, \\ f^*(d\omega) &= d(f^*\omega). \end{align}$$

The pullback of a form can also be written in coordinates. Assume that $x^{1}$, ..., $x^{m}$ are coordinates on $M$, that $y^{1}$, ..., $y^{n}$ are coordinates on $N$, and that these coordinate systems are related by the formulas $y^{i} = f_{i}(x^{1}, ..., x^{m})$ for all $i$. Locally on $N$, $ω$ can be written as
 * $$\omega = \sum_{i_1 < \cdots < i_k} \omega_{i_1\cdots i_k} \, dy^{i_1} \wedge \cdots \wedge dy^{i_k},$$

where, for each choice of $i_{1}$, ..., $i_{k}$, $ω_{i_1\cdots i_k}$ is a real-valued function of $y^{1}$, ..., $y^{n}$. Using the linearity of pullback and its compatibility with exterior product, the pullback of $ω$ has the formula
 * $$f^*\omega = \sum_{i_1 < \cdots < i_k} (\omega_{i_1\cdots i_k}\circ f) \, df_{i_1} \wedge \cdots \wedge df_{i_k}.$$

Each exterior derivative $df_{i}$ can be expanded in terms of $dx^{1}$, ..., $dx^{m}$. The resulting $k$-form can be written using Jacobian matrices:
 * $$ f^*\omega = \sum_{i_1 < \cdots < i_k} \sum_{j_1 < \cdots < j_k} (\omega_{i_1\cdots i_k}\circ f)\frac{\partial(f_{i_1}, \ldots, f_{i_k})}{\partial(x^{j_1}, \ldots, x^{j_k})} \, dx^{j_1} \wedge \cdots \wedge dx^{j_k}.$$

Here, $\frac{\partial(f_{i_1}, \ldots, f_{i_k})}{\partial(x^{j_1}, \ldots, x^{j_k})}$ denotes the determinant of the matrix whose entries are $\frac{\partial f_{i_m}}{\partial x^{j_n}}$, $$1\leq m,n\leq k$$.
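The coordinate formula can be reproduced in SymPy. In the sketch below, the map $f(u, v) = (uv, u + v)$ and the $1$-form $ω = x\,dy$ are arbitrary illustrative choices:

```python
import sympy as sp

u, v = sp.symbols('u v')       # coordinates on M
# Illustrative map f : M -> N, (u, v) |-> (x, y) = (u*v, u + v)
x_of, y_of = u*v, u + v

# omega = x dy on N; its pullback is f*omega = (x∘f) d(y∘f),
# with d(y∘f) expanded in du and dv.
coeff_du = x_of * sp.diff(y_of, u)
coeff_dv = x_of * sp.diff(y_of, v)

print(coeff_du, coeff_dv)  # u*v u*v, i.e. f*omega = u v (du + dv)
```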

Integration
A differential $k$-form can be integrated over an oriented $k$-dimensional manifold. When the $k$-form is defined on an $n$-dimensional manifold with $n > k$, then the $k$-form can be integrated over oriented $k$-dimensional submanifolds. If $k = 0$, integration over oriented 0-dimensional submanifolds is just the summation of the integrand evaluated at points, according to the orientation of those points. Other values of $k = 1, 2, 3, ...$ correspond to line integrals, surface integrals, volume integrals, and so on. There are several equivalent ways to formally define the integral of a differential form, all of which depend on reducing to the case of Euclidean space.

Integration on Euclidean space
Let $U$ be an open subset of $R^{n}$. Give $R^{n}$ its standard orientation and $U$ the restriction of that orientation. Every smooth $n$-form $ω$ on $U$ has the form
 * $$\omega = f(x)\,dx^1 \wedge \cdots \wedge dx^n$$

for some smooth function $f : R^{n} → R$. Such a function has an integral in the usual Riemann or Lebesgue sense. This allows us to define the integral of $ω$ to be the integral of $f$:
 * $$\int_U \omega\ \stackrel{\text{def}}{=} \int_U f(x)\,dx^1 \cdots dx^n.$$

Fixing an orientation is necessary for this to be well-defined. The skew-symmetry of differential forms means that the integral of, say, $dx^{1} ∧ dx^{2}$ must be the negative of the integral of $dx^{2} ∧ dx^{1}$. Riemann and Lebesgue integrals cannot see this dependence on the ordering of the coordinates, so they leave the sign of the integral undetermined. The orientation resolves this ambiguity.

Integration over chains
Let $M$ be an $n$-manifold and $ω$ an $n$-form on $M$. First, assume that there is a parametrization of $M$ by an open subset of Euclidean space. That is, assume that there exists a diffeomorphism
 * $$\varphi \colon D \to M$$

where $D ⊆ R^{n}$. Give $M$ the orientation induced by $φ$. Then the integral of $ω$ over $M$ is defined to be the integral of $φ^{∗}ω$ over $D$. In coordinates, this has the following expression. Fix an embedding of $M$ in $R^{I}$ with coordinates $x^{1}, ..., x^{I}$. Then
 * $$\omega = \sum_{i_1 < \cdots < i_n} a_{i_1,\ldots,i_n}({\mathbf x})\,dx^{i_1} \wedge \cdots \wedge dx^{i_n}.$$

Suppose that $φ$ is defined by
 * $$\varphi({\mathbf u}) = (x^1({\mathbf u}),\ldots,x^I({\mathbf u})).$$

Then the integral may be written in coordinates as
 * $$\int_M \omega = \int_D \sum_{i_1 < \cdots < i_n} a_{i_1,\ldots,i_n}(\varphi({\mathbf u})) \frac{\partial(x^{i_1},\ldots,x^{i_n})}{\partial(u^{1},\dots,u^{n})}\,du^1 \cdots du^n,$$

where
 * $$\frac{\partial(x^{i_1},\ldots,x^{i_n})}{\partial(u^{1},\ldots,u^{n})}$$

is the determinant of the Jacobian. The Jacobian exists because $φ$ is differentiable.

In general, an $n$-manifold cannot be parametrized by an open subset of $R^{n}$. But such a parametrization is always possible locally, so it is possible to define integrals over arbitrary manifolds by defining them as sums of integrals over collections of local parametrizations. Moreover, it is also possible to define parametrizations of $k$-dimensional subsets for $k < n$, and this makes it possible to define integrals of $k$-forms. To make this precise, it is convenient to fix a standard domain $D$ in $R^{k}$, usually a cube or a simplex. A $k$-chain is a formal sum of smooth embeddings $D → M$. That is, it is a collection of smooth embeddings, each of which is assigned an integer multiplicity. Each smooth embedding determines a $k$-dimensional submanifold of $M$. If the chain is
 * $$c = \sum_{i=1}^r m_i \varphi_i,$$

then the integral of a $k$-form $ω$ over $c$ is defined to be the sum of the integrals over the terms of $c$:
 * $$\int_c \omega = \sum_{i=1}^r m_i \int_D \varphi_i^*\omega.$$

This approach to defining integration does not assign a direct meaning to integration over the whole manifold $M$. However, it is still possible to assign such a meaning indirectly because every smooth manifold may be smoothly triangulated in an essentially unique way, and the integral over $M$ may be defined to be the integral over the chain determined by a triangulation.
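The coordinate formula above can be sketched numerically (the helper names are my own, not from the text): for a $1$-form $ω = a_1\,dx + a_2\,dy$ and a parametrized curve $φ : [0, T] → R^{2}$, the integral is that of $a_1(φ(u))\,x'(u) + a_2(φ(u))\,y'(u)$ over $[0, T]$, and an integer multiplicity on the chain simply scales the result:

```python
import math

# Sketch: integrate a 1-form omega = a1 dx + a2 dy over a parametrized
# curve phi: [0, T] -> R^2 using the coordinate (pullback) formula, with
# a midpoint Riemann sum and centered differences for the derivative.

def integrate_1form(a1, a2, phi, T, n=4000):
    h, eps, total = T / n, 1e-6, 0.0
    for i in range(n):
        u = (i + 0.5) * h
        x0, y0 = phi(u - eps)
        x1, y1 = phi(u + eps)
        dx, dy = (x1 - x0) / (2 * eps), (y1 - y0) / (2 * eps)
        x, y = phi(u)
        total += (a1(x, y) * dx + a2(x, y) * dy) * h
    return total

# omega = -y dx + x dy over the unit circle: the pullback is du, so the
# integral is 2*pi.
circle = lambda u: (math.cos(u), math.sin(u))
val = integrate_1form(lambda x, y: -y, lambda x, y: x, circle, 2 * math.pi)
print(round(val, 4))  # 6.2832

# A chain c = 2*phi (multiplicity 2) doubles the integral.
print(round(2 * val, 4))  # 12.5664
```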

Integration using partitions of unity
There is another approach, which does directly assign a meaning to integration over $M$, but this approach requires fixing an orientation of $M$. The integral of an $n$-form $ω$ on an $n$-dimensional manifold is defined by working in charts. Suppose first that $ω$ is supported on a single positively oriented chart. On this chart, it may be pulled back to an $n$-form on an open subset of $R^{n}$. Here, the form has a well-defined Riemann or Lebesgue integral as before. The change of variables formula and the assumption that the chart is positively oriented together ensure that the integral of $ω$ is independent of the chosen chart. In the general case, use a partition of unity to write $ω$ as a sum of $n$-forms, each of which is supported in a single positively oriented chart, and define the integral of $ω$ to be the sum of the integrals of each term in the partition of unity.

It is also possible to integrate $k$-forms on oriented $k$-dimensional submanifolds using this more intrinsic approach. The form is pulled back to the submanifold, where the integral is defined using charts as before. For example, given a path $γ(t) : [0, 1] → R^{2}$, integrating a $1$-form on the path is simply pulling back the form to a form $f(t) dt$ on $[0, 1]$, and this integral is the integral of the function $f(t)$ on the interval.
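The partition-of-unity approach can be sketched on the circle (the particular charts and bump functions below are my own illustration, not from the text). Two charts cover the circle, each missing one point; a pair of functions summing to $1$, each vanishing at the point its chart misses, splits the integral of $dθ$ into two chart integrals:

```python
import math

# Sketch: integrate the 1-form d(theta) on the circle using two charts
# and a partition of unity.  Chart 1 covers all points except theta = 0,
# chart 2 all except theta = pi; rho1 + rho2 = 1 on the circle.

rho1 = lambda t: math.sin(t / 2) ** 2   # vanishes at theta = 0
rho2 = lambda t: math.cos(t / 2) ** 2   # vanishes at theta = pi

def chart_integral(rho, lo, hi, n=4000):
    """Midpoint Riemann sum of rho(theta) d(theta) over one chart."""
    h = (hi - lo) / n
    return sum(rho(lo + (i + 0.5) * h) * h for i in range(n))

total = (chart_integral(rho1, 0.0, 2 * math.pi)      # chart missing 0
         + chart_integral(rho2, -math.pi, math.pi))  # chart missing pi
print(round(total, 4))  # 6.2832, i.e. 2*pi
```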

Integration along fibers
Fubini's theorem states that the integral over a set that is a product may be computed as an iterated integral over the two factors in the product. This suggests that the integral of a differential form over a product ought to be computable as an iterated integral as well. The geometric flexibility of differential forms ensures that this is possible not just for products, but in more general situations as well. Under some hypotheses, it is possible to integrate along the fibers of a smooth map, and the analog of Fubini's theorem is the case where this map is the projection from a product to one of its factors.

Because integrating a differential form over a submanifold requires fixing an orientation, a prerequisite to integration along fibers is the existence of a well-defined orientation on those fibers. Let $M$ and $N$ be two orientable manifolds of pure dimensions $m$ and $n$, respectively. Suppose that $f : M → N$ is a surjective submersion. This implies that each fiber $f^{-1}(y)$ is $(m − n)$-dimensional and that, around each point of $M$, there is a chart on which $f$ looks like the projection from a product onto one of its factors. Fix $x ∈ M$ and set $y = f(x)$. Suppose that
 * $$\begin{align}
\omega_x &\in {\textstyle\bigwedge}^m T_x^*M, \\ \eta_y &\in {\textstyle\bigwedge}^n T_y^*N, \end{align}$$

and that $η_{y}$ does not vanish. Then there is a unique
 * $$\sigma_x \in {\textstyle\bigwedge}^{m-n} T_x^*(f^{-1}(y))$$

which may be thought of as the fibral part of $ω_{x}$ with respect to $η_{y}$. More precisely, define $j : f^{-1}(y) → M$ to be the inclusion. Then $σ_{x}$ is defined by the property that
 * $$\omega_x = (f^*\eta_y)_x \wedge \sigma'_x \in {\textstyle\bigwedge}^m T_x^*M,$$

where
 * $$\sigma'_x \in {\textstyle\bigwedge}^{m-n} T_x^*M$$

is any $(m − n)$-covector for which
 * $$\sigma_x = j^*\sigma'_x.$$

The form $σ_{x}$ may also be notated $ω_{x} / η_{y}$.

Moreover, for fixed $y$, $σ_{x}$ varies smoothly with respect to $x$. That is, suppose that
 * $$\omega \colon f^{-1}(y) \to T^*M$$

is a smooth section of the projection map; we say that $ω$ is a smooth differential $m$-form on $M$ along $f^{-1}(y)$. Then there is a smooth differential $(m − n)$-form $σ$ on $f^{-1}(y)$ such that, at each $x ∈ f^{-1}(y)$,
 * $$\sigma_x = \omega_x / \eta_y.$$

This form is denoted $ω / η_{y}$. The same construction works if $ω$ is an $m$-form in a neighborhood of the fiber, and the same notation is used. A consequence is that each fiber $f^{-1}(y)$ is orientable. In particular, a choice of orientation forms on $M$ and $N$ defines an orientation of every fiber of $f$.

The analog of Fubini's theorem is as follows. As before, $M$ and $N$ are two orientable manifolds of pure dimensions $m$ and $n$, and $f : M → N$ is a surjective submersion. Fix orientations of $M$ and $N$, and give each fiber of $f$ the induced orientation. Let $ω$ be an $m$-form on $M$, and let $η$ be an $n$-form on $N$ that is almost everywhere positive with respect to the orientation of $N$. Then, for almost every $y ∈ N$, the form $ω / η_{y}$ is a well-defined integrable $(m − n)$-form on $f^{-1}(y)$. Moreover, there is an integrable $n$-form on $N$ defined by
 * $$y \mapsto \bigg(\int_{f^{-1}(y)} \omega / \eta_y\bigg)\,\eta_y.$$

Denote this form by
 * $$\bigg(\int_{f^{-1}(y)} \omega / \eta\bigg)\,\eta.$$

This yields the generalized Fubini formula
 * $$\int_M \omega = \int_N \bigg(\int_{f^{-1}(y)} \omega / \eta\bigg)\,\eta.$$
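As a simple sanity check (an illustration of mine, not from the source), take $M = \mathbf{R}^2$ with coordinates $(x, y)$, $N = \mathbf{R}$, $f(x, y) = x$, and $η = dx$. If $ω = g(x, y)\,dx ∧ dy$, then $f^{∗}η = dx$ and
 * $$\omega = (f^*\eta) \wedge \big(g(x, y)\,dy\big),$$

so $ω / η = g(x, y)\,dy$ on each fiber $f^{-1}(x_0) = \{x_0\} × \mathbf{R}$, and the generalized Fubini formula reduces to the classical Fubini theorem:
 * $$\int_{\mathbf{R}^2} g\,dx \wedge dy = \int_{\mathbf{R}} \bigg(\int_{\mathbf{R}} g(x_0, y)\,dy\bigg)\,dx.$$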

It is also possible to integrate forms of other degrees along the fibers of a submersion. Assume the same hypotheses as before, and let $α$ be a compactly supported $(m − n + k)$-form on $M$. Then there is a $k$-form $γ$ on $N$ which is the result of integrating $α$ along the fibers of $f$. The form $γ$ is defined by specifying, at each $y ∈ N$, how $γ$ pairs with each $k$-vector $\mathbf{v}$ at $y$, and the value of that pairing is an integral over $f^{-1}(y)$ that depends only on $α$, $\mathbf{v}$, and the orientations of $M$ and $N$. More precisely, at each $y ∈ N$, there is an isomorphism
 * $${\textstyle\bigwedge}^k T_yN \to {\textstyle\bigwedge}^{n-k} T_y^*N$$

defined by the interior product
 * $$\mathbf{v} \mapsto \mathbf{v}\,\lrcorner\,\zeta_y,$$

for any choice of volume form $ζ$ in the orientation of $N$. If $x ∈ f^{-1}(y)$, then a $k$-vector $\mathbf{v}$ at $y$ determines an $(n − k)$-covector at $x$ by pullback:
 * $$f^*(\mathbf{v}\,\lrcorner\,\zeta_y) \in {\textstyle\bigwedge}^{n-k} T_x^*M.$$

Each of these covectors has an exterior product against $α$, so there is an $(m − n)$-form $β_{\mathbf{v}}$ on $M$ along $f^{-1}(y)$ defined by
 * $$(\beta_{\mathbf{v}})_x = \left(\alpha_x \wedge f^*(\mathbf{v}\,\lrcorner\,\zeta_y)\right) \big/ \zeta_y \in {\textstyle\bigwedge}^{m-n} T_x^*M.$$

This form depends on the orientation of $N$ but not the choice of $ζ$. Then the $k$-form $γ$ is uniquely defined by the property
 * $$\langle\gamma_y, \mathbf{v}\rangle = \int_{f^{-1}(y)} \beta_{\mathbf{v}},$$

and $γ$ is smooth. This form is also denoted $α^{♭}$ and called the integral of $α$ along the fibers of $f$. Integration along fibers is important for the construction of Gysin maps in de Rham cohomology.

Integration along fibers satisfies the projection formula. If $λ$ is any $ℓ$-form on $N$, then
 * $$\alpha^\flat \wedge \lambda = (\alpha \wedge f^*\lambda)^\flat.$$
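To illustrate the construction (my own example, worked out under the definitions above), let $M = \mathbf{R}^2$ with coordinates $(x, y)$, $N = \mathbf{R}$, $f(x, y) = x$, $ζ = dx$, and let $α = g(x, y)\,dx ∧ dy$ be compactly supported, so $k = 1$. For $\mathbf{v} = ∂/∂x$ one has $\mathbf{v}\,\lrcorner\,ζ = 1$ and $β_{\mathbf{v}} = g\,dy$ on each fiber, so
 * $$\alpha^\flat = \bigg(\int_{\mathbf{R}} g(x, y)\,dy\bigg)\,dx,$$

which is fiberwise integration in the classical sense.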

Stokes's theorem
The fundamental relationship between the exterior derivative and integration is given by Stokes' theorem: if $M$ is an oriented $n$-dimensional manifold with boundary, $ω$ is an $(n − 1)$-form with compact support on $M$, and $∂M$ denotes the boundary of $M$ with its induced orientation, then
 * $$\int_M d\omega = \int_{\partial M} \omega.$$

A key consequence is that "the integral of a closed form over homologous chains is equal": if $ω$ is a closed $k$-form and $M$ and $N$ are $k$-chains that are homologous (such that $M − N$ is the boundary of a $(k + 1)$-chain $W$), then $$\textstyle{\int_M \omega = \int_N \omega}$$, since the difference is the integral $$\textstyle\int_W d\omega = \int_W 0 = 0$$.

For example, if $ω = df$ is the derivative of a potential function on the plane or $R^{n}$, then the integral of $ω$ over a path from $a$ to $b$ does not depend on the choice of path (the integral is $f(b) − f(a)$), since different paths with given endpoints are homotopic, hence homologous (a weaker condition). This case is called the gradient theorem, and generalizes the fundamental theorem of calculus. This path independence is very useful in contour integration.

This theorem also underlies the duality between de Rham cohomology and the homology of chains.
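Stokes' theorem can be checked numerically in a concrete case (my own example): for $ω = x\,dy$ on the closed unit disk, $dω = dx ∧ dy$, so the left-hand side is the area of the disk, $π$, while the right-hand side is the line integral of $x\,dy$ over the unit circle:

```python
import math

# Numerical check of Stokes' theorem for omega = x dy on the unit disk:
# the integral of d(omega) = dx ∧ dy over the disk is its area, pi; the
# boundary integral of x dy over the unit circle is computed by pullback.

n = 100_000
h = 2 * math.pi / n
boundary = 0.0
for i in range(n):
    t = (i + 0.5) * h
    x = math.cos(t)        # x(t) on the circle
    dy_dt = math.cos(t)    # y(t) = sin(t), so y'(t) = cos(t)
    boundary += x * dy_dt * h

print(round(boundary, 4))  # 3.1416
print(round(math.pi, 4))   # 3.1416: the integral of d(omega) over the disk
```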

Relation with measures
On a general differentiable manifold (without additional structure), differential forms cannot be integrated over subsets of the manifold; this is key to the distinction between differential forms, which are integrated over chains or oriented submanifolds, and measures, which are integrated over subsets. The simplest example is attempting to integrate the $1$-form $dx$ over the interval $[0, 1]$. Assuming the usual distance (and thus measure) on the real line, this integral is either $1$ or $−1$, depending on orientation: $$\textstyle{\int_0^1 dx = 1}$$, while $$\textstyle{\int_1^0 dx = - \int_0^1 dx = -1}$$. By contrast, the integral of the measure $|dx|$ on the interval is unambiguously $1$ (i.e. the integral of the constant function $1$ with respect to this measure is $1$). Similarly, under a change of coordinates a differential $n$-form changes by the Jacobian determinant $J$, while a measure changes by the absolute value of the Jacobian determinant, $|J|$, which further reflects the issue of orientation. For example, under the map $x ↦ −x$ on the line, the differential form $dx$ pulls back to $−dx$ (the orientation is reversed), while the Lebesgue measure, which here we denote $|dx|$, pulls back to $|dx|$ (it is unchanged).

In the presence of the additional data of an orientation, it is possible to integrate $n$-forms (top-dimensional forms) over the entire manifold or over compact subsets; integration over the entire manifold corresponds to integrating the form over the fundamental class of the manifold, $[M]$. Formally, in the presence of an orientation, one may identify $n$-forms with densities on a manifold; densities in turn define a measure, and thus can be integrated.

On an orientable but not oriented manifold, there are two choices of orientation; either choice allows one to integrate $n$-forms over compact subsets, with the two choices differing by a sign. On a non-orientable manifold, $n$-forms and densities cannot be identified: notably, any top-dimensional form must vanish somewhere (there are no volume forms on non-orientable manifolds), while there are nowhere-vanishing densities. Thus, while one can integrate densities over compact subsets, one cannot integrate $n$-forms. One can instead identify densities with top-dimensional pseudoforms.

Even in the presence of an orientation, there is in general no meaningful way to integrate $k$-forms over subsets for $k < n$ because there is no consistent way to use the ambient orientation to orient $k$-dimensional subsets. Geometrically, a $k$-dimensional subset can be turned around in place, yielding the same subset with the opposite orientation; for example, the horizontal axis in a plane can be rotated by 180 degrees. Compare the Gram determinant of a set of $k$ vectors in an $n$-dimensional space, which, unlike the determinant of $n$ vectors, is always non-negative, being the square of the $k$-dimensional volume of the parallelepiped the vectors span. An orientation of a $k$-submanifold is therefore extra data not derivable from the ambient manifold.

On a Riemannian manifold, one may define a $k$-dimensional Hausdorff measure for any $k$ (integer or real), which may be integrated over $k$-dimensional subsets of the manifold. A function times this Hausdorff measure can then be integrated over $k$-dimensional subsets, providing a measure-theoretic analog to integration of $k$-forms. The $n$-dimensional Hausdorff measure yields a density, as above.

Currents
The differential form analog of a distribution or generalized function is called a current. The space of $k$-currents on $M$ is the dual space to an appropriate space of differential $k$-forms. Currents play the role of generalized domains of integration, similar to but even more flexible than chains.

Applications in physics
Differential forms arise in some important physical contexts. For example, in Maxwell's theory of electromagnetism, the Faraday 2-form, or electromagnetic field strength, is
 * $$\textbf{F} = \frac 1 2 f_{ab}\, dx^a \wedge dx^b\,,$$

where the $f_{ab}$ are formed from the electromagnetic fields $$\vec E$$ and $$\vec B$$; e.g., $f_{12} = E_{z}/c$, $f_{23} = −B_{z}$, or equivalent definitions.

This form is a special case of the curvature form on the $U(1)$ principal bundle, in terms of which both electromagnetism and general gauge theories may be described. The connection form for the principal bundle is the vector potential, typically denoted by $A$, when represented in some gauge. One then has
 * $$\textbf{F} = d\textbf{A}.$$
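Concretely (a standard computation sketched here with one common sign convention; conventions and factors of $c$ vary across references), writing $\textbf{A} = -\phi\,dt + A_1\,dx^1 + A_2\,dx^2 + A_3\,dx^3$ in units with $c = 1$, the relation $\textbf{F} = d\textbf{A}$ encodes $\vec E = -\nabla\phi - \partial_t \vec A$ and $\vec B = \nabla \times \vec A$:
 * $$\textbf{F} = \sum_i \left(\partial_i \phi + \partial_t A_i\right) dt \wedge dx^i + \sum_{i<j} \left(\partial_i A_j - \partial_j A_i\right) dx^i \wedge dx^j = -\sum_i E_i\,dt \wedge dx^i + \sum_{i<j} \varepsilon_{ijk} B_k\,dx^i \wedge dx^j.$$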

The current $3$-form is
 * $$ \textbf{J} = \frac 1 6 j^a\, \varepsilon_{abcd}\, dx^b \wedge dx^c \wedge dx^d\,,$$

where $j^{a}$ are the four components of the current density. (Here it is a matter of convention to write $F_{ab}$ instead of $f_{ab}$, i.e. to use capital letters, and to write $J^{a}$ instead of $j^{a}$. However, the vector resp. tensor components and the above-mentioned forms have different physical dimensions. Moreover, by decision of an international commission of the International Union of Pure and Applied Physics, the magnetic polarization vector has been called $$\vec J$$ for several decades, and by some publishers $J$; i.e., the same name is used for different quantities.)

Using the above-mentioned definitions, Maxwell's equations can be written very compactly in geometrized units as
 * $$\begin{align}
d {\textbf{F}} &= \textbf{0} \\ d {\star \textbf{F}} &= \textbf{J}, \end{align}$$

where $$\star$$ denotes the Hodge star operator. Similar considerations describe the geometry of gauge theories in general.

The $2$-form $${\star} \mathbf{F}$$, which is dual to the Faraday form, is also called the Maxwell 2-form.

Electromagnetism is an example of a $U(1)$ gauge theory. Here the Lie group is $U(1)$, the one-dimensional unitary group, which is in particular abelian. There are gauge theories, such as Yang–Mills theory, in which the Lie group is not abelian. In that case, one gets relations which are similar to those described here. The analog of the field $F$ in such theories is the curvature form of the connection, which is represented in a gauge by a Lie algebra-valued one-form $A$. The Yang–Mills field $F$ is then defined by
 * $$\mathbf{F} = d\mathbf{A} + \mathbf{A}\wedge\mathbf{A}.$$

In the abelian case, such as electromagnetism, $A ∧ A = 0$, but this does not hold in general. Likewise the field equations are modified by additional terms involving exterior products of $A$ and $F$, owing to the structure equations of the gauge group.
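In components (a hedged sketch of mine; conventions for the wedge of Lie algebra-valued forms vary), writing $\mathbf{A} = A_\mu\,dx^\mu$ with matrix-valued coefficients $A_\mu$, the wedge square picks out the commutator:
 * $$\mathbf{A} \wedge \mathbf{A} = A_\mu A_\nu\,dx^\mu \wedge dx^\nu = \tfrac{1}{2} [A_\mu, A_\nu]\,dx^\mu \wedge dx^\nu,$$

which vanishes exactly when the coefficients commute, as in the abelian case of electromagnetism.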

Applications in geometric measure theory
Numerous minimality results for complex analytic manifolds are based on the Wirtinger inequality for 2-forms. A succinct proof may be found in Herbert Federer's classic text Geometric Measure Theory. The Wirtinger inequality is also a key ingredient in Gromov's inequality for complex projective space in systolic geometry.