Dual number

In algebra, the dual numbers are a hypercomplex number system first introduced in the 19th century. They are expressions of the form $a + bε$, where $a$ and $b$ are real numbers, and $ε$ is a symbol taken to satisfy $$\varepsilon^2 = 0$$ with $$\varepsilon\neq 0$$.

Dual numbers can be added component-wise, and multiplied by the formula
 * $$ (a+b\varepsilon)(c+d\varepsilon) = ac + (ad+bc)\varepsilon, $$

which follows from the property $ε^2 = 0$ and the fact that multiplication is a bilinear operation.
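These two operations can be sketched in code by representing $a + bε$ as the pair $(a, b)$; the function names here are illustrative, not standard.

```python
# A minimal sketch of dual-number arithmetic: a + b*eps is the pair (a, b).

def dual_add(x, y):
    """(a + b*eps) + (c + d*eps) = (a + c) + (b + d)*eps"""
    a, b = x
    c, d = y
    return (a + c, b + d)

def dual_mul(x, y):
    """(a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0"""
    a, b = x
    c, d = y
    return (a * c, a * d + b * c)

eps = (0.0, 1.0)
print(dual_mul(eps, eps))                # eps**2 = 0 -> (0.0, 0.0)
print(dual_mul((2.0, 3.0), (4.0, 5.0))) # (2+3e)(4+5e) = 8+22e -> (8.0, 22.0)
```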

The dual numbers form a commutative algebra of dimension two over the reals, and also an Artinian local ring. They are one of the simplest examples of a ring that has nonzero nilpotent elements.

History
Dual numbers were introduced in 1873 by William Clifford, and were used at the beginning of the twentieth century by the German mathematician Eduard Study to represent the dual angle, which measures the relative position of two skew lines in space. Study defined a dual angle as $θ + dε$, where $θ$ is the angle between the directions of two lines in three-dimensional space and $d$ is the distance between them. The $n$-dimensional generalization, the Grassmann number, was introduced by Hermann Grassmann in the late 19th century.

Modern definition
In modern algebra, the algebra of dual numbers is often defined as the quotient of a polynomial ring over the real numbers $$(\mathbb{R})$$ by the principal ideal generated by the square of the indeterminate, that is
 * $$\mathbb{R}[X]/\left\langle X^2 \right\rangle.$$

It may also be defined as the exterior algebra of a one-dimensional vector space with $$\varepsilon$$ as its basis element.

Division
Division of dual numbers is defined when the real part of the denominator is non-zero. The division process is analogous to complex division in that the denominator is multiplied by its conjugate in order to cancel the non-real parts.

Therefore, to evaluate an expression of the form
 * $$\frac{a + b\varepsilon}{c + d\varepsilon}$$

we multiply the numerator and denominator by the conjugate of the denominator:
 * $$\begin{align}

\frac{a + b\varepsilon}{c + d\varepsilon} &= \frac{(a + b\varepsilon)(c - d\varepsilon)}{(c + d\varepsilon)(c - d\varepsilon)}\\[5pt] &= \frac{ac - ad\varepsilon + bc\varepsilon - bd\varepsilon^2}{c^2 + cd\varepsilon - cd\varepsilon - d^2\varepsilon^2}\\[5pt] &= \frac{ac - ad\varepsilon + bc\varepsilon - 0}{c^2 - 0}\\[5pt] &= \frac{ac + \varepsilon(bc - ad)}{c^2}\\[5pt] &= \frac{a}{c} + \frac{bc - ad}{c^2}\varepsilon \end{align}$$

which is defined when $c$ is non-zero.
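The closed-form quotient above can be sketched directly, again with $a + bε$ represented as the pair $(a, b)$:

```python
# Sketch of dual-number division via the conjugate formula derived above;
# it requires the real part c of the denominator to be nonzero.

def dual_div(x, y):
    a, b = x
    c, d = y
    if c == 0:
        raise ZeroDivisionError("purely nonreal denominator")
    # (a + b*eps)/(c + d*eps) = a/c + ((b*c - a*d)/c**2)*eps
    return (a / c, (b * c - a * d) / c ** 2)

print(dual_div((8.0, 22.0), (4.0, 5.0)))  # recovers (2.0, 3.0)
```

Note that dividing $(8 + 22ε)$ by $(4 + 5ε)$ recovers $(2 + 3ε)$, inverting the product $(2+3ε)(4+5ε) = 8+22ε$.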

If, on the other hand, $c$ is zero while $d$ is not, then the equation
 * $$a + b\varepsilon = (x + y\varepsilon)\, d\varepsilon = xd\varepsilon$$

 * 1) has no solution if $a$ is nonzero
 * 2) is otherwise solved by any dual number of the form $b⁄d + yε$.

This means that the non-real part of the "quotient" is arbitrary, and division is therefore not defined for purely nonreal dual numbers. Indeed, they are (trivially) zero divisors and clearly form an ideal of the associative algebra (and thus ring) of the dual numbers.

Matrix representation
The dual number $$a + b\varepsilon$$ can be represented by the square matrix $$\begin{pmatrix}a & b \\ 0 & a \end{pmatrix}$$. In this representation the matrix $$\begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}$$ squares to the zero matrix, corresponding to the dual number $$\varepsilon$$.

There are other ways to represent dual numbers as square matrices. They consist of representing the dual number $$1$$ by the identity matrix, and $$\varepsilon$$ by any matrix whose square is the zero matrix; that is, in the case of $2×2$ matrices, any nonzero matrix of the form
 * $$\begin{pmatrix}a & b \\ c & -a \end{pmatrix}$$

with $$a^2+bc=0.$$
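A quick sketch can confirm that the matrices $$\begin{pmatrix}a & b \\ 0 & a\end{pmatrix}$$ multiply exactly like the dual numbers they represent (helper names are illustrative):

```python
# Check that [[a, b], [0, a]] behaves like a + b*eps under multiplication.

def mat_mul(m, n):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def to_matrix(a, b):
    """Matrix representation of the dual number a + b*eps."""
    return [[a, b], [0, a]]

eps = to_matrix(0, 1)
print(mat_mul(eps, eps))                          # zero matrix: eps**2 = 0
print(mat_mul(to_matrix(2, 3), to_matrix(4, 5)))  # matches (2+3e)(4+5e) = 8+22e
```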

Differentiation
One application of dual numbers is automatic differentiation. Any polynomial


 * $$P(x) = p_0 + p_1x + p_2x^2 + \cdots + p_nx^n$$

with real coefficients can be extended to a function of a dual-number-valued argument,


 * $$\begin{align}

P(a + b\varepsilon) &= p_0 + p_1(a + b\varepsilon) + \cdots + p_n(a + b\varepsilon)^n \\[2mu] &= p_0 + p_1 a + p_2 a^2 + \cdots + p_n a^n + p_1 b\varepsilon + 2 p_2 a b\varepsilon + \cdots + n p_n a^{n-1} b\varepsilon \\[5mu] &= P(a) + bP'(a)\varepsilon, \end{align}$$

where $$P'$$ is the derivative of $$P.$$

More generally, any (analytic) real function can be extended to the dual numbers via its Taylor series:


 * $$f(a + b\varepsilon) = \sum_{n=0}^\infty \frac{f^{(n)} (a)b^n \varepsilon^n}{n!} = f(a) + bf'(a)\varepsilon,$$

since all terms involving $ε^{2}$ or greater powers are trivially $0$ by the definition of $ε$.

By computing compositions of these functions over the dual numbers and examining the coefficient of $ε$ in the result we find we have automatically computed the derivative of the composition.
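This procedure can be sketched as forward-mode automatic differentiation in code: evaluating a function at $a + 1·ε$ yields both $f(a)$ and $f'(a)$. The class below is a minimal illustration, supporting only addition and multiplication.

```python
# Sketch of forward-mode automatic differentiation with dual numbers:
# evaluating f at Dual(a, 1.0) yields (f(a), f'(a)).

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # real part, eps-coefficient
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):                              # f(x) = 3x^2 + 2x + 1, so f'(x) = 6x + 2
    return 3 * x * x + 2 * x + 1

y = f(Dual(4.0, 1.0))
print(y.a, y.b)                        # f(4) = 57.0, f'(4) = 26.0
```

The derivative appears in the $ε$-coefficient with no symbolic manipulation and no finite-difference approximation.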

A similar method works for polynomials of $n$ variables, using the exterior algebra of an $n$-dimensional vector space.

Geometry
The "unit circle" of dual numbers consists of those $z = a + bε$ with $a = ±1$, since these satisfy $zz* = 1$, where $z* = a − bε$. However, note that
 * $$ e^{b \varepsilon} = \sum^\infty_{n=0} \frac{\left(b\varepsilon\right)^n}{n!} = 1 + b \varepsilon,$$

so the exponential map applied to the $ε$-axis covers only half the "circle".
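The truncation of the exponential series can be checked numerically: summing $(bε)^n/n!$ with dual arithmetic leaves only the $n = 0$ and $n = 1$ terms.

```python
# Verify exp(b*eps) = 1 + b*eps by summing the exponential series,
# using the pair representation (real part, eps-coefficient).
from math import factorial

def dual_mul(x, y):
    a, b = x
    c, d = y
    return (a * c, a * d + b * c)

def dual_pow(x, n):
    out = (1.0, 0.0)
    for _ in range(n):
        out = dual_mul(out, x)
    return out

def dual_exp(x, terms=20):
    s = (0.0, 0.0)
    for n in range(terms):
        p = dual_pow(x, n)
        s = (s[0] + p[0] / factorial(n), s[1] + p[1] / factorial(n))
    return s

print(dual_exp((0.0, 5.0)))  # -> (1.0, 5.0): only n = 0 and n = 1 survive
```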

Let $z = a + bε$. If $a ≠ 0$ and $m = b⁄a$, then $z = a(1 + mε)$ is the polar decomposition of the dual number $z$, and the slope $m$ is its angular part. The concept of a rotation in the dual number plane is equivalent to a vertical shear mapping since $(1 + pε)(1 + qε) = 1 + (p + q)ε$.

In absolute space and time the Galilean transformation
 * $$\left(t', x'\right) = (t, x)\begin{pmatrix} 1 & v \\0 & 1 \end{pmatrix}\,,$$

that is
 * $$t' = t,\quad x' = vt + x,$$

relates the resting coordinate system to a moving frame of reference of velocity $v$. With dual numbers $t + xε$ representing events along one space dimension and time, the same transformation is effected by multiplication by $1 + vε$.
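This identification of the boost with a dual multiplication can be sketched directly (the pair $(t, x)$ stands for $t + xε$):

```python
# Multiplying the event t + x*eps by 1 + v*eps reproduces the
# Galilean boost t' = t, x' = x + v*t.

def dual_mul(x, y):
    a, b = x
    c, d = y
    return (a * c, a * d + b * c)

t, x, v = 2.0, 3.0, 10.0
event = (t, x)                  # the event t + x*eps
boost = (1.0, v)                # the boost 1 + v*eps
print(dual_mul(event, boost))   # (2.0, 23.0): t' = 2, x' = 3 + 10*2
```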

Cycles
Given two dual numbers $p$ and $q$, they determine the set of $z$ such that the difference in slopes ("Galilean angle") between the lines from $z$ to $p$ and $q$ is constant. This set is a cycle in the dual number plane; since the equation setting the difference in slopes of the lines to a constant is a quadratic equation in the real part of $z$, a cycle is a parabola. The "cyclic rotation" of the dual number plane occurs as a motion of its projective line. According to Isaak Yaglom, the cycle $Z = {z : y = αx^{2} }$ is invariant under the composition of the shear
 * $$x_1 = x ,\quad y_1 = vx + y $$

with the translation
 * $$x' = x_1 + \frac{v}{2\alpha} ,\quad  y' = y_1 + \frac{v^2}{4\alpha}. $$

Applications in mechanics
Dual numbers find applications in mechanics, notably for kinematic synthesis. For example, the dual numbers make it possible to transform the input/output equations of a four-bar spherical linkage, which includes only rotoid joints, into a four-bar spatial mechanism (rotoid, rotoid, rotoid, cylindrical). The dualized angles are made of a primitive part, the angles, and a dual part, which has units of length. See screw theory for more.

Algebraic geometry
In modern algebraic geometry, the dual numbers over a field $$k$$ (by which we mean the ring $$k[\varepsilon]/(\varepsilon^2)$$) may be used to define the tangent vectors to the points of a $$k$$-scheme. Since the field $$k$$ can be chosen intrinsically, it is possible to speak simply of the tangent vectors to a scheme. This allows notions from differential geometry to be imported into algebraic geometry.

In detail: The ring of dual numbers may be thought of as the ring of functions on the "first-order neighborhood of a point" – namely, the $$ k$$-scheme $$ \operatorname{Spec} (k[\varepsilon]/(\varepsilon^2))$$. Then, given a $$ k$$-scheme $$ X$$, $$ k$$-points of the scheme are in 1-1 correspondence with maps $$ \operatorname{Spec} k \to X $$, while tangent vectors are in 1-1 correspondence with maps $$ \operatorname{Spec} (k[\varepsilon]/(\varepsilon^2)) \to X $$.

The field $$k$$ above can be chosen intrinsically to be a residue field. To wit: Given a point $$x$$ on a scheme $$S$$, consider the stalk $$S_x$$. Observe that $$S_x$$ is a local ring with a unique maximal ideal, which is denoted $$\mathfrak m_x$$. Then simply let $$k = S_x / \mathfrak m_x$$.

Generalizations
This construction can be carried out more generally: for a commutative ring $R$ one can define the dual numbers over $R$ as the quotient of the polynomial ring $R[X]$ by the ideal $(X^{2})$: the image of $X$ then has square equal to zero and corresponds to the element $ε$ from above.

Arbitrary module of elements of zero square
There is a more general construction of the dual numbers. Given a commutative ring $$R$$ and a module $$M$$, there is a ring $$R[M]$$ called the ring of dual numbers which has the following structures:

It is the $$R$$-module $$R \oplus M$$ with the multiplication defined by $$(r, i) \cdot \left(r', i'\right) = \left(rr', ri' + r'i\right)$$ for $$r, r' \in R$$ and $$i, i' \in M.$$

The algebra of dual numbers is the special case where $$M = R$$ and $$\varepsilon = (0, 1).$$
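A small sketch of this multiplication rule, taking $R = \mathbb{R}$ and $M = \mathbb{R}^2$ for concreteness (the representation as nested tuples is illustrative):

```python
# Sketch of the ring R[M] = R (+) M for M = R^2, with multiplication
# (r, i)(r', i') = (r r', r i' + r' i).

def rm_mul(x, y):
    r, (i1, i2) = x
    s, (j1, j2) = y
    return (r * s, (r * j1 + s * i1, r * j2 + s * i2))

m = (0.0, (1.0, 2.0))        # an element of M, embedded in R[M]
print(rm_mul(m, m))          # every element of M squares to zero
print(rm_mul((2.0, (1.0, 0.0)), (3.0, (0.0, 1.0))))  # (6.0, (3.0, 2.0))
```

As the first product shows, every element of $M$ becomes nilpotent of square zero in $R[M]$, generalizing $\varepsilon^2 = 0$.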

Superspace
Dual numbers find applications in physics, where they constitute one of the simplest non-trivial examples of a superspace. Equivalently, they are supernumbers with just one generator; supernumbers generalize the concept to $n$ distinct generators $ε$, each anti-commuting, possibly taking $n$ to infinity. Superspace generalizes supernumbers slightly, by allowing multiple commuting dimensions.

The motivation for introducing dual numbers into physics follows from the Pauli exclusion principle for fermions. The direction along $ε$ is termed the "fermionic" direction, and the real component is termed the "bosonic" direction. The fermionic direction earns this name from the fact that fermions obey the Pauli exclusion principle: under the exchange of coordinates, the quantum mechanical wave function changes sign, and thus vanishes if two coordinates are brought together; this physical idea is captured by the algebraic relation $ε^{2} = 0$.

Projective line
The idea of a projective line over dual numbers was advanced by Grünwald and Corrado Segre.

Just as the Riemann sphere needs a north pole point at infinity to close up the complex projective line, so a line at infinity succeeds in closing up the plane of dual numbers to a cylinder.

Suppose $D$ is the ring of dual numbers $x + yε$ and $U$ is the subset with $x ≠ 0$. Then $U$ is the group of units of $D$. Let $B = {(a, b) ∈ D × D : a ∈ U or b ∈ U}$. A relation is defined on B as follows: $(a, b) ~ (c, d)$ when there is a $u$ in $U$ such that $ua = c$ and $ub = d$. This relation is in fact an equivalence relation. The points of the projective line over $D$ are equivalence classes in $B$ under this relation: $P(D) = B/~$. They are represented with projective coordinates $[a, b]$.

Consider the embedding $D → P(D)$ by $z → [z, 1]$. Then points $[1, n]$, for $n^{2} = 0$, are in $P(D)$ but are not the image of any point under the embedding. $P(D)$ is mapped onto a cylinder by projection: Take a cylinder tangent to the dual number plane on the line ${yε : y ∈ R}$, $ε^{2} = 0$. Now take the opposite line on the cylinder for the axis of a pencil of planes. The planes intersecting the dual number plane and cylinder provide a correspondence of points between these surfaces. The plane parallel to the dual number plane corresponds to points $[1, n]$, $n^{2} = 0$ in the projective line over dual numbers.