Differential algebra

In mathematics, differential algebra is, broadly speaking, the area of mathematics that studies differential equations and differential operators as algebraic objects, with the aim of deriving properties of differential equations and operators without computing their solutions, much as polynomial algebras are used to study algebraic varieties, the solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.

More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations.

A natural example of a differential field is the field of rational functions in one variable over the complex numbers, $$\mathbb{C}(t),$$ where the derivation is differentiation with respect to $$t.$$ More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.

History
Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. His efforts led to an initial paper, Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations, and two books, Differential Equations From The Algebraic Standpoint and Differential Algebra. Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups.

Definition
A derivation $\partial$ on a ring $R$ is a function $$\partial : R \to R$$ such that
 * $$\partial(r_1 + r_2) = \partial r_1 + \partial r_2\quad$$ (additivity), and
 * $$\partial(r_1 r_2) = (\partial r_1) r_2 + r_1 (\partial r_2)\quad$$ (Leibniz product rule),

for every $$r_1$$ and $$r_2$$ in $$R.$$

A derivation is linear over the integers since these identities imply $$\partial (0)=\partial (1) = 0$$ and $$\partial (-r)=-\partial (r).$$
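These axioms and their consequences are easy to check computationally. A minimal sketch (SymPy is an assumed dependency), taking ordinary differentiation on $\mathbb{Q}[x]$ as the derivation:

```python
import sympy as sp

x = sp.symbols('x')
d = lambda f: sp.diff(f, x)  # differentiation with respect to x is a derivation

r1 = x**2 + 1
r2 = 3*x - 2

# additivity
assert sp.expand(d(r1 + r2) - (d(r1) + d(r2))) == 0
# Leibniz product rule
assert sp.expand(d(r1*r2) - (d(r1)*r2 + r1*d(r2))) == 0
# consequences: d(0) = d(1) = 0 and d(-r) = -d(r)
assert d(sp.Integer(0)) == 0 and d(sp.Integer(1)) == 0
assert d(-r1) == -d(r1)
```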

A differential ring is a commutative ring $$R$$ equipped with one or more derivations that commute pairwise; that is, $$\partial_1 \partial_2 r = \partial_2 \partial_1 r$$ for every pair of derivations $$\partial_1, \partial_2$$ and every $$r\in R.$$ When there is only one derivation one often talks of an ordinary differential ring; otherwise, one talks of a partial differential ring.

A differential field is a differential ring that is also a field. A differential algebra $$A$$ over a differential field $$K$$ is a differential ring that contains $$K$$ as a subring such that the restriction to $$K$$ of the derivations of $$A$$ equal the derivations of $$K.$$ (A more general definition is given below, which covers the case where $$K$$ is not a field, and is essentially equivalent when $$K$$ is a field.)

A Ritt algebra is a differential ring that contains the field $$\Q$$ of the rational numbers. Equivalently, this is a differential algebra over $$\Q,$$ since $$\Q$$ can be considered as a differential field on which every derivation is the zero function.

The constants of a differential ring are the elements $$r$$ such that $$\partial r=0$$ for every derivation $$\partial.$$ The constants of a differential ring form a subring and the constants of a differential field form a subfield. This meaning of "constant" generalizes the concept of a constant function, and must not be confused with the common meaning of a constant.

Basic formulas
In the following identities, $$\delta$$ is a derivation of a differential ring $$R.$$
 * If $$r\in R$$ and $$c$$ is a constant in $$R$$ (that is, $$\delta c=0$$), then $$\delta(c r) = c\, \delta r.$$
 * If $$r\in R$$ and $$u$$ is a unit in $$R,$$ then $$\delta\left(\frac{r}{u}\right) = \frac{(\delta r)\, u - r\, \delta u}{u^2}.$$
 * If $$n$$ is a nonnegative integer and $$r\in R,$$ then $$\delta(r^n) = n r^{n-1} \delta r.$$
 * If $$u_1, \ldots, u_n$$ are units in $$R,$$ and $$e_1, \ldots, e_n$$ are integers, one has the logarithmic derivative identity: $$\frac{\delta(u_1^{e_1} \cdots u_n^{e_n})}{u_1^{e_1} \cdots u_n^{e_n}} = e_1\frac{\delta u_1}{u_1} + \cdots + e_n\frac{\delta u_n}{u_n}.$$
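These identities can be verified in the differential field $\mathbb{Q}(x)$, with differentiation as the derivation; a sketch (SymPy assumed; the particular polynomials and units are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x')
d = lambda f: sp.diff(f, x)

r, u1, u2 = x**3 + x, 1 + x**2, 2 + x**4   # u1, u2 are units in Q(x)

# constant multiple: d(c*r) = c*d(r)
assert sp.simplify(d(5*r) - 5*d(r)) == 0
# quotient rule for a unit u1
assert sp.simplify(d(r/u1) - (d(r)*u1 - r*d(u1))/u1**2) == 0
# power rule
assert sp.simplify(d(r**4) - 4*r**3*d(r)) == 0
# logarithmic derivative identity for w = u1^2 * u2^(-3)
w = u1**2 * u2**-3
assert sp.simplify(d(w)/w - (2*d(u1)/u1 - 3*d(u2)/u2)) == 0
```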

Higher-order derivations
A derivation operator or higher-order derivation is the composition of several derivations. As the derivations of a differential ring are supposed to commute, the order of the derivations does not matter, and a derivation operator may be written as
 * $$\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n},$$

where $$\delta_1, \ldots, \delta_n$$ are the derivations under consideration, $$e_1, \ldots, e_n$$ are nonnegative integers, and the exponent of a derivation denotes the number of times this derivation is composed in the operator.

The sum $$o=e_1+ \cdots +e_n$$ is called the order of derivation. If $$o=1$$ the derivation operator is one of the original derivations. If $$o=0$$, one has the identity function, which is generally considered as the unique derivation operator of order zero. With these conventions, the derivation operators form a free commutative monoid on the set of derivations under consideration.

A derivative of an element $$x$$ of a differential ring is the application of a derivation operator to $$x,$$ that is, with the above notation, $$\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n}(x).$$ A proper derivative is a derivative of positive order.

Differential ideals
A differential ideal $$I$$ of a differential ring $$R$$ is an ideal of the ring $$R$$ that is closed (stable) under the derivations of the ring; that is, $$\partial x\in I$$ for every derivation $$\partial$$ and every $$x\in I.$$ A differential ideal is said to be proper if it is not the whole ring. To avoid confusion, an ideal that is not a differential ideal is sometimes called an algebraic ideal.

The radical of a differential ideal is the same as its radical as an algebraic ideal, that is, the set of the ring elements that have a power in the ideal. The radical of a differential ideal is also a differential ideal. A radical or perfect differential ideal is a differential ideal that equals its radical. A prime differential ideal is a differential ideal that is prime in the usual sense; that is, if a product belongs to the ideal, at least one of the factors belongs to the ideal. A prime differential ideal is always a radical differential ideal.

A discovery of Ritt is that, although the classical theory of algebraic ideals does not work for differential ideals, a large part of it can be extended to radical differential ideals, and this makes them fundamental in differential algebra.

The intersection of any family of differential ideals is a differential ideal, and the intersection of any family of radical differential ideals is a radical differential ideal. It follows that, given a subset $$S$$ of a differential ring, there are three ideals generated by it, which are the intersections of, respectively, all algebraic ideals, all differential ideals, and all radical differential ideals that contain it.

The algebraic ideal generated by $$S$$ is the set of the finite linear combinations of elements of $$S,$$ and is commonly denoted as $$(S)$$ or $$\langle S \rangle.$$

The differential ideal generated by $$S$$ is the set of the finite linear combinations of elements of $$S$$ and of the derivatives of any order of these elements; it is commonly denoted as $$[S].$$ When $$S$$ is finite, $$[S]$$ is generally not finitely generated as an algebraic ideal.

The radical differential ideal generated by $$S$$ is commonly denoted as $$\{S\}.$$ There is no known way to characterize its elements in a similar way as for the two other cases.

Differential polynomials
A differential polynomial over a differential field $$K$$ is a formalization of the concept of differential equation such that the known functions appearing in the equation belong to $$K,$$ and the indeterminates are symbols for the unknown functions.

So, let $$K$$ be a differential field, which is typically (but not necessarily) a field of rational fractions $$K(X)=K(x_1,\ldots ,x_n)$$ (fractions of multivariate polynomials), equipped with derivations $$\partial_i$$ such that $$\partial_i x_i=1$$ and $$\partial_i x_j=0$$ if $$i\neq j$$ (the usual partial derivatives).

For defining the ring $ K \{ Y \}= K \{ y_1, \ldots, y_n \}$ of differential polynomials over $$K$$ with indeterminates in $$Y=\{y_1,\ldots, y_n\}$$ with derivations $$\partial_1, \ldots, \partial_n,$$ one introduces infinitely many new indeterminates of the form $$\Delta y_i,$$ where $$\Delta$$ is any derivation operator of positive order. With this notation, $$K \{ Y \}$$ is the set of polynomials in all these indeterminates, with the natural derivations (each polynomial involves only a finite number of indeterminates). In particular, if $$n=1,$$ one has
 * $$K\{y\}=K\left[y, \partial y, \partial^2 y, \partial^3 y, \ldots\right].$$
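This construction can be modeled directly; a sketch (SymPy assumed) that represents $\partial^k y$ by a symbol $y_k$, truncated at a finite order $N$, and extends the derivation to $K\{y\}$ by the Leibniz rule:

```python
import sympy as sp

# Model K{y} = K[y, ∂y, ∂²y, ...] with symbols y0, y1, ..., where yk stands for ∂^k y.
N = 6  # truncation order: enough for the polynomials below
ys = sp.symbols(f'y0:{N}')

def D(p):
    """Formal derivation on K{y}: sends yk to y(k+1), extended by the Leibniz rule."""
    return sum(sp.diff(p, ys[k]) * ys[k + 1] for k in range(N - 1))

p = ys[0]*ys[1] + ys[2]**2   # represents y·∂y + (∂²y)²
# D(p) = (∂y)² + y·∂²y + 2·∂²y·∂³y
assert sp.expand(D(p) - (ys[1]**2 + ys[0]*ys[2] + 2*ys[2]*ys[3])) == 0
```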

Even when $$n=1,$$ a ring of differential polynomials is not Noetherian. This makes the theory of this generalization of polynomial rings difficult. However, two facts allow such a generalization.

Firstly, a finite number of differential polynomials together involve only a finite number of indeterminates. It follows that every property of polynomials that involves a finite number of polynomials remains true for differential polynomials. In particular, greatest common divisors exist, and a ring of differential polynomials is a unique factorization domain.

The second fact is that, if the field $$K$$ contains the field of rational numbers, the rings of differential polynomials over $$K$$ satisfy the ascending chain condition on radical differential ideals. This theorem of Ritt is implied by its generalization, sometimes called the Ritt–Raudenbush basis theorem, which asserts that if $$R$$ is a Ritt algebra (that is, a differential ring containing the field of rational numbers) that satisfies the ascending chain condition on radical differential ideals, then the ring of differential polynomials $$R\{y\}$$ satisfies the same property (one passes from the univariate to the multivariate case by applying the theorem iteratively).

This Noetherian property implies that, in a ring of differential polynomials, every radical differential ideal $I$ is finitely generated as a radical differential ideal; this means that there exists a finite set $S$ of differential polynomials such that $I$ is the smallest radical differential ideal containing $S$. This allows representing a radical differential ideal by such a finite set of generators, and computing with these ideals. However, some usual computations of the algebraic case cannot be extended. In particular, no algorithm is known for testing membership of an element in a radical differential ideal or the equality of two radical differential ideals.

Another consequence of the Noetherian property is that a radical differential ideal can be uniquely expressed as the intersection of a finite number of prime differential ideals, called essential prime components of the ideal.

Elimination methods
Elimination methods are algorithms that preferentially eliminate a specified set of derivatives from a set of differential equations, commonly done to better understand and solve sets of differential equations.

Categories of elimination methods include characteristic set methods, differential Gröbner bases methods and resultant based methods.

Common operations used in elimination algorithms include 1) ranking derivatives, polynomials, and polynomial sets, 2) identifying a polynomial's leading derivative, initial and separant, 3) polynomial reduction, and 4) creating special polynomial sets.

Ranking derivatives
The ranking of derivatives is a total order and an admissible order, defined as:
 * $ \forall p \in \Theta Y, \ \forall \theta_\mu \in \Theta : \theta_\mu p > p. $
 * $ \forall p,q \in \Theta Y, \ \forall \theta_\mu \in \Theta : p \ge q \Rightarrow \theta_\mu p \ge \theta_\mu q. $

Each derivative has an integer tuple, and a monomial order ranks the derivative by ranking the derivative's integer tuple. The integer tuple identifies the differential indeterminate, the derivative's multi-index and may identify the derivative's order. Types of ranking include:
 * Orderly ranking : $$ \forall y_i, y_j \in Y, \ \forall \theta_\mu, \theta_\nu \in \Theta \ : \ \operatorname{ord}(\theta_\mu) \ge \operatorname{ord}(\theta_\nu) \Rightarrow \theta_\mu y_i \ge \theta_\nu y_j$$
 * Elimination ranking : $$\forall y_i, y_j \in Y, \ \forall \theta_\mu, \theta_\nu \in \Theta \ : \ y_i \ge y_j \Rightarrow \theta_\mu y_i \ge \theta_\nu y_j$$

In this example, the integer tuple identifies the differential indeterminate and derivative's multi-index, and lexicographic monomial order, $ \ge_\text{lex}$, determines the derivative's rank.
 * $$\eta(\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n}(y_j))= (j, e_1, \ldots, e_n) $$.
 * $$ \eta(\theta_\mu y_j) \ge_\text{lex} \eta(\theta_\nu y_k) \Rightarrow \theta_\mu y_j \ge \theta_\nu y_k. $$
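In Python, tuple comparison is itself lexicographic, so the map $\eta$ gives a direct implementation of this ranking; a sketch for two derivations (the particular tuples are arbitrary examples):

```python
# Encode the derivative delta1^e1 ∘ delta2^e2 (y_j) as the tuple (j, e1, e2);
# Python compares tuples lexicographically, which realizes >=_lex directly.
eta = lambda j, e1, e2: (j, e1, e2)

# y_2's derivatives outrank y_1's: the first tuple entry dominates
assert eta(2, 0, 0) > eta(1, 3, 1)
# same indeterminate: the multi-index decides
assert eta(1, 2, 0) > eta(1, 1, 5)
# sorting derivatives by rank is just sorting their tuples
ranked = sorted([eta(1, 1, 0), eta(2, 0, 1), eta(1, 0, 2)], reverse=True)
assert ranked[0] == (2, 0, 1)
```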

Leading derivative, initial and separant
This is the standard polynomial form: $$ p = a_d \cdot u_p^d+ a_{d-1} \cdot u_p^{d-1} + \cdots +a_1 \cdot u_p+ a_0 $$.
 * Leader or leading derivative is the polynomial's highest ranked derivative: $$u_p$$.
 * Coefficients $$a_d, \ldots, a_0$$ do not contain the leading derivative $u_p$.
 * Degree of polynomial is the leading derivative's greatest exponent: $$\deg_{u_p}(p) = d$$.
 * Initial is the coefficient: $$ I_p=a_d$$.
 * Rank is the leading derivative raised to the polynomial's degree: $$u_p^d$$.
 * Separant is the derivative: $$ S_p= \frac{\partial p}{\partial u_p}$$.

Separant set is $$S_A= \{ S_p \mid p \in A \} $$, initial set is $$I_A= \{ I_p \mid p \in A \} $$ and combined set is $H_A= S_A \cup I_A $.
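The leader, initial, and separant are straightforward to extract once the leading derivative is treated as an ordinary polynomial variable; a sketch (SymPy assumed, with a hypothetical coefficient symbol $a$ standing for lower-ranked derivatives):

```python
import sympy as sp

# u plays the role of the leading derivative u_p; a stands in for lower-ranked terms.
u, a = sp.symbols('u a')
p = (a + 1)*u**3 + a*u + 2

P = sp.Poly(p, u)
degree   = P.degree()        # d, the degree in the leading derivative
initial  = P.LC()            # I_p = a_d, the leading coefficient
separant = sp.diff(p, u)     # S_p = ∂p/∂u_p

assert degree == 3
assert sp.expand(initial - (a + 1)) == 0
assert sp.expand(separant - (3*(a + 1)*u**2 + a)) == 0
```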

Reduction
Partially reduced (partial normal form) polynomial $q$ with respect to polynomial $p$ indicates that these polynomials are non-ground field elements, $p,q \in \mathcal{K} \{ Y \} \setminus \mathcal{K}$, and that $$q$$ contains no proper derivative of $$u_p$$.

A partially reduced polynomial $q$ with respect to polynomial $p$ becomes a reduced (normal form) polynomial $q$ with respect to $p$ if the degree of $u_p$ in $q$ is less than the degree of $u_p$ in $p$.

An autoreduced polynomial set has every polynomial reduced with respect to every other polynomial of the set. Every autoreduced set is finite. An autoreduced set is triangular meaning each polynomial element has a distinct leading derivative.

Ritt's reduction algorithm identifies integers $i_{A_{k}}, s_{A_{k}}$ and transforms a differential polynomial $f$ using pseudodivision to a lower or equally ranked remainder polynomial $f_\text{red}$ that is reduced with respect to the autoreduced polynomial set $A$. The algorithm's first step partially reduces the input polynomial and the algorithm's second step fully reduces the polynomial. The formula for reduction is:
 * $$ f_\text{red} \equiv \prod_{A_k \in A} I_{A_k}^{i_{A_k}} \cdot S_{A_k}^{s_{A_k}} \cdot f \pmod{[A]} \text{ with } i_{A_k}, s_{A_k} \in \mathbb{N}. $$
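The core algebraic step of this reduction is pseudodivision by an element of the set; a sketch (SymPy assumed; the full differential algorithm also reduces by derivatives of the set's elements, which is omitted here):

```python
import sympy as sp

u, a = sp.symbols('u a')
A1 = a*u**2 + 1      # one element of an autoreduced set, with leader u and initial a
f  = u**3 + u + 1    # polynomial to reduce

# Algebraic pseudo-division: I^k · f = Q·A1 + R with deg_u(R) < deg_u(A1)
R = sp.prem(f, A1, u)
assert sp.degree(R, u) < sp.degree(A1, u)   # the remainder is reduced w.r.t. A1
```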

Ranking polynomial sets
Set $A$ is a differential chain if the rank of the leading derivatives is $u_{A_{1}} < \dots < u_{A_{m}}$ and $\forall i, \ A_{i}$ is reduced with respect to $A_{i+1}$.

Autoreduced sets $A$ and $B$  each contain ranked polynomial elements. This procedure ranks two autoreduced sets by comparing pairs of identically indexed polynomials from both autoreduced sets.
 * $$A_1 < \cdots < A_m \in A $$ and $$B_1 < \cdots < B_n \in B $$ and $$ i,j,k \in \mathbb{N}$$.
 * $$ \text{rank } A < \text{rank } B $$ if there is a $$ k \le \operatorname{minimum}(m,n) $$ such that $$ A_i = B_i$$ for $ 1 \le i < k $ and $$ A_k < B_k $$.
 * $$ \operatorname{rank} A < \operatorname{rank} B $$ if $$ n < m $$ and $$A_i = B_i$$ for $$1 \le i \le n $$.
 * $$ \operatorname{rank} A = \operatorname{rank} B $$ if $$ n = m $$ and $$A_i = B_i$$ for $$1 \le i \le n $$.
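These comparison rules can be sketched in Python, with the ranks of the polynomials abstracted to integers (a hypothetical encoding; only the comparison logic is illustrated):

```python
# Compare two autoreduced sets, represented as ascending lists of integer ranks.
def set_rank_lower(A, B):
    """True if rank A < rank B under the rules in the text (a sketch)."""
    for a, b in zip(A, B):
        if a != b:
            return a < b          # first differing pair decides
    return len(A) > len(B)        # all compared pairs equal: the longer set ranks lower

assert set_rank_lower([1, 2, 5], [1, 3])      # A_2 < B_2 decides
assert set_rank_lower([1, 2, 3], [1, 2])      # equal prefix, A longer => rank A < rank B
assert not set_rank_lower([1, 2], [1, 2])     # equal sets have equal rank
```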

Polynomial sets
A characteristic set $C$ is the lowest ranked autoreduced subset among all the ideal's autoreduced subsets whose polynomials' separants are non-members of the ideal $\mathcal{I}$.

The delta polynomial applies to a polynomial pair $p,q$ whose leaders share a common derivative, $\theta_{\alpha} u_{p}= \theta_{\beta} u_{q}$. The least common derivative operator for the polynomial pair's leading derivatives is $\theta_{pq}$, and the delta polynomial is:
 * $$\Delta\text{-poly}(p,q)= S_{q} \cdot \frac{\theta_{pq}}{\theta_{p}}\, p - S_{p} \cdot \frac{\theta_{pq}}{\theta_{q}}\, q, $$

where $\theta_{p}$ and $\theta_{q}$ are the derivative operators appearing in the leaders of $p$ and $q$, so that $\frac{\theta_{pq}}{\theta_{p}} = \theta_{\alpha}$ and $\frac{\theta_{pq}}{\theta_{q}} = \theta_{\beta}$.

A coherent set is a polynomial set that reduces its delta polynomial pairs to zero.

Regular system and regular ideal
A regular system $\Omega$ contains an autoreduced and coherent set of differential equations $A$ and an inequation set $H_{\Omega} \supseteq H_A$, with set $H_\Omega$ reduced with respect to the equation set.

Regular differential ideal $\mathcal{I}_\text{dif} $ and regular algebraic ideal $\mathcal{I}_\text{alg} $  are saturation ideals that arise from a regular system. Lazard's lemma states that the regular differential and regular algebraic ideals are radical ideals.
 * Regular differential ideal : $\mathcal{I}_\text{dif}=[A]:H_\Omega^\infty.$
 * Regular algebraic ideal : $\mathcal{I}_\text{alg}=(A):H_\Omega^\infty.$

Rosenfeld–Gröbner algorithm
The Rosenfeld–Gröbner algorithm decomposes the radical differential ideal as a finite intersection of regular radical differential ideals. These regular differential radical ideals, represented by characteristic sets, are not necessarily prime ideals and the representation is not necessarily minimal.

The membership problem is to determine if a differential polynomial $p$ is a member of an ideal generated from a set of differential polynomials $S$. The Rosenfeld–Gröbner algorithm generates sets of Gröbner bases. The algorithm determines that a polynomial is a member of the ideal if and only if the partially reduced remainder polynomial is a member of the algebraic ideal generated by the Gröbner bases.

The Rosenfeld–Gröbner algorithm facilitates creating Taylor series expansions of solutions to the differential equations.

Differential fields
Example 1: $(\operatorname{Mer}(f(y)), \partial_{y})$ is the differential meromorphic function field with a single standard derivation.

Example 2: $(\mathbb{C} \{ y \}, (1+3 \cdot y + y^{2}) \cdot \partial_{y} ) $ is a differential field with a linear differential operator as the derivation.

Derivation
Define $E^{a}(p(y))=p(y+a)$ as shift operator $E^{a}$  for polynomial $p(y)$.

A shift-invariant operator $T$ commutes with the shift operator: $E^{a} \circ T=T \circ E^{a}$.

The Pincherle derivative, a derivation of shift-invariant operator $T$ , is $T^{\prime} = T \circ y - y \circ T $.
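Both properties are easy to verify on polynomials; a sketch (SymPy assumed), taking $T$ to be differentiation, which is shift-invariant, and $a=2$ as an arbitrary shift amount:

```python
import sympy as sp

y = sp.symbols('y')
a = 2                                 # arbitrary shift amount

E = lambda p: p.subs(y, y + a)        # shift operator E^a
D = lambda p: sp.diff(p, y)           # differentiation, a shift-invariant operator

p = y**3 + y
# shift invariance: D commutes with E^a
assert sp.expand(E(D(p)) - D(E(p))) == 0

# Pincherle derivative T' = T∘y − y∘T; for T = D it is the identity operator
pinch = lambda p: D(y*p) - y*D(p)
assert sp.expand(pinch(p) - p) == 0
```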

Constants
The ring of integers is $$(\mathbb{Z}, \delta)$$, and every integer is a constant.
 * The derivation of 1 is zero. $ \delta(1)=\delta(1 \cdot 1)=\delta(1) \cdot 1 + 1 \cdot \delta(1) = 2 \cdot \delta(1) \Rightarrow \delta(1)=0$.
 * Also, $$ \delta(m+1)=\delta(m)+\delta(1)=\delta(m) \Rightarrow \delta(m+1)=\delta(m) $$.
 * By induction, $$ \delta(1)=0 \ \wedge \ \delta(m+1)= \delta(m) \Rightarrow \forall \ m \in \mathbb{Z}, \ \delta(m)=0 $$.

The field of rational numbers is $$(\mathbb{Q}, \delta)$$, and every rational number is a constant.
 * Every rational number is a quotient of integers.
 * $$ \forall r \in \mathbb{Q}, \ \exists \ a \in \mathbb{Z}, \ b \in \mathbb{Z} \setminus \{ 0 \}, \ r=\frac{a}{b} $$
 * Apply the derivation formula for quotients recognizing that derivations of integers are zero:
 * $$ \delta (r)= \delta \left ( \frac{a}{b} \right ) = \frac{\delta(a) \cdot b - a \cdot \delta(b)}{b^{2}}=0 $$.

Differential subring
Constants form the subring of constants $(\mathbb{C}, \partial_{y}) \subset (\mathbb{C} \{ y \}, \partial_{y}) $.

Differential ideal
Element $\exp(y)$ simply generates differential ideal $ [\exp(y)] $  in the differential ring $(\mathbb{C} \{ y, \exp(y) \}, \partial_{y}) $.

Algebra over a differential ring
Any ring with identity is a $\mathbb{Z}$-algebra. Thus a differential ring is a $\mathbb{Z}$-algebra.

If ring $\mathcal{R}$ is a subring of the center of unital ring $\mathcal{M}$, then $\mathcal{M}$ is an $\mathcal{R}$-algebra. Thus, a differential ring is an algebra over its differential subring. This is the natural structure of an algebra over its subring.

Special and normal polynomials
Ring $(\mathbb{Q} \{ y, z \}, \partial_y) $ has irreducible polynomials, $p$  (normal, squarefree) and $q$  (special, ideal generator).
 * $ \partial_y(y)=1, \ \partial_y(z)=1+z^2, \ z=\tan(y)$
 * $p(y)=1+y^2, \ \partial_y(p)=2 \cdot y,\ \gcd(p, \partial_y(p))=1$
 * $q(z)=1+z^2, \ \partial_y(q)=2 \cdot z \cdot (1+z^2),\ \gcd(q, \partial_{y}(q))=q$
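The gcd computations distinguishing the normal polynomial $p$ from the special polynomial $q$ can be reproduced directly; a sketch (SymPy assumed; the derivation $\partial_y$ acts on $q(z)$ through the chain rule with $\partial_y(z)=1+z^2$):

```python
import sympy as sp

y, z = sp.symbols('y z')

# p is normal: gcd(p, ∂_y p) = 1, with ∂_y acting as d/dy
p = 1 + y**2
assert sp.gcd(p, sp.diff(p, y)) == 1

# ∂_y(q) = dq/dz · ∂_y(z) = 2z · (1 + z²) by the chain rule
q = 1 + z**2
dq = sp.diff(q, z) * (1 + z**2)
# q is special: it divides its own derivative, so gcd(q, ∂_y q) = q
assert sp.simplify(sp.gcd(q, dq) - q) == 0
```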

Ranking
Ring $(\mathbb{Q} \{ y_{1}, y_{2} \}, \delta)$ has derivatives $\delta(y_{1})=y_{1}^{\prime}$  and $\delta(y_{2})=y_{2}^{\prime}$
 * Map each derivative to an integer tuple: $\eta( \delta^{(i_{2})}(y_{i_{1}}) )=(i_{1}, i_{2})$.
 * Rank derivatives and integer tuples: $ y_{2}^{\prime \prime} \ (2,2) > y_{2}^{\prime} \ (2,1) > y_{2} \ (2,0) > y_{1}^{\prime \prime} \ (1,2) > y_{1}^{\prime} \ (1,1) > y_{1} \ (1,0) $.

Leading derivative and initial
The leading derivatives (red) and initials (blue) are:
 * $ p={\color{Blue} (y_{1}+ y_{1}^{\prime})} \cdot ({\color{Red} y_{2}^{\prime \prime}})^{2} + 3 \cdot y_{1}^{2} \cdot {\color{Red}y_{2}^{\prime \prime}} + (y_{1}^{\prime})^{2} $
 * $ q={\color{Blue}(y_{1}+ 3 \cdot y_{1}^{\prime})} \cdot {\color{Red} y_{2}^{\prime \prime}} + y_{1} \cdot y_{2}^{\prime} + (y_{1}^{\prime})^{2} $
 * $ r= {\color{Blue} (y_{1}+3)} \cdot ({\color{Red} y_{1}^{\prime \prime}})^{2} + y_{1}^{2} \cdot {\color{Red} y_{1}^{\prime \prime}}+ 2 \cdot y_{1} $

Separants

 * $ S_{p}= 2 \cdot (y_{1}+ y_{1}^{\prime}) \cdot y_{2}^{\prime \prime} + 3 \cdot y_{1}^{2}$.
 * $ S_{q}= y_{1}+ 3 \cdot y_{1}^{\prime}$
 * $ S_{r}= 2 \cdot (y_{1}+3) \cdot y_{1}^{\prime \prime} + y_{1}^{2}$

Autoreduced sets

 * Autoreduced sets are $\{ p, r \}$ and $\{ q, r \}$. Each set is triangular, with distinct leading derivatives.
 * The non-autoreduced set $\{ p, q \}$ contains $p$ only partially reduced with respect to $q$; this set is non-triangular because the polynomials have the same leading derivative.

Symbolic integration
Symbolic integration uses algorithms involving polynomials and their derivatives, such as Hermite reduction, the Czichowski algorithm, the Lazard–Rioboo–Trager algorithm, the Horowitz–Ostrogradsky algorithm, squarefree factorization, and splitting factorization into special and normal polynomials.

Differential equations
Differential algebra can determine if a set of differential polynomial equations has a solution. A total order ranking may identify algebraic constraints. An elimination ranking may determine if one or a selected group of independent variables can express the differential equations. Using triangular decomposition and elimination order, it may be possible to solve the differential equations one differential indeterminate at a time in a step-wise method. Another approach is to create a class of differential equations with a known solution form; matching a differential equation to its class identifies the equation's solution. Methods are available to facilitate the numerical integration of a differential-algebraic system of equations.

In a study of non-linear dynamical systems with chaos, researchers used differential elimination to reduce differential equations to ordinary differential equations involving a single state variable. They were successful in most cases, and this facilitated developing approximate solutions, efficiently evaluating chaos, and constructing Lyapunov functions. Researchers have applied differential elimination to understanding cellular biology, compartmental biochemical models, parameter estimation and quasi-steady state approximation (QSSA) for biochemical reactions. Using differential Gröbner bases, researchers have investigated non-classical symmetry properties of non-linear differential equations. Other applications include control theory, model theory, and algebraic geometry. Differential algebra also applies to differential-difference equations.

Subrings
The differential ring $(\mathcal{R}, \Delta_{R})$ is a differential subring of $(\mathcal{S}, \Delta_{S})$ if $\mathcal{R}$ is a subring of $\mathcal{S}$, and the derivation set $\Delta_{R}$ is the derivation set $\Delta_{S}$ restricted to $\mathcal{R}$. An equivalent statement is that $(\mathcal{S}, \Delta_{S})$ is a differential overring of $(\mathcal{R}, \Delta_{R})$.

The intersection of any family of differential subrings is a differential subring. The intersection of any set of differential subrings containing a common set is a differential subring, and the smallest differential subring containing a common set is the intersection of all subrings containing the common set.

Set $\Theta A$ generates differential ring $\mathcal{R} \{ A \}$ over $\mathcal{R}$. This is the smallest differential subring containing differential subring $\mathcal{R}$ and set $\Theta A$. A finitely generated differential subring arises from a finite set, and a simply generated differential subring arises from a single element. Adjoining or adding an element to the generator set extends the differential ring. Using the square bracket notation for ring extension, $\mathcal{R} \{ A \}=\mathcal{R} [ \Theta A ]$.

Set $\Theta A$ generates differential field $\mathcal{F} \langle A \rangle$ over field $\mathcal{F}$. Using the parentheses notation for a field extension, $\mathcal{F} \langle A \rangle =\mathcal{F} ( \Theta A )$.

A field $K$ is a closed differential field if, whenever a set of differential polynomials $f_{i} \in K \{ y_{1}, \ldots, y_{m} \}$ for $i \in \{ 1, \ldots, m \}$ has a solution in a differential field $L$ extending $K$, it has a solution in $K$. Any differential field may be extended to a closed differential field. Differential Galois theory studies differential field extensions and the associated Galois group.

Ring homomorphism
A differential ring homomorphism is a map $\operatorname{f}: \mathcal{R} \to \mathcal{S}$ of differential rings that share the same derivation set, $\Delta_{R}=\Delta_{S}$, such that the ring homomorphism commutes with derivation: $\forall r \in \mathcal{R}, \ \forall \delta \in \Delta \ : \ \delta (\operatorname{f}(r))= \operatorname{f}(\delta(r))$.
 * The kernel is a differential ideal of $\mathcal{R}$, and the image is a differential subring.
 * The ring $\mathcal{S}$ is an extension of $\mathcal{R}$, and $\mathcal{R}$ is a subring of $\mathcal{S}$, if the ring homomorphism is an inclusion.
 * For a differential ring $\mathcal{R}$ and differential ideal $\mathcal{I}$, the canonical homomorphism maps the ring to the differential residue ring: $\operatorname{f}: \mathcal{R} \to \mathcal{R} / \mathcal{I}$.

Modules
A differential $\mathcal{R}$-module, or module over the differential ring $(\mathcal{R}, \Delta)$, is a module $\mathcal{M}$ whose elements follow these sum and product derivation rules, for $\delta \in \Delta, \ r \in \mathcal{R}, \ u,v \in \mathcal{M}$:
 * $\delta(u+v)= \delta (u) + \delta (v)$
 * $\delta(r \cdot u)= \delta (r) \cdot u + r \cdot \delta (u)$

A differential vector space is a differential module over a differential field.

A differential $\mathcal{R}$-algebra, or differential algebra over $\mathcal{R}$, is a ring $\mathcal{M}$ that is an $\mathcal{R}$-algebra, together with a derivation set $\Delta$ that makes $\mathcal{M}$ a differential ring and that follows this derivation product rule:
 * $$\forall \delta \in \Delta, \ \forall r \in \mathcal{R}, \ \forall u \in \mathcal{M} \ : \ \delta(r \cdot u)= \delta (r) \cdot u + r \cdot \delta (u).$$

Differential graded vector space
A $\operatorname{\mathbb{Z}-graded}$ vector space $V_{\bullet} $  is a collection of vector spaces $V_{m}$  with integer degree $|v|=m$  for $ v\in V_{m}$. A direct sum can represent this graded vector space:
 * $$V_{\bullet} = \bigoplus_{m \in \mathbb{Z}} V_{m}$$

A differential graded vector space, or chain complex, is a graded vector space $V_{\bullet}$ with a differential map or boundary map $d_{m}: V_{m} \to V_{m-1}$ with $$ d_{m} \circ d_{m+1} = 0. $$

A cochain complex is a graded vector space $V^{\bullet}$ with a differential map or coboundary map $d^{m}: V^{m} \to V^{m+1}$ with $$ d^{m+1} \circ d^{m} = 0. $$
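The defining condition that consecutive boundary maps compose to zero can be checked on a small numeric example; a sketch (NumPy assumed, with hypothetical matrices for $d_2$ and $d_1$):

```python
import numpy as np

# A small chain complex V2 → V1 → V0 with boundary maps satisfying d1∘d2 = 0.
d2 = np.array([[ 1.0],
               [-1.0],
               [ 1.0]])           # V2 (dim 1) → V1 (dim 3)
d1 = np.array([[1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])  # V1 (dim 3) → V0 (dim 2)

# row 1: 1·1 + 1·(−1) + 0·1 = 0;  row 2: 0·1 + 1·(−1) + 1·1 = 0
assert np.allclose(d1 @ d2, 0)
```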

Differential graded algebra
A differential graded algebra is a graded algebra $A$ with a linear derivation $d: A \to A $  with $$d \circ d=0 $$ that follows the graded Leibniz product rule.
 * Graded Leibniz product rule: $$\forall a,b \in A, \ d(a \cdot b)=d(a) \cdot b + (-1)^{|a|} \cdot a \cdot d(b)$$ with $$|a|$$ the degree of vector $$a$$.

Lie algebra
A Lie algebra is a finite-dimensional real or complex vector space $\mathfrak{g}$ with a bilinear bracket operator $[\cdot,\cdot]:\mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ satisfying skew symmetry and the Jacobi identity, for all $$ X, Y, Z \in \mathfrak{g}$$:
 * Skew symmetry: $$ [X,Y]= -[Y,X]$$
 * Jacobi identity property: $$ [X,[Y,Z]]+[Y,[Z,X]] + [Z,[X,Y]]=0 $$

The adjoint operator $\operatorname{ad}_{X}(Y)=[Y,X]$ is a derivation of the bracket, because the adjoint's effect on the binary bracket operation is analogous to the derivation's effect on the binary product operation. This is the inner derivation determined by $X$.
 * $$ \operatorname{ad}_{X}([Y,Z]) = [\operatorname{ad}_{X}(Y),Z] + [Y,\operatorname{ad}_{X}(Z)] $$
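Both the Lie algebra axioms and the derivation property of the adjoint can be checked numerically for a matrix Lie algebra, where the bracket is the commutator; a sketch (NumPy assumed, with random 3×3 matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

br = lambda A, B: A @ B - B @ A   # matrix commutator bracket [A, B]

# skew symmetry
assert np.allclose(br(X, Y), -br(Y, X))
# Jacobi identity
assert np.allclose(br(X, br(Y, Z)) + br(Y, br(Z, X)) + br(Z, br(X, Y)), 0)

# the adjoint ad_X(W) = [W, X] is a derivation of the bracket
ad = lambda W: br(W, X)
assert np.allclose(ad(br(Y, Z)), br(ad(Y), Z) + br(Y, ad(Z)))
```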

The universal enveloping algebra $U(\mathfrak{g})$ of Lie algebra $\mathfrak{g}$ is a maximal associative algebra with identity, generated by the Lie algebra elements of $\mathfrak{g}$ and containing products defined by the bracket operation. Maximal means that a linear homomorphism maps the universal algebra to any other algebra that otherwise has these properties. The adjoint operator is a derivation following the Leibniz product rule, for all $$ X,Y,Z \in U(\mathfrak{g})$$:
 * Product in $$U(\mathcal{g})$$ : $$X \cdot Y - Y \cdot X = [X,Y]$$
 * Leibniz product rule: $$\operatorname{ad}_{X}( Y \cdot Z)=\operatorname{ad}_{X}(Y) \cdot Z + Y \cdot \operatorname{ad}_{X}(Z)$$

Weyl algebra
The Weyl algebra is an algebra $A_{n}(K)$ over a ring $K [p_{1}, q_{1}, \dots, p_{n}, q_{n}]$ with a specific noncommutative product:
 * $$ p_{i} \cdot q_{i} - q_{i} \cdot p_{i}=1 \quad \text{for } i \in \{1, \dots, n \}. $$

All other indeterminate products are commutative for $i,j \in \{1, \dots, n \}$ :
 * $$ p_{i} \cdot q_{j} - q_{j} \cdot p_{i}=0 \text{ if  } i \ne j, \ p_{i} \cdot p_{j} - p_{j} \cdot p_{i}=0, \ q_{i} \cdot q_{j} - q_{j} \cdot q_{i}=0 $$.

A Weyl algebra can represent the derivations for a commutative ring's polynomials $f \in K[y_{1}, \ldots, y_{n}]$. The Weyl algebra's elements are endomorphisms, the elements $p_{1}, \ldots, p_{n}$ function as standard derivations, and map compositions generate linear differential operators. D-module is a related approach for understanding differential operators. The endomorphisms are:
 * $$ q_{j} (y_{k})= y_{j} \cdot y_{k}, \ q_{j}(c)= c \cdot y_{j} \text{ with  } c \in K, \ p_{j}(y_{j})=1, \ p_{j}(y_{k})=0 \text{  if  } j \ne k, \ p_{j}(c)= 0 \text{  with  } c \in K $$
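The defining relation can be checked directly for $n=1$, realizing $q$ as multiplication by $y$ and $p$ as $\partial/\partial y$ acting on $K[y]$; a sketch (SymPy assumed):

```python
import sympy as sp

y = sp.symbols('y')

# Weyl algebra A_1 acting on K[y]: q = multiplication by y, p = d/dy
q = lambda f: y * f
p = lambda f: sp.diff(f, y)

f = y**3 + 2*y
# defining relation: p∘q − q∘p acts as the identity operator
# p(q(f)) = d/dy(y·f) = f + y·f',  q(p(f)) = y·f',  difference = f
assert sp.expand(p(q(f)) - q(p(f)) - f) == 0
```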

Pseudodifferential operator ring
The associative, possibly noncommutative ring $A$ has derivation $d: A \to A $.

The pseudo-differential operator ring $A((\partial^{-1}))$ is a left $\operatorname{A-module}$ containing ring elements $L$ :
 * $$ a_i \in A, \ i, i_{\min} \in \mathbb{Z} \ : \ L= \sum_{i \ge i_{\min}}^n a_i \cdot \partial^i$$

The derivative operator is $ d(a) = \partial \circ a - a \circ \partial $.

The binomial coefficient is $$\binom{i}{k}$$.

Pseudo-differential operator multiplication is:
 * $$\sum_{i \ge i_{\min}}^n a_i \cdot \partial^i \cdot \sum_{j\ge j_{\min}}^m b_{j} \cdot \partial^j = \sum_{i,j;\,k \ge 0} \binom{i}{k} \cdot a_i \cdot d^k(b_j) \cdot \partial^{i+j-k}$$

Open problems
The Ritt problem asks whether there is an algorithm that determines if one prime differential ideal contains a second prime differential ideal, when characteristic sets identify both ideals.

The Kolchin catenary conjecture states that, given a $d>0$ dimensional irreducible differential algebraic variety $V$ and an arbitrary point $p \in V$, a long gap chain of irreducible differential algebraic subvarieties occurs from $p$ to $V$.

The Jacobi bound conjecture concerns the upper bound for the order of a differential variety's irreducible components. The polynomials' orders determine a Jacobi number, and the conjecture is that the Jacobi number determines this bound.