Linear equation over a ring

In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring", or, typically "Noetherian integral domain".

In the case of a single equation, the problem splits into two parts. First, the ideal membership problem, which consists, given a non-homogeneous equation
 * $$a_1x_1 + \cdots + a_kx_k=b$$

with $$a_1, \ldots, a_k$$ and $b$ in a given ring $R$, to decide whether it has a solution with $$x_1, \ldots, x_k$$ in $R$, and, if so, to provide one. This amounts to deciding whether $b$ belongs to the ideal generated by the $a_{i}$. The simplest instance of this problem is, for $k = 1$ and $b = 1$, to decide whether $a_1$ is a unit in $R$.
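Over the ring of integers, for instance, ideal membership reduces to a gcd computation: $b$ lies in the ideal generated by $$a_1, \ldots, a_k$$ exactly when $$\gcd(a_1, \ldots, a_k)$$ divides $b$. The following Python sketch (the function names are illustrative, not from any library) decides membership and produces an explicit solution by iterated extended gcd:

```python
def ext_gcd(a, b):
    """Extended Euclid: return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def membership(coeffs, b):
    """Solve a1*x1 + ... + ak*xk = b over Z, or return None when b is not
    in the ideal (a1, ..., ak), i.e. when gcd(a1, ..., ak) does not divide b."""
    # Iterated extended gcd: express g = gcd(a1, ..., ak) as a Z-combination.
    g, combo = coeffs[0], [1] + [0] * (len(coeffs) - 1)
    for i, a in enumerate(coeffs[1:], start=1):
        g2, s, t = ext_gcd(g, a)
        combo = [s * c for c in combo]
        combo[i] = t
        g = g2
    if b % g != 0:
        return None          # b is not in the ideal
    q = b // g
    return [q * c for c in combo]

# 6*x1 + 10*x2 + 15*x3 = 1 is solvable since gcd(6, 10, 15) = 1.
sol = membership([6, 10, 15], 1)
```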

The syzygy problem consists, given $k$ elements $$a_1, \ldots, a_k$$ in $R$, of providing a system of generators of the module of the syzygies of $$(a_1, \ldots, a_k),$$ that is, a system of generators of the submodule of those elements $$(x_1, \ldots, x_k)$$ in $R^{k}$ that are solutions of the homogeneous equation
 * $$a_1x_1 + \cdots + a_kx_k=0.$$

The simplest case, when $k = 1$, amounts to finding a system of generators of the annihilator of $a_{1}$.
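For instance, over the integers with $k = 2$, the syzygy module of a pair $(a, b)$ of nonzero elements is generated by the single element $(b/g, -a/g)$, where $g = \gcd(a, b)$. A minimal Python sketch (illustrative naming):

```python
from math import gcd

def syzygy_generator(a, b):
    """For nonzero a, b in Z, the syzygies of (a, b) -- the pairs (x, y)
    with a*x + b*y == 0 -- form a rank-1 submodule of Z^2 generated by
    (b/g, -a/g), where g = gcd(a, b)."""
    g = gcd(a, b)
    return (b // g, -(a // g))

x, y = syzygy_generator(6, 10)   # -> (5, -3)
assert 6 * x + 10 * y == 0
```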

Given a solution of the ideal membership problem, one obtains all the solutions by adding to it the elements of the module of syzygies. In other words, all the solutions are provided by the solution of these two partial problems.

In the case of several equations, the same decomposition into subproblems occurs. The first problem becomes the submodule membership problem. The second one is also called the syzygy problem.

A ring such that there are algorithms for the arithmetic operations (addition, subtraction, multiplication) and for the above problems may be called a computable ring, or effective ring. One may also say that linear algebra on the ring is effective.

The article considers the main rings for which linear algebra is effective.

Generalities
To be able to solve the syzygy problem, it is necessary that the module of syzygies is finitely generated, because it is impossible to output an infinite list. Therefore, the problems considered here make sense only for a Noetherian ring, or at least a coherent ring. In fact, this article is restricted to Noetherian integral domains because of the following result.


 * Given a Noetherian integral domain, if there are algorithms to solve the ideal membership problem and the syzygies problem for a single equation, then one may deduce from them algorithms for the similar problems concerning systems of equations.

This theorem is useful to prove the existence of algorithms. However, in practice, the algorithms for the systems are designed directly.

A field is an effective ring as soon as one has algorithms for addition, subtraction, multiplication, and computation of multiplicative inverses. In fact, solving the submodule membership problem is what is commonly called solving the system, and solving the syzygy problem is the computation of the null space of the matrix of a system of linear equations. The basic algorithm for both problems is Gaussian elimination.
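As an illustration, the syzygy problem over a field is exactly a null-space computation, which Gaussian elimination solves with exact arithmetic. A sketch over the rationals using Python's `fractions` module (the function name is illustrative):

```python
from fractions import Fraction

def null_space(A):
    """Return a basis of the null space of matrix A over the rationals,
    computed by exact Gaussian elimination (reduced row echelon form)."""
    m = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        p = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    # One basis vector per free (non-pivot) column.
    basis = []
    for free in (c for c in range(cols) if c not in pivots):
        v = [Fraction(0)] * cols
        v[free] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -m[i][free]
        basis.append(v)
    return basis

# The kernel of [[1, 2, 3], [4, 5, 6]] is spanned by (1, -2, 1).
B = null_space([[1, 2, 3], [4, 5, 6]])
```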

Properties of effective rings
Let $R$ be an effective commutative ring.
 * There is an algorithm for testing if an element $a$ is a zero divisor: this amounts to solving the linear equation $ax = 0$.
 * There is an algorithm for testing if an element $a$ is a unit, and if it is, computing its inverse: this amounts to solving the linear equation $ax = 1$.
 * Given an ideal $I$ generated by $a_{1}, ..., a_{k}$,
 * there is an algorithm for testing if two elements of $R$ have the same image in $R/I$: testing the equality of the images of $a$ and $b$ amounts to solving the equation $$a = b + a_1 z_1 + \cdots + a_k z_k$$;
 * linear algebra is effective over $R/I$: for solving a linear system over $R/I$, it suffices to write it over $R$ and to add to one side of the $i$th equation $$a_1 z_{i,1} + \cdots + a_k z_{i,k}$$ (for $i = 1, \ldots$), where the $z_{i,j}$ are new unknowns.
 * Linear algebra is effective on the polynomial ring $$R[x_1, \ldots, x_n]$$ if and only if one has an algorithm that computes an upper bound of the degree of the polynomials that may occur when solving linear systems of equations: if one has solving algorithms, their outputs give the degrees. Conversely, if one knows an upper bound of the degrees occurring in a solution, one may write the unknown polynomials as polynomials with unknown coefficients. Then, as two polynomials are equal if and only if their coefficients are equal, the equations of the problem become linear equations in the coefficients, that can be solved over an effective ring.
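The degree-bound technique in the last point can be made concrete: to solve $a(x)\,u(x) = c(x)$ over the rationals, the bound $\deg u \le \deg c - \deg a$ turns the equation into a triangular linear system in the unknown coefficients of $u$. A Python sketch under these assumptions (names are illustrative):

```python
from fractions import Fraction

def solve_poly_div(a, c):
    """Solve a(x) * u(x) = c(x) over Q by coefficient matching.
    Polynomials are coefficient lists, lowest degree first. The degree
    bound deg(u) <= deg(c) - deg(a) yields a triangular linear system in
    the coefficients of u, solved from the highest degree down."""
    a = [Fraction(x) for x in a]
    c = [Fraction(x) for x in c]
    n = len(c) - len(a) + 1          # number of unknown coefficients of u
    u = [Fraction(0)] * n
    r = c[:]                         # coefficients still to be matched
    for k in range(n - 1, -1, -1):
        u[k] = r[k + len(a) - 1] / a[-1]
        for i, ai in enumerate(a):
            r[k + i] -= u[k] * ai
    if any(x != 0 for x in r):
        return None                  # c is not a multiple of a: no solution
    return u

# (x + 1) * u = x^2 + 3x + 2  gives  u = x + 2, i.e. coefficients [2, 1].
u = solve_poly_div([1, 1], [2, 3, 1])
```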

Over the integers or a principal ideal domain
There are algorithms to solve all the problems addressed in this article over the integers. In other words, linear algebra is effective over the integers; see Linear Diophantine system for details.

More generally, linear algebra is effective on a principal ideal domain if there are algorithms for addition, subtraction and multiplication, and
 * Solving equations of the form $ax = b$, that is, testing whether $a$ is a divisor of $b$, and, if this is the case, computing the quotient $b/a$,
 * Computing Bézout's identity, that is, given $a$ and $b$, computing $s$ and $t$ such that $as + bt$ is a greatest common divisor of $a$ and $b$.

It is useful to extend to the general case the notion of a unimodular matrix by calling unimodular a square matrix whose determinant is a unit. This means that the determinant is invertible and implies that the unimodular matrices are exactly the invertible matrices such that all entries of the inverse matrix belong to the domain.

The above two algorithms imply that given $a$ and $b$ in the principal ideal domain, there is an algorithm computing a unimodular matrix
 * $$\begin{bmatrix} s&t\\u&v \end{bmatrix}$$

such that
 * $$\begin{bmatrix} s&t\\u&v \end{bmatrix} \begin{bmatrix}  a\\b \end{bmatrix} = \begin{bmatrix}\gcd(a,b)\\0 \end{bmatrix}.$$

(This algorithm is obtained by taking for $s$ and $t$ the coefficients of Bézout's identity, and for $u$ and $v$ the quotients of $−b$ and $a$ by $as + bt$; this choice implies that the determinant of the square matrix is $1$.)
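Assuming an extended-gcd routine, this construction can be sketched in Python (function names are illustrative):

```python
def ext_gcd(a, b):
    """Extended Euclid: return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def unimodular_for(a, b):
    """Return a 2x2 integer matrix M with det(M) == 1 mapping the column
    (a, b) to (gcd(a, b), 0), built from the Bezout coefficients."""
    g, s, t = ext_gcd(a, b)
    u, v = -b // g, a // g           # det = s*v - t*u = (s*a + t*b)/g = 1
    return [[s, t], [u, v]]

M = unimodular_for(12, 8)
(s, t), (u, v) = M
assert s * 12 + t * 8 == 4 and u * 12 + v * 8 == 0
assert s * v - t * u == 1            # the matrix is unimodular
```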

Having such an algorithm, the Smith normal form of a matrix may be computed exactly as in the integer case, and this suffices to apply the method described in Linear Diophantine system for getting an algorithm for solving every linear system.

The main case where this is commonly used is the case of linear systems over the ring of univariate polynomials over a field. In this case, the extended Euclidean algorithm may be used for computing the above unimodular matrix.
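A sketch of this extended Euclidean algorithm over the rationals, with polynomials represented as coefficient lists (lowest degree first) and exact arithmetic via Python's `fractions` module (all names are illustrative):

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are lowest degree first)."""
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def padd(p, q, sign=1):
    """Add (or, with sign=-1, subtract) two coefficient lists."""
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return trim([x + sign * y for x, y in zip(p, q)])

def pmul(p, q):
    """Multiply two coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return trim(out)

def pdivmod(a, b):
    """Euclidean division in Q[x]: return (q, r) with a == q*b + r, deg r < deg b."""
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = list(a)
    while len(r) >= len(b) and any(x != 0 for x in r):
        d = len(r) - len(b)
        c = r[-1] / b[-1]
        q[d] += c
        for i, y in enumerate(b):
            r[d + i] -= c * y
        r = trim(r)
    return trim(q), trim(r)

def poly_ext_gcd(a, b):
    """Extended Euclid in Q[x]: return (g, s, t) with s*a + t*b == g."""
    a = [Fraction(x) for x in a]
    b = [Fraction(x) for x in b]
    if all(x == 0 for x in b):
        return trim(a), [Fraction(1)], [Fraction(0)]
    q, r = pdivmod(a, b)
    g, s, t = poly_ext_gcd(b, r)
    # g == s*b + t*r and r == a - q*b, hence g == t*a + (s - t*q)*b.
    return g, t, padd(s, pmul(t, q), sign=-1)

# gcd(x^2 - 1, x - 1) is x - 1 (up to a unit), with Bezout coefficients.
g, s, t = poly_ext_gcd([-1, 0, 1], [-1, 1])
```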

Over polynomial rings over a field
Linear algebra is effective on a polynomial ring $$k[x_1, \ldots, x_n]$$ over a field $k$. This was first proved in 1926 by Grete Hermann. The algorithms resulting from Hermann's results are only of historical interest, as their computational complexity is too high to allow practical computer computation.

Proofs that linear algebra is effective on polynomial rings and computer implementations are presently all based on Gröbner basis theory.