Puiseux series



In mathematics, Puiseux series are a generalization of power series that allow for negative and fractional exponents of the indeterminate. For example, the series
 * $$ \begin{align}

x^{-2} &+ 2x^{-1/2} + x^{1/3} + 2x^{11/6} + x^{8/3} + x^5 + \cdots\\ &=x^{-12/6}+ 2x^{-3/6} + x^{2/6} + 2x^{11/6} + x^{16/6} + x^{30/6} + \cdots \end{align}$$ is a Puiseux series in the indeterminate $x$. Puiseux series were first introduced by Isaac Newton in 1676 and rediscovered by Victor Puiseux in 1850.

The definition of a Puiseux series requires that the denominators of the exponents be bounded. So, by reducing the exponents to a common denominator $n$, a Puiseux series becomes a Laurent series in an $n$th root of the indeterminate. For example, the series above is a Laurent series in $$x^{1/6}.$$ Because a nonzero complex number has $n$ distinct $n$th roots, a convergent Puiseux series typically defines $n$ functions in a neighborhood of $0$.

Puiseux's theorem, sometimes also called the Newton–Puiseux theorem, asserts that, given a polynomial equation $$P(x,y)=0$$ with complex coefficients, its solutions in $y$, viewed as functions of $x$, may be expanded as Puiseux series in $x$ that are convergent in some neighborhood of $0$. In other words, every branch of an algebraic curve may be locally described by a Puiseux series in $x$ (or in $x - x_0$ when considering branches above a neighborhood of $x_0 \neq 0$).

Using modern terminology, Puiseux's theorem asserts that the set of Puiseux series over an algebraically closed field of characteristic 0 is itself an algebraically closed field, called the field of Puiseux series. It is the algebraic closure of the field of formal Laurent series, which itself is the field of fractions of the ring of formal power series.

Definition
If $K$ is a field (such as the complex numbers), a Puiseux series with coefficients in $K$ is an expression of the form
 * $$f = \sum_{k=k_0}^{+\infty} c_k T^{k/n}$$

where $$n$$ is a positive integer and $$k_0$$ is an integer. In other words, Puiseux series differ from Laurent series in that they allow for fractional exponents of the indeterminate, as long as these fractional exponents have bounded denominator (here $n$). Just as with Laurent series, Puiseux series allow for negative exponents of the indeterminate, as long as these negative exponents are bounded below (here by $$k_0$$). Addition and multiplication are as expected: for example,
 * $$ (T^{-1} + 2T^{-1/2} + T^{1/3} + \cdots) + (T^{-5/4} - T^{-1/2} + 2 + \cdots) = T^{-5/4} + T^{-1} + T^{-1/2} + 2 + \cdots$$

and
 * $$ (T^{-1} + 2T^{-1/2} + T^{1/3} + \cdots) \cdot (T^{-5/4} - T^{-1/2} + 2 + \cdots) = T^{-9/4} + 2T^{-7/4} - T^{-3/2} + T^{-11/12} + 4T^{-1/2} + \cdots.$$

One may define these operations by first rewriting the exponents with a common denominator $N$ and then performing the operation in the corresponding field of formal Laurent series in $$T^{1/N}$$.
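These operations can be sketched in plain Python by representing a truncated Puiseux series as a dictionary mapping exponents (as `Fraction`s) to coefficients; this dict representation is an illustrative device for this article's examples, not a standard library feature. The script reproduces the two computations displayed above on the given truncations (terms beyond the displayed ones belong to the truncation tails).

```python
from fractions import Fraction as F

def add(f, g):
    """Add two truncated Puiseux series given as {exponent: coefficient} dicts."""
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return {e: c for e, c in h.items() if c != 0}

def mul(f, g):
    """Multiply two truncated Puiseux series: exponents add, coefficients multiply."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}

# The truncations from the example above:
f = {F(-1): 1, F(-1, 2): 2, F(1, 3): 1}       # T^-1 + 2T^(-1/2) + T^(1/3)
g = {F(-5, 4): 1, F(-1, 2): -1, F(0): 2}      # T^(-5/4) - T^(-1/2) + 2

print(sorted(add(f, g).items()))  # matches the displayed sum
print(sorted(mul(f, g).items()))  # matches the displayed product; note that
                                  # the two 2T^-1 cross terms cancel exactly
```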

The Puiseux series with coefficients in $K$ form a field, which is the union
 * $$\bigcup_{n>0} K(\!(T^{1/n})\!)$$

of fields of formal Laurent series in $$T^{1/n}$$ (considered as an indeterminate).

This yields an alternative definition of the field of Puiseux series in terms of a direct limit. For every positive integer $n$, let $$T_n$$ be an indeterminate (meant to represent $T^{1/n}$ ), and $$K(\!(T_n)\!)$$ be the field of formal Laurent series in $$T_n.$$ If $m$ divides $n$, the mapping $$T_m \mapsto (T_n)^{n/m}$$ induces a field homomorphism $$K(\!(T_m)\!) \to K(\!(T_n)\!),$$ and these homomorphisms form a direct system that has the field of Puiseux series as a direct limit. The fact that every field homomorphism is injective shows that this direct limit can be identified with the above union, and that the two definitions are equivalent (up to an isomorphism).

Valuation
A nonzero Puiseux series $$f$$ can be uniquely written as
 * $$f = \sum_{k=k_0}^{+\infty} c_k T^{k/n}$$

with $$c_{k_0}\neq 0.$$ The valuation
 * $$v(f) = \frac {k_0}n$$

of $$f$$ is the smallest exponent for the natural order of the rational numbers, and the corresponding coefficient $c_{k_0}$ is called the initial coefficient or valuation coefficient of $$f$$. The valuation of the zero series is $$+\infty.$$

The function $v$ is a valuation and makes the Puiseux series a valued field, with the additive group $$\Q$$ of the rational numbers as its valuation group.

As for every valued field, the valuation defines an ultrametric distance by the formula $$d(f,g)=\exp(-v(f-g)).$$ For this distance, the field of Puiseux series is a metric space. The notation
 * $$f = \sum_{k=k_0}^{+\infty} c_k T^{k/n}$$

expresses that a Puiseux series is the limit of its partial sums. However, the field of Puiseux series is not complete; see below.
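Using the same dict representation of truncated series as in the earlier examples (an illustrative device, not a library API), the valuation and the ultrametric distance can be sketched directly from their definitions:

```python
import math
from fractions import Fraction as F

def valuation(f):
    """v(f): the smallest exponent with a nonzero coefficient; +inf for the zero series."""
    return min(f) if f else math.inf

def distance(f, g):
    """Ultrametric distance d(f, g) = exp(-v(f - g))."""
    diff = {e: f.get(e, 0) - g.get(e, 0) for e in set(f) | set(g)}
    return math.exp(-valuation({e: c for e, c in diff.items() if c != 0}))

f = {F(-1): 1, F(-1, 2): 2, F(1, 3): 1}
g = {F(-1): 1, F(-1, 2): 2}    # agrees with f below exponent 1/3

print(valuation(f))            # -1
print(distance(f, g))          # exp(-1/3): the two series agree up to T^(1/3)
print(distance(f, f))          # 0.0, since v(0) = +inf
```

The more initial terms two series share, the larger the valuation of their difference, and hence the smaller their distance; this is exactly the sense in which a Puiseux series is the limit of its partial sums.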

Convergent Puiseux series
The Puiseux series provided by the Newton–Puiseux theorem are convergent in the sense that there is a neighborhood of zero in which they converge ($0$ excluded if the valuation is negative). More precisely, let
 * $$f = \sum_{k=k_0}^{+\infty} c_k T^{k/n}$$

be a Puiseux series with complex coefficients. There is a real number $r$, called the radius of convergence, such that the series converges if $T$ is substituted for a nonzero complex number $t$ of absolute value less than $r$, and $r$ is the largest number with this property. A Puiseux series is convergent if it has a nonzero radius of convergence.

Because a nonzero complex number has $n$ $n$th roots, some care must be taken for the substitution: a specific $n$th root of $t$, say $x$, must be chosen. Then the substitution consists of replacing $$T^{k/n}$$ by $$x^k$$ for every $k$.

The existence of the radius of convergence results from the similar existence for a power series, applied to $T^{-k_0/n}f,$ considered as a power series in $$T^{1/n}.$$

It is part of the Newton–Puiseux theorem that the provided Puiseux series have a positive radius of convergence, and thus define a (multivalued) analytic function in some neighborhood of zero (zero itself possibly excluded).

Valuation and order on coefficients
If the base field $$K$$ is ordered, then the field of Puiseux series over $$K$$ is also naturally (“lexicographically”) ordered as follows: a nonzero Puiseux series $$f$$ is declared positive whenever its valuation coefficient is so. Essentially, this means that any positive rational power of the indeterminate $$T$$ is made positive, but smaller than any positive element of the base field $$K$$.

If the base field $$K$$ is endowed with a valuation $$w$$, then we can construct a different valuation on the field of Puiseux series over $$K$$ by letting the valuation $$\hat w(f)$$ be $$\omega\cdot v(f) + w(c_k),$$ where $$v(f)=k/n$$ is the previously defined valuation ($$c_k$$ is the first nonzero coefficient) and $$\omega$$ is infinitely large (in other words, the value group of $$\hat w$$ is $$\Q \times \Gamma$$ ordered lexicographically, where $$\Gamma$$ is the value group of $$w$$). Essentially, this means that the previously defined valuation $$v$$ is corrected by an infinitesimal amount to take into account the valuation $$w$$ given on the base field.

Newton–Puiseux theorem
As early as 1671, Isaac Newton implicitly used Puiseux series and proved the following theorem for approximating with series the roots of algebraic equations whose coefficients are functions that are themselves approximated with series or polynomials. For this purpose, he introduced the Newton polygon, which remains a fundamental tool in this context. Newton worked with truncated series, and it is only in 1850 that Victor Puiseux introduced the concept of (non-truncated) Puiseux series and proved the theorem that is now known as Puiseux's theorem or Newton–Puiseux theorem. The theorem asserts that, given an algebraic equation whose coefficients are polynomials or, more generally, Puiseux series over a field of characteristic zero, every solution of the equation can be expressed as a Puiseux series. Moreover, the proof provides an algorithm for computing these Puiseux series, and, when working over the complex numbers, the resulting series are convergent.

In modern terminology, the theorem can be restated as: the field of Puiseux series over an algebraically closed field of characteristic zero, and the field of convergent Puiseux series over the complex numbers, are both algebraically closed.

Newton polygon
Let
 * $$P(y)=\sum_{a_i\neq 0} a_i(x) y^i$$

be a polynomial whose nonzero coefficients $$a_i(x)$$ are polynomials, power series, or even Puiseux series in $x$. In this section, the valuation $$v(a_i)$$ of $$a_i$$ is the lowest exponent of $x$ in $$a_i.$$ (Most of what follows applies more generally to coefficients in any valued ring.)

For computing the Puiseux series that are roots of $P$ (that is solutions of the functional equation $$P(y)=0$$), the first thing to do is to compute the valuation of the roots. This is the role of the Newton polygon.

Let us consider, in a Cartesian plane, the points with coordinates $$(i, v(a_i)).$$ The Newton polygon of $P$ is the lower convex hull of these points. That is, the edges of the Newton polygon are the line segments joining two of these points such that none of the points lies below the line supporting the segment (below being, as usual, relative to the value of the second coordinate).

Given a Puiseux series $$y_0$$ of valuation $$v_0$$, the valuation of $$P(y_0)$$ is at least the minimum of the numbers $$i v_0 + v(a_i),$$ and is equal to this minimum if this minimum is reached for only one $i$. So, for $$y_0$$ being a root of $P$, the minimum must be reached at least twice. That is, there must be two values $$i_1$$ and $$i_2$$ of $i$ such that $$i_1 v_0 + v(a_{i_1}) = i_2 v_0 + v(a_{i_2}),$$ and $$i v_0 + v(a_{i}) \ge i_1 v_0 + v(a_{i_1})$$ for every $i$.

That is, $$(i_1, v(a_{i_1})) $$ and $$(i_2, v(a_{i_2})) $$ must belong to an edge of the Newton polygon, and $$v_0=-\frac{v(a_{i_1})-v(a_{i_2})}{i_1-i_2}$$ must be the opposite of the slope of this edge. This is a rational number as soon as all valuations $$v(a_i)$$ are rational numbers, and this is the reason for introducing rational exponents in Puiseux series.

In summary, the valuation of a root of $P$ must be the opposite of a slope of an edge of the Newton polygon.

The initial coefficient of a Puiseux series solution of $$P(y)=0$$ can easily be deduced. Let $$c_i$$ be the initial coefficient of $$a_i(x),$$ that is, the coefficient of $$x^{v(a_i)}$$ in $$a_i(x).$$ Let $$-v_0$$ be a slope of the Newton polygon, and $$\gamma x^{v_0}$$ be the initial term of a corresponding Puiseux series solution of $$P(y)=0.$$ If no cancellation occurred, the initial coefficient of $$P(y)$$ would be $\sum_{i\in I}c_i \gamma^i,$ where $I$ is the set of the indices $i$ such that $$(i, v(a_i))$$ belongs to the edge of slope $$-v_0$$ of the Newton polygon. So, for having a root, the initial coefficient $$\gamma$$ must be a nonzero root of the polynomial $$\chi(x)=\sum_{i\in I}c_i x^i$$ (this notation will be used in the next section).

In summary, the Newton polygon allows an easy computation of all possible initial terms of the Puiseux series that are solutions of $$P(y)=0.$$

The proof of the Newton–Puiseux theorem consists of starting from these initial terms and computing recursively the next terms of the Puiseux series solutions.

Constructive proof
Suppose that the first term $$\gamma x^{v_0}$$ of a Puiseux series solution of $$P(y)=0$$ has been computed by the method of the preceding section. It remains to compute $$z=y-\gamma x^{v_0}.$$ For this, we set $$y_0=\gamma x^{v_0},$$ and write the Taylor expansion of $P$ at $$y_0:$$
 * $$Q(z)=P(y_0+z)=P(y_0)+zP'(y_0)+\cdots + z^j\frac {P^{(j)}(y_0)} {j!} +\cdots$$

This is a polynomial in $z$ whose coefficients are Puiseux series in $x$. One may apply to it the method of the Newton polygon, and iterate to get the terms of the Puiseux series one after the other. But some care is required to ensure that $$v(z)>v_0,$$ and to show that one gets a Puiseux series, that is, that the denominators of the exponents of $x$ remain bounded.

The derivation with respect to $y$ does not change the valuation in $x$ of the coefficients; that is,
 * $$v\left(P^{(j)}(y_0)z^j\right)\ge \min_i (v(a_i) + i v_0)+j(v(z)-v_0),$$

and the equality occurs if and only if $$\chi^{(j)}(\gamma)\neq 0,$$ where $$\chi(x)$$ is the polynomial of the preceding section. If $m$ is the multiplicity of $$\gamma$$ as a root of $$\chi,$$ it follows that the inequality is an equality for $$j=m.$$ The terms with $$j>m$$ can be ignored as far as valuations are concerned, as $$v(z)>v_0$$ and $$j>m$$ imply
 * $$v\left(P^{(j)}(y_0)z^j\right) \ge \min_i (v(a_i) +iv_0)+j(v(z)-v_0) > v\left(P^{(m)}(y_0)z^m\right).$$

This means that, for iterating the method of the Newton polygon, one can and must consider only the part of the Newton polygon whose first coordinates belong to the interval $$[0, m].$$ Two cases have to be considered separately and will be the subject of the next subsections: the so-called ramified case, where $m > 1$, and the regular case, where $m = 1$.

Ramified case
The way of applying recursively the method of the Newton polygon has been described above. As each application of the method may increase, in the ramified case, the denominators of the exponents (valuations), it remains to prove that one reaches the regular case after a finite number of iterations (otherwise the denominators of the exponents of the resulting series would not be bounded, and this series would not be a Puiseux series). In passing, it will also be proved that one gets exactly as many Puiseux series solutions as expected, that is, the degree of $$P(y)$$ in $y$.

Without loss of generality, one can suppose that $$P(0)\neq 0,$$ that is, $$a_0\neq 0.$$ Indeed, each factor $y$ of $$P(y)$$ provides a solution that is the zero Puiseux series, and such factors can be factored out.

As the characteristic is supposed to be zero, one can also suppose that $$P(y)$$ is a square-free polynomial, that is, that the solutions of $$P(y)=0$$ are all different. Indeed, the square-free factorization uses only the operations of the field of coefficients for factoring $$P(y)$$ into square-free factors that can be solved separately. (The hypothesis of characteristic zero is needed, since, in characteristic $p$, the square-free decomposition can provide irreducible factors, such as $$y^p-x,$$ that have multiple roots over an algebraic extension.)

In this context, one defines the length of an edge of a Newton polygon as the difference of the abscissas of its endpoints. The length of a polygon is the sum of the lengths of its edges. With the hypothesis $$P(0)\neq 0,$$ the length of the Newton polygon of $P$ is its degree in $y$, that is, the number of its roots. The length of an edge of the Newton polygon is the number of roots of a given valuation. This number equals the degree of the previously defined polynomial $$\chi(x).$$

The ramified case thus corresponds to two (or more) solutions that have the same initial term(s). As these solutions must be distinct (square-free hypothesis), they must be distinguished after a finite number of iterations. That is, one eventually gets a polynomial $$\chi(x)$$ that is square free, and the computation can continue as in the regular case for each root of $$\chi(x).$$

As the iteration of the regular case does not increase the denominators of the exponents, this shows that the method provides all solutions as Puiseux series; that is, the field of Puiseux series over the complex numbers is an algebraically closed field that contains the univariate polynomial ring with complex coefficients.

Failure in positive characteristic
The Newton–Puiseux theorem is not valid over fields of positive characteristic. For example, the equation $$X^2 - X = T^{-1}$$ has solutions
 * $$X = T^{-1/2} + \frac{1}{2} + \frac{1}{8}T^{1/2} - \frac{1}{128}T^{3/2} + \cdots$$

and
 * $$X = -T^{-1/2} + \frac{1}{2} - \frac{1}{8}T^{1/2} + \frac{1}{128}T^{3/2} + \cdots$$

(one readily checks on the first few terms that the sum and product of these two series are 1 and $$-T^{-1}$$ respectively; this is valid whenever the base field K has characteristic different from 2).
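This check can be carried out mechanically on the displayed truncations, again representing a truncated series as an {exponent: coefficient} dict with `Fraction` keys (an illustrative device, not a library API). Up to the truncation order, the sum is exactly $1$ and the product is exactly $-T^{-1}$.

```python
from fractions import Fraction as F

def add(f, g):
    """Add two truncated series given as {exponent: coefficient} dicts."""
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return {e: c for e, c in h.items() if c != 0}

def mul(f, g):
    """Multiply two truncated series: exponents add, coefficients multiply."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}

# Truncations of the two displayed solutions of X^2 - X = T^(-1):
X1 = {F(-1, 2): F(1), F(0): F(1, 2), F(1, 2): F(1, 8), F(3, 2): F(-1, 128)}
X2 = {F(-1, 2): F(-1), F(0): F(1, 2), F(1, 2): F(-1, 8), F(3, 2): F(1, 128)}

print(add(X1, X2))   # {0: 1}: every non-constant term cancels
# keep only the terms below the truncation order 3/2; the rest is tail noise
print({e: c for e, c in mul(X1, X2).items() if e <= F(3, 2)})   # {-1: -1}
```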

As the powers of 2 in the denominators of the coefficients of the previous example might lead one to believe, the statement of the theorem is not true in positive characteristic. The example of the Artin–Schreier equation $$X^p - X = T^{-1}$$ shows this: reasoning with valuations shows that $X$ should have valuation $-\frac{1}{p}$, and if we rewrite it as $$X = T^{-1/p} + X_1$$ then
 * $$X^p = T^{-1} + {X_1}^p,\text{ so }{X_1}^p - X_1 = T^{-1/p}$$

and one shows similarly that $$X_1$$ should have valuation $-\frac{1}{p^2}$, and, proceeding in that way, one obtains the series
 * $$T^{-1/p} + T^{-1/p^2} + T^{-1/p^3} + \cdots;$$

since this series makes no sense as a Puiseux series—because the exponents have unbounded denominators—the original equation has no solution. However, such Eisenstein equations are essentially the only ones not to have a solution, because, if $$K$$ is algebraically closed of characteristic $$p>0$$, then the field of Puiseux series over $$K$$ is the perfect closure of the maximal tamely ramified extension of $$K(\!(T)\!)$$.

Similarly to the case of algebraic closure, there is an analogous theorem for real closure: if $$K$$ is a real closed field, then the field of Puiseux series over $$K$$ is the real closure of the field of formal Laurent series over $$K$$. (This implies the former theorem since any algebraically closed field of characteristic zero is the unique quadratic extension of some real-closed field.)

There is also an analogous result for p-adic closure: if $$K$$ is a $$p$$-adically closed field with respect to a valuation $$w$$, then the field of Puiseux series over $$K$$ is also $$p$$-adically closed.

Algebraic curves
Let $$X$$ be an algebraic curve given by an affine equation $$F(x,y)=0$$ over an algebraically closed field $$K$$ of characteristic zero, and consider a point $$p$$ on $$X$$ which we can assume to be $$(0,0)$$. We also assume that $$X$$ is not the coordinate axis $$x=0$$. Then a Puiseux expansion of (the $$y$$ coordinate of) $$X$$ at $$p$$ is a Puiseux series $$f$$ having positive valuation such that $$F(x,f(x))=0$$.

More precisely, let us define the branches of $$X$$ at $$p$$ to be the points $$q$$ of the normalization $$Y$$ of $$X$$ which map to $$p$$. For each such $$q$$, there is a local coordinate $$t$$ of $$Y$$ at $$q$$ (which is a smooth point) such that the coordinates $$x$$ and $$y$$ can be expressed as formal power series of $$t$$, say $$x = t^n + \cdots$$ (since $$K$$ is algebraically closed, we can assume the valuation coefficient to be 1) and $$y = c t^k + \cdots$$: then there is a unique Puiseux series of the form $$f = c T^{k/n} + \cdots$$ (a power series in $$T^{1/n}$$), such that $$y(t)=f(x(t))$$ (the latter expression is meaningful since $$x(t)^{1/n} = t+\cdots$$ is a well-defined power series in $$t$$). This is a Puiseux expansion of $$X$$ at $$p$$ which is said to be associated to the branch given by $$q$$ (or simply, the Puiseux expansion of that branch of $$X$$), and each Puiseux expansion of $$X$$ at $$p$$ is given in this manner for a unique branch of $$X$$ at $$p$$.

This existence of a formal parametrization of the branches of an algebraic curve or function is also referred to as Puiseux's theorem: it has arguably the same mathematical content as the fact that the field of Puiseux series is algebraically closed and is a historically more accurate description of the original author's statement.

For example, the curve $$y^2 = x^3 + x^2$$ (whose normalization is a line with coordinate $$t$$ and map $$t \mapsto (t^2-1,t^3-t)$$) has two branches at the double point (0,0), corresponding to the points $$t=+1$$ and $$t=-1$$ on the normalization, whose Puiseux expansions are $y = x + \frac{1}{2}x^2 - \frac{1}{8}x^3 + \cdots$ and $y = - x - \frac{1}{2}x^2 + \frac{1}{8}x^3 + \cdots$  respectively (here, both are power series because the $$x$$ coordinate is étale at the corresponding points in the normalization). At the smooth point $$(-1,0)$$ (which is $$t=0$$ in the normalization), it has a single branch, given by the Puiseux expansion $$y = -(x+1)^{1/2} + (x+1)^{3/2}$$ (the $$x$$ coordinate ramifies at this point, so it is not a power series).
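The first of these expansions is easy to verify: on that branch $y = x(1+x)^{1/2}$, so the coefficient of $x^{k+1}$ is the generalized binomial coefficient $\binom{1/2}{k}$. A quick stdlib check (illustrative sketch, exact rational arithmetic):

```python
from fractions import Fraction as F

def binom_series(alpha, n):
    """First n coefficients of (1+x)^alpha, i.e. C(alpha, k) for k = 0..n-1."""
    coeffs, c = [], F(1)
    for k in range(n):
        coeffs.append(c)
        c = c * (alpha - k) / (k + 1)   # C(alpha, k+1) from C(alpha, k)
    return coeffs

# Branch of y^2 = x^3 + x^2 at (0,0) with y = x*(1+x)^(1/2):
# the coefficients of x, x^2, x^3, x^4 are C(1/2, 0..3).
print(binom_series(F(1, 2), 4))   # [1, 1/2, -1/8, 1/16]
```

The output reproduces the displayed expansion $y = x + \frac{1}{2}x^2 - \frac{1}{8}x^3 + \cdots$; negating all coefficients gives the other branch.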

The curve $$y^2 = x^3$$ (whose normalization is again a line with coordinate $$t$$ and map $$t \mapsto (t^2,t^3)$$), on the other hand, has a single branch at the cusp point $$(0,0)$$, whose Puiseux expansion is $$y = x^{3/2}$$.

Analytic convergence
When $$K=\Complex$$ is the field of complex numbers, the Puiseux expansions of an algebraic curve (as defined above) are convergent in the sense that, for a given choice of $$n$$-th root of $$x$$, they converge for small enough $$|x|$$, and hence define an analytic parametrization of each branch of $$X$$ in a neighborhood of $$p$$ (more precisely, the parametrization is by the $$n$$-th root of $$x$$).

Levi-Civita field
The field of Puiseux series is not complete as a metric space. Its completion, called the Levi-Civita field, can be described as follows: it is the field of formal expressions of the form $f = \sum_e c_e T^e,$ where the support of the coefficients (that is, the set of $$e$$ such that $$c_e \neq 0$$) is the range of an increasing sequence of rational numbers that either is finite or tends to $$+\infty$$. In other words, such series admit exponents of unbounded denominators, provided there are finitely many terms of exponent less than $$A$$ for any given bound $$A$$. For example, $\sum_{k=1}^{+\infty} T^{k+\frac{1}{k}}$ is not a Puiseux series, but it is the limit of a Cauchy sequence of Puiseux series; in particular, it is the limit of $\sum_{k=1}^{N} T^{k+\frac{1}{k}}$ as $$N \to +\infty$$. However, even this completion is still not "maximally complete", in the sense that it admits non-trivial extensions that are valued fields with the same value group and residue field, hence the opportunity of completing it even more.
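The Cauchy property of these partial sums can be made concrete with the ultrametric $d(f,g)=\exp(-v(f-g))$ from the Valuation section: two partial sums $S_M$ and $S_N$ with $M < N$ first differ at the exponent $M+1+\frac{1}{M+1}$, so their distance is $\exp(-(M+1+\frac{1}{M+1}))$, which tends to $0$. A small sketch (the closed-form distance is derived from the support of the difference, not computed symbolically):

```python
import math

def d(M, N):
    """Distance between the partial sums S_M and S_N of sum_k T^(k + 1/k).

    For M != N the difference S_N - S_M has valuation m+1 + 1/(m+1),
    where m = min(M, N), so d = exp(-v(S_N - S_M))."""
    if M == N:
        return 0.0
    m = min(M, N)
    return math.exp(-(m + 1 + 1 / (m + 1)))

for M in (1, 5, 20):
    print(d(M, M + 1))   # distances shrink: the partial sums are Cauchy
```

The limit of this Cauchy sequence has exponents $k+\frac1k$ with unbounded denominators, so it exists in the Levi-Civita field but not in the field of Puiseux series.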

Hahn series
Hahn series are a further (larger) generalization of Puiseux series, introduced by Hans Hahn in the course of the proof of his embedding theorem in 1907 and then studied by him in his approach to Hilbert's seventeenth problem. In a Hahn series, instead of requiring the exponents to have bounded denominator they are required to form a well-ordered subset of the value group (usually $$\Q$$ or $$\R$$). These were later further generalized by Anatoly Maltsev and Bernhard Neumann to a non-commutative setting (they are therefore sometimes known as Hahn–Mal'cev–Neumann series). Using Hahn series, it is possible to give a description of the algebraic closure of the field of power series in positive characteristic which is somewhat analogous to the field of Puiseux series.