User:Christos Markos/sandbox

A Seminal Theorem on Polynomial Remainder Sequences (PRS's)
To appreciate the important theorem of Anna Johnson Pell Wheeler, written together with Ruth L. Gordon (about whom no biographical information is available), we need to define four polynomial remainder sequences, or prs's for short, shown in Figure 1 below, for any pair of polynomials $$f, g$$ of degrees $$n, m$$, respectively, with $$n \geq m$$. The polynomials $$f, g$$ are always the first two polynomials in every prs.

The first prs is obtained by applying the Euclidean algorithm to the polynomials $$f, g$$. The polynomial remainder sequence obtained this way is called the Euclidean prs.
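As a concrete illustration (not part of the original exposition), the Euclidean prs can be sketched in a few lines of Python with the sympy library; the helper name `euclidean_prs` and the sample polynomials are our own choices.

```python
# A minimal sketch of the Euclidean prs using sympy; the helper name
# euclidean_prs and the sample polynomials are illustrative choices.
from sympy import symbols, rem, degree

x = symbols('x')

def euclidean_prs(f, g):
    """Repeatedly replace (f, g) by (g, rem(f, g)) until the remainder
    is constant or zero; the collected polynomials form the Euclidean prs."""
    prs = [f, g]
    while degree(prs[-1], x) > 0:
        r = rem(prs[-2], prs[-1], x)
        if r == 0:
            break
        prs.append(r)
    return prs

f = x**3 - 7*x + 7
g = 3*x**2 - 7
print(euclidean_prs(f, g))   # [x**3 - 7*x + 7, 3*x**2 - 7, 7 - 14*x/3, -1/4]
```

Note how the remainders acquire rational coefficients even for integer inputs; controlling this coefficient growth is precisely what the determinant-based prs's discussed further on address.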

The second prs is obtained by applying the modified Euclidean algorithm. The modification consists in negating the polynomial remainder at each iteration of the Euclidean algorithm and using the negated polynomial in the next iteration. The modified Euclidean algorithm is of great importance because, when applied to $$f, g$$, where $$g = f'$$ is the derivative of $$f$$, we obtain Sturm's theorem and the Sturm sequence of $$f$$, which can be used to isolate the real roots of $$f$$ by bisection. To agree with the spirit of the Pell-Gordon article, the polynomial remainder sequence obtained this way is called the modified Euclidean prs.
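A hedged sketch of this modification in sympy (the helper name `modified_euclidean_prs` is ours): negating each remainder and applying the algorithm to $$f, f'$$ reproduces the Sturm sequence, which sympy also provides as `sturm()`.

```python
# Sketch of the modified Euclidean algorithm: negate the remainder at each
# iteration.  Applied to (f, f'), it yields the Sturm sequence of f.
from sympy import symbols, rem, degree, diff, sturm

x = symbols('x')

def modified_euclidean_prs(f, g):
    prs = [f, g]
    while degree(prs[-1], x) > 0:
        r = -rem(prs[-2], prs[-1], x)   # the negation is the whole modification
        if r == 0:
            break
        prs.append(r)
    return prs

f = x**3 - 7*x + 7
seq = modified_euclidean_prs(f, diff(f, x))
print(seq)
print(sturm(f))   # sympy's built-in Sturm sequence, for comparison
```

The two printed sequences agree, since `sturm()` is built from exactly these negated remainders.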

The two prs's defined above are closely related to two matrices introduced by James Joseph Sylvester in 1840 and 1853. The different forms of these two matrices can be seen in Sylvester matrix. We call $$sylvester1(f, g, x)$$ the Sylvester matrix of 1840, of dimensions $$(n + m) \times (n + m)$$, and $$sylvester2(f, g, x)$$ the Sylvester matrix of 1853, of dimensions $$2n \times 2n$$. Recall that the determinant of $$sylvester1(f, g, x)$$ is the resultant of $$f, g$$. By analogy, and to agree with the title of the Pell-Gordon article, the determinant of $$sylvester2(f, g, x)$$ is called the modified resultant of $$f, g$$.
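To make the 1840 matrix concrete, here is an illustrative hand-rolled construction in sympy (our own code, not from the article; only the name `sylvester1` mirrors the text), with its determinant checked against sympy's `resultant()`:

```python
# Build Sylvester's 1840 matrix of f, g: m shifted rows of f's coefficients
# followed by n shifted rows of g's, giving an (n + m) x (n + m) matrix
# whose determinant is the resultant of f and g.
from sympy import symbols, Matrix, Poly, resultant

x = symbols('x')

def sylvester1(f, g, x):
    fp, gp = Poly(f, x), Poly(g, x)
    n, m = fp.degree(), gp.degree()
    fc, gc = fp.all_coeffs(), gp.all_coeffs()
    rows = [[0]*i + fc + [0]*(m - 1 - i) for i in range(m)]
    rows += [[0]*i + gc + [0]*(n - 1 - i) for i in range(n)]
    return Matrix(rows)

f = x**3 - 7*x + 7
g = 3*x**2 - 7
S = sylvester1(f, g, x)
print(S.shape)                        # (5, 5), i.e. (n + m) x (n + m)
print(S.det() == resultant(f, g, x))  # True
```

The matrix $$sylvester2(f, g, x)$$ of 1853 is built analogously from $$2n$$ rows and is not reproduced here.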

The resultant of $$f, g$$ may differ from the corresponding modified resultant in sign and by a constant factor. Determinants of submatrices of $$sylvester1(f, g, x)$$ are called subresultants and, likewise, determinants of submatrices of $$sylvester2(f, g, x)$$ are called modified subresultants.

For the polynomials $$f, g$$ we can now define — by a process known to Sylvester and others in the 19th century — two additional prs's, which can be obtained from $$sylvester1(f, g, x)$$ and $$sylvester2(f, g, x)$$.

The third prs is called subresultant prs of $$f, g$$, whereby the coefficients of the remainder polynomials are all determinants of appropriately selected submatrices of $$sylvester1(f, g, x)$$.

Likewise, the fourth prs is called modified subresultant prs of $$f, g$$, whereby the coefficients of the remainder polynomials are all determinants of appropriately selected submatrices of $$sylvester2(f, g, x)$$.
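Both determinant-based sequences can be inspected computationally. sympy's built-in `subresultants()` returns the subresultant prs directly; to our knowledge there is no analogous top-level sympy function for the modified subresultant prs, so we show only the former here.

```python
# sympy's subresultants() computes the subresultant prs of f and g.  For
# polynomials with integer coefficients every member again has integer
# coefficients, since each coefficient is a determinant of an integer matrix.
from sympy import symbols, subresultants, degree, Poly

x = symbols('x')
f = x**3 - 7*x + 7
g = 3*x**2 - 7
seq = subresultants(f, g, x)
print(seq)
print([degree(p, x) for p in seq])   # a complete prs: degrees 3, 2, 1, 0
```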

It is worth noting that the subresultant and modified subresultant prs's of $$f, g$$ can be computed more efficiently by employing the Bezout matrix of $$f, g$$, which has dimensions $$n \times n$$.
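As an illustration (our own construction, using the standard definition of the Bezout matrix rather than any code from the article), the matrix can be built in sympy from the bivariate quotient $$(f(x)g(y) - f(y)g(x))/(x - y)$$; its determinant agrees with the resultant up to sign.

```python
# Hand-rolled Bezout matrix of f, g (assuming deg f = n >= deg g): the (i, j)
# entry is the coefficient of x**i * y**j in (f(x)*g(y) - f(y)*g(x)) / (x - y).
from sympy import symbols, Poly, cancel, resultant, Matrix

x, y = symbols('x y')

def bezout_matrix(f, g, x, y):
    n = Poly(f, x).degree()          # assumes deg f >= deg g
    expr = cancel((f*g.subs(x, y) - f.subs(x, y)*g) / (x - y))
    p = Poly(expr, x, y)
    return Matrix(n, n, lambda i, j: p.nth(i, j))

f = x**3 - 7*x + 7
g = 3*x**2 - 7
B = bezout_matrix(f, g, x, y)
print(B)                              # a 3 x 3 symmetric matrix
print(B.det(), resultant(f, g, x))    # equal up to sign
```

The $$n \times n$$ size (here $$3 \times 3$$ instead of $$5 \times 5$$ for $$sylvester1$$) is the reason the Bezout route is more economical.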

Definition 1 below divides the prs's into two important categories.

Definition 1
A polynomial remainder sequence of two polynomials $$f, g$$ is called complete if the degree difference between any two consecutive polynomials is 1; otherwise, it is called incomplete.
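A small computational illustration of Definition 1 (the helpers and examples are our own): for $$f = x^3 - 7x + 7$$ and its derivative the degree sequence is 3, 2, 1, 0 (complete), while for $$f = x^4 + x + 1$$ and its derivative it is 4, 3, 1, 0 (incomplete).

```python
# Check completeness of the Euclidean prs: every degree drop must equal 1.
from sympy import symbols, rem, degree, diff

x = symbols('x')

def euclidean_prs(f, g):
    prs = [f, g]
    while degree(prs[-1], x) > 0:
        r = rem(prs[-2], prs[-1], x)
        if r == 0:
            break
        prs.append(r)
    return prs

def is_complete(prs):
    degs = [degree(p, x) for p in prs]
    return all(d1 - d2 == 1 for d1, d2 in zip(degs, degs[1:]))

f1 = x**3 - 7*x + 7        # degrees 3, 2, 1, 0: complete
f2 = x**4 + x + 1          # degrees 4, 3, 1, 0: incomplete
print(is_complete(euclidean_prs(f1, diff(f1, x))))   # True
print(is_complete(euclidean_prs(f2, diff(f2, x))))   # False
```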

In each prs, the signs of the leading coefficients of the polynomial remainders play an important role and so we have the following:

Definition 2
The sign sequence of a polynomial remainder sequence is the sequence of signs of the leading coefficients of its polynomials.

Regarding complete prs's, James Joseph Sylvester observed in 1853 that the sign sequence of the subresultant prs of $$f, g$$ and that of the corresponding Euclidean prs are identical. The same is true for the sign sequence of the modified subresultant prs of $$f, g$$ and that of the corresponding modified Euclidean (Sturmian) prs.
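Sylvester's observation for complete prs's can be checked directly in sympy (a hedged sketch; the helper names are ours):

```python
# For a complete prs, the sign sequence of the Euclidean prs coincides with
# that of the subresultant prs (Sylvester's 1853 observation).
from sympy import symbols, rem, degree, subresultants, sign, Poly

x = symbols('x')

def euclidean_prs(f, g):
    prs = [f, g]
    while degree(prs[-1], x) > 0:
        r = rem(prs[-2], prs[-1], x)
        if r == 0:
            break
        prs.append(r)
    return prs

def sign_sequence(prs):
    return [sign(Poly(p, x).LC()) for p in prs]

f = x**3 - 7*x + 7
g = 3*x**2 - 7              # the prs of f, g here is complete
print(sign_sequence(euclidean_prs(f, g)))
print(sign_sequence(subresultants(f, g, x)))   # identical sign sequences
```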

Sylvester also observed that the integer coefficients of the polynomial remainders in a subresultant prs are the smallest that can be obtained without computing the gcd of the coefficients and without resorting to rationals. The same holds for the integer coefficients of the polynomial remainders in a modified subresultant prs, provided that the leading coefficient of $$f$$ is 1.

The main point is that, for complete prs's, the Euclidean and modified Euclidean prs's of $$f, g$$ can be computed in such a way that they become identical to the subresultant and modified subresultant prs, respectively, of $$f, g$$.

By contrast, regarding incomplete prs's, Sylvester observed in his 1853 article that the sign sequence of the subresultant prs of $$f, g$$ and that of the corresponding Euclidean prs may differ. The same is true for the sign sequence of the modified subresultant prs of $$f, g$$ and that of the corresponding modified Euclidean (Sturmian) prs.
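A hedged check of the incomplete case (helper names are ours): for $$f = x^4 + x + 1$$ and $$g = f'$$ the Euclidean prs has degrees 4, 3, 1, 0, and its sign sequence indeed differs from that of the subresultant prs.

```python
# Incomplete prs example: the degree drop 3 -> 1 makes the prs incomplete,
# and the Euclidean and subresultant sign sequences no longer agree.
from sympy import symbols, rem, degree, diff, subresultants, sign, Poly

x = symbols('x')

def euclidean_prs(f, g):
    prs = [f, g]
    while degree(prs[-1], x) > 0:
        r = rem(prs[-2], prs[-1], x)
        if r == 0:
            break
        prs.append(r)
    return prs

def sign_sequence(prs):
    return [sign(Poly(p, x).LC()) for p in prs]

f = x**4 + x + 1
g = diff(f, x)
print(sign_sequence(euclidean_prs(f, g)))      # Euclidean prs signs
print(sign_sequence(subresultants(f, g, x)))   # differ in the last sign
```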

In other words, matters become considerably more complicated in the case of incomplete prs's, and Sylvester himself could not see how to compute the modified Euclidean prs from the modified subresultant prs. Sylvester was in good company, because Van Vleck, a renowned mathematician of the late 19th and early 20th centuries, was also unable to solve the problem.

Therefore, in the case of incomplete prs's, there was a major open problem: to compute the Euclidean and modified Euclidean prs of $$f, g$$ from the subresultant and modified subresultant prs, respectively, of $$f, g$$, and vice-versa.

The answer came from Pell and Gordon in 1917: their theorem, together with an observation made by Sylvester in 1853 and proved by Akritas and Malaschonok in 2015, established a one-to-one correspondence between the modified subresultant prs of $$f, g$$, on one hand, and the corresponding Euclidean and modified Euclidean prs's, on the other (see the arrows labelled PG – 1917 and SAM in Figure 1). This one-to-one correspondence unequivocally refutes the claim that Euclidean prs's are "non signed" sequences whose signs can be changed arbitrarily. In this context, see also http://planetmath.org/sturmstheorem, where the reader is cautioned that some computer algebra systems may normalize the remainders obtained from the Euclidean algorithm, thereby destroying the sign information.

The theorem is stated below.

Theorem 1 (Pell-Gordon, 1917)
Let


 * $$f = a_0 x^n + a_1 x^{n-1} + \cdots + a_n$$

and


 * $$g = b_0 x^n + b_1 x^{n-1} + \cdots + b_n$$

be two polynomials of the n-th degree. Modify the process of finding the highest common factor of $$f$$ and $$g$$ by taking at each stage the negative of the remainder. Let the i-th modified remainder be


 * $$R^{(i)} = r_0^{(i)} x^{m_i} + r_1^{(i)} x^{m_i - 1} + \cdots + r_{m_i}^{(i)}$$

where $$(m_i + 1)$$ is the degree of the preceding remainder, and where the first $$(p_i - 1)$$ coefficients of $$R^{(i)}$$ are zero, and the $$p_i$$-th coefficient $$\rho_i = r_{p_i - 1}^{(i)}$$ is different from zero. Then for $$k = 0, 1, \ldots, m_i$$ the coefficients $$r_k^{(i)}$$ are given by


 * $$r_k^{(i)} = \frac{(-1)^{u_{i-1}} (-1)^{u_{i-2}} \cdots (-1)^{u_1} (-1)^{u_0}}{\rho_{i-1}^{p_{i-1}+1} \, \rho_{i-2}^{p_{i-2}+p_{i-1}} \cdots \rho_1^{p_1+p_2} \, \rho_0^{p_0+p_1}} \cdot Det(i,k)$$

(it is understood that $$\rho_0 = b_0$$, $$p_0 = 0$$, and that $$a_i = b_i = 0$$ for $$i > n$$),

where
 * $$u_j = p_1 + p_2 + \cdots + p_j$$, for $$j = 1, 2, \ldots, i$$, with $$u_0 = 0$$,

and


 * $$Det(i,k) = \left| \begin{matrix} a_0 & a_1 & a_2 & \cdots & \cdot & \cdot & \cdots & a_{2u_i - 1} & a_{2u_i + k} \\ b_0 & b_1 & b_2 & \cdots & \cdot & \cdot & \cdots & b_{2u_i - 1} & b_{2u_i + k} \\ 0 & a_0 & a_1 & \cdots & \cdot & \cdot & \cdots & a_{2u_i - 2} & a_{2u_i - 1 + k} \\ 0 & b_0 & b_1 & \cdots & \cdot & \cdot & \cdots & b_{2u_i - 2} & b_{2u_i - 1 + k} \\ \cdot & \cdot & \cdot & \cdots & \cdot & \cdot & \cdots & \cdot & \cdot \\ 0 & 0 & 0 & \cdots & a_0 & a_1 & \cdots & a_{u_i - 1} & a_{u_i + k} \\ 0 & 0 & 0 & \cdots & b_0 & b_1 & \cdots & b_{u_i - 1} & b_{u_i + k} \\ \end{matrix} \right|$$

The proof of this theorem is by structural induction on the polynomials of the prs.

Theorem 1 led Akritas, Malaschonok and Vigklas to the discovery of another theorem, which establishes a one-to-one correspondence between subresultant prs’s, on one hand, and Euclidean and modified Euclidean prs’s on the other (see the arrows labelled AMV – 2015 in Figure 1). This one-to-one correspondence unequivocally refutes, a second time, the claim that Euclidean prs’s are “non signed” sequences and that the signs of their polynomials can be changed arbitrarily. The complete picture is given by Figure 1.