Talk:Extended Euclidean algorithm

untitled
needs links to Linear congruence theorem. Dmharvey Talk 5 July 2005 17:46 (UTC)

Unclear pseudocode
I cleaned up the pseudocode by simply replacing it with a functioning C program instead. This should make things a lot clearer, and anyone can easily compile and run it themselves. If you see any problems with the program, email me at alex.woody@gmail.com. Thanks!

gcd/hcf
Hello. Don't you think it is confusing to have this gcd/hcf thing all through the text?

I believe that if this hcf thing is really important at all, the alternative name should be confined to that phrase in the first sentence.

If someone here really believes that HCF is the best name for the gcd, then perhaps we can change the whole text to hcf, and keep gcd only in the first sentence. It's extremely awkward to me, but it's better than having the operation called by two names all the time, don't you think?...

Every mathematical text has this problem of what nomenclature to adopt. This should be settled in a prologue, and the rest of the text should use a single notation and nomenclature. Bringing the problem into the body of the text is extremely confusing.


 * agree 100% ... should stick to gcd after introduction Dmharvey [[Image:User_dmharvey_sig.png]] Talk 9 July 2005 14:02 (UTC)

Also, for the pseudocode below, it would be nice to mention that "<>" means "not equal to". Does any programming language use that syntax? Is it a standard syntax used in pseudocode?


 * Several languages use it, most notably BASIC. It should use &ne; in the article, however. Deco 01:45, 14 December 2005 (UTC)

The two methods
Notice that the first method in the article is iterative rather than recursive in nature: it employs the quotients in the same order they appear / the algorithm proceeds forward. Compare this with the second method described, which is really a recursion, and notice how it uses the quotients in the reverse order they appear / it works backward. That explains why the pseudocode in the first method is so long and seemingly complicated: it uses recursion rather than iteration. I suggest the code and the function call be rewritten in an iterative fashion. Thanks. --Lemontea 14:47, 26 January 2006 (UTC)
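For reference, an iterative version that consumes each quotient as it appears, going forward, might look like this (my sketch in Python; the names are my own, not the article's):

```python
def ext_gcd_iterative(a, b):
    # Iterative extended Euclidean algorithm: uses each quotient in the
    # order it appears, carrying the Bezout coefficients forward, so no
    # back-substitution (and no recursion) is needed.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t  # gcd, x, y with a*x + b*y == gcd

# Example: ext_gcd_iterative(120, 23) gives (1, -9, 47),
# and indeed 120 * (-9) + 23 * 47 == 1.
```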

Leftover
I've refactored quite a few sections of this article; here are some things I would like to see handled. That's all I can think of for now. Thanks. --Lemontea 15:15, 17 February 2006 (UTC)
 * 1) Clean up the informal description part I've written. Proofread it, check its tone, etc.
 * 2) Help format the tables and layout of this article in general. (e.g. the last table in the iterative method section could use some highlighting, just as the last table does)
 * 3) Give comments on the style of the informal formulation section: is it suitable to intermix the example with the theory (that is, explain the theory, then demonstrate it in action, then some theory again, and so on), or is it better to completely separate them? Partly because of the messy nature of this algorithm, and partly for ease of understanding and readability (readers could have drowned if it were pure theory), I've chosen the first option.
 * 4) Does the formal description part need a description of how to find x and y, written out as a bulleted list (see other algorithm pages for examples), or does the pseudocode suffice?

Link to .exe file
I removed a link to an .exe file (presumably a Windows executable, compiled from one of the source code examples). Since there are other, safer, demonstrations, I think it's worth it to minimise the risk of Wikipedia being used to spread malicious software.

In case someone strongly disagrees, I left the original link commented out in the references section.

Robert 21:35, 11 May 2006 (UTC)

Matrix form
There is a representation of the EEA step -- that $$x_{i+1}=x_{i-1}-q_i x_i;$$ $$y_{i+1}=y_{i-1}-q_i y_i$$ thing -- as a matrix multiplication. My source is Gregory & Krishnamurthy, "Methods and Applications of Error-Free Computation", Springer, 1984. The representation itself is
 * $$\begin{bmatrix}x_{i+1} & y_{i+1}\end{bmatrix}=\begin{bmatrix}1 &-q_i\end{bmatrix}\begin{bmatrix}x_{i-1} & y_{i-1}\\x_{i} & y_{i}\end{bmatrix}$$

and is IMHO rather understandable and brings some more insight than the notation used in this article. --88.68.34.243 12:44, 9 June 2006 (UTC)
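To illustrate (my sketch, not from the cited book): tracking rows $$[x_i\ y_i\ r_i]$$ with $$r_i = a x_i + b y_i$$, each step forms the new row exactly as the matrix product above prescribes.

```python
def eea_matrix_steps(a, b):
    # Each iteration computes new_row = prev_row - q * cur_row, i.e. the
    # row-vector [1, -q] times the 2x2 block of the two previous rows,
    # applied to the full rows [x, y, r].
    prev, cur = [1, 0, a], [0, 1, b]
    while cur[2] != 0:
        q = prev[2] // cur[2]
        prev, cur = cur, [p - q * c for p, c in zip(prev, cur)]
    return prev  # [x, y, gcd] with a*x + b*y == gcd
```

For example, `eea_matrix_steps(120, 23)` returns `[-9, 47, 1]`.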

Source code reference removed
I've removed the link to the C++ source code program, as it seems to be broken:

azi@magicb0x ~ $ g++ mulinv.cpp
mulinv.cpp:3:16: error: ronh: No such file or directory
mulinv.cpp: In function 'int main':
mulinv.cpp:50: warning: converting to 'int' from 'double'
mulinv.cpp:53: error: 'mod' was not declared in this scope
mulinv.cpp:72: error: 'mod' was not declared in this scope
mulinv.cpp:89: error: 'mod' was not declared in this scope
azi@magicb0x ~ $

Table method
One should explain the table method more accurately. The method described does not return the values shown in the table, neither when always using 5 nor when using the divider...

divider:

1	0	120
0	1	23
1	-5	5
-3	16	3
7	-37	2
-10	53	1

5:

1	0	120
0	1	23
1	-5	5
-5	26	3
26	-135	2
-135	701	1
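For cross-checking, here is a sketch that generates the rows, assuming the convention that each row $$(x, y, r)$$ satisfies $$ax + by = r$$ and that the multiplier for each step is the integer quotient of the two previous remainders:

```python
def eea_table(a, b):
    # Build rows (x, y, r) with r == a*x + b*y; each new row subtracts
    # q times the previous row, where q is the quotient of the last two
    # remainders (the "divider").
    rows = [(1, 0, a), (0, 1, b)]
    while rows[-1][2] != 0:
        (x2, y2, r2), (x1, y1, r1) = rows[-2], rows[-1]
        q = r2 // r1
        rows.append((x2 - q * x1, y2 - q * y1, r2 - q * r1))
    return rows[:-1]  # drop the final zero-remainder row
```

Under that convention, for (120, 23) the rows after the first two come out as (1, -5, 5), (-4, 21, 3), (5, -26, 2), (-9, 47, 1), each satisfying 120x + 23y = r.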

Under The recursive method
I'm not sure how to fix it (if it can be done), but the numbered steps at the end of this section overlap the table in every browser I have. This is mostly a suggestion, but it might make the page (slightly) easier to read.

Iterative method
I have cleaned up the iterative method section and used equations to express what was previously written only as text. I hope this new version is much clearer. Cintrom (talk) 18:42, 21 January 2008 (UTC)

Unclear wording
"In this case, the remainder in the last but one line (which is 1) indicates that the gcd is 1; that is, 120 and 23 are coprime (also called relatively prime)." As a reader, I'm completely unable to parse what the heck "in the last but one line (which is 1)" is supposed to mean. What is a in-the-last-but-one-line? —Preceding unsigned comment added by 146.113.43.88 (talk) 20:43, 7 February 2008 (UTC)
 * There are five lines. The last line is the fifth line. The last-but-one line is the fourth line. This might be clearer with hyphens. Algebraist 13:01, 8 March 2008 (UTC)

recursive version with gcd
I'd suggest the recursive version should return the gcd. Then the proof of correctness doesn't have to link off to another article. The code to return the triple becomes something like:

euc' a 0 = (1, 0, a)
euc' a b = (k, j - k * (div a b), g)
  where (j, k, g) = euc' b (mod a b)

We could (but probably shouldn't) include a pretty print:

euc a b = (show n) ++ "*" ++ (show a) ++ " + " ++ (show m) ++ "*" ++ (show b) ++ " = " ++ (show g)
  where (n, m, g) = euc' a b

jbolden1517Talk 17:29, 11 December 2008 (UTC)

Negative gcd?
Using negative input in the algorithms on this page might produce a negative gcd. However, the article on the Greatest common divisor states that it "is the largest positive integer that divides both numbers without remainder". The problem is trivial to fix (if the result is negative, just multiply everything by -1), but I am unsure where on the page this should be noted. NB: The Euclidean algorithm page has the same problem. --caramdir (talk) 19:23, 3 March 2009 (UTC)


 * I agree with your point. But I want to make it a bit more complicated. As the number field gets more advanced, which gcd you want isn't always clear. For example
 * $$ \gcd(2,8-2\sqrt{15}) = \sqrt{3} - \sqrt{5} \text{ in } \mathbb{Z}[\sqrt 3, \sqrt 5]$$.

Do you want that or its negative? I think it might be better to say the gcd is not unique up to units but by convention we use positive integers preferentially in the Greatest common divisor article. jbolden1517Talk 04:22, 16 March 2009 (UTC)


 * It's of course true that in general one does not speak about the gcd, but a gcd. (Btw, the ring in your example should probably be $$\mathbb{Z}[\sqrt 3, \sqrt 5]$$.) However, the Greatest common divisor and Euclidean algorithm articles almost exclusively speak about the gcd of integers. Here the convention makes sense, because otherwise you would have to replace all the "=" signs. It's probably best to include the information that the gcd is not unique in all three articles. caramdir (talk) 18:43, 16 March 2009 (UTC)


 * Thanks for the correction on the example, I fixed. OK so we say unique by convention....  jbolden1517Talk  19:06, 16 March 2009 (UTC)

Actually, allowing negative values and also allowing negative results can be important to easily balance the GCD function with the EXTENDED function, since the results need to match, i.e.:

GCD(A,B) = G and EXTENDED(A,B) = (E,F) with A*E + B*F = G

Allowing negative results can give:

GCD(12,8)=4, EXTENDED(12,8)=(1,-1). GCD(-12,8)= -4, EXTENDED(-12,8)=(1,1).

So taking the ABS after the GCD can destroy the relationship to EXTENDED.

Jan Burse (talk) 19:54, 1 July 2012 (UTC)
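A sketch illustrating the point (my code; it assumes C-style truncated division, which is what makes negative results appear):

```python
def ext_gcd_trunc(a, b):
    # Extended gcd using truncated (C-style) division: for negative
    # inputs the gcd can come out negative, but a*e + b*f == g always
    # holds, so GCD and EXTENDED stay balanced without an abs().
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        if q < 0 and q * r != old_r:
            q += 1  # convert Python's floored quotient to truncation
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t  # g, e, f with a*e + b*f == g
```

This reproduces the examples above: `ext_gcd_trunc(12, 8)` gives `(4, 1, -1)` and `ext_gcd_trunc(-12, 8)` gives `(-4, 1, 1)`.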

Revert of C implementation deletion?
My revert of User:Luckyblue1's deletion of the C implementation was itself reverted by Special:Contributions/130.86.206.33. All of the edits were without explanation, and so I was perhaps rather quick to assume vandalism, but then I didn't give an explanation myself either so maybe the latter one assumed I was the vandal here. I am just an occasional editor myself, so to avoid an edit war I am hereby asking here: Should the C implementation stay or not? --Ørjan (talk) 00:53, 21 November 2009 (UTC)

My answer to this is that the C implementation should remain cut. The existing pseudocode is sufficient to develop a C implementation quickly, and the mathematics can be easily followed and related to the mathematical algorithms developed elsewhere on the page. By contrast, the C implementation was opaque and provided little clarification. Inclusion of the C algorithm changes the page's purpose from providing explanation to providing an _efficient_ algorithm targeted at a particular language. Along these lines, we might include specific efficient algorithms for all languages, which is somewhat absurd. --Finog (talk) 16:59, 5 December 2009 (UTC)

The only reason I had the C implementation there in the first place was because the existing pseudocode at the time was horrible and inaccurate. If the C code compelled someone to write better pseudocode, then my job is done. My only fear is that the code was taken down because someone used it for something that they didn't want traced back to this page.

Euclid
Shouldn't this article state Euclid's name and where he described this algorithm, somewhere? Or, if he did not come up with the extension, shouldn't it state who did? 80.126.179.57 (talk) 13:03, 27 April 2010 (UTC)


 * Well Euclidean algorithm contains this quote, and there is a bibliography entry later for the book referenced:
 * The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently.
 * I'm not sure how to copy the information here as the articles use completely different referencing styles. --Ørjan (talk) 00:37, 28 April 2010 (UTC)

Horribly unclear
"Suppose $$d_i = d_{i-2} - k_{i-1} \cdot d_{i-1}$$. Then it must be that "

Hey! Where did k come from? it's not in the table!

"To find the third row of the table in the example, just notice that 120 divided by 23 goes 5 times plus a remainder. This gives us k, the multiplying factor for this row."

So .. which is k? The quotient, or the remainder? Both are 5. — Preceding unsigned comment added by Paul Murray (talk • contribs) 01:07, 27 January 2011 (UTC)


 * The sentence quoted is another way of saying "Let $$k_{i-1} = \frac{d_{i-2}-d_i}{d_{i-1}}$$"; but that may still be a bit obscure. The point is that you divide $$d_{i-2}$$ by $$d_{i-1}$$, call the integer quotient $$k_{i-1}$$, and call the remainder $$d_i$$. —Tamfang (talk) 04:08, 27 January 2011 (UTC)
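In Python terms, just to make the two roles concrete (my illustration):

```python
# divmod separates the two roles: 120 = 23*5 + 5, so in this step the
# quotient k and the remainder merely happen to coincide.
k, remainder = divmod(120, 23)
assert (k, remainder) == (5, 5)
assert 23 * k + remainder == 120
```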

Insufficient Proof
The so-called "Proof of correctness" is woefully lacking. It makes claims like, "we know that..." without first establishing or in any way justifying the conclusion. It then proceeds with steps which do not appear to follow from that bit which, "we know." This cannot be called a proof at all, and I suggest it be removed until something vaguely proof-like can be substituted. --Prestidigitator (talk) 06:48, 6 March 2011 (UTC)


 * I've redone the proof as a proper inductive argument, in the meantime proving termination, cleaning up the algorithm itself so that it avoids dividing a by b twice in (almost) every call, and proving ax + by is nonnegative. Hope this helps. Marc van Leeuwen (talk) 11:38, 6 March 2011 (UTC)


 * Definitely a lot more satisfying and correct from my vantage point Marc. Well done, and thank you!  --Prestidigitator (talk) 18:03, 6 March 2011 (UTC)

broken link
The "external link" "How to use the algorithm by hand" is broken. I don't recall what to do in that case; if someone knows, please do it. &mdash; MFH:Talk 20:08, 7 August 2011 (UTC)


 * It should be deleted. -- X7q (talk) 20:23, 7 August 2011 (UTC)

Many Variables Case Probably Misleading
This case should be formulated a little more carefully. For example, if I take the equation:

1*x + 1*y + 2*z = 1

Then a triple obtained with the algorithm in the article would be

(1,0,0)

i.e.

-1 + 1*1 + 1*0 + 2*0 = 0

But I cannot take the triple and generalize it to:

(1+2*u, 0+2*v, 0-2*u-2*v)

The above would say x is odd and y is even. But there are of course also solutions where x is even and y is odd.

So the generalization has to happen already during the calculation of the triples, so that, for example, the following triple can also be found:

(0,1,0)

Jan Burse (talk) 19:36, 1 July 2012 (UTC)
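A quick brute-force check (my sketch) confirms that a single particular triple plus one homogeneous family cannot cover all solutions:

```python
# Enumerate small solutions of 1*x + 1*y + 2*z = 1; both (1, 0, 0)
# (x odd) and (0, 1, 0) (x even) occur, so a correct general solution
# must allow both parities of x.
sols = [(x, y, z)
        for x in range(-2, 3)
        for y in range(-2, 3)
        for z in range(-2, 3)
        if x + y + 2 * z == 1]
assert (1, 0, 0) in sols and (0, 1, 0) in sols
```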

not working pseudocodes
For a=79 and b=166 the codes return -21 and 10. But it should be 145 and -69. The proof: 79*(-21) MOD 166 gives -165 (wrong! it should be 1), but 79*145 MOD 166 gives the right answer, 1. --The foe (talk) 23:45, 29 July 2012 (UTC)
 * No, that's not an error. The problem is that you are using a MOD which gives a negative result when just the first argument is negative.  And your correction doesn't help; you just get a similar problem with 166*(-69) MOD 79 == -78 instead.
 * However it's mathematically more logical to use a MOD which always gives a nonnegative result when the second argument is positive; then you always have (x MOD z) == (y MOD z) precisely when x ≡ y (mod z). In the congruence view there is no real difference, as -165 ≡ 1 (mod 166).
 * Programming languages differ widely in how modulo behaves with negative numbers, see Modulo_operation. The C language even leaves it up to the implementation, to allow whichever is most efficient.  As I recall, it turns out that e.g. on x86 CPUs the version which gives negative numbers is a single machine code instruction, which means that it is the one C compilers are almost certain to use.  On the other hand, the more mathematically minded Haskell language provides both versions as rem and mod, with rem the x86 version and mod the more mathematical one. --Ørjan (talk) 20:01, 30 July 2012 (UTC)
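The difference is easy to see in Python, whose % is the floored variant described above; the truncating emulation below is my own hypothetical helper, not a library function:

```python
# Floored modulo: the result takes the sign of the modulus, so
# congruence testing works directly.
assert 79 * (-21) % 166 == 1     # -1659 is congruent to 1 (mod 166)
assert (-165) % 166 == 1

def c_rem(x, z):
    # Emulate a C-style truncating remainder (sign follows the dividend).
    q = x // z
    if q < 0 and q * z != x:
        q += 1  # truncate the floored quotient toward zero
    return x - q * z

assert c_rem(79 * (-21), 166) == -165  # the "wrong-looking" C result
```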

Question : how to find GCD (232, 276) ?
Answer : by Euclid's algorithm and the lobbying method (your iterative method)

276 - 232•1 = 44; 232 - 44•5 = 12; 44 - 12•3 = 8; 12 - 8•1 = 4; 8 - 4•2 = 0. We claim that GCD (232, 276) = 4, and prove this by


 * (i) Walking through the five identities from right to left: 4 is a common divisor of 0 and 4; 4 is a common divisor of 4 and 8; 4 is a common divisor of 8 and 12; 4 is a common divisor of 12 and 44; 4 is a common divisor of 44 and 232; 4 is a common divisor of 232 and 276.


 * (ii) Walking through the identities from left to right: let d be a common divisor of 276 and 232; then d is a common divisor of 232 and 44; d is a common divisor of 44 and 12; d is a common divisor of 12 and 8; d is a common divisor of 8 and 4. Xoagus (talk) 20:10, 24 June 2013 (UTC)


 * Note, by completing these two itineraries one sees : 4 is a common divisor of 232 and 276 and each common divisor of 232 and 276 is a divisor of 4. This means 4 is the greatest common divisor of 232 and 276. — Preceding unsigned comment added by Xoagus (talk • contribs) 20:26, 24 June 2013 (UTC)


 * (iii) One more walk from right to left : 4 = 12 - 8•1 = 12 - [44 - 12•3] •1 = 12•4 - 44•1 = [232 - 44•5]•4 - 44•1 = 232•4 - 44•21 = 232•4 - [276 - 232•1] •21 = 232•(25) - 276•(21)


 * Note: by applying the Euclidean algorithm we found a solution 232•(25) - 276•(21) = GCD (232, 276), and also 232•(25) + 276•(-21) = GCD (232, 276), Bézout's identity. Xoagus (talk) 13:54, 25 June 2013 (UTC)
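The back-substitution chain above can be verified numerically (a quick sketch):

```python
from math import gcd

# The result of the walk from right to left, checked directly.
assert 232 * 25 - 276 * 21 == 4
assert gcd(232, 276) == 4
```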

Definition of GCD (a, b): For each pair a ≤ b ∈ Z+ there exists GCD (a, b) with the following properties:


 * GCD (a, b) ∈ Z+.


 * If b = ak, k ∈ Z+, then GCD (a, b) = a. If b − ak₁ = r₁, we continue the long divisions according to Euclid's algorithm with a − r₁k₂ = r₂; r₁ − r₂k₃ = r₃; etc., until we get rₙ₋₁ − rₙkₙ₊₁ = 0. Then GCD (a, b) = rₙ. (see example)


 * GCD (a, b) is a common divisor of a and b. (see example)


 * Each common divisor d of the numbers a and b is a divisor of GCD (a, b) (see example).


 * Moreover, the Diophantine equation ax − by = GCD (a, b) has a unique solution (x, y) in Z+. (see example) Further, the Diophantine equation ax + by = GCD (a, b) has a solution (even infinitely many) in Z. (see example) 84.24.10.61 (talk) 14:48, 26 June 2013 (UTC) Xoagus (talk) 15:21, 26 June 2013 (UTC)


 * We've gone through that before. That is not the definition of GCD.  Moreover, this is the wrong forum to talk about it.  — Arthur Rubin  (talk) 20:23, 26 June 2013 (UTC)

What is your definition of GCD (a, b) ? — Preceding unsigned comment added by Xoagus (talk • contribs) 10:02, 28 June 2013 (UTC)


 * The one from the GCD article; in Z, GCD(a, b) is the largest* common divisor of a and b.
 * Where "largest" is defined as any of:
 * Largest element of N, redefining "0" as larger than all positive natural numbers.
 * largest element of Z, redefining "0" as larger than all integers.
 * Largest element of N, with respect to the divisibility relation.
 * Largest element of Z, with respect to modified divisibility (&minus;n is strictly less than n, rather than being equivalent)
 * But that has nothing to do with this article.
 * — Arthur Rubin (talk) 10:32, 28 June 2013 (UTC)

Are we still doing mathematics? 0 > 5 and thus -2 > 3? What is "0"? Is it the same as infinity? If so, is infinity a number? What is the divisibility relation and what is the modified divisibility relation? Xoagus (talk) 18:50, 30 June 2013 (UTC)


 * The < I had in mind is ... < &minus;3 < &minus;2 < &minus;1 < 1 < 2 < 3 < ... < 0.
 * And, for modified divisibility, I suggest:
 * a D b if a divides b and (b does not divide a or (a+b = 0 and b is positive)).
 * But, if you don't know what the "divisibility relation" is, you don't know enough to comment sensibly about GCD. Please return when you've read some elementary number theory textbooks. — Arthur Rubin (talk) 19:26, 30 June 2013 (UTC)

The article deserves to be completely rewritten
The article introduces lengthy, confusing explanations that make it difficult for the reader to find where the algorithm is described. In particular, it seems that the editors of this article ignored that an algorithm is like a theorem: it has to be stated separately from its explanation and its proof. The consequence is that, although I have taught this algorithm many times, I am unable to decide, without a careful reading, whether the algorithm(s) described is (are) the one known under this name. What might a non-expert of the subject understand? I have rewritten the lead. I plan to rewrite the remainder of the article, but, because of the amount of work needed, I am not sure that this will be done soon. D.Lazard (talk) 16:42, 2 November 2013 (UTC)


 * I have begun to rewrite the article, and put in the "todo" list above the edits still needed. D.Lazard (talk) 23:37, 3 November 2013 (UTC)

Recursive method
The recursive algorithm that is presented is not tail recursive. It follows that, if implemented, it needs auxiliary memory (in the execution stack) which is not constant and is proportional to the number of recursive calls. It follows that implementing this code is not recommended, and as such it is not notable. Even if notable, presenting it here would give it undue weight. There is a tail-recursive version of the algorithm, which involves a function with 6 parameters. Its construction from the iterative version of the algorithm is a standard programming exercise, which does not provide any encyclopedic insight into the subject of the article. I'll thus remove the present recursive version and not replace it with the tail-recursive version. However, if someone judges that the tail-recursive version is really needed, I'll not oppose its insertion. D.Lazard (talk) 15:23, 4 November 2013 (UTC)
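For concreteness, the tail-recursive function with six parameters might be sketched like this (my own construction, following the standard transformation mentioned above):

```python
def ext_gcd_tail(old_r, r, old_s, s, old_t, t):
    # All state lives in the six parameters, so the recursive call is in
    # tail position and needs no growing stack (in languages that
    # eliminate tail calls; CPython does not).
    if r == 0:
        return old_r, old_s, old_t  # gcd, x, y
    q = old_r // r
    return ext_gcd_tail(r, old_r - q * r,
                        s, old_s - q * s,
                        t, old_t - q * t)

# Initial call: ext_gcd_tail(a, b, 1, 0, 0, 1) returns (g, x, y)
# with a*x + b*y == g.
```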

I for one actually found the recursive method easier to understand (since the regular GCD algorithm is usually presented recursively) than the iterative method and would like it to remain for that reason. Modified like jbolden1517 suggests it helps understanding even further. 2001:6B0:1:1041:21D:E0FF:FE52:2EFF (talk) 16:49, 25 November 2013 (UTC)

the definition of $$q_i$$
$$q_i$$ is not defined at all. Jackzhp (talk) 08:42, 20 April 2016 (UTC)
 * $$r_{i-1}=q_i r_i+r_{i+1}$$ given $$r_{i-1}>r_i$$. That is to say, $$r_{i+1}=r_{i-1} \bmod r_i$$ and $$q_i=\lfloor r_{i-1}/r_i \rfloor$$ (integer division, no decimals). Jackzhp (talk) 08:46, 20 April 2016 (UTC)

D.Lazard (talk) 08:52, 20 April 2016 (UTC)

overflow problem
Is the algorithm used to calculate $$r_{i+1},s_{i+1},t_{i+1}$$ safe? Is there no overflow problem at all? For example, if a program uses all int type variables, is the algorithm safe, so we do not have to consider any possible overflow? Jackzhp (talk) 10:10, 20 April 2016 (UTC)
 * Please do not edit others' posts, and do not edit your own posts after someone has answered, as you did recently.
 * To this question, the answer is yes, one may show that $$r_{i+1},s_{i+1},t_{i+1}$$ are never larger (in absolute value) than the largest input. D.Lazard (talk) 10:33, 20 April 2016 (UTC)
 * Do you know some reference where this result can be found in the literature? --NeoUrfahraner (talk) 16:10, 4 May 2016 (UTC)
 * This is certainly (in the text or in the exercises) in Knuth's The Art of Computer Programming, chapter 4. D.Lazard (talk) 17:30, 4 May 2016 (UTC)
 * Should the result be mentioned in the article? Actually, it can simply be added to the "Proof" section by saying that $$|s_i|$$ and $$|t_i|$$ are increasing. --NeoUrfahraner (talk) 05:28, 5 May 2016 (UTC)
 * Yes, the result should be mentioned in the article. However, it deserves to be described in its own section, because of its important applications. Besides the overflow problem, which I find minor, it allows an accurate evaluation of the bit complexity of the algorithm. More importantly, this result is the basis of the effective Thue's lemma, and therefore it is commonly used for modular division and rational reconstruction. Also, the polynomial analogue of this result makes it easy to compute Padé approximants with the polynomial extended Euclidean algorithm. As you can see from the above links, a substantial editorial work is needed in this area. D.Lazard (talk) 14:55, 5 May 2016 (UTC)
 * I added the parts which IMHO can simply be added, because they were already partially contained in the article. I agree that for further applications a section of its own would be needed. --NeoUrfahraner (talk) 06:38, 6 May 2016 (UTC)
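An empirical check of the bound is easy to sketch (for positive inputs; this is just a spot check, not a proof):

```python
def coeff_bound_holds(a, b):
    # Run the extended algorithm and verify that every intermediate
    # Bezout coefficient stays within max(a, b) in absolute value.
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    bound = max(a, b)
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
        if abs(old_s) > bound or abs(old_t) > bound:
            return False
    return True
```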

Binary GCD
I ran into a very clever modular inverse that used an extended form of the binary GCD algorithm. I'm surprised it's not referenced here -- seems like a useful thing to include, especially next time someone wants to take a stab at improving the article. - Taral (talk) 21:04, 11 October 2020 (UTC)
 * Please, provide a reference. Without it, we cannot decide whether it is reliable and notable enough for being included in WP, and, if it is, to decide whether it must be included (here or in Modular inverse). In any case, this article lacks a section "Generalization". In fact, the method used to pass from the Euclidean algorithm to the extended algorithm can be applied to many gcd algorithms, even to algorithms that use fast multiplication. This must appear in the article. D.Lazard (talk) 21:23, 11 October 2020 (UTC)

Part of proof -- that s_{k+1} and t_{k+1} are the quotients of b and a by their GCD -- is inadequate
As of the revision at time of writing, the relevant part says:

Viewing this as a Bézout's identity, this shows that $$s_{k+1}$$ and $$t_{k+1}$$ are coprime. The relation $$as_{k+1}+bt_{k+1}=0$$ that has been proved above and Euclid's lemma shows that $$s_{k+1}$$ divides b and $$t_{k+1}$$ divides a. As they are coprime, they are, up to their sign, the quotients of b and a by their greatest common divisor.

While the conclusion is true, the argument as currently written suggests that we only need to know that
 * $$s_{k+1} \mid b$$
 * $$t_{k+1} \mid a$$
 * $$\gcd(s_{k+1}, t_{k+1}) = 1$$

in order to conclude that $$(s_{k+1},\ t_{k+1}) = \left(\tfrac{b}{\gcd(a,b)},\ \tfrac{a}{\gcd(a,b)}\right)$$. However, this is not true: a counterexample is
 * $$a = 4 \times 5 = 20$$
 * $$b = 9 \times 5 = 45$$
 * $$s_{k+1} = 3$$
 * $$t_{k+1} = 2$$

for which the above three conditions hold, but $$\gcd(a,b) = 5$$, so $$\left(\pm\tfrac{b}{\gcd(a,b)},\ \pm\tfrac{a}{\gcd(a,b)}\right) = (\pm 9, \pm 4) \neq (3, 2) = (s_{k+1}, t_{k+1})$$. It is in fact necessary to note that, in addition, $$\tfrac{b}{\gcd(a,b)} \mid s_{k+1}$$ and $$\tfrac{a}{\gcd(a,b)} \mid t_{k+1}$$ in order to make the desired conclusion. Without this, the proof is at best misleading and confusing, and at worst incorrect.

My suggested replacement is:

Viewing this as a Bézout's identity, this shows that $$s_{k+1}$$ and $$t_{k+1}$$ are coprime. The relation $$as_{k+1}+bt_{k+1}=0$$ from above is equivalent to $$\tfrac{a}{\gcd(a,b)} s_{k+1} = -\tfrac{b}{\gcd(a,b)} t_{k+1}$$, which by Euclid's lemma shows that
 * $$\tfrac{a}{\gcd(a,b)} \mid t_{k+1}$$ and $$t_{k+1} \mid \tfrac{a}{\gcd(a,b)}$$, and hence $$t_{k+1} = \pm \tfrac{a}{\gcd(a,b)}$$; and
 * $$\tfrac{b}{\gcd(a,b)} \mid s_{k+1}$$ and $$s_{k+1} \mid \tfrac{b}{\gcd(a,b)}$$, and hence $$s_{k+1} = \pm \tfrac{b}{\gcd(a,b)}$$.

It also occurs to me that maybe $$x \mid y \ \ \wedge\ \ y \mid x \ \ \implies\ \  x = \pm y$$ might also not be obvious to readers, so it might be good to link to an article justifying this fact, if there is one.

Joseph Crowe (talk) 06:07, 24 October 2021 (UTC)
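The counterexample is easy to confirm mechanically (my sketch):

```python
from math import gcd

# The three conditions hold for a = 20, b = 45, s = 3, t = 2 ...
a, b, s, t = 20, 45, 3, 2
assert b % s == 0 and a % t == 0 and gcd(s, t) == 1
# ... yet (s, t) is not (+-b/g, +-a/g) with g = gcd(a, b) = 5.
g = gcd(a, b)
assert g == 5
assert (s, t) not in {(b // g, a // g), (-(b // g), -(a // g))}
```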
 * "Inadequate" is too strong for a proof in which only one step is lacking. Nevertheless, you are right that the proof is incomplete. However, your edit, although correct and suited to a textbook, is not convenient for an encyclopedia: too many formulas hide the main points (in Wikipedia, the proofs that are given are there for emphasizing the main points; the reader who needs a detailed proof can find it in a textbook). For these reasons, I'll modify this part of the proof as follows:
 * The relation $$as_{k+1}+bt_{k+1}=0$$ that has been proved above and Euclid's lemma show that $$s_{k+1}$$ divides $$b$$, that is, $$b=ds_{k+1}$$ for some integer $$d$$. Dividing the relation $$as_{k+1}+bt_{k+1}=0$$ by $$s_{k+1}$$ gives $$a=-dt_{k+1}.$$ So, $$s_{k+1}$$ and $$-t_{k+1}$$ are coprime integers that are the quotients of $$b$$ and $$a$$ by a common factor, which is thus their greatest common divisor or its opposite.
 * D.Lazard (talk) 08:37, 24 October 2021 (UTC)


 * This is acceptable to me. (Although I think including $$\gcd(a,b) = \gcd(-d t_{k+1},\ d s_{k+1}) = \gcd(-d, d) = \pm d$$ could make it easier to understand.)
 * Joseph Crowe (talk) 13:31, 24 October 2021 (UTC)
 * Adding too many details in an encyclopedia is not always a good idea, since the useful details depend on the background and the way of thinking of the reader. D.Lazard (talk) 13:49, 24 October 2021 (UTC)

Integer Multiplication is sub-quadratic
In the section Extended Euclidean algorithm, it's claimed that the time needed for integer multiplication and division grows quadratically. This is false: integer multiplication is well known to be sub-quadratic, with many examples (see Multiplication algorithm). Most modern arbitrary-precision packages implement a variety of multiplication methods (for example gmplib).

The run-time is used as further evidence that the alternate version of the extended gcd algorithm is less efficient than the first. There's a difference between "practical" optimization and big-O optimization. A quadratic algorithm might be practically faster because of how base operations are implemented in hardware, or because of other constraints (memory access time, instruction set, setup cost, etc.). Taking a hard stance about which implementation is better, without any qualifications, is misleading. I would recommend removing the latter sentences about optimization altogether. At best, a suggestion should be given as to what the various concerns are when picking algorithms for optimization purposes.

Python code
I thought I'd post my python code [based on pseudo-code] here just in case anyone runs into similar points of confusion.

def extended_euclidean_algorithm(a: int, b: int) -> tuple[tuple[int, int], int]:
    """For two ints, (a, b), solves: `a * x + b * y = gcd(a, b)`"""
    (old_r, r) = (a, b)
    (old_x, x) = (1, 0)  # We will back out y at the end.
    while r != 0:
        quotient = old_r // r  # "div" is just integer division!
        (old_r, r) = (r, old_r - quotient * r)
        (old_x, x) = (x, old_x - quotient * x)
    # Assign the OLD values: when r == 0 we've gone one step too far.
    # If we output the most recent `x`, we'd get a * x + b * y == 0.
    # See esp. the example table.
    x = old_x
    gcd = old_r
    # If gcd < 0, flip signs: if gcd is a common divisor, then so is
    # -gcd > 0. Relevant for negative numbers. This step is identical
    # to multiplying both sides of the equation by -1.
    if gcd < 0:
        gcd *= -1
        x *= -1
    # Fill in y (coeff on b) now that we know the other values.
    if b != 0:
        y = (gcd - x * a) // b
    else:
        y = 0
    return (x, y), gcd

Wobblesome (talk) 18:35, 7 September 2022 (UTC)

Isn't the `inverse(a,n)` pseudocode wrong?
Not too long ago I updated the pseudocode for calculating `inverse(a,n)` (https://en.wikipedia.org/w/index.php?diff=1154654926&oldid=1150662331&title=Extended_Euclidean_algorithm). As far as I can tell, it doesn't work for negative values of `a`, and (if I remember correctly) it won't return the smallest possible number unless a final `mod n` operation is performed before returning the value.

I'm not an expert in the field, although I did have some conviction about the edits, given that's how I implemented the `inverse` function (in Rust), with successful results. Given the edit was reverted, I would like to ask User:D.Lazard if you've had a chance to implement either version of the pseudocode (my edit or the reverted version)? If you could provide some step-by-step examples of how the current version would work with negative numbers, for example, or if you have an implementation in any programming language, I'd like to take a look.

This is my working Rust implementation:

pub fn mod_mult_inv(a: &BigInt, n: &BigInt) -> Option<BigInt> {
    if n < &BigInt::from(0) {
        panic!("Expected `modulus` to be positive.");
    }
    // Ensure `a` is positive. The remaining code expects a positive `a` value.
    let a: BigInt = if a > &BigInt::from(0) {
        a.clone()
    } else {
        // If `a` is negative, find a positive congruent using Euclidean
        // division. Using `a` or any of its congruents will result in the same
        // value being returned from this function.
        //
        // Couldn't find a Euclidean division function, so using `mod_floor`
        // which produces the same results for a positive modulus, as is the
        // case here. See https://en.wikipedia.org/wiki/Modulo
        a.mod_floor(n)
    };
    let mut t = BigInt::from(0);
    let mut r = n.clone();
    let mut new_t = BigInt::from(1);
    let mut new_r = a.clone();
    while new_r != BigInt::from(0) {
        let quotient = &r / &new_r;
        (t, new_t) = (new_t.clone(), &t - &quotient * &new_t);
        (r, new_r) = (new_r.clone(), &r - &quotient * &new_r);
    }
    if r > BigInt::from(1) {
        None
    } else {
        Some(t.mod_floor(&n))
    }
}

Aryzing (talk) 11:18, 6 October 2023 (UTC)
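For comparison, here is a Python sketch of the same `inverse(a, n)` shape, making both talk-page points explicit: normalizing a negative `a` up front, and reducing the result mod n at the end. This is my own illustration, not the article's pseudocode:

```python
def inverse(a, n):
    # Modular multiplicative inverse of a modulo n via extended Euclid.
    if n <= 0:
        raise ValueError("Expected modulus to be positive.")
    a %= n  # Python's floored % maps any a (even negative) into [0, n)
    t, new_t, r, new_r = 0, 1, n, a
    while new_r != 0:
        q = r // new_r
        t, new_t = new_t, t - q * new_t
        r, new_r = new_r, r - q * new_r
    if r > 1:
        return None  # a is not invertible modulo n
    return t % n  # final reduction yields the smallest nonnegative inverse
```

For example, `inverse(3, 7)` returns 5, `inverse(-3, 7)` returns 2 (since -3 is congruent to 4 mod 7), and `inverse(4, 8)` returns None.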