Talk:Kahan summation algorithm

Some Comments
The algorithm as described is, in fact, Kahan summation as it is described in; however, this algorithm only works either for values of y[i] of similar magnitude or, more generally, for increasing y[i] or for y[i] << s.

Higham's paper on the subject has a much more detailed analysis, including different summation techniques. I will try to add some of the main statements from that paper to this article if I find the time and there are no objections.

Pedro.Gonnet 16:51, 5 October 2006 (UTC)


 * And Bromskloss's version:


 * Somewhat surprisingly the latter is actually shorter by one instruction, apparently because in it the new values from the input array are added directly to variable c rather than being first loaded into a register. The issues you mentioned did not seem to affect these. Ossi (talk) 21:07, 11 March 2010 (UTC)


 * The first sentence referred to is "Yes, there is, and generally." And messing with registers can make a difference. The removed text noted that on the rather common ibm pc and clones with the floating-point arithmetic feature, the machine offers 80-bit floating-point arithmetic in registers which is not the same precision as that offered by the standard 32 or 64-bit variables in memory, and how this can make a difference. The code you have shown may be clear to those who are familiar with it, but for others, annotations would help. For instance, does sub a,b mean a - b, or b - a? Does the result appear in the first or second-named item? Or somewhere else? I have messed with various systems having different conventions. NickyMcLean (talk) 22:09, 11 March 2010 (UTC)


 * On x86 processors set in extended-precision mode, the registers always have precision greater than or equal to the precision declared by the programmer, never less. This could only degrade the accuracy if extended precision were used only for t and c but not for sum in the algorithm, which seems unlikely.  (Do you have any reputable source giving an example of any extant compiler whose actual optimizations spoil Kahan's algorithm under realistic circumstances?) The only possibility that occurs to me is when input(i) is a function rather than an array reference, and somehow causes a register spill of sum in the middle of the loop. (Kahan would probably just tell you to declare all local variables as long double on x86.) — Steven G. Johnson (talk) 23:19, 11 March 2010 (UTC)


 * The floating point type can obviously make a difference. I didn't see that mentioned in earlier versions but maybe I just missed it. Though I'm not entirely sure, I think that most compilers would only use the 80 bit registers for variables declared long double. As for your question, I think that sub a,b means b - a and the result appears in the second operand, but I'm not sure. You can look it up if you want. The code is just GNU Assembler for x86. It would be difficult for me to annotate the code since I really have no experience in assembly programming. Ossi (talk) 04:30, 12 March 2010 (UTC)
 * Not to mention that as much as it's abused, "AT&T syntax" Intel assembly isn't and never was a real thing outside of the people who ported AS to x86 a long time ago either being too lazy to write a new parser or unwilling to read Intel's manuals where the proper syntax for their assembly is defined. AT&T added arbitrary characters to opcodes in addition to size specifiers that normally only exist on instructions where their size can't be inferred in any other way, threw out the proper memory addressing syntax, and added needless % signs before register names and $ signs before immediates, where the only use of $ in intel syntax is a placeholder for the current address in the program.  I've been working mostly in assembly (and then mostly in x86) for something like 30 years now and AT&T is still almost unreadable to me, it's like trying to read a book with your eyes crossed.
 * The 80-bit registers are really a stack of registers that requires operations to be performed in a certain order, dating back to its origins as a separate co-processor, and it's still possible to turn down the precision of those for faster execution at the expense of lowered precision...  floating point operations on SSE / AVX registers within a single instruction can still be performed at the higher precision, it's just that they can't really be kept that way outside of the single instruction.  You can actually pull values off the floating point stack in their full 80-bit precision directly into memory but pretty much every compiler screws it up (especially for local stack variables) so it needs to be done in assembly and isn't a pretty solution.
 * Hopefully x86 will get a 128-bit float type like POWER at some point for scientific applications that really, really need it (or even simpler things like fractal generators that could use the precision while zooming in before they finally need to switch over to an arbitrary precision algorithm). A Shortfall Of Gravitas (talk) 19:32, 7 October 2023 (UTC)

Interesting as theoretical discussions of compiler optimizations are in this context, they can't go into the article without reputable sources. Personal essays and experiments are irrelevant here. — Steven G. Johnson (talk) 23:32, 11 March 2010 (UTC)


 * I don't think anyone was suggesting that we should add this to the article anyway. Ossi (talk) 04:14, 12 March 2010 (UTC)


 * NickyMcLean seems to be complaining about last month's removal from the article of a long discussion of problems caused by hypothetical compiler optimizations, which was removed because it had neither references nor even any concrete examples from real compilers. — Steven G. Johnson (talk) 16:15, 12 March 2010 (UTC)


 * I got curious and experimented with GCC again using optimization flags -O3 and -funsafe-math-optimizations. The article's version stayed the same but Bromskloss's version was actually optimized to (again just the loop part):


 * This appears to be simply the naive summation. So, GCC can really optimize this algorithm away, though probably only when explicitly given permission to do so. (I don't quite understand why someone would want to use -funsafe-math-optimizations flag.) This is a concrete example from a real compiler, but my experiments might count as Original Research. Ossi (talk) 05:07, 13 March 2010 (UTC)


 * I think this issue does deserve at least a short mention in the article. How good references do we need? What Every Computer Scientist Should Know About Floating-Point Arithmetic says: "An optimizer that believed floating-point arithmetic obeyed the laws of algebra would conclude that C = [T-S] - Y = [(S+Y)-S] - Y = 0, rendering the algorithm completely useless." I'd say that this is a trustworthy source, but it mentions no concrete example. Though we can see from my example above that unsafe optimizations can ruin this algorithm, I don't know any good references for an example. Do you think it would still be okay to insert one sentence to the article mentioning this issue? Ossi (talk) 16:52, 16 March 2010 (UTC)
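 * The identity Goldberg refers to is easy to check numerically: in real arithmetic c = ((sum + y) - sum) - y is always zero, but in floating point it recovers the rounding error of the addition, which is exactly what an "algebra-believing" optimizer would delete. A minimal Python sketch (the values are my own, chosen so that the addition is inexact in double precision):

```python
# In exact arithmetic c would be zero; in floating point it captures
# the rounding error of sum + y -- the very quantity that a compiler
# applying the laws of real algebra would optimize away.
sum_, y = 1e16, 1.0
t = sum_ + y          # 1e16 + 1 rounds back to 1e16 in double precision
c = (t - sum_) - y    # recovers -(lost low-order part of y)
print(c)              # -1.0, not 0.0
```

So the compensation term is nonzero precisely because floating-point addition is not associative, which is what the Goldberg quote is warning optimizers about.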


 * I'm afraid that editor experiments with gcc are still original research. Being true is not sufficient to be on Wikipedia. — Steven G. Johnson (talk) 17:35, 16 March 2010 (UTC)


 * I also meant, do we need a concrete example? References for this as a hypothetical issue can be found (such as the one I gave above). Ossi (talk) 20:20, 16 March 2010 (UTC)


 * Okay; I've added a brief mention to the article citing the Goldberg source. I also added citations to documentation for various compilers indicating that (in most cases) they allow associativity transformations only when explicitly directed to do so by the user, although apparently the Intel compiler allows this by default (yikes!).  We can't explicitly say that they destroy Kahan's algorithm under these flags unless we find a published source for this, though, I think.  (Interestingly, however, the Microsoft compiler documentation explicitly discusses Kahan summation.) — Steven G. Johnson (talk) 21:47, 16 March 2010 (UTC)


 * So, one form of the expression can provoke a compiler to destroy the whole point of the procedure while another form does not. The register/variable issue arises only if they have different precisions. Consider the following pseudocode for the two statements ''t:=sum + y; c:=(t - sum) - y'':

 Load  sum
 Add   y
 Store t        Thus t:=sum + y;
 Load  t
 Sub   sum
 Sub   y
 Store c        Thus c:=(t - sum) - y;
 * Suppose that the accumulator was 80-bit whereas the variables are not. A "keyhole optimisation" would note that when the accumulator's value was stored to t, in the code for the next expression it need not re-load the value just saved. If however precisions differed, then the whole point of the sequence would be disrupted. An equivalent argument applies to stack-oriented arithmetic, where "store t; load t;" would become "storeN t", for "store, no pop". Some compilers offer options allowing/disallowing this sort of register reuse.

However, I know of no texts mentioning these matters that might be plagiarised, and the experimental report of the behaviour of a compaq f90 compiler's implementation of the "SUM" intrinsic as with the GCC compiler is equally untexted "original research". NickyMcLean (talk) 22:00, 16 March 2010 (UTC)


 * Yes, I mentioned above another way in which differing precisions for different variables could cause problems. I'm curious to know, however, if any extant compilers generate code using different precisions in this way (from a straightforward implementation in which everything is declared as the same precision)? In any case, even if you find an example, complaining about this kind of difficulty is original research (unless we can find a reputable source).  Your use of the pejorative "plagiarize" seems to indicate a contempt of Wikipedia's reliable-source policy, but this goes to the heart of how WP operates; WP is purely a secondary/tertiary source that merely summarizes facts/opinions published elsewhere.   It is not a venue for publishing things that have not been published elsewhere, however true they might be.   See WP:NOR and WP:NOT.  (In an encyclopedia produced by mostly anonymous volunteers, the alternative is untenable, because original research puts editors in the position of arguing about truth rather than merely whether a statement is supported by a source.) — Steven G. Johnson (talk) 22:17, 16 March 2010 (UTC)


 * Well, "plagiarise" is perhaps a bit strong for the process of summarising other articles instead of flatly copying them: "irked" rather than "contempt", perhaps. If say a second person were to repeat some decried "original research", would it then not be original research? As for mixed precision arithmetic even when all variables are the same precision, I have suspicions of the Turbo Pascal compiler's usages as I vaguely recall some odd phrases in the description of its various provisions for floating-point arithmetic. But I'd have to do some tests to find out for sure as I doubt that published texts would be sufficiently clear on the details, and so, caught again. I do know that the IBM1130 used a 32-bit accumulator(acc+ext) in some parts of arithmetic for 16-bit integer arithmetic, and for floating-point, its 32-bit (acc+ext) register was used for the mantissa even though the storage form of the 32-bit fp number of course did not have a 32-bit mantissa. NickyMcLean (talk) 22:51, 16 March 2010 (UTC)


 * It doesn't matter how many people repeat it, it matters where it is published. See WP:RS.  If you care deeply about this issue, by all means do a comprehensive survey of the ways that compilers can break Kahan summation and which compilers commit these sins under which circumstances; there are likely to be several journals (or refereed conferences) where you could publish such a work, and then we can cite it and its results.   What you need to wrap your mind around is that Wikipedia is not the place to publish anything other than summaries of other publications (nor is summarizing cited articles at all inappropriate from the perspective of academic standards or plagiarism; comprehensive review articles are actually highly valued in science and academia). — Steven G. Johnson (talk) 23:25, 16 March 2010 (UTC)


 * As I recall reading somewhere, journals dislike unsolicited review articles. Such reviews indeed can be worthy, but the judicious assessment of the various items could well be regarded as shading into original research by sticklers; and if there was no originality or sign of effort made by the author, the journal would not want to publish. What I'm bothered by is the possibility that someone will come to this article and adapt the scheme to their needs, and, quite reasonably, also allow compiler optimisation and register re-use in the cheerful belief that all will be well. Despite the experimental results described, there being no texts to cite, there can be no warning in the article. NickyMcLean (talk) 04:11, 17 March 2010 (UTC)


 * Usually review articles are invited; I'm not sure what your point is...my point was that disparaging summaries of cited articles as "plagiarism" or "copying" or implying that this is somehow not a good scholarly practice is nonsense.  (If you wanted to survey compiler impacts on Kahan summation, that would not be a review precisely because there seems to be a dearth of literature on the subject.)  The article has a (sourced) warning that over-aggressive compilers can potentially cause problems, and cites the manuals of several compilers for relevant fp optimization documentation; without digging up further sources, that must suffice.  Think of it this way: if we can't find extensive warnings about compiler optimizations in the literature on Kahan summation, then evidently this has sufficed for programmers for a couple generations now.  — Steven G. Johnson (talk) 04:53, 17 March 2010 (UTC)

A variation
I have seen a variation of this method, as follows. I wonder if it is well known and how it compares. McKay (talk) 07:24, 2 April 2009 (UTC)

 var sum = 0.0              //Accumulates approximate sum
 var c = 0.0                //Accumulates the error in the sum
 for i = 1 to n
   t = sum + input[i]       //make approximate sum
   e = (t - sum) - input[i] //exact error in t if sum & t differ by less than a factor of 2
   c = c + e                //accumulate errors
   sum = t
 next i
 return sum - c             //add accumulated error to answer


 * Hummm. Well, here is the article's version (as at the time of typing)

 function kahanSum(input, n)
   var sum = input[1]
   var c = 0.0          //A running compensation for lost low-order bits.
   for i = 2 to n
     y = input[i] - c   //So far, so good: c is zero.
     t = sum + y        //Alas, sum is big, y small, so low-order digits of y are lost.
     c = (t - sum) - y  //(t - sum) recovers the high-order part of y; subtracting y recovers -(low part of y)
     sum = t            //Algebraically, c should always be zero. Beware eagerly optimising compilers!
   next i               //Next time around, the lost low part will be added to y in a fresh attempt.
   return sum

Leaving aside the special-feature initialisation of sum, at first sight I'd suggest that the variant you describe might have trouble with the accumulated error steadily increasing as would happen with truncation-based arithmetic, and with rounding the errors would be of varying sign, but the magnitude of their sum would increase, that is, the possible magnitude of c would spread proportional to Sqrt(N). Whereas with the article's version the magnitude of the running deviation is kept small since whenever it becomes larger, its larger part is assimilated into sum. A proper assessment of these matters would require some careful analysis and explanation of the results, that some might decry as looking like Original Research. NickyMcLean (talk) 21:08, 2 April 2009 (UTC)
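For concreteness, both pseudocodes transcribe directly into Python and can be checked against the exactly rounded math.fsum (a sketch; the function names are mine, and for modest n the two compensated variants typically agree):

```python
import math

def kahan_sum(xs):            # the article's version
    s, c = xs[0], 0.0
    for x in xs[1:]:
        y = x - c             # apply the running compensation
        t = s + y
        c = (t - s) - y       # -(low-order part of y) lost when forming t
        s = t
    return s

def mckay_sum(xs):            # the variant above: errors accumulate in c
    s, c = 0.0, 0.0
    for x in xs:
        t = s + x
        e = (t - s) - x       # error committed by this one addition
        c += e
        s = t
    return s - c              # remove the accumulated error at the end

xs = [0.1] * 10**6
exact = math.fsum(xs)
print(abs(sum(xs) - exact))        # naive summation drifts noticeably
print(abs(kahan_sum(xs) - exact))  # both compensated sums stay far closer
print(abs(mckay_sum(xs) - exact))
```

On a run of a million identical values, both compensated variants beat naive summation by orders of magnitude; distinguishing their error growth would require far larger or adversarial inputs, as discussed above.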


 * Yes, McKay's version has roughly O(sqrt(n)) error growth, not the O(1) of Kahan. — Steven G. Johnson (talk) 23:30, 11 March 2010 (UTC)


 * (Note that I corrected the last line of my version from "sum+c" to "sum-c".) You may be right, but in lots of simulations using 10^9 random numbers, both all of the same sign and of mixed signs, I didn't see an example where these two algorithms gave significantly different answers. Usually the answers were exactly the same and about 1000 times better than naive summation. I think the n^(1/2) growth of c only becomes significant when n is about ε^(-2), which is well beyond the practical range. McKay (talk) 05:38, 20 February 2017 (UTC)

Progress since Kahan
It would be nice if the article included some information on progress on compensated summation since Kahan's original algorithm. For example, this paper reviews a number of algorithms that improve upon Kahan's accuracy in various ways, e.g. obtaining an error proportional to the square of the machine precision or an error independent of the condition number of the sum (not just the length), albeit at greater computational expense:


 * Rump et al, "Accurate floating-point summation part I: faithful rounding", SIAM J. Sci. Comput. 31 (1), p. 189-224 (2008).

All that is mentioned right now is Shewchuk's work, and I'm not sure the description of his work is accurate; from the description in his paper, he's really doing arbitrary precision arithmetic, where the required precision (and hence the runtime and storage) are cleverly adapted as needed for a given computation (changed).

— Steven G. Johnson (talk) 23:46, 11 March 2010 (UTC)

O(1) error growth
Is the claimed error growth really true? What Every Computer Scientist Should Know About Floating-Point Arithmetic says (on page 46) "Suppose that Σ_{j=1}^{N} x_j is computed using the following algorithm ... Then the computed sum S is equal to Σ x_j(1 + δ_j) + O(Nε^2) Σ|x_j|, where |δ_j| ≤ 2ε." (ε is the machine epsilon.) So there seems to be a linearly growing term, though with a small multiplier. Ossi (talk) 11:46, 12 March 2010 (UTC)


 * The corresponding forward error bound (as explained in Higham) is:


 * $$|\mathrm{error}| \leq \left[ 2\varepsilon + O(n\varepsilon^2) \right] \sum_{i=1}^n |x_i| $$


 * So, up to lowest order O(ε), the error doesn't grow with n, but you're right that there is a higher-order O(ε^2) term growing with n. However, this term only shows up in the rounded result if nε > 1 (which for double precision would mean n > 10^15), so (as pointed out in Higham, who apparently assumes/knows that the constant factor in the O is of order unity), the bound is effectively independent of n in most practical cases. The relative error |error|/|sum| is also proportional to the condition number of the sum: $$(\sum |x_i|) / \left|\sum x_i\right|$$  Accurately performing ill-conditioned sums requires considerably more effort (see above section).
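 * To see concretely what the condition-number caveat means, here is a small sketch (my own numbers): the sum below has condition number about 2×10^16, roughly 1/ε for double precision, and compensated summation loses the answer entirely while Python's exactly rounded fsum recovers it.

```python
import math

def kahan_sum(xs):
    # Textbook compensated summation, as in the article.
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

# Condition number (sum |x_i|) / |sum x_i| is about 2e16 here.
data = [1e16, 1.0, -1e16]
print(kahan_sum(data))   # 0.0: the 1.0 is lost despite compensation
print(math.fsum(data))   # 1.0: exact summation recovers it
```

This is consistent with the bound: a relative error of order ε times a condition number of order 1/ε permits 100% relative error.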


 * In contrast, naive summation has relative errors that grow at most as $$O(\varepsilon n)$$ multiplied by the condition number, and cascade summation has relative errors of at most $$O(\varepsilon \log n)$$ times the condition number. However, these are worst-case errors when the rounding errors are mostly in the same direction and are pretty unlikely; the root-mean-square case (for rounding errors with random signs) is a random walk and grows as $$O(\varepsilon \sqrt{n})$$ and  $$O(\varepsilon \sqrt{\log n})$$, respectively (see Tasche, Manfred and Zeuner, Hansmartin. (2000). Handbook of Analytic-Computational Methods in Applied Mathematics Boca Raton, FL: CRC Press).


 * I agree that a more detailed discussion of these issues belongs in the article. — Steven G. Johnson (talk) 15:57, 12 March 2010 (UTC)


 * Does the section I just added clarify things? — Steven G. Johnson (talk) 23:52, 12 March 2010 (UTC)


 * Yes, your section is very good and easily understood. I wonder if we should mention something about the error bounds in the introduction too. It currently mentions the growth of errors for naive but not for Kahan summation, which seems a bit backwards. Ossi (talk) 05:31, 13 March 2010 (UTC)


 * The introduction already says "With compensated summation, the worst-case error bound is independent of n". — Steven G. Johnson (talk) 16:43, 13 March 2010 (UTC)


 * Okay, I just missed that. Ossi (talk) 23:23, 13 March 2010 (UTC)

quadruple precision
At the end of the Example Working section it says "few systems supply quadruple precision". I think that the opposite is now true: most systems supply quadruple precision (i.e. 128 bit floating point). Does anyone disagree? McKay (talk) 06:25, 11 February 2011 (UTC)


 * Very few systems support it in hardware. A number of compilers for C and Fortran (but not all by any means) support it in software, and it is absent from many other languages; it depends on how you define "most". — Steven G. Johnson (talk) 02:34, 12 February 2011 (UTC)

Note that compliance with IEEE 754-2008 (the nearest I can think of to the meaning of 'supply') is a property of the system and can be hardware, software or a combination of both. Now if we can define 'system' and 'most' we might be able to resolve this :-) — Preceding unsigned comment added by 129.67.148.60 (talk) 18:19, 18 May 2012 (UTC)

Python's fsum
It's claimed in the article that python's fsum uses a method by Shewchuk to attain exact rounding. However, based on Shewchuk's paper it would seem that the sum is not always rounded exactly. Does anyone have an opinion on how to better describe fsum's accuracy? Ossi (talk) 14:43, 15 March 2012 (UTC)


 * Shewchuk's paper describes both "exact addition and multiplication" algorithms and "adaptive precision arithmetic" that satisfies any desired error bound, hence my understanding is it can be used to compute results more precise than double which can therefore be exactly rounded. The Python fsum documentation says that it achieves exactly rounded results as long as the CPU is IEEE round-to-even double precision (and makes at most a one bit error for CPUs running in extended-precision mode).  Looking at the source code seems to confirm that they do indeed guarantee that the routine "correctly rounds the final result" (given IEEE arithmetic, and not including overflow situations) and provides some more information about the algorithms.  In particular, it seems to use a version of what Shewchuk's paper calls the "FAST-EXPANSION-SUM" algorithm. — Steven G. Johnson (talk) 17:53, 15 March 2012 (UTC)


 * Thank you for the link. It seems to me, based on the source code, that it's just "GROW-EXPANSION" rather than "FAST-EXPANSION-SUM" which is used, but that is not important here. I didn't doubt that Shewchuk's algorithms could produce an exactly correct expansion. The reason I thought the sums wouldn't always be exactly rounded is that Shewchuk only discusses methods to approximate the expansions with a single floating point number in a way which isn't always exactly rounded (e.g. using COMPRESS). Python's source seems to use a method which is not from Shewchuk's paper. This was the source of my confusion. Ossi (talk) 00:07, 24 March 2012 (UTC)
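 * The exactly rounded behaviour described in the fsum documentation is easy to illustrate with standard-library calls only:

```python
import math

# Each double 0.1 is slightly above 1/10; their true sum is just over 1,
# and the correctly rounded result is exactly 1.0.
xs = [0.1] * 10
print(sum(xs))        # naive left-to-right summation drifts below 1.0
print(math.fsum(xs))  # 1.0: correctly rounded sum of the stored values
```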

Kahan-Babuška variation
See: http://cage.ugent.be/~klein/papers/floating-point.pdf --Amro (talk) 18:56, 30 June 2013 (UTC)

Rounding errors all in the same direction
The article currently has a sentence:

"This worst-case error is rarely observed in practice, however, because it only occurs if the rounding errors are all in the same direction."

I'm not sure whether it's worth changing, let alone removing, the sentence, but it is remarkably easy to accidentally end up with all rounding errors in the same direction when doing naive summation, especially with single-precision floating-point numbers.

For example, adding the value 1 to an initially zero accumulator 1,000,000,000,000 times will result in a value of 16,777,216, which is 2^24, because as soon as it reaches 2^24, (2^24)+1 rounds back down to 2^24, so it stays there forever. This sort of catastrophic roundoff error happens pretty much whenever summing 2^24 or more values of the same sign in single-precision. I've hit it in several different real-world applications before, and it's a case where either increased precision or Kahan summation is absolutely necessary. Ndickson (talk) 23:04, 25 July 2015 (UTC)
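The saturation effect is easy to reproduce without waiting through 2^24 additions, by simulating single precision with the standard struct module (a sketch; the helper name is mine):

```python
import struct

def f32(x):
    # Round a Python double to the nearest single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

# Naive accumulation: once the accumulator reaches 2^24, adding 1 is a no-op.
acc = f32(2.0 ** 24)
for _ in range(1000):
    acc = f32(acc + 1.0)   # (2^24) + 1 rounds back down to 2^24
print(acc == 2.0 ** 24)    # True: the accumulator is stuck

# Kahan summation in the same simulated precision keeps making progress:
# the lost 1.0 is captured in c and added back on the next iteration.
s, c = f32(2.0 ** 24), 0.0
for _ in range(1000):
    y = f32(1.0 - c)
    t = f32(s + y)
    c = f32(f32(t - s) - y)
    s = t
print(s == 2.0 ** 24 + 1000)  # True
```

Every floating-point operation is rounded to single precision via f32, so this mirrors what a float accumulator does in compiled code.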


 * For what it's worth, my practice is to have the summation decide on a working average early on, say the first supplied number (or the average of some early numbers), and then work with summing (x(i) - w), in the hope (pious) that the resulting summation would not wander far from zero and thereby provoke the misfortune you mention. If seriously worried over the quality of the estimate of w, the approach would be to perform the summation in two passes. The idea here is that many positive numbers are being summed, all say around 12,345, so that otherwise the total just keeps on increasing - until it doesn't. The problem arises much sooner with single-precision numbers of course. NickyMcLean (talk) 11:16, 7 November 2015 (UTC)

comment
The Neumaier listing is missing a "next i".

I have found improvement in real*8 with g77 only up to 10^19, not up to 10^100

pietro 151.29.209.13 (talk) 10:36, 28 September 2017 (UTC)

bogus "enhancements"
NeumaierSum as presented can be much worse than KahanSum. In particular its sum is just input[0]+input[1]+... If say all your inputs are 1, first you saturate sum, then you increase c until it has the same value as sum, and that's it, later additions are ignored, so you only gained 1 bit of precision compared to plain arithmetic... Maybe call them "alternatives" instead? — Preceding unsigned comment added by Mglisse (talk • contribs) 08:34, 21 February 2021 (UTC)


 * It will saturate in such cases. One can also find some bad cases for KahanSum. For example, add this sequence of numbers 10000 times: [1; 10^100; 1; -10^100]. The result should be 20000, but KahanSum will return zero, while the Neumaier sum will return 20000.
 * The issue of saturation could be avoided by periodically recalculating sum and c as
 * [sum, c] = Fast2Sum(sum, c)
 * which also can be made as a part of the algorithm 0kats (talk) 06:17, 1 February 2023 (UTC)
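The two behaviours can be checked directly with a Python transcription of the article's two algorithms (the function names are mine):

```python
def kahan_sum(xs):
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y    # captures low-order part of y, but not of s
        s = t
    return s

def neumaier_sum(xs):
    s, c = 0.0, 0.0
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x   # low-order digits of x were lost
        else:
            c += (x - t) + s   # low-order digits of s were lost
        s = t
    return s + c               # apply correction once, at the end

data = [1.0, 1e100, 1.0, -1e100] * 10000
print(kahan_sum(data))     # 0.0: each lost 1.0 is never recovered
print(neumaier_sum(data))  # 20000.0: the losses accumulate in c
```

Kahan's version fails here because the large summand arrives when sum is small, so it is sum's low-order digits that are lost, which its compensation step does not track; Neumaier's branch handles both cases.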

differences between decimal and binary
A "Citation needed" was added October 2023 by Vincent Lefèvre saying "Not obvious as there are differences between decimal and binary. For instance, Fast2Sum, when used with |a| ≥ |b|, is not always an error-free transform in decimal." The sentence marked is "Computers typically use binary arithmetic, but the principle being illustrated is the same." in the "Worked example" section.

@Vincent Lefèvre, I'm not sure I understand what you mean by "transform" in this context, and the principles are the same regardless of "error-free" (c is only an estimate of the error). The "principle being illustrated is the same" statement doesn't mean that the results would be the same regardless of arithmetic model, but that the algorithm is the same regardless of radix and thus the illustration can use decimal.

To me, the statement is obvious, to the point where it could be removed if it causes confusion.
 * The algorithm does not imply any radix, thus mathematically radix is of no importance.
 * The algorithm does not require computer calculations, thus binary is not appointed the norm.
 * The original publication does hint at the use of computers, but the actual environment is "finite-precision floating-point numbers" (from the article's lead), where "finite" is the source of the problem and "floating-point" is the workaround. A requirement for the "trick" is for the arithmetic to "normalize floating-point sums before rounding or truncating" and that is indeed the case in the example. What it means is that there are always enough valid "digits" in the temporary result, so that normalizing the exponent does not introduce errors.

With all this considered, I became brave and edited the section myself. I hope it clarified everything, because I also removed the "citation needed". JAGulin (talk) 09:54, 7 February 2024 (UTC)
 * @JAGulin: The term is "error-free transform" or "error-free transformation" (BTW, there should be a WP article on this, as this term has been used in the literature since at least 2009 and the concept is much older, see e.g. Rump's article Error-Free Transformations and ill-conditioned problems).
 * "The algorithm does not imply any radix, thus mathematically radix is of no importance." is a tautology. The fact the algorithm does not imply any radix is not obvious, and may even be regarded as incorrect since it uses the Fast2Sum algorithm, which is an error-free transform (when the condition on the inputs is satisfied) only in radix 2 or 3. That said, Fast2Sum is used here only as an approximation algorithm (since the condition on the inputs may not be satisfied), but anyway, a radix-independent error analysis would still be needed.
 * The algorithm is specifically designed to work with floating-point arithmetic in some fixed precision, so it requires a computer or an abstract machine that would behave like a computer; but as computers typically work in radix 2, algorithms based on floating-point features (like Fast2Sum here) often implicitly assume this radix. The cited article mentions "double-precision". Nowadays, this implies radix 2. Perhaps this wasn't the case in the past. I don't know. But we would need to make sure that Kahan wasn't implicitly assuming binary because this is what were on most computers when he used his algorithm. — Vincent Lefèvre (talk) 13:36, 7 February 2024 (UTC)