Talk:Methods of computing square roots

Reciprocal of the square root
This piece of code is a composite of a quirky square-root starting estimate and Newton's method iterations. Newton's method is covered elsewhere; there's a section on Rough estimate already - the estimation code should go there. Also, I first saw this trick in Burroughs B6700 system intrinsics about 1979, and it predated my tenure there, so it's been around a long time. That's well before IEEE 754 was drafted and published in 1985. Since the trick is based on a linear approximation of an arc segment of x², which in the end is how all estimates must be made, I'm certain that the method has been reinvented a number of times.

There are numerous issues with this code:
 * the result of type punning via pointer dereferencing in C/C++ is undefined
 * the result of bit-twiddling floating point numbers, bypassing the API, is undefined
 * we don't know whether special values like zero, infinity and denormalized floating point numbers, or big/little-endian formats, are correctly handled.
 * It definitely won't work on architectures like Unisys mainframes, which use 48/96-bit floats, or on 64-bit IEEE floats. Restructuring the expression to make it work in those cases is non-trivial.
 * since I can readily find, by testing incremental offsets from the original, a constant which reduces the maximum error, the original constant isn't optimal; it probably resulted from trial and error. How does one verify something that's basically a plausible random number? That it works for a range of typical values is cold comfort. (Because its only use is as an estimate, maybe we don't actually care that enumerable cases aren't handled(?)... they'll just converge slowly.)
 * because it requires a multiply to get back a square root, on architectures without fast multiply, it won't be such a quick estimate relative to others (if multiply and divide are roughly comparable, it'll be no faster than a random seed plus a Newton iteration).

I think we should include at least an outline of the derivation of the estimate expression, thus: a normalized floating point number is basically some power of the base multiplied by 1+k, where 0 <= k < 1. The '1' is not represented in the mantissa, but is implicit. The square root of a number near 1, i.e. 1+k, is (as a linear approximation) 1+k/2. Shifting the fp number represented as an integer down by 1 effectively divides k by 2, since the '1' is not represented. Subtracting that from a constant fixes up the 'smeared' mantissa and exponent, and leaves the sign bit flipped, so the result is an estimate of the reciprocal square root, which requires a multiply to convert back to the square root. It's just a quick way of guessing that gives you 1+ digits of precision, not an algorithm.

That cryptic constant is actually a composite of three bitfields, and twiddling it requires some understanding of what those fields are. It would be clearer, at the cost of a few more operations, to do that line as a pair of bitfield extracts/inserts. But we're saving divides in the subsequent iterations, so the extra 1-cycle operations are a wash.

Undefined behaviour
The examples using unions are invalid C, as they invoke undefined behaviour. An easy solution that is probably even clearer for the purpose of example code would be to use memcpy, e.g.

float f;
uint32_t u;
memcpy (&u, &f, sizeof (u));

37.49.68.13 (talk) 13:08, 1 April 2022 (UTC)


 * Type punning with unions is undefined in C++, but not in C. This is a topic of much confusion.
 * The following is pulled from a footnote around section 6.5.2.3 of the C17 standard:
 * "If the member used to read the contents of a union object is not the same as the member last used to store a value in the object, the appropriate part of the object representation of the value is reinterpreted as an object representation in the new type as described in 6.2.6 (a process sometimes called “type punning”). This might be a trap representation."
 * This basically says, 'you may use a union to reinterpret the bits of one type as another, but we're not going to promise that the new interpretation will be valid'
 * I will say that the C code in this article is rather clunky and may benefit from a bitfield to separate the different sections of the float representation so it is easier to read and understand, but I will have to flatly disagree with you that it is more appropriate than a union in this code snippet. WillisHershey (talk) 17:24, 25 September 2023 (UTC)

Lucas sequence method - original research?
I could not find any relevant research papers on the use of Lucas sequences for computing real square roots. The closest I found is

G. Adj and F. Rodríguez-Henríquez, "Square Root Computation over Even Extension Fields," in IEEE Transactions on Computers, vol. 63, no. 11, pp. 2829-2841, Nov. 2014, doi: 10.1109/TC.2013.145.

which is concerned with square roots in finite fields and uses a different algorithm. Should this paragraph be removed as original research? Or it could also simply be made much shorter, by not repeating the properties of Lucas sequences. BlueRavel (talk) 23:27, 3 December 2023 (UTC)


 * I have searched, and I too failed to find any relevant source for this. It was posted into the article in 2009 without any explanation, by an editor who has never made any other substantial contribution, just one other very small edit. It looks as though it may well be original research, but whether it is or not, it is unsourced, so I have removed it. JBW (talk) 21:54, 5 December 2023 (UTC)

Merge "Approximations that depend on the floating point representation" into "Initial estimate"
I believe the section "Approximations that depend on the floating point representation" should be merged into "Initial estimate", since it is a special case of "Binary estimates". Merging would clear up the fact that the floating point trick gives an initial rough approximation, which is then typically iteratively improved.

I also believe the "Initial estimate" section should appear after the section on Heron's method, as the reader is likely more interested in the general idea of iterative refinement than in the details of how to obtain a good initial estimate in all possible ways.

Additionally, in my opinion the entirety of the article could benefit from some trimming/rewriting, as many sections contain redundant information, unnecessary details, and awkward formulations. BlueRavel (talk) 14:54, 4 December 2023 (UTC)


 * Your proposition makes sense to me, and I don't necessarily disagree. That said, though, as a pure mathematician, I am disinclined to blur the lines between programming issues and mathematical problems.  I think maintaining a distinction is appropriate.  An analysis of the pure mathematical problem of initial estimation in these abstract iterative processes is a decidedly distinct discussion from considerations in this programming language, or that programming language, or this architecture, or that architecture. The former is future-proofed, the latter is not. CogitoErgoCogitoSum (talk) 21:09, 11 February 2024 (UTC)

Useful addition??
Not sure if it's useful, but I have found that, in general, $$\sqrt{x+2} \approx \frac{x+1}{\sqrt{x}}$$, and if $x=n^2$ we get $$\sqrt{n^2+2} \approx n + \frac{1}{n}$$.

Similarly $$\sqrt{x+4} \approx \frac{x+2}{\sqrt{x}}$$.

I sometimes use this for quick pencil-and-paper calculations, if I'm close enough to a convenient value.

Not sure if this is a known or established property, proven, bounded, or if it's already in the article in some alternative capacity, or if it's even appropriate for this article. I do know the Taylor series approximation with two terms connects these expressions. CogitoErgoCogitoSum (talk) 21:05, 11 February 2024 (UTC)
 * There is nothing special about 2 and 4: $$\sqrt{x+2c} \approx \frac{x+c}{\sqrt{x}}$$ provided that c is small compared to x. This is, in fact, just the first two terms of the series given in the article under the section heading "Taylor series". JBW (talk) 01:45, 13 February 2024 (UTC)
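Explicitly, keeping only the first two terms of the binomial series for small $c/x$:

$$\sqrt{x+2c} = \sqrt{x}\,\sqrt{1+\frac{2c}{x}} \approx \sqrt{x}\left(1+\frac{c}{x}\right) = \frac{x+c}{\sqrt{x}},$$

and taking $c=1$ or $c=2$ recovers the two approximations above.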


 * I don't think they are useful. In the first, you have replaced a square root and an addition with a square root, an addition, and a division to get an approximate answer.  Bubba73 You talkin' to me? 08:02, 13 February 2024 (UTC)