Talk:Factor (Unix)


Range

The current article text states that

In 2008, GNU factor started to use GNU MP library for arbitrary precision arithmetic, allowing it to factor integers of any length, not limited by the machine's native integer type.

The “integers of any length” claim is probably right, but the “native integer type” part is definitely wrong, as factor was never limited to the range of its host platform's native integers. On the 16-bit PDP-11, whose native integers had maxima of, at best, 2^16 or 2^32 (unsigned), the Version 6 Unix man page for factor (here) documents a maximum input value of 2^56.

Without checking the Version 6 Unix source code, my very strong expectation is that the original Bell Labs factor was coded to use floating-point representations and operations. First, the PDP-11 FP11 “double” format (shown e.g. here) uses 56 bits of precision (55 bits stored, plus the 1 bit implicit in all normalized non-zero binary fractions), matching the 56-bit limit of the early factor. Second, my own experience with various Unix flavors in the 1980s and early 1990s (very literally “original research”) showed that the performance of factor, seemingly an integer-only program, was very strongly affected by the presence or absence of floating-point hardware in the host processor.
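As a rough illustration of why the significand width caps the exact-integer range, here is a minimal C sketch (my own, not anything from the Research Unix sources) of the general principle: a binary floating-point format with p bits of precision represents every integer up to 2^p exactly, but not every integer beyond it. It uses the modern IEEE 754 double, where p = 53; on the FP11 double, with p = 56, the same argument gives the 2^56 ceiling documented for the early factor.

    #include <stdio.h>

    int main(void)
    {
        /* 2^53 is the last point at which consecutive integers are still
           exactly representable in an IEEE 754 double (53-bit significand). */
        double at_limit   = 9007199254740992.0;   /* 2^53                */
        double past_limit = 9007199254740993.0;   /* 2^53 + 1, not exact */

        printf("2^53     stored as a double: %.0f\n", at_limit);
        printf("2^53 + 1 stored as a double: %.0f\n", past_limit);
        return 0;
    }

On an IEEE 754 machine both lines print 9007199254740992, because 2^53 + 1 has no exact double representation and rounds back down; the analogous experiment on an FP11 would show the gap opening just above 2^56.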

I'll tweak the article wording to remove the incorrect reference to native integer format, but for now, with no worthy source to cite, I won't specifically mention floating-point.

50.181.30.121 (talk) 04:06, 7 April 2014 (UTC)