Talk:Arbitrary-precision arithmetic

Oldest comments
Didn't you want to talk about big floating-point numbers? Is anyone interested?

I moved the following HTML comments from the article source over to this talk page. They may be more useful here:



&mdash;Herbee 20:57, 23 April 2006 (UTC)

The newly-added Infinite Precision section.
Shouldn't this be arbitrary precision as well? I'm not familiar with the work being referenced, but I'm fairly certain that there is no such animal. Trying to get infinite precision off of 4/3 * pi just isn't happening on computers with finite memory any time soon. SnowFire 16:16, 22 May 2006 (UTC)


 * I agree. It's the usual confusion between "arbitrarily large" and "infinite". —Steven G. Johnson 16:45, 22 May 2006 (UTC)


 * The text basically describes symbolic algebra. Fredrik Johansson 16:52, 22 May 2006 (UTC)


 * You're right, upon closer reading symbolic algebra seems to be what was intended by the text. It's still misleading to call it "infinite precision", however, since (a) the symbolic expressions have arbitrary (limited by memory) but finite size, which directly corresponds to a form of finite precision, and (b) you're not really doing "arithmetic" until you calculate the numeric result, and that is done with arbitrary but finite precision.  Besides, if you Google for "infinite precision arithmetic", it seems that this term is mainly used as a synonym for arbitrary precision.  —Steven G. Johnson 16:59, 22 May 2006 (UTC)

I agree with all of the above given suitable context, but not with the text in the article which reads "...infinite-precision arithmetic, which is something of a misnomer: the number of digits of precision always remains finite...". I would argue that any representation for which there is a bijective mapping onto the set being modelled is precise. It's not infinite precision that's the problem, it's trying to represent elements of an infinite set on a finite state machine. IMHO, it is not an error to describe bignum arithmetic as "infinite precision" if it is operating over a finite set, an obvious example being the integers modulo N, for not-extremely-large N. --Will


 * Sure, and a one-bit integer is "infinite precision" in your sense, as long as you are only trying to represent the set {0,1}. As far as I can tell, however, that's simply not the way the term "precision" and especially "infinite precision" is used in the setting of numeric computation.  In this context, the "precision" of a number is the number of significant digits that are specified (or some equivalent measure of information content).    (Yes, the general English meaning is more vague than that...most English words are, but that doesn't prevent them from having a more, ahem, precise meaning in a technical context.) I think the meaning that you are describing would be better termed "perfect precision". —Steven G. Johnson 04:03, 28 May 2006 (UTC)


 * If a number is represented in a format which provides n base-d digits of precision (even this makes unnecessary assumptions about representation), then the error is proportional to d^-n. Thus n = -log(error), with n going to infinity as error goes to zero.  I would have said that if the error is exactly zero for all operations and for all operands, then the precision is infinite.  This is applicable to the modular arithmetic case (including the n=1 case you mentioned), but not for general symbolic algebra systems, where the size of the expression tree is artificially bounded by the machine's available state; I therefore don't agree with the section which keeps appearing.  I'm not saying that "infinite precision" is a more appropriate term than "arbitrary precision" for finite number systems, merely that it's not inaccurate.  But on reflection perhaps I was misinterpreting the sentence; which on re-reading is perhaps only dismissing it as a misnomer when used as a synonym for arbitrary precision. --Will (86.141.151.165 11:06, 28 May 2006 (UTC))
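That digits/error relation is easy to demonstrate numerically. A minimal sketch in Python (π and radix 10 are arbitrary choices for illustration):

```python
# Rounding to n significant base-10 digits leaves a relative error of order
# 10**-n, so the precision can be read back off as n ≈ -log10(error).
import math

x = math.pi
for n in (4, 8, 12):
    rounded = float(f"{x:.{n}g}")       # keep n significant digits
    rel_err = abs(rounded - x) / x
    assert rel_err < 10 ** (1 - n)      # error shrinks like d**-n, here d = 10
    print(n, f"{rel_err:.1e}")
```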


 * Be careful not to confuse accuracy/error with precision. 3.000000000000000 is very precise, but it is not a very accurate representation of, say, π.  —Steven G. Johnson 18:54, 28 May 2006 (UTC)

You are wrong. 4/3*pi is a computable number, thus it can be represented in a finite manner in the memory of the computer. There is an example in the references section. Sounds like you don't know what you're talking about.  Grue  06:17, 28 May 2006 (UTC)


 * It's the same thing that happens if I type 4/3*Pi in Mathematica. Symbolic algebra: expressions evaluate to expressions, with pointers to code to compute the atomic elements numerically with any desired precision. The only difference between Mathematica and the Lisp library is that the latter shows a numerical value instead of the underlying expression in the REPL. Fredrik Johansson 09:43, 28 May 2006 (UTC)


 * The set of computable reals is infinitely large. The set of states for a machine running mathematica (or any similar package) is finite.  Therefore there exist pairs of distinct computable numbers which have identical representation within the machine.  Where this happens, there is nonzero error.  You can't have infinite precision if there's error. --Will (86.141.151.165 11:16, 28 May 2006 (UTC))
 * No, some computable numbers are just too complicated to fit in memory and they can't be represented. So, only a finite set of different computable numbers fits in any given computer. As for symbolic computation, I think there are some noticeable differences. In the case of π, there is a symbol for the number, but many computable numbers have no symbol, nor can they be represented as a finite composition of arithmetic operations on numbers that have a symbol (there are also numbers that have a symbol but are not computable, like Chaitin's constant).  Grue   12:44, 28 May 2006 (UTC)
 * Grue, this article is about arithmetic and that means computation of a numeric result. By definition, every computable number can be represented by a finite-length (though possibly very large) computer program that computes it to any arbitrary (but not infinite) precision in finite time.  The representation of a number by a computer program is perfectly precise in the sense of determining a given real number uniquely, but does not provide infinite-precision arithmetic.  —Steven G. Johnson 19:15, 28 May 2006 (UTC)
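The point about computable numbers can be made concrete with a short sketch (standard library only): a finite program that produces sqrt(2) to any requested number of decimal digits. The representation (the program) determines the number uniquely, yet every answer it emits has arbitrary, finite precision.

```python
# A computable number as a finite program: sqrt(2) to any requested number of
# decimal digits, using only Python's arbitrary-precision integers.  The
# program is finite; every answer it produces has finite precision.
from math import isqrt

def sqrt2_digits(n):
    """Return sqrt(2) truncated to n decimal places, as a string."""
    s = isqrt(2 * 10 ** (2 * n))    # floor(sqrt(2) * 10**n)
    d = str(s)
    return d[0] + "." + d[1:]

print(sqrt2_digits(30))
```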

Table needs cleanup
The table of arbitrary precision libraries needs improvement. For one thing, "Number Type" for several of the libraries does not actually list what number types are provided. It would also be useful to include a short overview of what features each library provides in addition to just the number types. For example, GMP implementing highly optimized low level operations, MPFR providing correctly rounded transcendental functions, etc. Fredrik Johansson 13:46, 3 October 2008 (UTC)


 * Those sound like useful suggestions, but I would refrain from saying that one library or another is "highly optimized" in the absence of benchmarks comparing several of the available libraries.  When someone claims that their code is "highly optimized" it usually means "I made it much faster than my first implementation," which doesn't mean that there isn't a completely different implementation that is much faster.  Put in terms of Wikipedia policy, a project's web site claiming that it is fast is not a reputable source unless they have published benchmarks backing them up.  —Steven G. Johnson (talk) 16:35, 3 October 2008 (UTC)

Proposed addition to list of arbitrary-precision libraries
I have an arbitrary-precision library I'd like to add to the list of arbitrary-precision libraries, but it would be a conflict of interest to add it myself, so I'm proposing that someone else add it, per the instructions here:

WP:SCOIC

The list I'm referring to is the table toward the end, where the first item in the list is "apfloat".

My understanding is that it's in the general Wikipedia interest for the list to be more rather than less complete, and that therefore any useful arbitrary-precision library should be included.

Also, the library I'd like to add to the list, xlPrecision, is unique in that it is designed to be easy and intuitive for Office VBA programmers (especially Excel VBA programmers) who may have no knowledge of C, C++, Java, or any of the other languages represented in the list, and who therefore might have difficulty using the other libraries, at least in the short term.

Here are the lines I'd like to see added to the list:


 * xlPrecision
 * Decimal
 * VBA
 * Commercial

(Sorry, I'm not sure how to format those lines to appear correctly. The way they appear in "edit this page" is the way they would be added to the table.)

Thanks, and let me know if you have any questions for me.

Greg (talk) 02:20, 22 February 2009 (UTC)

W3bbo's edits
W3bbo just now changed several mentions of "floats" to "decimals". This is incorrect, as most arbitrary-precision libraries use binary, not decimal, floats. Binary and decimal arithmetic have very different characteristics and purposes. Using "decimal" generically for floating-point numbers is sloppy, especially so for an article on a topic like this one. Fredrik Johansson 11:26, 4 September 2009 (UTC)


 * I agree. The term "decimals" implies that the calculations occur using decimal arithmetic.  I suspect that many arbitrary-precision packages use a binary representation (mantissa plus exponent) and only support decimal for input/output purposes.  I support changing the references to "decimals" back to "floats".  If some packages are shown to actually use decimal arithmetic internally, the term "decimals" would be appropriate for those.  -- Tcncv (talk) 23:36, 6 September 2009 (UTC)


 * As a follow-up, I took a quick look at a half-dozen packages and, based on a scan of their web pages, documentation or source code, I found three that appear to use a decimal-power radix (apfloat, MAPM, and W3b.Sine) and three that use binary (ARPREC, mpmath, TTMath). While this is only a small sample, it does indicate a mix of representations, enough to justify undoing W3bbo's change.  The original term "floats" is radix neutral, and if someone has time to research all of the packages, the more precise terms "decimal floats" or "binary floats" could be used. -- Tcncv (talk) 01:24, 7 September 2009 (UTC)


 * I made an attempt at updating the number types. It probably still needs work.  Some of the long descriptions have made me consider whether it might be better to break the number type description into several columns.  One column could list the general representation (integer, float, rational, real), another could simply indicate built-in complex number space support (yes or no), and a third could indicate the internal radix used (decimal or binary).  That might simplify descriptions like "decimal float and decimal complex float", while retaining all of the information.  -- Tcncv (talk) 02:23, 7 September 2009 (UTC)


 * I suppose decimals suggests decimal arithmetic, but I think implies is too strong. Fortran, for example, uses SELECTED_REAL_KIND to specify desired precision in decimal digits. It does this without suggesting that the underlying arithmetic system is decimal.  (More recently, one is allowed to specify the radix, but precision is still in decimal digits.)  PL/I allows specification of precision in either binary or decimal digits, again without specifying the underlying radix, though on machines with float decimal hardware, it is a convenient way to ask for it.  In the fixed point case, PL/I allows for a binary or decimal scaling factor, which again it does without specifying the underlying radix. This can result in much multiply and divide by powers of ten, when not in decimal. Gah4 (talk) 23:50, 29 August 2019 (UTC)
 * That only selects from the available fixed-length precisions. It doesn't let you set it to whatever you need.  Bubba73 You talkin' to me? 00:30, 30 August 2019 (UTC)


 * Yes. But also, there is no restriction on the available precisions. In the fixed decimal case, IBM allows odd digits from 1 to 15. PL/I in the binary case takes the digits you specify, multiplies by 3.32, and rounds up to the next available size. Fortran allows for any integer radix greater than one. Specifying in decimal digits isn't too restrictive in most cases, even when the radix isn't 10. Gah4 (talk) 04:38, 30 August 2019 (UTC)


 * I haven't used Fortran since Fortran 77, so I haven't used a version with this feature. But I read about it, and it sounds like you put in what precision you need, and it picks the first one that is built into the system that meets or exceeds it. If you ask for a precision higher than a native one, it returns an error (from what I read).  Bubba73 You talkin' to me? 16:58, 30 August 2019 (UTC)

unbounded integers versus unlimited precision of floats
I found reading this article very confusing, since no distinction seems to be made between integer and floating point computations. Unbounded integers can of course be used to implement floating point arithmetic with a mantissa of varying length, but the two are not the same. Often one needs unbounded integers without any need to do floating point arithmetic; the factorial example is an illustration. In fact much of this article seems to be directed mostly at the case of integers, but it never says so clearly.

Personally I find the notion of "precision" not suited to apply to integers, although I can see that the precision article says it does apply (and implies that a machine integer representing the number 17 on a 64-bit machine does so with more precision than one on a 32-bit machine). But even if I accept that the current article should apply to the case of representing integers of arbitrary length, then I insist that it should talk about that and about representing real numbers with extensible precision separately. The integer case falls into the category of exact algebraic computation, but the real number case fundamentally cannot. The latter case involves problems that do not apply to the former, such as deciding just where to stop the development of decimals if a number sqrt(2) is used within a computation. The article would in my eyes improve a lot if these issues were addressed. Marc van Leeuwen (talk) 13:42, 6 September 2009 (UTC)


 * I agree that talk of precision when applied to integers is odd and shouldn't occur, and any such occurrences should be rephrased. However, with floating-point (or fixed point but with fractional parts) there can be mergers. For instance, in the computation of numbers such as pi, e, sqrt(2), the digit array of integers would represent the number with an integer part and a fractional part with the position of the split understood. Thus, in base 100, pi would be stored as the integers (03),(14),(15),(92),(65), (35), (89), (79), etc. in an array of integers, which array should start with index zero to correspond directly to the (negative) powers of the base (though some computer languages insist on a starting index of one), and there is an implied point after the first digit. This particular number is known to not exceed 99, and so there is no need to have higher order integer parts in its representation and during calculation. More generally, the point would have to "float" about, and some scheme for locating it in a general number would be needed. So, there are multi-element big numbers for pure integer calculations (no fractional part), whereas if fractional parts are needed then either a pair of integer values in p/q style or else a direct fractional representation of the number is used for fixed-point calculations and floating-point calculations.
 * All such numbers are indeed real numbers (actually, only rational numbers) but only with the pure integer forms are the axioms of real arithmetic followed (until overflow at the high-order end), since all lengths are finite and even an arbitrary length is still finite. The methods and representations are the same for a pure integer or a fixed-point or a floating-point number. Computations with fractional numbers will very quickly cause overflow in the low-order end due to the precision requirement of real arithmetic, as with 1/3 in base 100 when not using a rational number representation.
 * Once an external decision is made as to how many fractional digits shall be held, much proceeds as if the number is an integer with a known power offset, as in the fixed-point approach, and for floating-point, the accountancy is explicitly performed. NickyMcLean (talk) 21:12, 6 September 2009 (UTC)
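The base-100 digit-array scheme described above can be sketched as follows (the function names are illustrative, not from any particular package; the point is implied after the first element):

```python
# Minimal sketch of the base-100 digit array described above: element 0 holds
# the integer part, elements 1.. hold the base-100 "digits" of the fraction.
BASE = 100
pi_digits = [3, 14, 15, 92, 65, 35, 89, 79]   # pi ≈ 3.14159265358979

def to_float(digits):
    """Evaluate the digit array as an ordinary float (for display only)."""
    return sum(d * BASE ** -i for i, d in enumerate(digits))

def scale(digits, k):
    """Multiply the fixed-point number by a small integer k, exactly."""
    carry = 0
    out = digits[:]
    for i in range(len(out) - 1, -1, -1):     # low-order end first
        carry, out[i] = divmod(out[i] * k + carry, BASE)
    assert carry == 0    # a final carry would extend the integer part; pi*2 < 100
    return out

print(to_float(scale(pi_digits, 2)))   # ≈ 2*pi
```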


 * One issue is that scaled fixed-point arithmetic isn't supplied by most programming languages, and isn't so well understood. If you want to calculate many digits of pi, it is much easier to do in scaled fixed point than in floating point: you know where the decimal (or binary) point is. PL/I is one of the few programming languages where REAL does not mean floating point, and constants with decimal points are not usually floating point constants.  PL/I, for example, does use the term precision for the total number of digits, before and after the radix point, in fixed point arithmetic.  Gah4 (talk) 00:03, 30 August 2019 (UTC)

example code
What's the n variable for in the example pseudo code? There's a loop, for n = 1 to 365, with no comment. One pass per day for a year? or 365! ?TomCerul (talk) 21:25, 6 July 2010 (UTC)
 * Being as much as possible an actual example, some number had to be chosen rather than waffle about with "N" or somesuch. In 1970 the annual NZ death count due to road accidents was about 360, or an average of one per day. At the time I wondered about the probability of all deaths occurring on the same day, a Poisson distribution calculation that required 365! which of course immediately overflowed the capacity of IBM1130 floating point arithmetic (32 bit), thus leading to the computation of log(365!) via Stirling's formula. Then I wondered about accuracy, and wrote a prog. to compute factorials via exact multi-precision arithmetic. Thus the 365. Cheers.NickyMcLean (talk) 21:37, 7 July 2010 (UTC)
 * hrm, list format help...
 * That was funny and morbid.
 * You didn't really answer my question
 * Your answer was close enough that you did kind of answer my question
 * Cheers!
 * TomCerul (talk) 20:47, 9 July 2010 (UTC)


 * Huh? What do you mean by note 2? Anyway, why introduce blather about "FactorialLimit" rather than a plain constant? Perhaps I should have added a comment to the line for n:=1 to 365 do saying something like "Last factorial to compute". Actually, the prog. will run into trouble with the number of digits needed - multiplying by 100 adds two digits, then by 101 adds a further two and a little, etc. so 365! will need a lot of digits. Thus the ending point is likely to be the croaked message "Overflow!". The initial version of the prog. I wrote in 1971 was set to quit once the number of digits filled one lineprinter line of 120 digits (allowing space for the "=nnn!"), though obviously, the number could be rolled over multiple lines if desired. Output to a disc file can produce a line of arbitrary length. NickyMcLean (talk) 04:22, 10 July 2010 (UTC)
 * The 365 was a Magic Number that my programmer OCD needed to label. :) Were you joking about that death rate math? TomCerul (talk) 17:51, 16 July 2010 (UTC)
 * Nope, I wasn't joking. I'd just been introduced to the Poisson distribution in applied mathematics and statistics and physics, for instance with regard to Geiger-counter behaviour or earthquake occurrence, and indeed at the time, the annual death rate on the New Zealand roads was about 360. There surely must be attention paid to the likely number of calls on emergency services in any area and allowances for some level of unusual surge in admissions, but my thought was of course rather extreme. Although 365 is a magic number, its use in this case was relevant, though the context was not given in the article. Actually, all integers are magical (or special, or interesting). For if not all, one would be the largest magical integer, say n, and as such n + 1 would also be magical by association. Amongst my favourites are 396, 1103, 9801, and 26390, all appearing in Srinivasa Ramanujan's astounding formula for 1/pi. NickyMcLean (talk) 01:41, 17 July 2010 (UTC)
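As a present-day footnote to the computation described above (a sketch, not the original 1971 program): Python's built-in integers are arbitrary-precision, so 365! can be computed exactly; converting it to an IEEE double overflows, much as it overflowed the IBM 1130's 32-bit floats, while log(365!) stays representable via the log-gamma function.

```python
# 365! exactly via bignum integers, and log10(365!) via log-gamma.
import math

f = math.factorial(365)                    # exact: Python ints grow as needed
print(len(str(f)))                         # number of decimal digits in 365!

log10_f = math.lgamma(366) / math.log(10)  # log10(365!), no overflow
print(round(log10_f, 2))

try:
    float(f)                               # 365! far exceeds the double range (~1.8e308)
except OverflowError:
    print("overflows IEEE double")
```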

Recent reverts
Everyone involved is advised to read Edit warring before this gets out of hand. Thanks! Guy Macon (talk) 21:55, 10 May 2011 (UTC)

dc
Text says "dc: the POSIX desk calculator". But dc doesn't seem to be part of the POSIX standard. See http://pubs.opengroup.org/onlinepubs/9699919799/idx/id.html for a list of everything in POSIX starting with 'd', like dd or df. 18:13, 12 September 2011 (UTC) — Preceding unsigned comment added by 187.23.87.80 (talk)


 * Then it should probably be changed to, which is part of the posix standard. AJRobbins (talk) 16:35, 21 May 2014 (UTC)

Examples in many systems
An editor has restored his link showing many systems doing one calculation without supporting it (which is his burden). The link does not add reasonable content; there is little doubt that different systems should compute the same answer; that they do it with slightly different presentations is not of interest here. Calling the examples a case study does not increase the value to WP. Furthermore, the article would not include such multiple examples; the link is not about something the article should have at some point in the future. Glrx (talk) 22:22, 25 November 2011 (UTC)


 * The link goes a long way in obviating the need to clutter the page with examples in many languages which often happens to pages like this. The case studied needs to be solved using the technique in question. How different languages present their implementations of APA is of interest to some readers of WP, and, furthermore, the links in WP articles can have merit in and of themselves and exist to show content not necessarily duplicated on the page or that should in future appear on the page.
 * "That they do it in slightly different presentations" is of great interest to those studying language design for example and such nuances are the proper content of an encyclopedia as well as being used on other wikipedia topics.
 * As for adding the link without supporting it, links are often added to wp with only an edit comment. It is standard practice. This link is in dispute and so is being discussed in the talk page. That one user is not interested in this aspect of the subject should not additionally taint the article. --Paddy (talk) 23:13, 25 November 2011 (UTC)

List of implementations
This article focuses more on implementations than APA. The list of implementations should be spun off to something like List of arbitrary-precision arithmetic implementations.

I would delete the Arbitrary-precision arithmetic section; it's about implementation choices, and it sounds more like extended precision than arbitrary precision.

The history section is computer-centric. The first algorithms (before computers) were about how to perform calculations -- stuff we were taught in grade school. You don't need a computer to do long division, but it does make sense to have a good trial divisor.

Knuth may have published The Art of Computer Programming (Vol 2 1969?) before White and Bill Gosper implemented bignums for Maclisp (Maclisp manual is 1979, but the implementation was earlier; arbitrary precision arithmetic was needed for Macsyma in the late 1960s). William A. Martin's 1967 thesis may have used bignums.

There's nothing about rationals.

Glrx (talk) 23:00, 25 November 2011 (UTC)


 * ✅. I agree, that section has degenerated into listcruft, I have split it out into List of arbitrary-precision arithmetic software. -- intgr [talk] 19:26, 13 November 2014 (UTC)
 * List or table isn't that big. The article isn't that big either.
 * There was no need for a split in 2011. Boh39083 (talk) 23:24, 18 November 2023 (UTC)

Link to article about undergrad project
There have been continuing additions/reverts of C-Y Wang's paper.


 * C.-Y. Wang et al., Arithmetic Operations Beyond Floating Point Number Precision, International Journal of Computational Science and Engineering, 2011, Vol. 6, No. 3, pp. 206-215.

The insertions have not argued the merit of the citation. This revert characterized the paper as a "Low relevance undergraduate project".

My talk page had the following discussion about this paper some time ago. Here is the text:


 * Please state the reasons that you removed the references for arbitrary precision after 2007 and only kept the original 2 references (without publish year)? Wiki said Please help improve this article by adding reliable references after 2007. The new papers cited reflect recent development and applications of arbitrary precision in scinetific and engineering fields. —Preceding unsigned comment added by 220.128.221.62 (talk) 19:29, 9 May 2011 (UTC)


 * The refs in arbitrary-precision arithmetic were removed because they were not used / cited in the article. They are after-the-fact additions that were not used to compile the original article and do not have inline citations pointing to them.
 * The notice at the top of the article is not asking for references after 2007. It states that in 2007 the article was marked because its text did not have inline citations to references. Nothing requires those inline citations to be post 2007.
 * Wikipedia is not a how to do it manual. WP:NOTHOWTO One of the references (one that appears to have a conflict of interest WP:COI with the editor) is a how to exercise for a class. Some other references appear to have little relevance to the main article; the article is not, for example, interested in FPGA implementations.
 * Glrx (talk) 20:14, 9 May 2011 (UTC)


 * Dear Mr/Ms. Glrx,
 * (1) Are you the editor of the page of arbitrary precision? (2) The papers written by academic researchers published in conference proceedings and journals as in the new references are very helpful to the wide reader in the world. (3) The texts can be edited, too, to reflect why those papers are necessary for readers with scientific and engineering backgrounds. Please read through all the papers before you brutally deleted them. (4) You seem to have a bad history of removing other people's editing, violating the value of Wiki. (5)I strongly object you being an editor if you are. Have not authored any journal or conference papers in the subjects of arbitrary precision? If not, please leave space for other experts. —Preceding unsigned comment added by Yuehwang (talk • contribs) 21:27, 9 May 2011 (UTC)

My take is the various 220.128.221.x editors have a WP:COI with the paper. Glrx (talk) 23:12, 24 December 2011 (UTC)


 * BTW, 220.12.221.59 also did this edit. Glrx (talk) 23:26, 24 December 2011 (UTC)

my edit which was reverted
OK, it is correct that software/list entries which don't have any article can be listed in such lists/comparisons, but then please (a) add an independent, third-party, reliable reference to show that the software exists and is notable, and (b) remove those spammy external links! Wikipedia is not a directory! mabdul 21:09, 17 January 2012 (UTC)


 * Generally, I agree with the notion of severely trimming the lists of implementations. WP is not a directory even if the paper it is printed on is cheap. For a topic to be an article, it must be notable. Notability is not a requirement for including something in an article, but there are similar concerns. Generally, we should not be giving something WP:UNDUE attention or serving as its advertising agent. Glrx (talk) 21:22, 17 January 2012 (UTC)

Citation for usage in encryption
I quote: "A common application is public-key cryptography (such as that in every modern Web browser), whose algorithms commonly employ arithmetic with integers having hundreds or thousands of digits."

My understanding of encryption was that it actually didn't use this, and this statement sounds like an assumption. Please prove me wrong; I would very much like to find usage examples. DarkShroom (talk) 16:20, 30 March 2012 (UTC)


 * The RSA public-key algorithm uses very large integers. A key size of 1024 bits is now considered weak. Researchers: 307-digit key crack endangers 1024-bit RSA Glrx (talk) 17:03, 31 March 2012 (UTC)
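To make the "large integers" point concrete, here is a toy RSA sketch with textbook-sized primes (not a secure implementation; real moduli are hundreds of digits, and Python's arbitrary-precision ints handle those sizes just as transparently):

```python
# Toy RSA round trip on small integers; the same code works unchanged on
# 2048-bit moduli because Python ints are arbitrary precision.
p, q = 61, 53                  # textbook-sized primes, far too small for real use
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)        # modular exponentiation of (big) integers
assert pow(cipher, d, n) == msg
print(cipher)
```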


 * I suspect that it is right to say that, for example, DES doesn't use large integers, as it does bit manipulation, but not arithmetic on keys. But for the case of Elliptic-curve_cryptography, it does do arithmetic operations on them. You can consider either integers, or fixed point scaled arithmetic. Gah4 (talk) 00:10, 30 August 2019 (UTC)

Terminology
An editor just added a comment about more extensive floating point packages with arbitrary precision that include trig functions.

I have no problem with the notion of arbitrary precision integers/bignums -- the integers take the space they need. The article explains how integer calculations can remain exact; division may require using rationals.

That cannot be the same notion with floating point and typical FP operations: sqrt(2) would immediately run out of space; π has the same problem. Arbitrary-precision in the floating point context would be the user setting the sizes of the exponent and mantissa. For example, MPFR has a target precision. (And there's an awkward topic about how to compute  to a desired precision.)

The article never addresses the floating point issue. Symbolic algebra systems use symbols and rationals to avoid problems with division; they keep irrationals as symbolic quantities; the results are exact/have infinite precision. Floating point is not usually exact. The article mentions some packages have a pre-set precision. Even Fortran does.

There should be a clear notion of arbitrary precision arithmetic. I distinguish between arbitrary precision (integers/rationals that grow as needed; unbounded size; results exact), multiple precision (integers; fixed upper bound set by user; bounded size; division has a remainder; results exact; may overflow), and extended precision (floating point where user sets limits on exponent and mantissa; bounded size; results inexact).
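The distinction can be illustrated with the Python standard library alone (an illustration of the notions above, not a claim about any particular package): ints and Fractions give exact, unbounded results, while decimal's user-set working precision behaves like the extended-precision case.

```python
# Arbitrary precision: ints and rationals grow as needed, results stay exact.
from fractions import Fraction
from decimal import Decimal, getcontext

assert Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2)   # division stays exact
assert len(str(10 ** 100 + 1)) == 101                      # integers grow as needed

# Extended precision: the user fixes a working precision; results are inexact.
getcontext().prec = 50
approx = Decimal(1) / Decimal(3)          # rounded to 50 digits
assert approx * 3 != Decimal(1)           # the rounding error is observable
```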

I would restrict the current article to integers/rationals. Floating point belongs elsewhere.

It's not clear that some packages are appropriate even now (under the definition that states "calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system"):


 * MathX is described as a template library for FIXED length arithmetic types. (Predefined up to 8192 bits.)
 * TTCalc is not arbitrary precision: 1024 bit mantissa and 128 bit exponent.

Although I can accept a notion of "arbitrary" where the user can set the precision parameters to any value he chooses, I think that better fits the notion of multiple or extended precision.

Glrx (talk) 15:58, 12 September 2012 (UTC)

Second person
There are at least three uses of "we" that should be eliminated. Bubba73 You talkin' to me? 03:31, 16 June 2013 (UTC)

Claimed fact does not appear in reference
In Arbitrary-precision arithmetic, this phrase appears: "for example the √⅓ that appears in Gaussian integration." That article does not confirm this claimed fact. It does not appear to contain "√⅓" at all. David Spector (talk) 03:23, 8 September 2013 (UTC)

Huh? Try Gauss-Legendre (i.e. Gaussian quadrature with a constant weight function) and N = 2. NickyMcLean (talk) 09:09, 16 November 2014 (UTC)
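The claim is quick to check numerically: for N = 2 the Gauss-Legendre nodes are the roots of the Legendre polynomial P2(x) = (3x^2 - 1)/2, i.e. ±sqrt(1/3). A sketch:

```python
# Verify that sqrt(1/3) is a root of P2, the degree-2 Legendre polynomial,
# and hence a node of 2-point Gauss-Legendre quadrature.
import math

def p2(x):
    return (3 * x * x - 1) / 2

node = math.sqrt(1 / 3)
assert abs(p2(node)) < 1e-14   # zero up to floating-point rounding
print(node)
```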

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Arbitrary-precision arithmetic. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20120401144624/http://www.rsa.com/rsalabs/node.asp?id=2218 to http://www.rsa.com/rsalabs/node.asp?id=2218

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.— InternetArchiveBot  (Report bug) 02:30, 17 October 2016 (UTC)

Third opinion on Big-O-Notation / Usage of Arbitrary-precision arithmetic
User User:Glrx keeps reverting edits on Big-O-Notation while clearly not understanding how Big-O works, and furthermore tries to delete a perfectly fine example of application of arbitrary-precision arithmetic (contrary to what he thinks, I indeed know the difference between arbitrary and extended precision, since I am an active researcher in this field). Since he does not seem to understand my point, I would like to request a third opinion. --Cerotidinon (talk) 22:39, 28 August 2019 (UTC)


 * I looked at it a little. Can you briefly explain the details of the disagreement?  Bubba73 You talkin' to me? 22:52, 28 August 2019 (UTC)


 * For the first part: Big-O-Notation consists of three notations: O for upper bounds, $$\Omega$$ for lower bounds and $$\Theta$$ for both lower and upper bounds. Each of these notations always refers to the worst case running time, if not specified otherwise. If an algorithm has running time O(n^2), the worst case running time could be n^2, but it could also be n, n log(n), etc., or even constant. Therefore the statement that it needs 10^12 computations only makes sense if n^2 is a lower bound, hence if either Omega or Theta is used. He seems not to understand that it is always about worst case running time (otherwise the notations wouldn't make any sense at all).
 * For the second part: I'm actually not sure. I think that it is about the point that radicals cannot be exactly represented by arbitrary-precision arithmetic (which is true), but it is still a valid application for arbitrary-precision arithmetic to find a near-optimal solution with high precision (and it may even be done exactly with a little more effort; that is not "pure" arbitrary-precision anymore, but based on it). I don't know why he thinks I am confusing arbitrary-precision with extended precision and what exactly that would change. Thanks for your time! --Cerotidinon (talk) 23:20, 28 August 2019 (UTC)
 * Thinking about it, he might think that arbitrary-precision always leads to an exact representation and increases in precision automatically if needed. That, however, is not true. At least not in how the term is used in Wikipedia (as synonym to multiple-precision arithmetic). Arbitrary-precision arithmetic software as listed in the link I provided always needs manual increase in precision (such as MPFR), however, I already referenced that and he seems to have ignored it. --Cerotidinon (talk) 23:25, 28 August 2019 (UTC)
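For readers following the dispute, the textbook definitions of the three notations (per Cormen et al., who are mentioned later in this thread) can be stated as:

```latex
\begin{align*}
f(n) \in O(g(n))      &\iff \exists\, c > 0,\ n_0 :\ 0 \le f(n) \le c\, g(n) \ \text{for all } n \ge n_0 \\
f(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 :\ 0 \le c\, g(n) \le f(n) \ \text{for all } n \ge n_0 \\
f(n) \in \Theta(g(n)) &\iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))
\end{align*}
```

Note that the definitions bound some function f(n); whether f denotes the worst-case, average-case, or best-case running time is a separate modeling choice, which is exactly the point the two editors disagree about.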


 * O, $$\Omega$$ and $$\Theta$$ are indeed bounds as you describe. The introduction of Extended precision states that it is not the same as arbitrary precision.  As far as I know, arbitrary precision and multiple precision mean the same thing.  An irrational number can't be represented exactly with any of them.  Bubba73 You talkin' to me? 23:44, 28 August 2019 (UTC)


 * So, the question remains (which I actually did not precisely state) whether his or my understanding is correct and, hence, whether the article should be changed to use Theta for the three occurrences where a lower bound is implied and include the example with the radical, or whether it should use O and delete the example. --Cerotidinon (talk) 00:06, 29 August 2019 (UTC)


 * I need to look at the context of the edits. Also, I wish the other editor would voice his opinion. Bubba73 You talkin' to me? 00:55, 29 August 2019 (UTC)

There are two types of edits in question.

One of the edits is about using arbitrary precision calculations to find optimal approximations. That is a good statement, but the specific example value of $\sqrt{⅓}$ may mislead the reader into believing the value can be represented with "infinite precision". Arbitrary precision cannot exactly represent $\sqrt{⅓}$ because it is irrational. What does "the optimal value" for $\sqrt{⅓}$ mean? Isn't the optimal value for $\sqrt{⅓}$ the closest value to $\sqrt{⅓}$ for the number representation that is being used? Instead of "optimal approximation", the issue is calculating a value to enough precision and then rounding it.

The example is also a bit screwy because higher precision numbers are not needed to converge a square root. Newton's method can find an accurate root; one does not need to do the calculation with higher precision and then round the result.

The specific example was also for Gaussian integration. Gaussian integration is a method of approximating the value of a definite integral. Why does one need to compute ultraprecise values for something that computes an approximation anyway? The value that one gets from calling  should be good enough for an approximate integration – especially if all the quadrature calculations will be done in that precision. The example may confuse readers, and the given use may not require extraordinary precision.

It's not a good example. I wish I had a better one. Arbitrary precision routines are used in symbolic mathematics (where calculations must be exact), cryptology (where numbers may be 1000s of bits long), and for calculating values to extreme precision (such as thousands of digits of pi).

The second type of edit is about order notation and involves two statements; Cerotidinon wants to use &Theta; rather than O notation. By and large, if readers know a little about order notation, they will know that O describes an upper bound and (if tight) shows the worst case time. Practically speaking, O bounds are assumed to be tight; we don't say O(N^3) when we know the algorithm is O(N^2). But of those readers familiar with order notation, fewer will know what &Omega; or &Theta; notation means. Most practical applications focus on the upper bound O with the presumption the bound is tight. We serve the readers by using notation that is familiar. WP is an encyclopedia; it is not an academic journal.

Arbitrary precision arithmetic worries about complexity, but the article is not focused on it. (The multiplication algorithm article is interested in fast multiplication algorithms.) Looking at sorting algorithm (which is concerned with complexity), most of the order statistics there use O notation. The heapsort article uses O notation in the infobox even when specifically describing worst case performance. That entry is where Cormen wants to use &Theta;. O notation is common and familiar and heavily used on WP; &Theta; is less common and less familiar.

Using &Theta; notation is a bit more precise because it is only appropriate if the bounds are tight, but most O notation bounds are tight.

Using &Theta; notation can be confusing to readers. I'm not sure how to express this well. I can say the running time for comparison is O(n). I can also say the worst case running time is O(n). There are no surprises, but look what happens when using &Theta;. I can say the worst case running time is &Theta;(n), but I cannot say the running time for comparison is &Theta;(n).

I reverted to the comparison runtime statement: Cerotidinon wants to use
 * The worst case is $O(N)$, but usually it will go much faster.
 * The worst case is $\Theta(N)$, but usually it will go much faster.

The last sentence requires the "worst case" modifier, and the sentence is jarring. The first clause is giving an upper and lower bound for the algorithm's worst case performance, and the second clause is immediately saying the worst case lower bound does not apply in the usual case. Why raise the worst case lower bound at all? The first sentence (with O(n)) does not state a lower bound, so there is no double take. A plainer statement would be
 * The running time is $O(N)$, but usually the running time is much faster.

The whole statement is
 * Comparison is also very simple. Compare the high-order digits (or machine words) until a difference is found. Comparing the rest of the digits/words is not necessary. The worst case is $O(N)$, but usually it will go much faster.
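The comparison procedure described in the bullet above can be sketched over a hypothetical word-list representation (most significant word first; illustrative, not any particular library's code):

```python
def compare(a, b):
    """Compare two equal-length magnitudes stored most significant word first.

    Returns -1, 0, or 1.  Scans from the high-order end and stops at the
    first differing word, so the worst case (equal numbers) touches all N
    words, but a difference in the leading word costs O(1).
    """
    assert len(a) == len(b)
    for wa, wb in zip(a, b):
        if wa != wb:
            return -1 if wa < wb else 1
    return 0

assert compare([3, 1, 4], [3, 1, 4]) == 0    # worst case: all words compared
assert compare([3, 2, 0], [3, 1, 9]) == 1    # decided at the second word
assert compare([0, 9, 9], [1, 0, 0]) == -1   # decided at the first word
```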

Here's the second statement (using O): Cerotidinon wants the &Theta; version:
 * For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require $O(N^{2})$ operations, ….
 * For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require $\Theta(N^{2})$ operations, ….

Here, the &Theta; statement is false. Multiplication as taught in primary school requires N^2 digit operations in the general case, but even primary students are taught that multiplication by 100 is just appending two zeros to the multiplicand. The word "require" does not flag the statement as a worst-case statistic but rather as a lower bound. Compare
 * The runtime for multiplication is O(N^2). True.
 * The worst case runtime for multiplication is O(N^2). True.
 * The runtime for multiplication is &Theta;(N^2). False.
 * The worst case runtime for multiplication is &Theta;(N^2). True.
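The distinction in the bullets above can be made concrete with a sketch (a hypothetical digit-list representation, least significant digit first; not from any particular bignum library):

```python
def schoolbook_mul(a, b, base=10):
    """Multiply two digit lists (least significant digit first).

    This is the primary-school method: every digit of a meets every digit
    of b, so the general case costs Theta(N^2) digit operations.
    """
    out = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            t = out[i + j] + da * db + carry
            out[i + j] = t % base
            carry = t // base
        out[i + len(b)] += carry
    while len(out) > 1 and out[-1] == 0:   # drop leading zeros
        out.pop()
    return out

def mul_power_of_base(a, k):
    """Multiply by base**k by prepending k zeros: O(N + k), no digit products.
    This is the shortcut a pupil uses for 123 * 100."""
    return [0] * k + a

# 123 * 100 = 12300, digits stored least significant first:
assert schoolbook_mul([3, 2, 1], [0, 0, 1]) == [0, 0, 3, 2, 1]
assert mul_power_of_base([3, 2, 1], 2) == [0, 0, 3, 2, 1]
```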

So the second statement is both unfamiliar notation and false.

Cerotidinon claims O, &Omega;, and &Theta; are always statements about an upper bound: "Big-O-Notation consists of three notations O for upper bounds". I do not know what he means by that statement. O is an upper bound, but &Omega; is a lower bound, and &Theta; is both an upper and lower bound.

So I disagree with Cerotidinon's edits.

So what is the meaning of "arbitrary precision"? That's something that falls to definition, and there may be different definitions. The term "bignum" is more specific. Early Lisp implementations (such as Maclisp) had bignums, and they were numbers that were "limited only by the available memory". One could take any number and continually square it until the memory ran out. The integer result was always exact. The numbers never overflowed/wrapped around. Modern Lisp implementations (such as Common Lisp) have both bignums and bignum rationals. If one uses such numbers, the results are always exact; one has "infinite precision". Rational numbers can be represented, but irrational numbers cannot. There's a price to be paid for exact values.
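A minimal demonstration of this bignum behavior, using Python's built-in arbitrary-precision integers and the stdlib Fraction type as a stand-in for Lisp's bignum rationals:

```python
from fractions import Fraction

# Repeated squaring: the integer stays exact while its size roughly doubles
# each iteration (on a real machine this continues until memory runs out).
x = 3
for _ in range(10):
    x = x * x
assert x == 3 ** (2 ** 10)    # exact: 3^1024, about 489 decimal digits

# Bignum rationals: arithmetic on rationals is always exact.
r = Fraction(1, 3)
assert r * 3 == 1
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```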

There's also the term "multiple precision", but it can mean two different things. It may be a bignum implementation where numbers may be arbitrarily large, or it may be operations on fixed-width numbers where the fixed width may be much larger than a machine's native ints. The width is set before the operations are performed. One may be able to set the width to an arbitrarily large value (say 200,000 digits), but there is the risk of overflow. You get more precision, but you keep all the ugly features such as overflow.

Cerotidinon referred to a GNU floating point package GNU MPFR. It does not have an exact representation for values such as 1/3. The precision is not infinite. The package corresponds to increasing the floating point widths beyond a machine's single-, double-, and perhaps quadruple-precision floating point representations. It has more digits, but it still has all the ugly problems of floating point numbers: underflow, overflow, cancelation, and rounding.
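Python's stdlib decimal module makes the same point in runnable form (an analogy, not MPFR itself: decimal is radix 10 while MPFR is radix 2, but both are set-the-precision-then-round designs): 1/3 is inexact at every precision, and raising the precision only shrinks the error.

```python
from decimal import Decimal, getcontext

for prec in (10, 50, 200):
    getcontext().prec = prec
    third = Decimal(1) / Decimal(3)
    # The representation is finite, so 1/3 is never held exactly ...
    assert third * 3 != Decimal(1)
    # ... but the rounding error shrinks as the working precision grows.
    assert abs(third * 3 - 1) == Decimal(10) ** -prec
```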

Glrx (talk) 21:28, 29 August 2019 (UTC)
 * Thank you for your reply. When you say that the second statement is false, do you mean the third?  Bubba73 You talkin' to me? 22:40, 29 August 2019 (UTC)
 * No. I'm trying to refer back to the paragraph starting "Here's the second statement (using O):". That paragraph has the O and &Theta; versions of the second statement. I meant the &Theta; version of that statement is false for the reasons given in the immediately preceding paragraph (which has the false 3rd line). Here is the incorrect statement:
 * For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require $\Theta(N^{2})$ operations, ….
 * A primary school student can multiply 123 by 100 to get 12300 in about 2N steps (writing 5 digits down) rather than N^2 steps, so quadratic time is not required (contradicting the lower bound imposed by "require $\Theta(N^{2})$"). The statement with O is true because it does not impose a lower bound on the number of operations. The student did not need to do the full-blown calculation of all the partial products (3 &times; 3) and then sum them.

  123 &times; 100
  -------
    000    (123 &times; 0)
   000     (123 &times; 10)
  123      (123 &times; 100)
  -------
  12300
 * Glrx (talk) 23:19, 29 August 2019 (UTC)


 * It seems to me that CS commonly uses O when one of the others might be more appropriate. This is especially obvious in descriptions of sorting algorithms. Except in the rare case where the actual distinction is being described, I would go for O. Gah4 (talk) 00:18, 30 August 2019 (UTC)


 * And as for multiply, it is assumed that each iteration takes a time proportional to its length. In the case of one size of machine word, it isn't so obvious to see. This comes up more in sort algorithms, but is true here, too. That is why primary school multiply is O(N^2). Gah4 (talk) 04:48, 30 August 2019 (UTC)


 * Regarding the first statement: Multiple-precision _is_ used for Gaussian Integration. Numerical integration is frequently done with multiple-precision numbers. See for example the introduction of this paper: https://www.lirmm.fr/arith18/papers/fousse-GLQuad.pdf
 * "Numerical integration is readily available in most multiple precision numerical computation software (e.g. Pari/GP, MuPAD, Mathematica, Maple, . . . ). In those systems the precision can usually be tuned by the user for each computation (it is generally understood as the “working precision” but it may also be the number of digits displayed, when these two values differ)."
 * Furthermore, I understand if you consider multiple-precision number types and arbitrary-precision number types as two different things, but in the usage here on Wikipedia they are definitely not. I cited MPFR because it is in the List of arbitrary-precision arithmetic software. You are correct that it cannot represent 1/3 exactly. But it doesn't need to according to Wikipedia's definition. The example does not confuse, it explains precisely what arbitrary/multiple-precision does.
 * Regarding the second statement: That Big-O-Notation is frequently used wrongly does not make it any more right. Big-O has a precise, completely undisputed definition. This is not debatable. If one of my students goes to Wikipedia, he should expect to find correct information there. If Wikipedia reproduces errors just because these errors are commonly made, it does not live up to the standards of an encyclopedia. In practice almost always the upper bound is of importance; that is why O(..) is commonly used.
 * Furthermore, asymptotic notation always(!) refers to the worst case if not specified otherwise. Everything else would not make any sense at all. Saying it would not be Theta(n) because there are cases where it runs faster is blatantly ignoring the maths behind this definition. With this argument you could easily say that you have sorting algorithms that run in O(n) since they don't do anything if a list is already sorted. Would you say it is incorrect that comparison-based sorting always has a lower bound of Omega(n log n)? Then you stand against more than 50 years' worth of computer science researchers. — Preceding unsigned comment added by Cerotidinon (talk • contribs) 05:38, 30 August 2019 (UTC)


 * OK, two different things. One is that often big-O is used where $$\Theta$$ or $$\Omega$$ should properly be used. The other is that O is often used with a value that isn't worst case. Specifically, Quicksort is O(N^2) worst case, but O(N log N) in more typical cases. (I suspect that you know this, but) the original Quicksort is O(N^2) when the input is already in order. Besides that, it is usual to consider the infinite limit, though often enough N isn't all that large. If you want to go through all of WP and find all the O that should be $$\Theta$$ and change them, fine with me.  Also be sure to find an appropriate reference for each one. Gah4 (talk) 09:22, 30 August 2019 (UTC)


 * Thank you for your opinion. For Quicksort you have to mention explicitly that you're talking about average time or expected time if you say it is O(N log N) - and that is how it is consistently done. Writing "Quicksort runs in O(N log(N))" is wrong, I don't think this has to be debated. I replace wrong usages of O wherever I can find them if they don't get reverted... However, I don't agree that this needs additional references. O is only wrong if the respective sentence talks about lower bounds in the first place, so I do not add any additional information - if it needs a reference after that change, it needed a reference from the beginning. Nevertheless, I of course check to my best knowledge whether the mentioned bound is indeed a provable lower bound. --Cerotidinon (talk) 17:33, 30 August 2019 (UTC)


 * Thanks. Sometime, and somewhere which I don't remember, I made some change using an O notation, and it was corrected to use $$\Theta$$ instead. That is why I am slightly sensitive (but only slightly) to the distinction. Yes, you don't want to say "Quicksort runs in O(N log(N))", however, when there is a list of O(N log(N)) sorts next to a list of O(N^2) sorts, Quicksort is always in the former. Partly that is because there are improvements that reduce the chance of hitting the O(N^2) case. Gah4 (talk) 18:51, 30 August 2019 (UTC)


 * As for the question of precise and precision, some time ago I bought the book Introduction to Precise Numerical Methods. I have been interested in multiple-precision arithmetic since close to when I first got interested in computing. I had access to a package that does radix 16 floating point arithmetic of specified length.  Anyway, the idea of the book isn't exact answers, but answers with known bounds. A more traditional method is to compute somewhat more digits than you need, throw away enough that you are pretty sure the ones you have are right. Well, I got the book when it was pretty cheap, the price has gone way up now. Oh, also, I used to know programs that didn't just give O values, but the actual polynomial for the run time as a function of N. Mostly useful when N isn't so large. Gah4 (talk) 19:13, 30 August 2019 (UTC)
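The "compute more digits than you need, then throw some away" method mentioned above can be sketched with the stdlib decimal module (the helper name and the guard-digit count are illustrative choices, not from the book):

```python
from decimal import Decimal, getcontext

def to_precision(value_fn, digits, guard=10):
    """Evaluate value_fn() with `guard` extra digits of working precision,
    then round to `digits` significant digits, so the digits kept are
    almost certainly correct."""
    getcontext().prec = digits + guard   # working precision
    v = value_fn()
    getcontext().prec = digits           # target precision
    return +v                            # unary plus rounds to current precision

# sqrt(1/3), the value from the Gaussian-quadrature discussion above,
# to 30 significant digits:
x = to_precision(lambda: (Decimal(1) / 3).sqrt(), 30)
getcontext().prec = 50
assert abs(x * x * 3 - 1) < Decimal("1e-29")   # correct to ~30 digits
```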


 * Yes of course, in practice constants are of the utmost importance and often even more important than asymptotic worst case complexity (see for example the simplex algorithm). For small values of N insertion sort is more efficient than any of the optimal sorting algorithms. These things should certainly receive more attention, in particular here on Wikipedia. However, this is to some degree comparing apples and oranges. If applicability in practice is the topic, one should argue with actual functions. If asymptotic notation is used, it should be used correctly. If quicksort is listed in a list of N log N algorithms it should be (and I'm sure it is) noted that it is about the average case or a random variant (where indeed asymptotic notation is generally used with respect to expected running time). That said, if there is no more objection by tomorrow, I will revert the reversions again to use the correct notation for lower bounds and retain the example on numerical integration. --Cerotidinon (talk) 19:33, 30 August 2019 (UTC)
 * Quick follow-up: Would you consider the book you referenced to be a reference for the claim that multiple-precision arithmetic is used to compute near-optimal coefficients for certain formulae? If so, I would add it to the sentence. --Cerotidinon (talk) 19:49, 30 August 2019 (UTC)


 * I don't see that as an example or suggestion in the book. Much of it is on differential equations. I suspect in cases like you mention, the traditional more digits than needed method usually works. It does remind me of stories about the constants in the library for the IBM Fortran H extended compiler, which supports REAL*16 and COMPLEX*32 (28 hexadecimal digit significand), which were supposed to be extra special, and maybe secret. (There are extra complications for getting the most accurate answers with radix other than two.) Gah4 (talk) 21:06, 30 August 2019 (UTC)


 * Oh, also, my TI-92 calculator can give exact answers for some irrational values. For arcsin(1) it gives π/2, and for sin(π/4) it gives √2/2. There is a “≈” key for when you want an approximate numerical value.  I did wonder some time ago about a floating point format with one bit indicating a multiple of pi. That would allow exact answers to some problems, but would convert to an approximate pi when needed. I am not sure how useful that is in the more general case, though. Not so obvious for the radical case. Gah4 (talk) 21:21, 30 August 2019 (UTC)


 * Your calculator more likely uses Symbolic computation to do that. --Cerotidinon (talk) 00:07, 31 August 2019 (UTC)

Cleanup needed
The article definitely needs some cleanup and structuring. Maybe it should be split into two: one for large integers, and one for large reals. Right now this article is classified as describing a floating point format, whereas most of the text is about unbounded integers. The most important application of big integers, cryptography, typically uses unsigned unbounded integers. Jost Riedel (talk) 23:00, 5 December 2022 (UTC)
 * The difference between fixed and floating point is an implementation detail. The first such system I knew and used on IBM S/360, written close to the beginning of S/360, uses base 16 floating point, with a 32 bit exponent. But in most actual cases, fixed point is fine.  Calculating pi, one knows exactly where the radix point is. Gah4 (talk) 05:48, 6 December 2022 (UTC)

This page needs review
Looking at just the beginning, there are some uncited sources, and later there is an N word joke. 69.40.114.216 (talk) 15:16, 19 November 2023 (UTC)

Human resources and management section
I think this whole section is problematic and should be removed. Bubba73 You talkin' to me? 06:16, 16 January 2024 (UTC)

Need to tweak the definition?
An editor just removed the mention of Java's BigInteger class, because its numbers are limited to 2^2147483647 and not the computer's available memory. That's still over 646 million digits, which is a lot less than the memory of computers today can hold. But how many times do you need integers with more than 646 million digits? Bubba73 You talkin' to me? 19:59, 13 April 2024 (UTC)
 * I've reverted this edit. Note that GMP also has a limitation (visible on 64-bit machines) due to the fact that in practice, it uses a 32-bit type for the size of mpz_t numbers: https://gmplib.org/list-archives/gmp-discuss/2012-March/004948.html
 * And I suspect that other software might have similar limitations (possibly undocumented and never tested). — Vincent Lefèvre (talk) 00:40, 14 April 2024 (UTC)
 * They probably do have limits that are below the total memory in the computer. A few years ago, the 2^2147483647 limit would have been larger than the computer's memory could hold, but not now.  Which is why we probably need to change the definition a little bit: to say not that they can be arbitrarily large, up to the limit of memory (because you need some convenient limit on the number of bytes), but that they are much, much bigger than the standard integers, up to millions of digits.  Bubba73 You talkin' to me? 00:49, 14 April 2024 (UTC)
 * I added potentially, which seems to avoid the problem. Implementations of any program might have limits less than the available memory, but we don't need to note it every time. Not so many years ago, I NFS booted Sun3 SunOS from a 3TB disk. I suspect those working on SunOS in the 1980's and 1990's didn't worry too much about TB disks, but it does work. Except that df gives the wrong values in some cases. It is possible to write programs that use completely arbitrary sizes, but usually more work than it is worth. Gah4 (talk) 11:00, 14 April 2024 (UTC)
 * Hmm... I don't like "potentially" very much. I would have rather said "theoretically unlimited"; and in the body of the article, detail the practical limitations: the available memory of the machine, but also possibly implementation details, such as limits on basic integer types that express sizes. If one changes such types to 64 bits (with a possible need of code adaptation, but not to the point of changing the algorithms), the only limits should become the available memory of the machine. — Vincent Lefèvre (talk) 18:13, 14 April 2024 (UTC)
 * I think "potentially" is OK, but I get the point about "theoretically unlimited", because there is some sort of limit, either the memory of the computer or the number of digits that you bother to allow. Bubba73 You talkin' to me? 22:39, 14 April 2024 (UTC)
 * I suppose so. But are we going through every Wikipedia article that could possibly be more limited than memory requires? Yes this problem is well known. I very well remember the 2GB file problem due to C using signed offsets. And it applied even when you weren't using fseek, as the system didn't know that you weren't. My favorite, mostly from MSDOS days, was programs checking for available memory or disk space, doing signed arithmetic. A disk with between 2GB and 4GB available would fail the test. If you store the length of the length in a 32 bit word, you won't run out ever. But that makes programming much more complicated. Gah4 (talk) 22:56, 15 April 2024 (UTC)
 * I doubt that anyone would store the length of the length in practice. Using a 64-bit word for the length would be sufficient even in the distant future (given the physical limits) and more efficient than having to use the length of the length. — Vincent Lefèvre (talk) 02:00, 16 April 2024 (UTC)
 * The largest known prime has nearly 25 million digits, so it is feasible that the 646-million digit limit will come into play for some people (if not now, then before too long). But that limit could easily be fixed, and I imagine that many of the big integer packages do fix it. Otherwise I agree with you totally.  Bubba73 You talkin' to me? 02:25, 16 April 2024 (UTC)
 * Pi is now 105 trillion digits, presumably not calculated with the above mentioned packages. As I understand it, the way to specify a value of unknown, and arbitrary, length is (n-1) zeroes (in whatever base) followed by n digits. Gah4 (talk) 05:08, 16 April 2024 (UTC)
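The digit count behind the 2^2147483647 bound discussed in this thread can be computed directly, since the number of decimal digits of 2^n is floor(n·log10 2) + 1:

```python
import math

n = 2147483647    # Java BigInteger magnitudes are bounded near 2**n
digits = math.floor(n * math.log10(2)) + 1   # decimal digit count of 2**n
assert digits == 646_456_993                 # roughly 646.5 million digits
```

The fractional part of n·log10(2) here is about 0.95, far from an integer boundary, so double-precision rounding in math.log10 cannot change the floor.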