User:TobyK/Exact real arithmetic

Introduction
While in principle computers can perform arithmetic to an accuracy limited only by memory and disk space, the common methods of representing numbers on computers are either purely symbolic, e.g. "sqrt(3)", or are fixed- or floating-point formats with a fixed-length mantissa which accumulates errors during arithmetic operations. Extra guard digits are commonly used to improve floating-point accuracy for an individual operation, but subsequent operations still accumulate rounding errors, and the final answer may contain no correct digits at all. A separate numerical analysis of the sequence of operations is needed to establish the accuracy of the answer a posteriori. Accurate (as opposed to precise) numerical results are therefore often difficult or tedious to obtain.
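The accumulation described above is easy to demonstrate. The following Python snippet (purely illustrative) shows that ten additions of 0.1 in binary floating point already fail to produce exactly 1.0:

```python
# 0.1 has no exact binary floating-point representation, so each
# addition rounds, and the rounding errors accumulate across operations.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)   # False
print(total)          # 0.9999999999999999
```

Each individual addition is correctly rounded, yet the composed result is already wrong in the last place; longer chains of operations can lose far more digits.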

It is possible, however, to instead perform arithmetic with exact real numbers which do not accumulate representation, rounding or truncation errors, and which provide correct answers to any requested accuracy on demand. Based on the definition of a real number as the limit of a sequence of nested intervals of rational numbers, we can identify a number with a sequence of contracting mathematical functions operating on an initial base interval (a contracting function maps an interval to a smaller interior interval).

By representing a number as a (possibly infinite) sequence of successively better approximations which are created on demand, algorithms for arithmetic and transcendental functions can calculate exact real results to any required precision, consuming terms from the input (function argument) sequences only as needed. Operations can be composed, and the accuracy of the final result can be increased indefinitely by processing more input terms. This representation is particularly suitable for programming languages with good support for lazy evaluation of streams.

A finite sequence, or an initial segment of an infinite sequence, consists of finitely many contracting functions whose composition defines a specific interval. This interval contains the limiting value of the infinite sequence, so its width can be used as an error bound.
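A minimal sketch of this idea in Python, assuming intervals are represented as pairs of exact rationals (`Fraction`s); the name `sqrt2_intervals` and the use of plain bisection as the contracting step are illustrative choices, not a particular implementation from the literature:

```python
import itertools
from fractions import Fraction

def sqrt2_intervals():
    """Yield nested rational intervals (lo, hi) that all contain sqrt(2);
    each bisection step halves the width of the previous interval."""
    lo, hi = Fraction(1), Fraction(2)
    while True:
        yield lo, hi
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid        # the root lies in the upper half
        else:
            hi = mid        # the root lies in the lower half

# Any finite prefix gives a guaranteed error bound: after n bisections
# the interval width is exactly 2**-n.
lo, hi = next(itertools.islice(sqrt2_intervals(), 20, 21))
print(float(lo), float(hi), float(hi - lo))
```

Consuming more terms of the (infinite, lazily evaluated) generator shrinks the enclosing interval indefinitely, which is exactly the "accuracy on demand" property described above.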

Properties
To ensure that functions never need to consume an infinite number of input terms before producing (emitting) an output term, the representation must have redundancy: the same number can be represented by different sequences. This rules out the usual positional number systems. To see why redundancy is necessary, consider a function using the decimal representation whose result lies between 0.99...9 (n nines) and 1.00...01 (n zeros). The function might have to calculate to an arbitrarily high accuracy, and so consume an arbitrarily large amount of input, before it can emit even the first digit, because the error interval may always straddle a digit boundary. With a redundant representation, the digits denote overlapping intervals, so the function can emit a term as soon as the computed error interval of the result falls entirely within one of these intervals.

When only a small finite set of contracting functions is used, these are referred to as digits and the representation as a digit stream.
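Redundancy can be illustrated concretely with the standard signed-binary digit representation (digits -1, 0 and 1); the helper `value` below is a hypothetical name introduced for this sketch:

```python
from fractions import Fraction

def value(digits):
    """Value of a finite signed-binary digit list d1, d2, ..., dn,
    read as sum(d_i * 2**-i) with each d_i drawn from {-1, 0, 1}."""
    return sum(Fraction(d, 2 ** i) for i, d in enumerate(digits, start=1))

# Redundancy: distinct digit sequences denote the same number, so a
# producer is never forced to decide which side of a boundary it is on.
print(value([0, 1]))    # 1/4
print(value([1, -1]))   # 1/4
```

Because a prefix d1...dn pins the number down only to an interval of width 2**(1-n) centred on its value, adjacent digits overlap, which is what lets a function commit to an output digit early.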

The advantages of exact real arithmetic compared to floating-point arithmetic include:
 * The results of calculations automatically carry error bounds, so, for example, output in normal positional notation (e.g. decimal) can guarantee that all emitted digits are correct
 * Only as much computation as the requested accuracy demands is performed
 * Recalculating a result to increased accuracy incurs only a relatively small, incremental amount of work

The disadvantages include:
 * No test for equality (it would require comparing an infinite number of terms), which also means:
 * Evaluating inequalities between nearly equal or identical numbers may require arbitrarily long computation
 * Some operations built on comparison (though not "max" or "min") may require arbitrarily long computation
 * A number resulting from a long chain of operations requires memory for its input and intermediate values
 * It is slower than a single floating-point or interval calculation at the same precision
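The comparison problem can be sketched as follows, reusing the interval-stream idea from above; `constant`, `less_than` and the `max_terms` cut-off are illustrative names and choices, not part of any standard library:

```python
import itertools
from fractions import Fraction

def constant(q):
    """A rational q as a stream of shrinking intervals around it."""
    width = Fraction(1)
    while True:
        yield q - width, q + width
        width /= 2

def less_than(xs, ys, max_terms=1000):
    """Decide x < y by consuming interval terms until the two error
    intervals become disjoint.  If x == y the intervals never separate,
    so the loop would run forever; max_terms makes this sketch give up
    instead of diverging."""
    for (xlo, xhi), (ylo, yhi) in itertools.islice(zip(xs, ys), max_terms):
        if xhi < ylo:
            return True
        if yhi < xlo:
            return False
    raise RuntimeError("comparison undecided after max_terms terms")

print(less_than(constant(Fraction(1, 3)), constant(Fraction(1, 2))))  # True
```

Comparing two streams that denote the same number, e.g. `less_than(constant(Fraction(1, 2)), constant(Fraction(1, 2)))`, exhausts the term budget without ever deciding, which is the undecidability of equality in miniature.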

Choice of contracting function
TODO


 * Cauchy sequence / functions ?
 * continued fractions
 * golden ratio
 * Möbius (linear fractional) transformations
 * Stern–Brocot tree

Implementations
TODO


 * Potts
 * XR
 * CLN

Differences from arbitrary-precision arithmetic
One method used for numerical computation in preference to hardware-supported floating-point arithmetic is arbitrary-precision arithmetic. Here the user defines the precision of the floating-point or fixed-point arithmetic before the calculation, and it is then fixed for the duration. Numbers of different precisions cannot generally be mixed. Initial values are selected from the fixed-point or floating-point set, possibly incurring representation errors. Arithmetic operations approximating the desired mathematical ones are applied, accumulating truncation and round-off errors in the intermediate and final values. Once several operations have been composed, the accuracy of the result is not generally known; the hope is that the final value(s) bear some resemblance to the desired mathematical result.

Error bounds are not calculated, although if used as part of a computer algebra system some separate error estimates could be made. One traditional heuristic used to improve the naive user's confidence in the result is to repeat the calculation several times with different precisions and observe how much the result varies. Although not mathematically justified, the user might decide to believe that the closeness of the results measures the accuracy of the calculation.

Arbitrary-precision arithmetic is generally faster than exact real arithmetic for a single preset precision. Vigorous application of the heuristic, or more formal numerical analysis, may allow a plausible estimate of the accuracy, although this may make the calculation slower overall than exact real arithmetic with its rigorous error bounds. Another difference is that the accuracy of the result is not set directly by the user but determined indirectly from the precision of the input values and the algorithms.
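The repeat-at-several-precisions heuristic can be sketched with Python's decimal module; the expression chosen here (the residual of squaring a computed square root) is just an illustration:

```python
from decimal import Decimal, getcontext

# Mathematically sqrt(2)**2 - 2 is exactly 0.  At any fixed working
# precision the computed residual is nonzero; rerunning at higher
# precision shrinks it, which the heuristic reads (without proof)
# as evidence of accuracy.
for prec in (10, 20, 40):
    getcontext().prec = prec
    x = Decimal(2).sqrt()
    print(prec, x * x - 2)
```

Note that nothing here bounds the error of any single run; the agreement between runs is only circumstantial evidence, in contrast to the guaranteed interval a finite prefix of an exact real representation provides.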

Related articles
Linear fractional transformation

Continued fractions

Numerical analysis

Arbitrary-precision arithmetic