Normal number (computing)

In computing, a normal number is a non-zero number in a floating-point representation that falls within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
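As a concrete illustration, here is a minimal sketch in Python of a predicate that tests this property, assuming CPython's float is an IEEE 754 binary64 and using the values reported by sys.float_info (the helper name is_normal is illustrative, not a standard library function):

```python
import math
import sys

def is_normal(x: float) -> bool:
    """True if x is a normal binary64 value: finite, non-zero, and at least
    as large in magnitude as the smallest positive normal number."""
    return math.isfinite(x) and abs(x) >= sys.float_info.min

print(is_normal(1.0))                     # True
print(is_normal(sys.float_info.min))      # True  (smallest positive normal)
print(is_normal(sys.float_info.min / 2))  # False (subnormal)
print(is_normal(0.0))                     # False (zero is neither normal nor subnormal)
```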

The magnitude of the smallest normal number in a format is given by:

$$b^{E_{\text{min}}}$$

where b is the base (radix) of the format (commonly 2 for binary formats or 10 for decimal formats), and $E_{\text{min}}$ depends on the size and layout of the format.
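As a concrete check (a minimal sketch in Python, assuming CPython's float is an IEEE 754 binary64 and using sys.float_info), the smallest normal magnitude $b^{E_{\text{min}}}$ can be computed directly:

```python
import sys

b = sys.float_info.radix              # 2 for binary64
# sys.float_info.min_exp follows the C convention (significand in [1/b, 1)),
# so the E_min used in this article is one less.
E_min = sys.float_info.min_exp - 1    # -1022 for binary64

print(float(b) ** E_min)              # 2.2250738585072014e-308
print(sys.float_info.min)             # the same value
```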

Similarly, the magnitude of the largest normal number in a format is given by:

$$b^{E_{\text{max}}}\cdot\left(b - b^{1-p}\right)$$

where p is the precision of the format in digits and $E_{\text{min}}$  is related to $E_{\text{max}}$  as:

$$E_{\text{min}}\, \overset{\Delta}{\equiv}\, 1 - E_{\text{max}} = \left(-E_{\text{max}}\right) + 1$$
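Likewise, the largest normal magnitude and the relation between $E_{\text{min}}$ and $E_{\text{max}}$ can be checked for binary64 (again a sketch assuming CPython's sys.float_info):

```python
import sys

b = sys.float_info.radix                      # 2
p = sys.float_info.mant_dig                   # 53 significand digits (bits)
E_max = sys.float_info.max_exp - 1            # 1023 (the C convention is one higher)
E_min = 1 - E_max                             # -1022, matching the relation above

largest_normal = float(b) ** E_max * (b - float(b) ** (1 - p))
print(largest_normal)                         # 1.7976931348623157e+308
print(largest_normal == sys.float_info.max)   # True
print(float(b) ** E_min == sys.float_info.min)  # True
```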

In the IEEE 754 binary and decimal formats, b, p, $E_{\text{min}}$, and $E_{\text{max}}$ have the following values:

 * binary16: b = 2, p = 11, $E_{\text{min}}$ = −14, $E_{\text{max}}$ = +15
 * binary32: b = 2, p = 24, $E_{\text{min}}$ = −126, $E_{\text{max}}$ = +127
 * binary64: b = 2, p = 53, $E_{\text{min}}$ = −1022, $E_{\text{max}}$ = +1023
 * binary128: b = 2, p = 113, $E_{\text{min}}$ = −16382, $E_{\text{max}}$ = +16383
 * decimal32: b = 10, p = 7, $E_{\text{min}}$ = −95, $E_{\text{max}}$ = +96
 * decimal64: b = 10, p = 16, $E_{\text{min}}$ = −383, $E_{\text{max}}$ = +384
 * decimal128: b = 10, p = 34, $E_{\text{min}}$ = −6143, $E_{\text{max}}$ = +6144

For example, in the smallest decimal format listed above (decimal32), the range of positive normal numbers is $10^{-95}$ through $9.999999 \times 10^{96}$.
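Python's decimal module can emulate these parameters; the following sketch (an illustration, assuming the standard decimal.Context API with prec, Emin, and Emax set to the decimal32 values above) classifies values against that range:

```python
from decimal import Decimal, Context

# A context with the decimal32 parameters: p = 7, E_min = -95, E_max = 96.
d32 = Context(prec=7, Emin=-95, Emax=96)

print(d32.is_normal(Decimal("1E-95")))         # True: smallest positive normal
print(d32.is_normal(Decimal("9.999999E+96")))  # True: largest positive normal
print(d32.is_normal(Decimal("1E-96")))         # False: below the normal range
print(d32.is_subnormal(Decimal("1E-96")))      # True
```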

Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).

Zero is considered neither normal nor subnormal.
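These three categories can also be read directly from the binary64 bit pattern; the sketch below (illustrative only, using Python's struct module to reinterpret the bits) treats an all-zero exponent field as zero or subnormal, an all-ones field as infinity or NaN, and everything else as normal:

```python
import struct

def classify(x: float) -> str:
    """Classify a binary64 value by its raw 11-bit exponent field."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    if exponent == 0:
        return "zero" if fraction == 0 else "subnormal"
    if exponent == 0x7FF:
        return "infinity" if fraction == 0 else "NaN"
    return "normal"   # implicit leading 1 in the significand

print(classify(1.0))      # normal
print(classify(5e-324))   # subnormal (smallest positive subnormal)
print(classify(0.0))      # zero
```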