Optimal radix choice

In mathematics and computer science, optimal radix choice is the problem of choosing the base, or radix, best suited for representing numbers. Various proposals have been made to quantify the relative costs of using different radices, especially in computer systems. One formula takes the number of digits needed to express a number in a given base and multiplies it by the base (the number of possible values each digit can take). This expression also arises in questions regarding organizational structure, networking, and other fields.

Definition
The cost of representing a number N in a given base b can be defined as


 * $$E(b,N) = b \lfloor \log_b (N) +1 \rfloor \, $$

where $$\lfloor \cdot \rfloor$$ denotes the floor function and $$\log_{b}$$ the base-b logarithm.

If both b and N are positive integers, then the quantity $$E(b,N)$$ is equal to the number of digits needed to express the number N in base b, multiplied by base b. This quantity thus measures the cost of storing or processing the number N in base b if the cost of each "digit" is proportional to b. A base with a lower average $$E(b,N)$$ is therefore, in some senses, more efficient than a base with a higher average value.

For example, 100 in decimal has three digits, so its cost of representation is 10 × 3 = 30, while its binary representation has seven digits (1100100₂), so the analogous calculation gives 2 × 7 = 14. Likewise, in base 3 its representation has five digits (10201₃), for a value of 3 × 5 = 15, and in base 36 (2S₃₆) one finds 36 × 2 = 72.
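The examples above can be checked with a short function (a minimal sketch; the digit count is done by repeated integer division rather than by the floor-of-logarithm formula, to avoid floating-point error at exact powers of the base):

```python
def E(b, N):
    """Cost of representing N in base b: (number of base-b digits) * b.

    Base 1 (unary) is a special case: representing N takes N marks,
    so E(1, N) = N by convention.
    """
    if b == 1:
        return N
    digits, n = 0, N
    while n > 0:      # count base-b digits by repeated division
        n //= b
        digits += 1
    return b * digits

print(E(10, 100))  # → 30
print(E(2, 100))   # → 14
print(E(3, 100))   # → 15
print(E(36, 100))  # → 72
```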

If the number is imagined as a combination lock or a tally counter in which each wheel has b digit faces, labeled $$0, 1, \ldots, b-1$$, and the lock has $$\lfloor \log_b (N) +1 \rfloor$$ wheels, then $$E(b,N)$$ is the total number of digit faces needed to represent any integer from 0 to N.

Asymptotic behavior
The quantity $$E(b,N)$$ for large N can be approximated as follows:


 * $$ E(b,N) = b \lfloor \log_b (N) +1 \rfloor \sim b\ \log_b (N) = {b \over \ln(b)} \ln(N) .$$
 * $$ {E(b,N) \over \ln(N)} \sim {b \over \ln(b)} .$$

The asymptotically best integer base is 3: over the positive reals, $${b \over \ln(b)}$$ attains its minimum at $$b = e \approx 2.718$$, and among the positive integers the minimum is at $$b = 3$$:
 * $${2 \over \ln(2)} \approx 2.88539\,,$$
 * $${3 \over \ln(3)} \approx 2.73072\,,$$
 * $${4 \over \ln(4)} \approx 2.88539\,.$$

For base 10, we have:
 * $${10 \over \ln(10)} \approx 4.34294\,.$$
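The ratios above can be reproduced directly (a minimal sketch):

```python
import math

# Asymptotic cost factor: E(b, N) grows like (b / ln b) * ln N.
def cost_factor(b):
    return b / math.log(b)

for b in (2, 3, 4, 10):
    print(b, round(cost_factor(b), 5))

# Among integer bases, the factor is minimized at b = 3.
best = min(range(2, 17), key=cost_factor)
```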

Comparing different bases
The values of $$E(b,N)$$ for two bases $$b_1$$ and $$b_2$$ may be compared for a large value of N:


 * $$ {{E(b_1,N)} \over {E(b_2,N)}} \approx {{b_1 \log_{b_1} (N)} \over {b_2 \log_{b_2} (N)}} = {\left( \dfrac{b_1 \ln (N)} {\ln (b_1)} \right) \over \left( \dfrac{b_2 \ln (N)} {\ln (b_2)} \right)} = {{b_1 \ln (b_2)} \over {b_2 \ln (b_1)}} \,. $$

Choosing $$e$$ for $$b_2$$ gives


 * $$ {{E(b)} \over {E(e)}} \approx {{b \ln (e)} \over {e \ln (b)}} = {{b} \over {e \ln(b)}} \, . $$
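This approximation is easy to evaluate (a minimal sketch):

```python
import math

# Economy of base b relative to base e: E(b)/E(e) ≈ b / (e * ln b).
def relative_economy(b):
    return b / (math.e * math.log(b))

for b in (2, 3, 10):
    print(b, round(relative_economy(b), 4))  # 1.0615, 1.0046, 1.5977
```

Among integer bases, base 3 comes closest to the base-e optimum of 1.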

The average $$E(b,N)$$ for various bases, taken over all integers N up to several cutoffs (chosen to avoid proximity to powers of 2 through 12 and of e), is given in the table below, together with the value relative to that of base e. Since $$E(1,N) = N$$, unary is the most economical for the first few integers, but its cost grows linearly with $$N$$, so it is asymptotically the worst choice.


| Base b | Avg. E(b,N), N = 1–6 | Avg. E(b,N), N = 1–43 | Avg. E(b,N), N = 1–182 | Avg. E(b,N), N = 1–5329 | E(b)/E(e) |
|---:|---:|---:|---:|---:|---:|
| 1 | 3.5 | 22.0 | 91.5 | 2,665.0 | ∞ |
| 2 | 4.7 | 9.3 | 13.3 | 22.9 | 1.0615 |
| e | 4.5 | 9.0 | 12.9 | 22.1 | 1.0000 |
| 3 | 5.0 | 9.5 | 13.1 | 22.2 | 1.0046 |
| 4 | 6.0 | 10.3 | 14.2 | 23.9 | 1.0615 |
| 5 | 6.7 | 11.7 | 15.8 | 26.3 | 1.1429 |
| 6 | 7.0 | 12.4 | 16.7 | 28.3 | 1.2319 |
| 7 | 7.0 | 13.0 | 18.9 | 31.3 | 1.3234 |
| 8 | 8.0 | 14.7 | 20.9 | 33.0 | 1.4153 |
| 9 | 9.0 | 16.3 | 22.6 | 34.6 | 1.5069 |
| 10 | 10.0 | 17.9 | 24.1 | 37.9 | 1.5977 |
| 12 | 12.0 | 20.9 | 25.8 | 43.8 | 1.7765 |
| 15 | 15.0 | 25.1 | 28.8 | 49.8 | 2.0377 |
| 16 | 16.0 | 26.4 | 30.7 | 50.9 | 2.1230 |
| 20 | 20.0 | 31.2 | 37.9 | 58.4 | 2.4560 |
| 30 | 30.0 | 39.8 | 55.2 | 84.8 | 3.2449 |
| 40 | 40.0 | 43.7 | 71.4 | 107.7 | 3.9891 |
| 60 | 60.0 | 60.0 | 100.5 | 138.8 | 5.3910 |
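The averages in the table can be recomputed with a short script (a sketch; $$E(1,N) = N$$ is handled as a special case, and integer bases use exact digit counting to avoid floating-point error at powers of the base):

```python
import math

def E(b, N):
    """Digit-count cost of N in base b (b may be non-integer, e.g. math.e)."""
    if b == 1:
        return N                      # unary: N marks
    if b == int(b):
        digits, n = 0, N
        while n > 0:                  # exact integer digit count
            n //= int(b)
            digits += 1
        return b * digits
    return b * math.floor(math.log(N, b) + 1)

def avg_E(b, upper):
    # Mean cost over all integers 1..upper, as in the table.
    return sum(E(b, N) for N in range(1, upper + 1)) / upper

for b in (1, 2, math.e, 3, 10):
    print(b, round(avg_E(b, 6), 1))  # 3.5, 4.7, 4.5, 5.0, 10.0
```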

Ternary tree efficiency
One result of the relative economy of base 3 is that ternary search trees offer an efficient strategy for retrieving elements of a database.

In a $d$-ary heap, a priority queue data structure based on $d$-ary trees, the worst-case number of comparisons per operation in a heap containing $$n$$ elements is $$d\log_d n$$ (up to lower-order terms), the same formula used above. It has been suggested that choosing $$d=3$$ or $$d=4$$ may offer optimal performance in practice.
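As an illustration (a minimal sketch, not a tuned implementation), a d-ary min-heap differs from a binary heap only in the index arithmetic: node i's children sit at indices d·i + 1 through d·i + d, and its parent at (i − 1) // d.

```python
class DaryHeap:
    """Minimal d-ary min-heap backed by a list."""

    def __init__(self, d=3):
        self.d = d
        self.a = []

    def push(self, x):
        a, d = self.a, self.d
        a.append(x)
        i = len(a) - 1
        while i > 0 and a[(i - 1) // d] > a[i]:   # sift up
            a[(i - 1) // d], a[i] = a[i], a[(i - 1) // d]
            i = (i - 1) // d

    def pop(self):
        a, d = self.a, self.d
        top = a[0]
        last = a.pop()
        if a:
            a[0] = last
            i = 0
            while True:                           # sift down
                first = d * i + 1
                if first >= len(a):
                    break
                best = min(range(first, min(first + d, len(a))),
                           key=lambda j: a[j])    # smallest of up to d children
                if a[i] <= a[best]:
                    break
                a[i], a[best] = a[best], a[i]
                i = best
        return top

h = DaryHeap(3)
for x in [5, 1, 4, 2, 3]:
    h.push(x)
print([h.pop() for _ in range(5)])  # → [1, 2, 3, 4, 5]
```

Larger d makes the tree shallower (cheaper inserts) at the price of scanning more children per sift-down step, which is the trade-off the $$d\log_d n$$ formula captures.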

Brian Hayes suggests that $$E(b,N)$$ may be the appropriate measure for the complexity of an interactive voice response menu: in a tree-structured phone menu with $$n$$ outcomes and $$r$$ choices per step, the time to traverse the menu is proportional to the product of $$r$$ (the time to present the choices at each step) and $$\log_r n$$ (the number of steps needed to determine the outcome). By this analysis, the optimal number of choices per step in such a menu is three.

Computer hardware efficiencies
The 1950 reference High-Speed Computing Devices describes a particular situation using contemporary technology. Each digit of a number would be stored as the state of a ring counter composed of several triodes. Whether vacuum tubes or thyratrons, the triodes were the most expensive part of a counter. For small radices r less than about 7, a single digit required r triodes. (Larger radices required 2r triodes arranged as r flip-flops, as in ENIAC's decimal counters.)

So the number of triodes in a numerical register with n digits was $$rn$$. To represent numbers up to $$10^6$$, the following numbers of tubes were needed:


| Radix r | Tubes N = rn |
|---:|---:|
| 2 | 39.20 |
| 3 | 38.24 |
| 4 | 39.20 |
| 5 | 42.90 |
| 10 | 60.00 |

The authors conclude, "Under these assumptions, the radix 3, on the average, is the most economical choice, closely followed by radices 2 and 4. These assumptions are, of course, only approximately valid, and the choice of 2 as a radix is frequently justified on more complete analysis. Even with the optimistic assumption that 10 triodes will yield a decimal ring, radix 10 leads to about one and one-half times the complexity of radix 2, 3, or 4. This is probably significant despite the shallow nature of the argument used here."
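The shape of the table can be reproduced with the idealized continuous model $$r \log_r(10^6)$$ (a sketch only: the 1950 table's exact figures differ slightly, presumably reflecting rounding and engineering adjustments in the source, but the minimum at radix 3 is the same):

```python
import math

# Idealized tube count: r triodes per digit, log_r(10**6) digits
# (continuous approximation, not the source's exact figures).
def tubes(r, limit=10**6):
    return r * math.log(limit) / math.log(r)

for r in (2, 3, 4, 5, 10):
    print(r, round(tubes(r), 2))
```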