BKM algorithm

The BKM algorithm is a shift-and-add algorithm for computing elementary functions, first published in 1994 by Jean-Claude Bajard, Sylvanus Kla, and Jean-Michel Muller. BKM is based on computing complex logarithms (L-mode) and exponentials (E-mode) using a method similar to the algorithm Henry Briggs used to compute logarithms. By using a precomputed table of logarithms of the values $$1+2^{-k}$$ (one plus the negative powers of two), the BKM algorithm computes elementary functions using only integer add, shift, and compare operations.

BKM is similar to CORDIC, but uses a table of logarithms rather than a table of arctangents. On each iteration, a choice of coefficient is made from a set of nine complex numbers, 1, 0, −1, i, −i, 1+i, 1−i, −1+i, −1−i, rather than only −1 or +1 as used by CORDIC. BKM provides a simpler method of computing some elementary functions, and unlike CORDIC, BKM needs no result scaling factor. The convergence rate of BKM is approximately one bit per iteration, like CORDIC, but BKM requires more precomputed table elements for the same precision because the table stores logarithms of complex operands.

As with other algorithms in the shift-and-add class, BKM is particularly well-suited to hardware implementation. The relative performance of a software BKM implementation in comparison to other methods, such as polynomial or rational approximations, depends on the availability of fast multi-bit shifts (i.e. a barrel shifter) or hardware floating-point arithmetic.

Overview
In order to solve the equation
 * $$\ln(x) = y$$

the BKM algorithm takes advantage of a basic property of logarithms
 * $$\ln(ab) = \ln(a)+\ln(b)$$

Using Pi notation, this identity generalizes to
 * $$\ln\left(\prod_{k=0}^n a_k\right) = \sum_{k=0}^n\ln(a_k)$$

Because any number can be represented as a product, this allows us to choose any set of values $$a_k$$ which multiply to give the value we started with. In computer systems, it is much faster to multiply and divide by powers of 2, but because not every number is a power of 2, using $$a_k = 1+2^m$$ is a better option than the simpler choice $$a_k = 2^m$$. Since we want to start with large changes and become more accurate as $$k$$ increases, we can more specifically use $$a_k = 1+2^{-k}$$, allowing the product to approach any value between 1 and ~4.768, depending on which subset of $$a_k$$ we use in the final product. At this point, the above equation looks like this:
 * $$\ln\left(\prod_{k\in K\subset \mathbb{Z}_0^+} 1+2^{-k}\right) = \sum_{k\in K\subset \mathbb{Z}_0^+}\ln(1+2^{-k})$$

This choice of $$a_k$$ reduces the computational complexity of the product from repeated multiplication to simple addition and bit-shifting, depending on the implementation. Finally, by storing the values $$\ln(1+2^{-k})$$ in a table, calculating the solution is also a simple matter of addition. Iterating over $$k$$ gives two separate sequences: one approaches the input value $$x$$ while the other approaches the output value $$\ln(x)=y$$:

$$ x_k = \begin{cases} 1                      & \text{if } k = 0 \\ x_{k-1}\cdot (1+2^{-k}) & \text{if } x_{k-1}\cdot (1+2^{-k}) \leq x \\ x_{k-1}                & \text{otherwise} \end{cases} $$

Given this recursive definition, and because $$x_k$$ is non-decreasing and bounded above by $$x$$, it can be shown by induction that
 * $$\lim_{k\to\infty}x_k = x$$

for any $$1 \leq x \lesssim 4.768$$. To calculate the output, we first create the reference table
 * $$A_k = \ln(1+2^{-k})$$

Then the output is computed iteratively by the definition $$ y_k = \begin{cases} 0            & \text{if } k = 0 \\ y_{k-1} + A_k & \text{if } x_{k-1}\cdot (1+2^{-k}) \leq x \\ y_{k-1}      & \text{otherwise} \end{cases} $$ The condition in this iteration is the same as the condition for the input: the table entry $$A_k$$ is added exactly when the factor $$(1+2^{-k})$$ is applied to the input sequence. Like the input sequence, this sequence is non-decreasing and bounded, so it can be shown that
 * $$\lim_{k\to\infty}y_k = y$$

for any $$0 \leq y \lesssim 1.562$$.
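
As a concrete instance of the product identity above (an illustrative hand-worked case, not part of the original text), the subset $$K = \{0, 1\}$$ represents $$x = 3$$ exactly, since $$(1+2^{0})(1+2^{-1}) = 2 \cdot \tfrac{3}{2} = 3$$, so the output sum terminates after two table lookups:

```latex
% Worked example: x = 3 is represented exactly by the subset K = {0, 1}
\ln 3 \;=\; \ln\!\left(1+2^{0}\right) + \ln\!\left(1+2^{-1}\right) \;=\; A_0 + A_1
```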

Because the algorithm above calculates both the input and the output simultaneously, it is possible to modify it slightly so that $$y$$ is the known value and $$x$$ is the value we want to calculate, thereby computing the exponential instead of the logarithm. Since $$x$$ becomes the unknown in this case, the conditional changes from
 * $$\dots \text{if } x_{k-1}\cdot (1+2^{-k}) \leq x $$

to
 * $$\dots \text{if } y_{k-1} + A_k \leq y $$

Logarithm function
To calculate the logarithm function (L-mode), the algorithm in each iteration tests if $$x_n \cdot (1+2^{-n}) \le x$$. If so, it calculates $$x_{n+1}$$ and $$y_{n+1}$$. After $$N$$ iterations the value of the function is known with an error of $$\Delta \ln(x) \le 2^{-N}$$.

Example program for the natural logarithm in C++:
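
The following is a minimal sketch of the L-mode iteration described above, not the original published program. The function name, the 53-iteration bound (sufficient for double precision), and the use of `std::log1p` to stand in for the precomputed table of constants $$A_k = \ln(1+2^{-k})$$ are all choices of this sketch:

```cpp
#include <cmath>

// Sketch of BKM L-mode for the natural logarithm, valid for
// 1 <= arg <= 4.768462058... (the product of all factors 1 + 2^-k).
// A real implementation would read A_k = ln(1 + 2^-k) from a
// precomputed table; here std::log1p stands in for that table.
double bkm_ln(double arg) {
    double x = 1.0;  // x_k, converges toward arg
    double y = 0.0;  // y_k, converges toward ln(arg)
    double s = 1.0;  // s = 2^-k
    for (int k = 0; k < 53; ++k) {
        double z = x + x * s;      // candidate x_{k-1} * (1 + 2^-k)
        if (z <= arg) {
            x = z;
            y += std::log1p(s);    // table entry A_k = ln(1 + 2^-k)
        }
        s *= 0.5;                  // next negative power of two
    }
    return y;
}
```

Arguments outside the allowed range would first be scaled by a power of two, adjusting the result by the corresponding multiple of $$\ln 2$$.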

Logarithms for bases other than e can be calculated with similar effort.

Example program for the binary logarithm in C++:
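
A sketch of the same iteration for base 2 (again illustrative, with `std::log2` standing in for the precomputed table): only the table changes, now holding $$\log_2(1+2^{-k})$$ instead of $$\ln(1+2^{-k})$$.

```cpp
#include <cmath>

// Sketch of BKM L-mode for the binary logarithm over the same
// argument range 1 <= arg <= 4.768462058...; identical structure
// to the natural-log version, with a base-2 table.
double bkm_log2(double arg) {
    double x = 1.0, y = 0.0, s = 1.0;
    for (int k = 0; k < 53; ++k) {
        double z = x + x * s;            // candidate x * (1 + 2^-k)
        if (z <= arg) {
            x = z;
            y += std::log2(1.0 + s);     // table entry log2(1 + 2^-k)
        }
        s *= 0.5;
    }
    return y;
}
```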

The allowed argument range is the same for both examples (1 ≤ x ≤ 4.768462058…). In the case of the base-2 logarithm the exponent can be split off in advance (to get the integer part) so that the algorithm can be applied to the remainder (between 1 and 2). Since the remainder is smaller than 2.384231…, the iteration over k can start at 1. Working in either base, the halving of the shift factor s = 2^(−k) on each iteration can be replaced with direct modification of its floating-point exponent, subtracting 1 from it. This results in the algorithm using only addition and no multiplication.

Exponential function
To calculate the exponential function (E-mode), the algorithm in each iteration tests if $$y_n + \ln(1+2^{-n}) \le y$$. If so, it calculates $$x_{n+1}$$ and $$y_{n+1}$$. After $$N$$ iterations the value of the function is known with an error of $$\Delta \exp(x) \le 2^{-N}$$.

Example program in C++:
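
A sketch of the E-mode iteration, with the same caveats as the earlier examples (illustrative names, table entries computed via `std::log1p` rather than read from stored constants); the allowed argument range is $$0 \leq y \lesssim 1.562$$:

```cpp
#include <cmath>

// Sketch of BKM E-mode for exp(arg), valid for
// 0 <= arg <= 1.5620238... (the sum of all ln(1 + 2^-k)).
// The roles are swapped relative to L-mode: y chases the input,
// and x accumulates the product that converges to exp(arg).
double bkm_exp(double arg) {
    double x = 1.0, y = 0.0, s = 1.0;
    for (int k = 0; k < 53; ++k) {
        double z = y + std::log1p(s);  // candidate y + ln(1 + 2^-k)
        if (z <= arg) {
            y = z;
            x += x * s;                // x *= (1 + 2^-k)
        }
        s *= 0.5;
    }
    return x;
}
```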