Summed-area table

A summed-area table is a data structure and algorithm for quickly generating the sum of values in a rectangular subset of a grid. In the image processing domain, it is also known as an integral image. It was introduced to computer graphics in 1984 by Frank Crow for use with mipmaps. In computer vision it was popularized by Lewis and then given the name "integral image" and prominently used within the Viola–Jones object detection framework in 2001. Historically, this principle is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (area under the probability distribution) from the respective cumulative distribution functions.

The algorithm
As the name suggests, the value at any point (x, y) in the summed-area table is the sum of all the pixels above and to the left of (x, y), inclusive: $$ I(x,y) = \sum_{\begin{smallmatrix} x' \le x \\ y' \le y\end{smallmatrix}} i(x',y')$$ where $$i(x,y)$$ is the value of the pixel at (x,y).

The summed-area table can be computed efficiently in a single pass over the image, as the value in the summed-area table at (x, y) is just: $$ I(x,y) = i(x,y) + I(x,y-1) + I(x-1,y) - I(x-1,y-1)$$ (Note that the summed-area table is computed starting from the top-left corner.)
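The single-pass recurrence above can be sketched in Python as follows (a minimal illustration; the function name `summed_area_table` and the list-of-rows representation are choices made here, not part of the original description):

```python
def summed_area_table(image):
    """Build a summed-area table for a 2D grid (a list of rows) in one pass.

    Uses the recurrence
        I(x, y) = i(x, y) + I(x, y-1) + I(x-1, y) - I(x-1, y-1),
    with out-of-range entries treated as 0.
    """
    h = len(image)
    w = len(image[0]) if h else 0
    table = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            table[y][x] = (image[y][x]
                           + (table[y - 1][x] if y > 0 else 0)      # I(x, y-1)
                           + (table[y][x - 1] if x > 0 else 0)      # I(x-1, y)
                           - (table[y - 1][x - 1]
                              if x > 0 and y > 0 else 0))           # I(x-1, y-1)
    return table
```

For the 2×2 image `[[1, 2], [3, 4]]` this produces `[[1, 3], [4, 10]]`, where the bottom-right entry 10 is the sum of all four pixels.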

Once the summed-area table has been computed, evaluating the sum of intensities over any rectangular area requires exactly four array references, regardless of the area size. That is, using the notation in the figure at right, with $A = (x_{0}, y_{0})$, $B = (x_{1}, y_{0})$, $C = (x_{0}, y_{1})$ and $D = (x_{1}, y_{1})$, the sum of $i(x,y)$ over the rectangle spanned by A, B, C, and D is: $$\sum_{\begin{smallmatrix} x_0 < x \le x_1 \\ y_0 < y \le y_1 \end{smallmatrix}} i(x,y) = I(D) + I(A) - I(B) - I(C)$$
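The four-lookup query can be sketched as follows (an illustrative Python sketch; the convention of passing index −1 for the "empty prefix" is a choice made here to match the exclusive lower bounds in the formula):

```python
def rect_sum(table, x0, y0, x1, y1):
    """Sum of i(x, y) over x0 < x <= x1, y0 < y <= y1, using exactly
    four lookups in the summed-area table `table`.

    Indices of -1 denote the empty prefix and contribute 0.
    """
    def I(x, y):
        return table[y][x] if x >= 0 and y >= 0 else 0

    # I(D) + I(A) - I(B) - I(C)
    return I(x1, y1) + I(x0, y0) - I(x1, y0) - I(x0, y1)

# Example: summed-area table of the 2x2 image [[1, 2], [3, 4]].
table = [[1, 3], [4, 10]]
```

With this table, `rect_sum(table, -1, -1, 1, 1)` covers the whole image and returns 10, while `rect_sum(table, 0, -1, 1, 1)` covers only the right column (pixels 2 and 4) and returns 6.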

Extensions
This method is naturally extended to continuous domains.

The method can also be extended to high-dimensional images. If the corners of the rectangle are $$x^p$$ with $$p$$ in $$\{0,1\}^d$$, then the sum of image values contained in the rectangle is computed with the formula $$ \sum_{p\in\{0,1\}^d }(-1)^{d-\|p\|_1} I(x^p)$$ where $$I(x)$$ is the integral image at $$x$$ and $$d$$ the image dimension. In the two-dimensional example above, $$d=2$$ and the notation $$x^p$$ corresponds to $$A=x^{(0,0)}$$, $$B=x^{(1,0)}$$, $$C=x^{(0,1)}$$ and $$D=x^{(1,1)}$$. In neuroimaging, for example, the images have dimension $$d=3$$ or $$d=4$$, when using voxels or voxels with a time-stamp.
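The $$2^d$$-corner formula can be sketched generically as follows (an illustrative Python sketch; the nested-list representation of the d-dimensional integral image and the use of index −1 for the empty prefix are assumptions made here):

```python
from itertools import product

def block_sum(integral, lo, hi):
    """Sum of image values over the d-dimensional box lo < x <= hi,
    via the corner formula  sum over p in {0,1}^d of
    (-1)**(d - ||p||_1) * I(x^p),
    where x^p takes hi on axes with p=1 and lo on axes with p=0.

    `integral` is a nested-list integral image; any index of -1 means
    "before the first element" and contributes 0.
    """
    d = len(lo)

    def I(idx):
        v = integral
        for k in idx:
            if k < 0:
                return 0
            v = v[k]
        return v

    total = 0
    for p in product((0, 1), repeat=d):
        corner = tuple(hi[a] if p[a] else lo[a] for a in range(d))
        total += (-1) ** (d - sum(p)) * I(corner)
    return total
```

For $$d=2$$ this reduces to the four-term expression $$I(D) + I(A) - I(B) - I(C)$$; for $$d=3$$ it uses eight corners with alternating signs.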

This method has been extended to higher-order integral images, as in the work of Phan et al., who used two, three, or four integral images to quickly calculate the standard deviation (variance), skewness, and kurtosis of a local block in the image. This is detailed below:

To compute the variance or standard deviation of a block, we need two integral images: $$ I(x,y) = \sum_{\begin{smallmatrix} x' \le x \\ y' \le y\end{smallmatrix}} i(x',y')$$ $$ I^2(x,y) = \sum_{\begin{smallmatrix} x' \le x \\ y' \le y\end{smallmatrix}} i^2(x',y')$$ The variance is given by: $$ \operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2. $$ Let $$S_1$$ and $$S_2$$ denote the summations of block $$ABCD$$ of $$I$$ and $$I^2$$, respectively; both are computed quickly from the integral images. Now, we manipulate the variance equation as: $$ \begin{align} \operatorname{Var}(X) &= \frac{1}{n} \sum_{i=1}^n \left(x_i^2 - 2 \mu x_i + \mu^2\right) \\[1ex] &= \frac{1}{n} \left[\sum_{i=1}^n x_i^2 - 2 \sum_{i=1}^n \mu x_i + \sum_{i=1}^n \mu^2\right] \\[1ex] &= \frac{1}{n} \left[\sum_{i=1}^n x_i^2 - 2\sum_{i=1}^n \mu x_i + n \mu^2\right] \\[1ex] &= \frac{1}{n} \left[\sum_{i=1}^n x_i^2 - 2 \mu \sum_{i=1}^n x_i + n \mu^2\right] \\[1ex] &= \frac{1}{n} \left[S_2 - 2 \frac{S_1}{n} S_1 + n \left(\frac{S_1}{n}\right)^2\right] \\[1ex] &= \frac{1}{n} \left[S_2 - \frac{S_1^2}{n}\right] \end{align} $$ where $$\mu = S_1/n$$ and $$S_2 = \sum_{i=1}^n x_i^2$$.
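The identity $$\operatorname{Var}(X) = \frac{1}{n}\left[S_2 - \frac{S_1^2}{n}\right]$$ can be sketched as follows (a self-contained Python sketch; the helper names `sat`, `box`, and `block_variance` are illustrative):

```python
def sat(image):
    """Summed-area table of `image` (a list of rows), built in one pass."""
    h, w = len(image), len(image[0])
    t = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t[y][x] = (image[y][x]
                       + (t[y - 1][x] if y else 0)
                       + (t[y][x - 1] if x else 0)
                       - (t[y - 1][x - 1] if x and y else 0))
    return t

def block_variance(image, x0, y0, x1, y1):
    """Variance over the block x0 < x <= x1, y0 < y <= y1, using
    Var = (S2 - S1**2 / n) / n, where S1 and S2 are the block sums
    taken from the integral images of i and i**2."""
    I1 = sat(image)                                    # integral of i
    I2 = sat([[v * v for v in row] for row in image])  # integral of i^2

    def box(t, x0, y0, x1, y1):
        g = lambda x, y: t[y][x] if x >= 0 and y >= 0 else 0
        return g(x1, y1) + g(x0, y0) - g(x1, y0) - g(x0, y1)

    n = (x1 - x0) * (y1 - y0)
    s1 = box(I1, x0, y0, x1, y1)
    s2 = box(I2, x0, y0, x1, y1)
    return (s2 - s1 * s1 / n) / n
```

For the image `[[1, 2], [3, 4]]`, the full block has mean 2.5 and variance 1.25, which the two-table computation reproduces without revisiting the pixels.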

Just as estimating the mean ($$\mu$$) and variance ($$\operatorname{Var}$$) requires the integral images of the first and second powers of the image (i.e. $$I, I^2$$), manipulations similar to the ones above can be applied to the third and fourth powers (i.e. $$I^3(x,y), I^4(x,y)$$) to obtain the skewness and kurtosis. One important implementation detail for the above methods, as noted by F. Shafait et al., is that integer overflow can occur in the higher-order integral images when 32-bit integers are used.

Implementation considerations
The data type for the sums may need to be different and larger than the data type used for the original values, in order to accommodate the largest expected sum without overflow. For floating-point data, error can be reduced using compensated summation.
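A quick back-of-envelope check illustrates how large the sum type must be (an illustrative helper; the function name and parameters are assumptions made for this sketch):

```python
def sum_bits_needed(width, height, value_bits, power=1):
    """Bits needed to hold the largest possible entry of the integral
    image of i**power, for a width x height image of unsigned
    `value_bits`-bit pixels.

    The worst case is every pixel at its maximum value, so the
    bottom-right entry is (2**value_bits - 1)**power * width * height.
    """
    max_entry = ((2 ** value_bits - 1) ** power) * width * height
    return max_entry.bit_length()
```

For a 1920×1080 image of 8-bit pixels, the first-order table fits in 29 bits, but the tables of squared and higher powers exceed 32 bits, matching the overflow caveat noted above for 32-bit integers.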

Lecture videos

 * An introduction to the theory behind the integral image algorithm
 * A demonstration of a continuous version of the integral image algorithm, from the Wolfram Demonstrations Project