Scott's rule

Scott's rule is a method for selecting the number of bins in a histogram. It is widely employed in data analysis software, including R, Python, and Microsoft Excel, where it is the default bin selection method.

For a set of $$n$$ observations $$x_i$$ let $$\hat{f}(x)$$ be the histogram approximation of some function $$f(x)$$. The integrated mean squared error (IMSE) is

$$ \text{IMSE} = E\left[ \int_{-\infty}^{\infty} \left(\hat{f}(x) - f(x)\right)^2 \, dx \right] $$ where $$E[\cdot]$$ denotes the expectation across many independent draws of $$n$$ data points. By Taylor expanding to first order in $$h$$, the bin width, Scott showed that the optimal width is

$$ h^* = \left( 6 \Big/ \int_{-\infty}^{\infty} f'(x)^2 \, dx \right)^{1/3} n^{-1/3} $$ This formula is also the basis for the Freedman–Diaconis rule.

By taking a normal reference, i.e. assuming that $$f(x)$$ is a normal distribution, the equation for $$h^*$$ becomes

$$ h^* = \left( 24 \sqrt{\pi} \right)^{1/3} \sigma n^{-1/3} \approx 3.5 \sigma n^{-1/3} $$ where $$\sigma$$ is the standard deviation of the normal distribution and is estimated from the data. With this value of bin width Scott demonstrates that

$$ \text{IMSE} \propto n^{-2/3} $$

showing how quickly the histogram approximation approaches the true distribution as the number of samples increases.
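The normal-reference constant can be checked numerically. The sketch below (a direct transcription of the formula above, not any particular library's implementation; the function name is illustrative) computes Scott's bin width and verifies that plugging the standard normal density into the general formula reproduces $$\left( 24 \sqrt{\pi} \right)^{1/3} \approx 3.49$$:

```python
import numpy as np

def scott_bin_width(data):
    """Normal-reference bin width h* = (24*sqrt(pi))**(1/3) * sigma * n**(-1/3).

    sigma is estimated by the sample standard deviation.
    """
    data = np.asarray(data, dtype=float)
    n = data.size
    sigma = data.std(ddof=1)
    return (24.0 * np.sqrt(np.pi)) ** (1.0 / 3.0) * sigma * n ** (-1.0 / 3.0)

# General-formula check: for the standard normal pdf, the integral of
# f'(x)^2 equals 1/(4*sqrt(pi)), so (6 / integral)^(1/3) reproduces the
# constant (24*sqrt(pi))^(1/3), approximately 3.49 (often quoted as 3.5).
x = np.linspace(-8.0, 8.0, 200001)
f_prime = -x * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # derivative of N(0,1) pdf
integral = np.sum(f_prime**2) * (x[1] - x[0])           # ~ 1/(4*sqrt(pi))
constant = (6.0 / integral) ** (1.0 / 3.0)              # ~ 3.49
```

NumPy exposes the same reference rule via `np.histogram_bin_edges(data, bins='scott')`.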

Terrell–Scott rule
Another approach, developed by Terrell and Scott, is based on the observation that, among all densities $$g(x)$$ defined on a compact interval, say $$|x| < 1/2$$, with absolutely continuous derivatives, the density which minimises $$\int_{-\infty}^{\infty} (g^{(k)}(x))^2 \, dx$$ is

$$ f_k(x) = \begin{cases} \frac{(2k+1)!}{2^{2k}(k!)^2}\left(1-4x^2\right)^k, \quad &|x|\leq 1/2\\ 0, &|x|>1/2 \end{cases} $$ Using this with $$k=1$$ in the expression for $$h^*$$ gives an upper bound on the value of the bin width, which is

$$ h^*_{TS} = \left( \frac{1}{2n} \right)^{1/3}. $$ So, for functions satisfying the continuity conditions, at least

$$ k_{TS} = \frac{b-a}{h^*_{TS}} = \left( 2n \right)^{1/3} $$ bins should be used, where $$b-a = 1$$ is the length of the interval supporting the data.
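This chain can be checked numerically on the unit interval $$|x| \leq 1/2$$: the sketch below (illustrative; the grid resolution is an arbitrary choice) confirms that $$f_1$$ integrates to one, that $$\int f_1'(x)^2 \, dx = 12$$, and that the general formula for $$h^*$$ then yields $$(2n)^{1/3}$$ bins.

```python
import numpy as np

# Grid over the support |x| <= 1/2 of the extremal density f_1.
x = np.linspace(-0.5, 0.5, 200001)
dx = x[1] - x[0]

f1 = 1.5 * (1 - 4 * x**2)             # f_k with k = 1: (3/2)(1 - 4x^2)
mass = np.sum(f1) * dx                 # ~ 1: f_1 is a valid density

f1_prime = -12.0 * x                   # derivative of f_1
roughness = np.sum(f1_prime**2) * dx   # integral of f_1'(x)^2 = 12

n = 1000
h_star = (6.0 / (roughness * n)) ** (1.0 / 3.0)  # general Scott formula
k_ts = 1.0 / h_star                    # bins on an interval of length 1
# k_ts equals (2n)^(1/3), about 12.6 for n = 1000
```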

This rule is also called the oversmoothed rule or the Rice rule, the latter name referring to Rice University, where both authors worked. The Rice rule is often reported with the factor of 2 outside the cube root, $$2 n^{1/3}$$, and may be considered a different rule. The key difference from Scott's rule is that this rule does not assume the data is normallyly distributed: the bin width depends only on the number of samples, not on any other properties of the data.

In general $$\left( 2n \right)^{1/3}$$ is not an integer, so $$\lceil \left( 2n \right)^{1/3} \rceil$$ is used, where $$\lceil \cdot \rceil$$ denotes the ceiling function.
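Both bin-count variants are one-liners (the helper names below are hypothetical):

```python
import math

def terrell_scott_bins(n):
    """Minimum bin count ceil((2n)^(1/3)) from the Terrell-Scott rule."""
    return math.ceil((2 * n) ** (1 / 3))

def rice_bins(n):
    """Variant with the factor of 2 outside the cube root: ceil(2 * n^(1/3))."""
    return math.ceil(2 * n ** (1 / 3))

# For n = 1000: (2000)^(1/3) ~ 12.6, so 13 bins; the Rice variant gives 20.
```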