Finite difference coefficient

In mathematics, a derivative can be approximated to an arbitrary order of accuracy using finite differences. A finite difference can be central, forward or backward.

Central finite difference
This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing:

For example, the third derivative with a second-order accuracy is


 * $$f'''(x_{0}) \approx \frac{-\frac{1}{2}f(x_{-2}) + f(x_{-1}) -f(x_{+1}) + \frac{1}{2}f(x_{+2})}{h^3_x} + O\left(h_x^2 \right),$$

where $$ h_x $$ represents a uniform grid spacing between each finite difference interval, and $$x_n = x_0 + n h_x$$.
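As a quick numerical check of this formula, the stencil can be applied directly to a test function. The helper below is an illustrative sketch using NumPy (the function name is not from any particular library):

```python
import numpy as np

def central_third_derivative(f, x0, h):
    """Second-order accurate central difference for f'''(x0),
    using the stencil weights [-1/2, 1, 0, -1, 1/2]."""
    return (-0.5 * f(x0 - 2 * h) + f(x0 - h)
            - f(x0 + h) + 0.5 * f(x0 + 2 * h)) / h**3

# Example: the third derivative of sin(x) is -cos(x).
approx = central_third_derivative(np.sin, 1.0, 1e-2)
error = abs(approx - (-np.cos(1.0)))
```

Consistent with the $$O(h_x^2)$$ error term, halving $$h$$ should reduce the error by roughly a factor of four.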

For the $$m$$-th derivative with accuracy $$n$$, there are $$2p + 1 = 2 \left\lfloor \frac{m+1}{2} \right\rfloor - 1 + n$$ central coefficients $$a_{-p}, a_{-p+1}, ..., a_{p-1}, a_p$$. These are given by the solution of the linear equation system



$$\begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ -p & -p+1 & \cdots & p-1 & p \\ (-p)^2 & (-p+1)^2 & \cdots & (p-1)^2 & p^2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ (-p)^{2p} & (-p+1)^{2p} & \cdots & (p-1)^{2p} & p^{2p} \end{pmatrix} \begin{pmatrix} a_{-p} \\ a_{-p+1} \\ a_{-p+2} \\ \vdots \\ a_p \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ m! \\ \vdots \\ 0 \end{pmatrix}, $$

where the only non-zero value on the right hand side is in the $$(m+1)$$-th row.
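This system is small enough to solve directly. The sketch below sets it up with NumPy; the function name is illustrative:

```python
import numpy as np
from math import factorial

def central_coefficients(m, p):
    """Coefficients a_{-p}, ..., a_p of the (2p+1)-point central
    difference for the m-th derivative; the approximation is the
    weighted stencil sum divided by h**m."""
    offsets = np.arange(-p, p + 1)
    # Row k holds the k-th powers of the offsets, i.e. a
    # transposed Vandermonde matrix.
    A = np.vander(offsets, increasing=True).T.astype(float)
    b = np.zeros(2 * p + 1)
    b[m] = factorial(m)  # the only non-zero entry, in the (m+1)-th row
    return np.linalg.solve(A, b)
```

For example, `central_coefficients(3, 2)` reproduces the five-point third-derivative stencil given earlier, and `central_coefficients(2, 1)` gives the familiar second-derivative weights $$1, -2, 1$$.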

An open-source implementation for calculating finite difference coefficients of arbitrary derivatives and orders of accuracy in one dimension is available. Since the left-hand side matrix $$\mathbf{J}^{T}$$ is a transposed Vandermonde matrix, a rearrangement reveals that the coefficients are in effect obtained by fitting a $$2p$$-th degree polynomial to a window of $$2p + 1$$ points and differentiating it. Consequently, the coefficients can also be computed as the $$m$$-th order derivative of a fully determined Savitzky–Golay filter with polynomial degree $$2p$$ and a window size of $$2p + 1$$, for which open-source implementations are also available. There are two possible definitions, which differ in the ordering of the coefficients: a filter for filtering via discrete convolution, or via a matrix–vector product. The coefficients given in the table above correspond to the latter definition.
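The polynomial-fit interpretation can be checked directly: interpolate a degree-$$2p$$ polynomial through each unit impulse on the window and take its $$m$$-th derivative at the centre. The sketch below uses only NumPy; the helper name is illustrative:

```python
import numpy as np
from math import factorial

def fd_via_polyfit(m, p):
    """m-th derivative coefficients obtained by fitting a degree-2p
    polynomial to a window of 2p+1 points, as in a fully determined
    Savitzky-Golay filter evaluated at the window centre."""
    offsets = np.arange(-p, p + 1)
    coeffs = np.empty(2 * p + 1)
    for i in range(2 * p + 1):
        # Interpolating the i-th unit impulse isolates the weight that
        # the sample at offset i - p carries in the fitted polynomial.
        y = np.zeros(2 * p + 1)
        y[i] = 1.0
        c = np.polynomial.polynomial.polyfit(offsets, y, 2 * p)
        # The m-th derivative of the fit at x = 0 is m! times the
        # coefficient of x^m.
        coeffs[i] = factorial(m) * c[m]
    return coeffs
```

The result agrees with the coefficients obtained by solving the Vandermonde system, e.g. `fd_via_polyfit(3, 2)` again yields $$-\tfrac{1}{2}, 1, 0, -1, \tfrac{1}{2}$$.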

The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients. For the first six derivatives we have the following:

where $$H_{n,m}$$ are generalized harmonic numbers.

Forward finite difference
This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing:

For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are


 * $$\displaystyle f'(x_{0}) \approx \displaystyle \frac{-\frac{11}{6}f(x_{0}) + 3f(x_{+1}) -\frac{3}{2}f(x_{+2}) +\frac{1}{3}f(x_{+3}) }{h_{x}} + O\left(h_{x}^3 \right), $$


 * $$\displaystyle f''(x_{0}) \approx \displaystyle \frac{2f(x_{0}) - 5f(x_{+1}) + 4f(x_{+2}) - f(x_{+3}) }{h_{x}^2} + O\left(h_{x}^2 \right), $$

while the corresponding backward approximations are given by


 * $$\displaystyle f'(x_{0}) \approx \displaystyle \frac{\frac{11}{6}f(x_{0}) - 3f(x_{-1}) +\frac{3}{2}f(x_{-2}) -\frac{1}{3}f(x_{-3}) }{h_{x}} + O\left(h_{x}^3 \right), $$


 * $$\displaystyle f''(x_{0}) \approx \displaystyle \frac{2f(x_{0}) - 5f(x_{-1}) + 4f(x_{-2}) - f(x_{-3}) }{h_{x}^2} + O\left(h_{x}^2 \right). $$
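As with the central case, these one-sided formulas can be checked numerically. The helpers below are illustrative sketches of the first-derivative formulas:

```python
import numpy as np

def forward_first_derivative(f, x0, h):
    """Third-order accurate forward difference for f'(x0)."""
    return (-11/6 * f(x0) + 3 * f(x0 + h)
            - 3/2 * f(x0 + 2 * h) + 1/3 * f(x0 + 3 * h)) / h

def backward_first_derivative(f, x0, h):
    """Third-order accurate backward difference for f'(x0)."""
    return (11/6 * f(x0) - 3 * f(x0 - h)
            + 3/2 * f(x0 - 2 * h) - 1/3 * f(x0 - 3 * h)) / h

# Example: f(x) = exp(x) has f'(0) = 1.
fwd = forward_first_derivative(np.exp, 0.0, 1e-2)
bwd = backward_first_derivative(np.exp, 0.0, 1e-2)
```

Both approximations converge at $$O(h_x^3)$$: halving $$h$$ should cut the error by roughly a factor of eight.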

Backward finite difference
To obtain the coefficients of the backward approximations from those of the forward ones, reverse the sign of the coefficients of all odd derivatives listed in the table in the previous section, whereas for even derivatives the signs stay the same. The following table illustrates this:
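This sign rule can also be verified by solving the coefficient system on mirrored stencils. The sketch below (illustrative names, NumPy) checks it for the first and second derivatives on four-point stencils:

```python
import numpy as np
from math import factorial

def fd_coefficients(m, stencil):
    """Coefficients of the m-th derivative on the given integer offsets."""
    s = np.asarray(stencil, dtype=float)
    A = np.vander(s, increasing=True).T  # row k holds the k-th powers
    b = np.zeros(len(s))
    b[m] = factorial(m)
    return np.linalg.solve(A, b)

fwd1 = fd_coefficients(1, [0, 1, 2, 3])     # odd derivative
bwd1 = fd_coefficients(1, [0, -1, -2, -3])
fwd2 = fd_coefficients(2, [0, 1, 2, 3])     # even derivative
bwd2 = fd_coefficients(2, [0, -1, -2, -3])

# Odd derivative: backward coefficients are the forward ones with the
# opposite sign; even derivative: the coefficients are identical.
assert np.allclose(bwd1, -fwd1)
assert np.allclose(bwd2, fwd2)
```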

Arbitrary stencil points
Given an arbitrary stencil $$\displaystyle s $$ of $$\displaystyle N $$ points and a derivative of order $$\displaystyle d < N $$, the finite difference coefficients can be obtained by solving the linear system



$$\begin{pmatrix} s_1^0 & \cdots & s_N^0 \\ \vdots & \ddots & \vdots \\ s_1^{N-1} & \cdots & s_N^{N-1} \end{pmatrix} \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix} = d! \begin{pmatrix} \delta_{0,d} \\ \vdots\\ \delta_{i,d}\\ \vdots\\ \delta_{N-1,d} \end{pmatrix}, $$

where $$\delta_{i,j}$$ is the Kronecker delta, equal to one if $$i = j$$, and zero otherwise.

For example, with $$s = [-3, -2, -1, 0, 1]$$ and order of differentiation $$d = 4$$:



$$\begin{pmatrix} a_{1} \\ a_{2} \\ a_{3} \\ a_4 \\ a_5 \end{pmatrix} = \begin{pmatrix} 1 & 1 &  1 & 1 & 1 \\  -3 & -2 & -1 & 0 & 1 \\   9 &  4 &  1 & 0 & 1 \\ -27 & -8 & -1 & 0 & 1 \\  81 & 16 &  1 & 0 & 1 \\ \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 24 \end{pmatrix} = \begin{pmatrix} 1 \\ -4 \\ 6 \\ -4\\  1 \end{pmatrix}. $$

The order of accuracy of the approximation takes the usual form $$O\left(h_{x}^{(N-d)}\right)$$.
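A short NumPy routine (the function name is illustrative) solves this system for any stencil and reproduces the worked example above:

```python
import numpy as np
from math import factorial

def stencil_coefficients(s, d):
    """Coefficients a_1, ..., a_N for the d-th derivative on the
    arbitrary stencil offsets s (requires d < N)."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    A = np.vander(s, increasing=True).T  # A[i, j] = s_j ** i
    rhs = np.zeros(N)
    rhs[d] = factorial(d)                # d! times the Kronecker delta
    return np.linalg.solve(A, rhs)

# Fourth derivative on the stencil [-3, -2, -1, 0, 1]:
a = stencil_coefficients([-3, -2, -1, 0, 1], 4)
```

Here `a` is (up to floating-point round-off) $$1, -4, 6, -4, 1$$, matching the example, with accuracy $$O(h_x^{N-d}) = O(h_x)$$.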