General linear methods

General linear methods (GLMs) are a large class of numerical methods used to obtain numerical solutions to ordinary differential equations. They include multistage Runge–Kutta methods, which use intermediate collocation points, as well as linear multistep methods, which save a finite time history of the solution. John C. Butcher coined the term for these methods and has written a series of review papers, a book chapter, and a textbook on the topic; his collaborator Zdzislaw Jackiewicz has also written an extensive textbook on the subject. Methods in this class were first proposed by Butcher (1965), Gear (1965), and Gragg and Stetter (1964).

Some definitions
Numerical methods for first-order ordinary differential equations approximate solutions to initial value problems of the form


 * $$ y' = f(t,y), \quad y(t_0) = y_0. $$

The result is approximations for the value of $$ y(t) $$ at discrete times $$ t_i $$:


 * $$ y_i \approx y(t_i) \quad\text{where}\quad t_i = t_0 + i h, $$

where h is the time step (sometimes referred to as $$ \Delta t $$).
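The simplest member of the class discussed below is the forward Euler method, which produces such approximations with one evaluation of $$f$$ per step. A minimal sketch (the test problem and step count are illustrative):

```python
import math

def euler(f, t0, y0, h, n_steps):
    """Forward Euler: advance y_{i+1} = y_i + h f(t_i, y_i) on the
    grid t_i = t0 + i*h, returning the grid points and approximations."""
    ts, ys = [t0], [y0]
    for i in range(n_steps):
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
        ts.append(t0 + (i + 1) * h)
    return ts, ys

# Test problem y' = -y, y(0) = 1, whose exact solution is y(t) = exp(-t).
ts, ys = euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
err = abs(ys[-1] - math.exp(-1.0))
```

With this step size the first-order method lands within about 2e-3 of the exact value at $$t = 1$$.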

A description of the method
We follow Butcher (2006), pp. 189–190, for our description, although these methods are described elsewhere as well.

General linear methods make use of two integers: $$r$$, the number of time points in history, and $$s$$, the number of collocation points. In the case of $$r = 1$$, these methods reduce to classical Runge–Kutta methods, and in the case of $$s = 1$$, they reduce to linear multistep methods.

Stage values $$Y_i$$ and stage derivatives $$F_i,\ i = 1, 2, \dots, s,$$ are computed at time step $$n$$ from the approximations $$y_i^{[n-1]},\ i = 1, \dots, r,$$ carried over from the previous step:



$$ y^{[n-1]} = \left[ \begin{matrix} y_1^{[n-1]} \\ y_2^{[n-1]} \\ \vdots \\ y_r^{[n-1]} \end{matrix} \right], \quad y^{[n]} = \left[ \begin{matrix} y_1^{[n]} \\ y_2^{[n]} \\ \vdots \\ y_r^{[n]} \end{matrix} \right], \quad Y = \left[ \begin{matrix} Y_1 \\ Y_2 \\ \vdots \\ Y_s \end{matrix} \right], \quad F = \left[ \begin{matrix} F_1 \\ F_2 \\ \vdots \\ F_s \end{matrix} \right] = \left[ \begin{matrix} f(Y_1) \\ f(Y_2) \\ \vdots \\ f(Y_s) \end{matrix} \right]. $$

The stage values are defined by two matrices $$A = [a_{ij}]$$ and $$U = [u_{ij}]$$:



$$ Y_i = \sum_{j=1}^s a_{ij} h F_j + \sum_{j=1}^r u_{ij} y_j^{[n-1]}, \qquad i=1,2, \dots, s, $$

and the update to time $$t_n$$ is defined by two matrices $$B = [b_{ij}]$$ and $$V = [v_{ij}]$$:



$$ y_i^{[n]} = \sum_{j=1}^s b_{ij} h F_j + \sum_{j=1}^r v_{ij} y_j^{[n-1]}, \qquad i=1, 2, \dots, r. $$

Given the four matrices $$A, U, B$$ and $$V$$, one can compactly write the analogue of a Butcher tableau as



$$ \left[ \begin{matrix} Y \\ y^{[n]} \end{matrix} \right] = \left[ \begin{matrix} A \otimes I & U \otimes I \\ B \otimes I & V \otimes I \end{matrix} \right] \left[ \begin{matrix} h F \\ y^{[n-1]} \end{matrix} \right], $$ where $$\otimes$$ denotes the Kronecker (tensor) product.
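Since the four matrices fully specify the method, a single time step can be coded generically. The sketch below (the helper name glm_step is our own) assumes an explicit method, i.e. strictly lower-triangular $$A$$, and a scalar autonomous ODE $$y' = f(y)$$; it illustrates the $$r = 1$$ reduction by writing the classical fourth-order Runge–Kutta method in this form:

```python
import numpy as np

def glm_step(A, U, B, V, f, y_hist, h):
    """One step of an explicit general linear method for a scalar,
    autonomous ODE y' = f(y). A must be strictly lower triangular so
    each stage depends only on earlier stage derivatives.

    y_hist : the r quantities y^{[n-1]} carried between steps.
    Returns the updated history vector y^{[n]}.
    """
    s = A.shape[0]
    F = np.zeros(s)
    for i in range(s):
        # Y_i = sum_j a_ij h F_j + sum_j u_ij y_j^{[n-1]}
        Y_i = h * A[i, :i] @ F[:i] + U[i] @ y_hist
        F[i] = f(Y_i)
    # y^{[n]} = B (h F) + V y^{[n-1]}
    return B @ (h * F) + V @ y_hist

# r = 1 special case: classical RK4, whose single history quantity
# is the solution value itself.
A = np.array([[0.0, 0, 0, 0],
              [0.5, 0, 0, 0],
              [0, 0.5, 0, 0],
              [0, 0, 1, 0]])
U = np.ones((4, 1))
B = np.array([[1/6, 1/3, 1/3, 1/6]])
V = np.array([[1.0]])

h, y = 0.1, np.array([1.0])
for _ in range(10):                  # integrate y' = -y from t = 0 to t = 1
    y = glm_step(A, U, B, V, lambda z: -z, y, h)
# y[0] is now a fourth-order-accurate approximation of exp(-1)
```

An implicit method would instead require solving a (generally nonlinear) system for the stage values at each step.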

Examples
We present an example described in Butcher (1996). This method consists of a "predictor" step and a "corrector" step, which uses extra information about the time history, together with a single intermediate stage value.

The intermediate stage value is computed as in a linear multistep method:



$$ y^*_{n-1/2} = y_{n-2} + h \left( \frac{9}{8} f( y_{n-1} ) + \frac{3}{8} f( y_{n-2} ) \right). $$

An initial "predictor" $$y^*_n$$ uses the stage value $$y^*_{n-1/2}$$ together with two pieces of time history:



$$ y^*_n = \frac{28}{5} y_{n-1} - \frac{23}{5} y_{n-2} + h \left( \frac{32}{15} f( y^*_{n-1/2} ) - 4 f( y_{n-1} ) - \frac{26}{15} f( y_{n-2} ) \right), $$

and the final update is given by



$$ y_n = \frac{32}{31} y_{n-1} - \frac{1}{31} y_{n-2} + h \left( \frac{5}{31} f( y^*_n ) + \frac{64}{93} f( y^*_{n-1/2} ) + \frac{4}{31} f( y_{n-1} ) - \frac{1}{93} f( y_{n-2} ) \right). $$

The concise table representation for this method is given by



$$ \left[ \begin{array}{ccc|cccc} 0 & 0 & 0 & 0 & 1 & \frac{9}{8} & \frac{3}{8} \\ \frac{32}{15} & 0 & 0 & \frac{28}{5} & -\frac{23}{5} & -4 & -\frac{26}{15} \\ \frac{64}{93} & \frac{5}{31} & 0 & \frac{32}{31} & -\frac{1}{31} & \frac{4}{31} & -\frac{1}{93} \\ \hline \frac{64}{93} & \frac{5}{31} & 0 & \frac{32}{31} & -\frac{1}{31} & \frac{4}{31} & -\frac{1}{93} \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{array} \right]. $$
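As a sanity check, the tableau can be applied directly to the linear test problem $$y' = -y$$. Matching the $$U$$ and $$V$$ blocks against the formulas above, the history vector reads $$y^{[n-1]} = [\,y_{n-1},\ y_{n-2},\ h f(y_{n-1}),\ h f(y_{n-2})\,]$$. The driver code below is our own sketch; it seeds the history from the exact solution and integrates to $$t = 1$$:

```python
import numpy as np

# A|U (stage) and B|V (update) blocks read off the tableau above.
# History vector: [y_{n-1}, y_{n-2}, h f(y_{n-1}), h f(y_{n-2})].
A = np.array([[0.0,   0,    0],
              [32/15, 0,    0],
              [64/93, 5/31, 0]])
U = np.array([[0.0,   1,     9/8,  3/8],
              [28/5, -23/5, -4,   -26/15],
              [32/31, -1/31, 4/31, -1/93]])
B = np.array([[64/93, 5/31, 0.0],
              [0,     0,    0],
              [0,     0,    1],
              [0,     0,    0]])
V = np.array([[32/31, -1/31, 4/31, -1/93],
              [1.0,   0,     0,    0],
              [0,     0,     0,    0],
              [0,     0,     1,    0]])

f = lambda y: -y                     # test problem y' = -y, y(0) = 1
h = 0.01

# Seed the r = 4 history quantities from the exact solution exp(-t),
# so the leading entry y_{n-1} starts at time t = h.
y_hist = np.array([np.exp(-h), 1.0, h * f(np.exp(-h)), h * f(1.0)])

for step in range(99):               # advance the leading entry to t = 1
    F = np.zeros(3)
    for i in range(3):               # explicit stages: A is lower triangular
        F[i] = f(h * A[i, :i] @ F[:i] + U[i] @ y_hist)
    y_hist = B @ (h * F) + V @ y_hist

err = abs(y_hist[0] - np.exp(-1.0))  # compare with the exact solution at t = 1
```

The second and fourth rows of $$(B \mid V)$$ simply shift the stored solution value and stored derivative back one step, while the first and third rows produce $$y_n$$ and $$h f(y_n)$$.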