User:Garhorn/Runge-Kutta

In numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians Carl Runge and Martin Wilhelm Kutta.

See the article on numerical ordinary differential equations for more background and other methods. See also List of Runge-Kutta methods.

The classical fourth-order Runge–Kutta method
One member of the family of Runge–Kutta methods is so commonly used that it is often referred to as "RK4" or simply as "the Runge–Kutta method".

Let an initial value problem be specified as follows.


 * $$ y' = f(t, y), \quad y(t_0) = y_0. $$

Then, the RK4 method for this problem is given by the following equations:


 * $$\begin{align}
y_{n+1} &= y_n + {h \over 6} \left(k_1 + 2k_2 + 2k_3 + k_4 \right) \\
t_{n+1} &= t_n + h
\end{align}$$

where $$ y_{n+1}$$ is the RK4 approximation of $$ y(t_{n+1}) $$, and


 * $$\begin{align}
k_1 &= f \left( t_n, y_n \right) \\
k_2 &= f \left( t_n + {h \over 2}, y_n + {h \over 2} k_1\right) \\
k_3 &= f \left( t_n + {h \over 2}, y_n + {h \over 2} k_2\right) \\
k_4 &= f \left( t_n + h, y_n + h k_3 \right)
\end{align}$$

Thus, the next value $$y_{n+1}$$ is determined by the present value $$y_n$$ plus the product of the size of the interval $$h$$ and an estimated slope. The slope is a weighted average of slopes:


 * $$k_1$$ is the slope at the beginning of the interval;
 * $$k_2$$ is the slope at the midpoint of the interval, using slope $$k_1$$ to determine the value of $$y$$ at the point $$t_n + h/2$$ using Euler's method;
 * $$k_3$$ is again the slope at the midpoint, but now using the slope $$k_2$$ to determine the $$y$$-value;
 * $$k_4$$ is the slope at the end of the interval, with its $$y$$-value determined using $$k_3$$.

In averaging the four slopes, greater weight is given to the slopes at the midpoint:


 * $$\mbox{slope} = \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}.$$

The RK4 method is a fourth-order method, meaning that the error per step is on the order of $$h^5$$, while the total accumulated error has order $$h^4$$.

Note that the above formulas are valid for both scalar- and vector-valued functions (i.e., $$y$$ can be a vector and $$f$$ an operator). For example, one can integrate the Schrödinger equation using the Hamiltonian operator as the function $$f$$.
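As a concrete illustration (a minimal sketch, not from the original text; function names are illustrative), the RK4 update above can be written in Python for the scalar case:

```python
def rk4_step(f, t, y, h):
    """Advance the solution of y' = f(t, y) by one RK4 step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)


def rk4_solve(f, t0, y0, h, n):
    """Apply n RK4 steps starting from the initial condition y(t0) = y0."""
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return t, y
```

For example, for $$y' = y$$, $$y(0) = 1$$, ten steps of size $$h = 0.1$$ reproduce $$e \approx 2.71828$$ to roughly five decimal places, consistent with the $$O(h^4)$$ global error.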

Explicit Runge–Kutta methods
The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by
 * $$ y_{n+1} = y_n + h\sum_{i=1}^s b_i k_i, $$

where
 * $$ k_1 = f(t_n, y_n), \, $$
 * $$ k_2 = f(t_n+c_2h, y_n+a_{21}hk_1), \, $$
 * $$ k_3 = f(t_n+c_3h, y_n+a_{31}hk_1+a_{32}hk_2), \, $$
 * $$ \vdots $$
 * $$ k_s = f(t_n+c_sh, y_n+a_{s1}hk_1+a_{s2}hk_2+\cdots+a_{s,s-1}hk_{s-1}). $$
 * (Note: different texts use different but equivalent conventions for these equations.)

To specify a particular method, one needs to provide the integer $$s$$ (the number of stages), and the coefficients $$a_{ij}$$ (for $$1 \le j < i \le s$$), $$b_i$$ (for $$i = 1, 2, \ldots, s$$) and $$c_i$$ (for $$i = 2, 3, \ldots, s$$). These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):
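$$
\begin{array}{c|ccccc}
0 \\
c_2 & a_{21} \\
c_3 & a_{31} & a_{32} \\
\vdots & \vdots & & \ddots \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} \\
\hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s \\
\end{array}
$$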

The Runge–Kutta method is consistent if
 * $$\sum_{j=1}^{i-1} a_{ij} = c_i\ \mathrm{for}\ i=2, \ldots, s.$$

There are also accompanying requirements if we require the method to have a certain order $$p$$, meaning that the truncation error is $$O(h^{p+1})$$. These can be derived from the definition of the truncation error itself. For example, a 2-stage method has order 2 if $$b_1 + b_2 = 1$$, $$b_2 c_2 = 1/2$$, and $$b_2 a_{21} = 1/2$$.
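These conditions are easy to verify numerically. The snippet below (illustrative, not from the original text) checks them for the two-stage midpoint method discussed in the Examples section, which has $$b = (0, 1)$$, $$c_2 = 1/2$$ and $$a_{21} = 1/2$$:

```python
# Coefficients of the midpoint method: b = (0, 1), c2 = 1/2, a21 = 1/2.
b1, b2 = 0.0, 1.0
c2, a21 = 0.5, 0.5

# Consistency (order 1) and the two order-2 conditions.
assert b1 + b2 == 1.0
assert b2 * c2 == 0.5
assert b2 * a21 == 0.5
```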

Examples
The RK4 method falls in this framework. Its tableau is:
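$$
\begin{array}{c|cccc}
0 \\
1/2 & 1/2 \\
1/2 & 0 & 1/2 \\
1 & 0 & 0 & 1 \\
\hline
 & 1/6 & 1/3 & 1/3 & 1/6 \\
\end{array}
$$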

However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula $$ y_{n+1} = y_n + hf(t_n,y_n) $$. This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is:
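$$
\begin{array}{c|c}
0 & 0 \\
\hline
 & 1 \\
\end{array}
$$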

An example of a second-order method with two stages is provided by the midpoint method
 * $$ y_{n+1} = y_n + hf\left(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n, y_n)\right). $$

The corresponding tableau is:
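$$
\begin{array}{c|cc}
0 \\
1/2 & 1/2 \\
\hline
 & 0 & 1 \\
\end{array}
$$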

Note that this 'midpoint' method is not the optimal RK2 method. An alternative is provided by Heun's method. If one wants to minimize the truncation error, Ralston's method should be used (Atkinson p. 423). Other important methods are the Fehlberg, Cash–Karp and Dormand–Prince methods. See also the article on adaptive stepsize.

Usage
The following is an example usage of a two-stage explicit Runge–Kutta method:
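Here we use Ralston's second-order method; its Butcher tableau (reconstructed from the defining equations given below) is

$$
\begin{array}{c|cc}
0 \\
2/3 & 2/3 \\
\hline
 & 1/4 & 3/4 \\
\end{array}
$$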

to solve the initial-value problem
 * $$ y' = (\tan{y})+1,\quad y(1)=1,\ t\in [1, 1.1]$$

with step size h=0.025.

The tableau above yields the following equations defining the method:
 * $$ k_1 = y_n \,$$
 * $$ k_2 = y_n + \tfrac{2}{3} h f(t_n, k_1) \,$$
 * $$ y_{n+1} = y_n + h \left( \tfrac{1}{4} f(t_n, k_1) + \tfrac{3}{4} f\left(t_n + \tfrac{2}{3} h, k_2\right) \right) \,$$

The numerical solutions correspond to the underlined values. Note that each $$f(t_i, k_1)$$ is calculated once per step and reused, to avoid recalculation in computing the $$y_i$$.
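The computation can be reproduced with a short Python script (a sketch; variable names follow the equations above):

```python
import math

def f(t, y):
    # Right-hand side of the problem: y' = tan(y) + 1
    return math.tan(y) + 1.0

t, y = 1.0, 1.0   # initial condition y(1) = 1
h = 0.025

for _ in range(4):   # four steps of size 0.025 cover [1, 1.1]
    k1 = y
    k2 = y + 2.0 / 3.0 * h * f(t, k1)
    y = y + h * (0.25 * f(t, k1) + 0.75 * f(t + 2.0 / 3.0 * h, k2))
    t += h
```

This yields $$y(1.1) \approx 1.335$$.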

Adaptive Runge-Kutta methods
The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge-Kutta step. This is done by having two methods in the tableau, one with order $$p$$ and one with order $$p - 1$$.

The lower-order step is given by
 * $$ y^*_{n+1} = y_n + h\sum_{i=1}^s b^*_i k_i, $$

where the $$k_i$$ are the same as for the higher order method. Then the error is
 * $$ e_{n+1} = y_{n+1} - y^*_{n+1} = h\sum_{i=1}^s (b_i - b^*_i) k_i, $$

which is $$O(h^p)$$. The Butcher Tableau for this kind of method is extended to give the values of $$b^*_i$$:
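In compact form, the extended tableau appends a second row of weights:

$$
\begin{array}{c|c}
\mathbf{c} & A \\
\hline
 & \mathbf{b}^T \\
 & \mathbf{b}^{*T} \\
\end{array}
$$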

The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher Tableau is:

However, the simplest adaptive Runge-Kutta method involves combining the Heun method, which is order 2, with the Euler method, which is order 1. Its extended Butcher Tableau is:
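$$
\begin{array}{c|cc}
0 \\
1 & 1 \\
\hline
 & 1/2 & 1/2 \\
 & 1 & 0 \\
\end{array}
$$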

The error estimate is used to control the stepsize.
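As a sketch (illustrative names; the step-acceptance rule mentioned afterwards is a common choice, not prescribed by the text), the Heun–Euler pair can be implemented as:

```python
def heun_euler_step(f, t, y, h):
    """One embedded step: Heun's method (order 2) with an Euler (order 1)
    companion. Both reuse the same k1, so the error estimate
    e = y_heun - y_euler comes essentially for free."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_heun = y + h / 2 * (k1 + k2)   # b  = (1/2, 1/2)
    y_euler = y + h * k1             # b* = (1, 0)
    return y_heun, y_heun - y_euler
```

A simple controller accepts the step when $$|e|$$ is below a tolerance and rescales the step, e.g. $$h \leftarrow 0.9\, h\, (\mathrm{tol}/|e|)^{1/2}$$.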

Implicit Runge-Kutta methods
The implicit methods are more general than the explicit ones. The distinction shows up in the Butcher Tableau: for an implicit method, the coefficient matrix $$a_{ij}$$ is not necessarily lower triangular:



$$
\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \dots & a_{1s} \\
c_2 & a_{21} & a_{22} & \dots & a_{2s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\
\hline
 & b_1 & b_2 & \dots & b_s \\
\end{array}
=
\begin{array}{c|c}
\mathbf{c} & A \\
\hline
 & \mathbf{b}^T \\
\end{array}
$$

The approximate solution to the initial value problem reflects the greater number of coefficients:


 * $$y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i\,$$


 * $$k_i = f\left(t_n + c_i h, y_n + h \sum_{j = 1}^s a_{ij} k_j\right).$$

Because the matrix $$a_{ij}$$ can be full, the evaluation of each $$k_i$$ is now considerably more involved and dependent on the specific function $$f(t, y)$$. Despite the difficulties, implicit methods are of great importance due to their high (possibly unconditional) stability, which is especially important in the solution of partial differential equations. The simplest example of an implicit Runge-Kutta method is the backward Euler method:


 * $$y_{n + 1} = y_n + h f(t_n + h, y_{n + 1})\,$$

The Butcher Tableau for this is simply:



$$
\begin{array}{c|c}
1 & 1 \\
\hline
 & 1 \\
\end{array}
$$

It can be difficult to make sense of even this simple implicit method, as seen from the expression for $$k_1$$:


 * $$k_1 = f(t_n + c_1 h, y_n + h a_{11} k_1) \rightarrow k_1 = f(t_n + h, y_n + h k_1).$$

In this case, the awkward expression above can be simplified by noting that


 * $$y_{n+1} = y_n + h k_1 \rightarrow h k_1 = y_{n+1} - y_n\,$$

so that


 * $$k_1 = f(t_n + h, y_n + y_{n+1} - y_n) = f(t_n + h, y_{n+1}).\,$$

from which


 * $$y_{n + 1} = y_n + h f(t_n + h, y_{n + 1})\,$$

follows. Though simpler than the "raw" representation before manipulation, this is an implicit relation, so the actual solution procedure is problem dependent. Implicit methods have been used with success by some researchers. The combination of stability, higher-order accuracy with fewer steps, and stepping that depends only on the previous value makes them attractive; however, the complicated problem-specific implementation and the fact that $$k_i$$ must often be approximated iteratively mean that they are not common.
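To make the iterative approximation concrete, here is a minimal sketch (illustrative; production codes typically use Newton's method, especially for stiff problems) of a backward Euler step resolved by fixed-point iteration:

```python
def backward_euler_step(f, t, y, h, tol=1e-12, max_iter=100):
    """One backward Euler step: solve y1 = y + h*f(t + h, y1) for y1
    by fixed-point iteration, started from an explicit Euler predictor.
    Convergence requires h * |df/dy| < 1 near the solution."""
    y1 = y + h * f(t, y)  # explicit Euler predictor
    for _ in range(max_iter):
        y_next = y + h * f(t + h, y1)
        if abs(y_next - y1) < tol:
            return y_next
        y1 = y_next
    raise RuntimeError("fixed-point iteration did not converge")
```

For the linear test problem $$y' = -y$$ the implicit relation can be solved exactly, $$y_{n+1} = y_n/(1 + h)$$, which the iteration reproduces.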