
Lawrence F. Shampine and Skip Thompson (2007), Scholarpedia, 2(3):2861. doi:10.4249/scholarpedia.2861

Initial Value Problems
Most general-purpose programs for the numerical solution of ordinary differential equations expect the equations to be presented as an explicit system of first order equations, $$y' = F(t,y),$$ that are to hold on a finite interval $$[t_0, t_f]$$. An initial value problem specifies the solution of interest by an initial condition $$y(t_0) = A$$. The solvers also expect that $$F(t,y)$$ is continuous in a region that includes $$A$$ and that the partial derivatives $$\partial F_i/\partial y_j$$ are bounded there, assumptions that imply the initial value problem has a solution, and only one.

A boundary value problem specifies a solution of interest by imposing conditions at more than one point. For instance, the Blasius problem is the differential equation $$y''' = - y \, y''/2$$ with boundary conditions $$y(0) = 0, y'(0) = 0, y'(\infty) = 1$$. This example is quite unusual in that a transformation of the solution of the initial value problem provides a solution of the boundary value problem. Generally existence and uniqueness of solutions are much more complicated for boundary value problems than initial value problems, especially because it is not uncommon that the interval is infinite or $$F(t,y)$$ is not smooth. Correspondingly, solving boundary value problems numerically is rather different from solving initial value problems.

Differential equations arise in the most diverse forms, so it is necessary to prepare them for solution. The usual way to write a set of equations as a first order system is to introduce an unknown for each dependent variable in the original set of equations plus an unknown for each derivative up to one less than the highest appearing. For the Blasius problem above we could let $$y_1(t) = u(t), y_2(t) = u'(t), y_3(t) = u''(t)$$, to obtain

$$y_1' = y_2$$

$$y_2' = y_3$$

$$y_3' = - y_1 \, y_3/2$$

with initial conditions $$y_1(0) = 0, y_2(0) = 0, y_3(0) = 1$$.
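A first-order system like this is exactly the form general-purpose solvers accept. As a minimal sketch, assuming SciPy's `solve_ivp` is available, the Blasius initial value problem can be integrated as follows; the finite interval `[0, 10]` is an illustrative stand-in for the infinite interval, since $$u''$$ decays rapidly:

```python
# Sketch: integrate the Blasius IVP written as a first-order system.
# Assumes SciPy; the interval [0, 10] is an illustrative choice.
from scipy.integrate import solve_ivp

def blasius(t, y):
    # y = [y1, y2, y3] = [u, u', u'']
    return [y[1], y[2], -y[0] * y[2] / 2]

sol = solve_ivp(blasius, [0.0, 10.0], [0.0, 0.0, 1.0],
                rtol=1e-8, atol=1e-10)
# y2 = u' levels off for large t, which is what makes the scaling
# transformation to the boundary value problem possible.
print(sol.y[1, -1])
```

The computed $$u'(t)$$ approaches a constant limit, and rescaling the solution so that this limit becomes 1 yields the solution of the boundary value problem.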

The van der Pol equation $$\epsilon y'' - (1 - y^2)\,y' + y = 0$$ is equivalent to

$$y_1' = y_2$$

$$\epsilon y_2' = -y_1 + (1 - y_1^2)y_2$$

Clearly the character of this equation changes when the parameter $$\epsilon$$ is set to 0. Although we can solve such a system numerically for any given $$\epsilon > 0$$, we need singular perturbation theory to understand how the solution depends on $$\epsilon$$. The system with $$\epsilon = 0$$ is a differential-algebraic equation. Differential-algebraic equations resemble ordinary differential equations, but they differ in important ways. Nonetheless, there are programs that accept equations in the implicit form $$F(t,y,y') = 0$$ and solve initial value problems for both ordinary differential equations and certain kinds of differential-algebraic equations.

Van der Pol's equation for small $$\epsilon > 0$$ is an example of a stiff system. Programs intended for non-stiff initial value problems perform very poorly when applied to a stiff system. Although most initial value problems are not stiff, many important problems are, so special methods have been developed that solve them effectively. An initial value problem is stiff in regions where $$y(t)$$ is slowly varying and the differential equation is very stable, i.e., nearby solutions of the equation converge very rapidly to $$y(t)$$.
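The poor performance of non-stiff methods on stiff problems is easy to observe. Here is a sketch, assuming SciPy, that integrates the van der Pol system for a small $$\epsilon$$ with a non-stiff solver (`RK45`) and a stiff solver (`BDF`); the value of $$\epsilon$$, the interval, and the initial values are illustrative choices, not taken from the article:

```python
# Sketch: non-stiff vs. stiff solver on the van der Pol system.
# eps, the interval [0, 2], and y0 = [2, 0] are illustrative choices.
from scipy.integrate import solve_ivp

eps = 1e-3

def vdp(t, y):
    return [y[1], (-y[0] + (1 - y[0]**2) * y[1]) / eps]

nonstiff = solve_ivp(vdp, [0.0, 2.0], [2.0, 0.0], method='RK45')
stiff = solve_ivp(vdp, [0.0, 2.0], [2.0, 0.0], method='BDF')

# The non-stiff method is forced by stability to take tiny steps,
# so it needs far more evaluations of F than the stiff method.
print(nonstiff.nfev, stiff.nfev)
```

Both solvers reach the end of the interval, but the non-stiff method's step size is limited by stability rather than accuracy on the slowly varying stretches of the solution, which is the hallmark of stiffness described above.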

Discrete Variable Methods
Discrete variable methods approximate $$y(t)$$ on a mesh $$t_0 < t_1 < \ldots < t_f$$. They start with the initial value $$y_0 = A$$ and on reaching $$t_n$$, step to $$t_{n+1} = t_n + h_n$$ by computing $$y_{n+1} \approx y(t_{n+1})$$. Methods are often analyzed with the assumption that the step sizes $$h_n$$ are constant, but general-purpose solvers vary the step size for reasons discussed below. A great many methods have been proposed, but three kinds dominate: Runge-Kutta, Adams, and BDFs (backward differentiation formulas). On reaching $$t_n$$ there are available values $$y_n,y_{n-1},\ldots $$ and $$F_n = F(t_n,y_n), F_{n-1},\ldots $$ that might be exploited. Methods with memory like Adams and BDFs use some of these values; one-step methods like Runge-Kutta evaluate $$F$$ only at points in $$[t_n,t_{n+1}]$$.
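To make the one-step idea concrete, here is a minimal sketch (an illustrative implementation, not code from the article) of the classical fourth-order Runge-Kutta formula taken with a constant step size $$h$$, applied to the test equation $$y' = -y$$, $$y(0) = 1$$:

```python
# Sketch: the classical fourth-order Runge-Kutta formula as a
# one-step method on a fixed mesh. Test problem y' = -y, y(0) = 1
# is an illustrative choice; exact solution is exp(-t).
import math

def rk4_step(F, t, y, h):
    # Four evaluations of F, all at points inside [t, t + h]:
    # a one-step method uses no values from earlier steps.
    k1 = F(t, y)
    k2 = F(t + h / 2, y + h / 2 * k1)
    k3 = F(t + h / 2, y + h / 2 * k2)
    k4 = F(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

F = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = rk4_step(F, t, y, h)
    t += h

print(abs(y - math.exp(-1.0)))  # global error is O(h^4)
```

A method with memory, such as an Adams formula, would instead combine the stored values $$F_n, F_{n-1}, \ldots$$ from previous mesh points, trading extra storage for fewer fresh evaluations of $$F$$ per step.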
