Picard–Lindelöf theorem

In mathematics, specifically the study of differential equations, the Picard–Lindelöf theorem gives a set of conditions under which an initial value problem has a unique solution. It is also known as Picard's existence theorem, the Cauchy–Lipschitz theorem, or the existence and uniqueness theorem.

The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy.

Theorem
Let $$D \subseteq \R \times \R^n$$ be a closed rectangle with $$(t_0, y_0) \in \operatorname{int} D$$, the interior of $$D$$. Let $$f: D \to \R^n$$ be a function that is continuous in $$t$$ and Lipschitz continuous in $$y$$ (with a Lipschitz constant independent of $$t$$). Then there exists some $$\varepsilon > 0$$ such that the initial value problem

$$y'(t)=f(t,y(t)),\qquad y(t_0)=y_0$$

has a unique solution $$y(t)$$ on the interval $$[t_0-\varepsilon, t_0+\varepsilon]$$.

Proof sketch
The proof relies on transforming the differential equation into an equivalent integral equation and applying the Banach fixed-point theorem. Integrating both sides shows that any function satisfying the differential equation must also satisfy the integral equation


 * $$y(t) - y(t_0) = \int_{t_0}^t f(s,y(s)) \, ds.$$

A simple proof of existence of the solution is obtained by successive approximations. In this context, the method is known as Picard iteration.

Set
 * $$\varphi_0(t)=y_0$$

and
 * $$\varphi_{k+1}(t)=y_0+\int_{t_0}^t f(s,\varphi_k(s))\,ds.$$

It can then be shown, by using the Banach fixed-point theorem, that the sequence of "Picard iterates" $$\varphi_k$$ is convergent and that its limit is a solution to the problem. An application of Grönwall's lemma to $$|\varphi(t) - \psi(t)|$$, where $$\varphi$$ and $$\psi$$ are two solutions, shows that $$\varphi(t) = \psi(t)$$, thus proving global uniqueness (local uniqueness is a consequence of the uniqueness of the Banach fixed point).

Compare Newton's method of successive approximation.
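The successive-approximation scheme can be sketched numerically. Below is a minimal stdlib-only illustration (the test equation $$y'=y$$, $$y(0)=1$$, the grid size, and the number of sweeps are illustrative choices, not part of the theorem): each sweep applies the integral operator via the trapezoidal rule, and the iterates settle down to an approximation of the exact solution $$e^t$$.

```python
# Numeric sketch of Picard iteration on a grid for y' = y, y(0) = 1,
# whose exact solution is exp(t). Each sweep applies
#   phi_{k+1}(t) = 1 + integral_0^t phi_k(s) ds
# with the trapezoidal rule on a uniform grid.
import math

N, T = 1000, 1.0          # grid resolution and interval length (illustrative)
h = T / N
ts = [i * h for i in range(N + 1)]

phi = [1.0] * (N + 1)     # phi_0(t) = 1, the constant initial guess
for _ in range(30):       # Picard sweeps; 30 is far more than needed on [0, 1]
    new = [1.0]
    acc = 0.0
    for i in range(N):
        acc += 0.5 * h * (phi[i] + phi[i + 1])   # trapezoid increment
        new.append(1.0 + acc)
    phi = new

# remaining error is dominated by the quadrature step, not by the iteration
err = max(abs(p - math.exp(t)) for p, t in zip(phi, ts))
print(err)
```

The fixed point of the discretized operator differs from $$e^t$$ only by the $$O(h^2)$$ trapezoid error, so the printed error is tiny once the iteration has converged.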

Example of Picard iteration
Consider the equation $$y'(t)=1+y(t)^2$$ with initial condition $$y(0)=0$$, whose solution is $$y(t)=\tan(t)$$. Starting with $$\varphi_0(t)=0,$$ we iterate


 * $$\varphi_{k+1}(t)=\int_0^t (1+(\varphi_k(s))^2)\,ds$$

so that $$ \varphi_n(t) \to y(t)$$:


 * $$\varphi_1(t)=\int_0^t (1+0^2)\,ds = t$$


 * $$\varphi_2(t)=\int_0^t (1+s^2)\,ds = t + \frac{t^3}{3}$$


 * $$\varphi_3(t)=\int_0^t \left(1+\left(s + \frac{s^3}{3}\right)^2\right)\,ds = t + \frac{t^3}{3} + \frac{2t^5}{15} + \frac{t^7}{63}$$

and so on. Evidently, the iterates reproduce the Taylor series expansion of the known solution $$y=\tan(t)$$ to ever higher order. Since $$\tan$$ has poles at $$\pm\tfrac{\pi}{2},$$ the iteration converges toward a local solution only for $$|t|<\tfrac{\pi}{2},$$ not on all of $$\R$$.
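The iterates above can be reproduced with exact rational arithmetic. The following stdlib-only sketch represents each polynomial as a list of coefficients (an implementation choice; `p[k]` is the coefficient of $$t^k$$):

```python
# Exact Picard iteration for y' = 1 + y^2, y(0) = 0, with rational
# polynomial arithmetic; polynomials are coefficient lists.
from fractions import Fraction

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_integrate(p):
    # integral from 0 to t: the coefficient of t**k becomes p[k-1] / k
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def picard_step(phi):
    # phi_{k+1}(t) = integral_0^t (1 + phi_k(s)**2) ds
    return poly_integrate(poly_add([Fraction(1)], poly_mul(phi, phi)))

phi = [Fraction(0)]          # phi_0 = 0
iterates = []
for _ in range(3):
    phi = picard_step(phi)
    iterates.append(phi)

# coefficient lists of phi_2 = t + t^3/3 and
# phi_3 = t + t^3/3 + 2 t^5/15 + t^7/63
print(iterates[1])
print(iterates[2])
```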

Example of non-uniqueness
To understand uniqueness of solutions, consider the following examples. A differential equation can possess a stationary point. For example, for the equation $$\tfrac{dy}{dt} = ay$$ ($$a<0$$), the stationary solution is $$y(t) = 0$$, obtained from the initial condition $$y(0) = 0$$. Beginning with another initial condition $$y(0) = y_0 \neq 0$$, the solution $$y(t)$$ tends toward the stationary point but reaches it only in the limit of infinite time, so uniqueness of solutions (over all finite times) is guaranteed.

However, for an equation in which the stationary solution is reached after a finite time, uniqueness fails. This happens, for example, for the equation $$\tfrac{dy}{dt} = ay^{2/3}$$, which has at least two solutions corresponding to the initial condition $$y(0) = 0$$: $$y(t) = 0$$ and


 * $$y(t)=\begin{cases} \left (\tfrac{at}{3} \right )^{3} & t<0\\ \ \ \ \ 0 & t \ge 0, \end{cases}$$

so the previous state of the system is not uniquely determined by its state after $$t = 0$$. The uniqueness theorem does not apply because the function $$f(y) = y^{2/3}$$ has an infinite slope at $$y = 0$$ and therefore is not Lipschitz continuous there, violating the hypothesis of the theorem.
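As a quick sanity check of the nonzero branch, a central finite difference confirms numerically that $$y(t) = (at/3)^3$$ satisfies $$\tfrac{dy}{dt} = a\,y^{2/3}$$ for $$t < 0$$ (the value $$a = -1$$ and the helper names are illustrative choices):

```python
# Finite-difference check that y(t) = (a*t/3)**3 solves dy/dt = a * y**(2/3)
# for t < 0, using the illustrative value a = -1 (so y > 0 on t < 0 and the
# real power y**(2/3) is well defined).
a = -1.0

def y(t):
    return (a * t / 3.0) ** 3

def dydt(t, h=1e-6):
    # central finite difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

for t in (-3.0, -1.0, -0.5):
    rhs = a * y(t) ** (2.0 / 3.0)
    assert abs(dydt(t) - rhs) < 1e-6
print("branch satisfies the ODE at the sampled points")
```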

Detailed proof
Let


 * $$C_{a,b}=\overline{I_a(t_0)}\times\overline{B_b(y_0)}$$

where:


 * $$\begin{align} \overline{I_a(t_0)}&=[t_0-a,t_0+a] \\ \overline{B_b(y_0)}&=[y_0-b,y_0+b]. \end{align}$$

This is the compact cylinder where $$f$$ is defined. Let


 * $$M = \sup_{C_{a,b}}\|f\|,$$

that is, the supremum of (the absolute values of) the slopes of the function. Finally, let $$L$$ be the Lipschitz constant of $$f$$ with respect to the second variable.

We will proceed to apply the Banach fixed-point theorem using the metric on $$\mathcal{C}(I_{a}(t_0),B_b(y_0))$$ induced by the uniform norm


 * $$\| \varphi \|_\infty = \sup_{t \in I_a} | \varphi(t)|.$$

We define an operator on this space of continuous functions, Picard's operator, as follows:


 * $$\Gamma:\mathcal{C}(I_{a}(t_0),B_b(y_0)) \longrightarrow \mathcal{C}(I_{a}(t_0),B_b(y_0))$$

defined by:


 * $$\Gamma \varphi(t) = y_0 + \int_{t_0}^{t} f(s,\varphi(s)) \, ds.$$

We must show that this operator maps a complete non-empty metric space into itself and is a contraction mapping.

We first show that, given certain restrictions on $$a$$, $$\Gamma$$ takes $$\overline{B_b(y_0)}$$ into itself in the space of continuous functions with the uniform norm. Here, $$\overline{B_b(y_0)}$$ is a closed ball in the space of continuous (and bounded) functions "centered" at the constant function $$y_0$$. Hence we need to show that


 * $$\| \varphi -y_0 \|_\infty \le b$$

implies
 * $$\left\| \Gamma\varphi-y_0 \right\|_\infty = \left\| \Gamma\varphi(t')-y_0 \right\| = \left\|\int_{t_0}^{t'} f(s,\varphi(s))\, ds \right\| \leq \int_{t_0}^{t'} \left\|f(s,\varphi(s))\right\| ds \leq \int_{t_0}^{t'} M\, ds = M \left|t'-t_0 \right| \leq M a \leq b$$

where $$t'$$ is a point of $$[t_0-a, t_0+a]$$ at which the maximum is achieved. The last inequality in the chain is true if we impose the requirement $$a < \frac{b}{M}$$.

Now let's prove that this operator is a contraction mapping.

Given two functions $$\varphi_1,\varphi_2\in\mathcal{C}(I_{a}(t_0),B_b(y_0))$$, in order to apply the Banach fixed-point theorem we require


 * $$ \left \| \Gamma \varphi_1 - \Gamma \varphi_2 \right\|_\infty \le q \left\| \varphi_1 - \varphi_2 \right\|_\infty,$$

for some $$0 \leq q < 1$$. So let $$t$$ be such that


 * $$\| \Gamma \varphi_1 - \Gamma \varphi_2 \|_\infty = \left\| \left(\Gamma\varphi_1 - \Gamma\varphi_2 \right)(t) \right\|.$$

Then using the definition of $$\Gamma$$,


 * $$\begin{align} \left\|\left(\Gamma\varphi_1 - \Gamma\varphi_2 \right)(t) \right\| &= \left\|\int_{t_0}^t \left( f(s,\varphi_1(s))-f(s,\varphi_2(s)) \right)ds \right\|\\ &\leq \int_{t_0}^t \left\|f \left(s,\varphi_1(s)\right)-f\left(s,\varphi_2(s) \right) \right\| ds \\ &\leq L \int_{t_0}^t \left\|\varphi_1(s)-\varphi_2(s) \right\|ds && \text{since } f \text{ is Lipschitz-continuous} \\ &\leq L \int_{t_0}^t \left\|\varphi_1-\varphi_2 \right\|_\infty \,ds \\ &\leq La \left\|\varphi_1-\varphi_2 \right\|_\infty \end{align}$$

This is a contraction if $$a < \tfrac{1}{L}.$$

We have established that the Picard operator is a contraction on this Banach space with the metric induced by the uniform norm. This allows us to apply the Banach fixed-point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function


 * $$\varphi\in \mathcal{C}(I_a (t_0),B_b(y_0))$$

such that $$\Gamma\varphi = \varphi$$. This function is the unique solution of the initial value problem, valid on the interval $$I_a$$ where $$a$$ satisfies the condition


 * $$a < \min \left \{ \tfrac{b}{M}, \tfrac{1}{L} \right \}.$$

Optimization of the solution's interval
We wish to remove the dependence of the interval $$I_a$$ on $$L$$. To this end, there is a corollary of the Banach fixed-point theorem: if an operator $$T^n$$ is a contraction for some $$n \in \N$$, then $$T$$ has a unique fixed point. Before applying this theorem to the Picard operator, recall the following:

 * $$\left \| \Gamma^m \varphi_1(t) - \Gamma^m\varphi_2(t) \right \| \leq \frac{L^m|t-t_0|^m}{m!} \left \| \varphi_1-\varphi_2 \right \|$$

Proof. Induction on m. For the base of the induction ($m = 1$) we have already seen this, so suppose the inequality holds for $m − 1$, then we have: $$\begin{align} \left \| \Gamma^m \varphi_1(t) - \Gamma^m\varphi_2(t) \right \| &= \left \|\Gamma\Gamma^{m-1} \varphi_1(t) - \Gamma\Gamma^{m-1}\varphi_2(t) \right \| \\ &\leq \left| \int_{t_0}^t \left \| f \left (s,\Gamma^{m-1}\varphi_1(s) \right )-f \left (s,\Gamma^{m-1}\varphi_2(s) \right )\right \| ds \right| \\ &\leq L \left| \int_{t_0}^t \left \|\Gamma^{m-1}\varphi_1(s)-\Gamma^{m-1}\varphi_2(s)\right \| ds\right| \\ &\leq L \left| \int_{t_0}^t \frac{L^{m-1}|s-t_0|^{m-1}}{(m-1)!}  \left \| \varphi_1-\varphi_2\right \| ds\right| \\ &\leq \frac{L^m |t-t_0|^m }{m!} \left \|\varphi_1 - \varphi_2 \right \|. \end{align}$$

By taking a supremum over $$ t \in [t_0 - \alpha, t_0 + \alpha] $$ we see that $$\left \| \Gamma^m \varphi_1 - \Gamma^m\varphi_2 \right \| \leq \frac{L^m\alpha^m}{m!}\left \|\varphi_1-\varphi_2\right \|$$.

This inequality assures that for some large $$m$$, $$\frac{L^m\alpha^m}{m!}<1,$$ and hence $$\Gamma^m$$ will be a contraction. So by the previous corollary $$\Gamma$$ has a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking $$\alpha = \min\left\{a, \tfrac{b}{M}\right\}$$.
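The factorial in the denominator wins over the exponential in the numerator, which can be seen numerically; with illustrative values of $$L$$ and $$\alpha$$, the factor $$L^m\alpha^m/m!$$ drops below 1 once $$m$$ is large enough:

```python
# Numeric sketch: (L*alpha)**m / m! -> 0 as m grows, so Gamma^m is
# eventually a contraction. L and alpha below are illustrative values,
# not derived from any particular f.
from math import factorial

L, alpha = 10.0, 2.0   # hypothetical Lipschitz constant and half-width

def contraction_factor(m):
    return (L * alpha) ** m / factorial(m)

# find the first m for which the factor dips below 1
m = 1
while contraction_factor(m) >= 1:
    m += 1
print(m, contraction_factor(m))
```

Even for a large product $$L\alpha = 20$$, a finite $$m$$ suffices, which is exactly what the corollary needs.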

In the end, this result shows that the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value.

Other existence theorems
The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that $$f$$ is continuous in $$y$$, instead of Lipschitz continuous. For example, the right-hand side of the equation $$\tfrac{dy}{dt} = y^{1/3}$$ with initial condition $$y(0) = 0$$ is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has at least three solutions:


 * $$y(t) = 0, \qquad y(t) = \pm\left (\tfrac23 t\right)^{\frac{3}{2}}$$.
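A finite-difference check of the positive branch (for $$t>0$$; the negative branch is analogous via the real cube root) confirms it solves the equation. This is a quick numerical sketch with illustrative helper names:

```python
# Check numerically that y(t) = (2t/3)**(3/2) solves y' = y**(1/3) for t > 0.
def y(t):
    return (2.0 * t / 3.0) ** 1.5

def dydt(t, h=1e-6):
    # central finite difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

for t in (0.5, 1.0, 2.0):
    assert abs(dydt(t) - y(t) ** (1.0 / 3.0)) < 1e-6
print("branch satisfies the ODE at the sampled points")
```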

Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on $$f$$. Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura's theorem.

Global existence of solution
The Picard–Lindelöf theorem ensures that solutions to initial value problems exist uniquely within a local interval $$[t_0-\varepsilon, t_0+\varepsilon]$$, possibly dependent on each solution. The behavior of solutions beyond this local interval can vary depending on the properties of $$f$$ and the domain over which $$f$$ is defined. For instance, if $$f$$ is globally Lipschitz, then the local interval of existence of each solution can be extended to the entire real line, and all solutions are defined over all of $$\R$$.

If $$f$$ is only locally Lipschitz, some solutions may not be defined for certain values of $$t$$, even if $$f$$ is smooth. For instance, the differential equation $$\tfrac{dy}{dt} = y^2$$ with initial condition $$y(0) = 1$$ has the solution $$y(t) = 1/(1-t)$$, which is not defined at $$t = 1$$. Nevertheless, if $$f$$ is a differentiable function defined over a compact subset of $$\R^n$$, then the initial value problem has a unique solution defined over the entire $$\R$$. A similar result exists in differential geometry: if $$f$$ is a differentiable vector field defined over a domain which is a compact smooth manifold, then all of its trajectories (integral curves) exist for all time.
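The blow-up can be made concrete: a finite difference confirms that $$y(t)=1/(1-t)$$ does satisfy $$y'=y^2$$ on $$[0,1)$$, while the values grow without bound as $$t \to 1$$ (a sketch; the helper names are illustrative):

```python
# Finite-difference check that y(t) = 1/(1 - t) solves y' = y**2 on [0, 1),
# illustrating finite-time blow-up even though f(y) = y**2 is smooth.
def y(t):
    return 1.0 / (1.0 - t)

def dydt(t, h=1e-7):
    # central finite difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

for t in (0.0, 0.5, 0.9):
    assert abs(dydt(t) - y(t) ** 2) < 1e-4

print(y(0.999))  # already on the order of 1000: y escapes to infinity as t -> 1
```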