
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form: $$\frac{\mathrm{d}}{\mathrm{d}x}\!\!\left[\,p(x)\frac{\mathrm{d}y}{\mathrm{d}x}\right] + q(x)y = -\lambda\, w(x)y, $$ for given functions $$p(x)$$, $$q(x)$$ and $$w(x)$$, together with some boundary conditions at extreme values of $$x$$. The goals of a given Sturm–Liouville problem are:
 * To find the $λ$ for which there exists a non-trivial solution to the problem. Such values $λ$ are called the eigenvalues of the problem.
 * For each eigenvalue $λ$, to find the corresponding solution $$y = y(x)$$ of the problem. Such functions $$y$$ are called the eigenfunctions associated to each $λ$.

Sturm–Liouville theory is the general study of Sturm–Liouville problems. In particular, for a "regular" Sturm–Liouville problem, it can be shown that there are an infinite number of eigenvalues each with a unique eigenfunction, and that these eigenfunctions form an orthonormal basis of a certain Hilbert space of functions.

This theory is important in applied mathematics, where Sturm–Liouville problems occur very frequently, particularly when dealing with separable linear partial differential equations. For example, in quantum mechanics, the one-dimensional time-independent Schrödinger equation is a Sturm–Liouville problem.

Sturm–Liouville theory is named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882) who developed the theory.

Main results
The main results in Sturm–Liouville theory apply to a Sturm–Liouville problem $$\frac{\mathrm{d}}{\mathrm{d}x}\!\!\left[\,p(x)\frac{\mathrm{d}y}{\mathrm{d}x}\right] + q(x)y = -\lambda\, w(x)y $$ on a finite interval $$[a,b]$$ that is "regular". The problem is said to be regular if:
 * the coefficient functions $$p, q, w$$ and the derivative $$p'$$ are all continuous on $$[a,b]$$;
 * $$p(x) > 0$$ and $$w(x) > 0$$ for all $$x \in [a,b]$$;
 * the problem has separated boundary conditions of the form: $$\alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad \beta_1 y(b) + \beta_2 y'(b) = 0,$$ where $α_1, α_2$ are not both zero and $β_1, β_2$ are not both zero.

The function $$w = w(x)$$, sometimes denoted $$r = r(x)$$, is called the weight or density function.

The goals of a Sturm–Liouville problem are:
 * to find the eigenvalues: those $λ$ for which there exists a non-trivial solution;
 * for each eigenvalue $λ$, to find the corresponding eigenfunction $$y = y(x)$$.

For a regular Sturm–Liouville problem, a function $$y = y(x)$$ is called a solution if it is continuously differentiable and satisfies the Sturm–Liouville equation at every $$x\in (a,b)$$. In the case of more general $$p, q, w$$, the solutions must be understood in a weak sense.

The terms eigenvalue and eigenvector are used because the solutions correspond to the eigenvalues and eigenfunctions of a Hermitian differential operator in an appropriate Hilbert space of functions with inner product defined using the weight function. Sturm–Liouville theory studies the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in the function space.

The main result of Sturm–Liouville theory states that, for any regular Sturm–Liouville problem:
 * The eigenvalues $$\lambda_1, \lambda_2, \dots$$ are real and can be numbered so that $$\lambda_1 < \lambda_2 < \cdots < \lambda_n < \cdots \to \infty;$$
 * Corresponding to each eigenvalue $$\lambda_n$$ is a unique (up to constant multiple) eigenfunction $$y_n = y_n(x)$$ with exactly $$n-1$$ zeros in $$(a,b)$$, called the $n$th fundamental solution.
 * The normalized eigenfunctions $$y_n$$ form an orthonormal basis under the w-weighted inner product in the Hilbert space $$L^2([a,b], w(x)\,\mathrm{d}x)$$; that is, $$ \langle y_n,y_m\rangle = \int_a^b y_n(x)y_m(x)w(x)\,\mathrm{d}x = \delta_{nm},$$ where $$\delta_{nm}$$ is the Kronecker delta.
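
As a quick numerical illustration (a sketch, not part of the standard exposition), the orthonormality relation can be checked for the simplest regular problem $-y'' = \lambda y$ on $[0,\pi]$ with $y(0)=y(\pi)=0$, where $p = w = 1$, $q = 0$ and the normalized eigenfunctions are $y_n(x) = \sqrt{2/\pi}\,\sin nx$:

```python
import numpy as np

# Check <y_n, y_m> = ∫ y_n y_m w dx = δ_nm for the model problem
# -y'' = λ y on [0, π], y(0) = y(π) = 0, whose normalized
# eigenfunctions are y_n(x) = sqrt(2/π) sin(n x) and whose weight is w = 1.
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]

def integral(f):                          # composite trapezoid rule
    return float(np.sum((f[:-1] + f[1:]) / 2.0) * h)

ys = [np.sqrt(2.0 / np.pi) * np.sin(n * x) for n in range(1, 5)]
gram = np.array([[integral(yn * ym) for ym in ys] for yn in ys])
print(np.round(gram, 8))                  # ≈ 4×4 identity matrix
```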

Reduction to Sturm–Liouville form
The differential equation above is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear homogeneous ordinary differential equations can be recast in this form by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if $y$ is a vector). Some examples are below.

Bessel equation
$$x^2y'' + xy' + \left(x^2-\nu^2\right)y = 0$$ which can be written in Sturm–Liouville form (first by dividing through by $x$, then by collapsing the first two terms on the left into one term) as $$\left(xy'\right)'+ \left (x-\frac{\nu^2} x \right )y=0.$$

Legendre equation
$$\left(1-x^2\right)y''-2xy'+\nu(\nu+1)y=0$$ which can easily be put into Sturm–Liouville form, since $\frac{\mathrm{d}}{\mathrm{d}x}\left(1-x^2\right) = -2x$, so the Legendre equation is equivalent to $$\left (\left(1-x^2\right)y' \right )'+\nu(\nu+1)y=0$$
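
As a sanity check (an illustration, not from the original text), the Legendre polynomial $P_2(x) = (3x^2-1)/2$, which solves the Legendre equation with $\nu = 2$, can be substituted numerically into the Sturm–Liouville form above:

```python
import numpy as np

# Substitute y = P_2(x) = (3x² - 1)/2 with ν = 2 into the
# Sturm–Liouville form ((1 - x²) y')' + ν(ν + 1) y and check that
# the residual vanishes (up to finite-difference error).
x = np.linspace(-0.9, 0.9, 100001)
y = (3.0 * x**2 - 1.0) / 2.0
yp = np.gradient(y, x)                    # y'
residual = np.gradient((1.0 - x**2) * yp, x) + 2.0 * (2.0 + 1.0) * y
err = np.max(np.abs(residual[2:-2]))      # trim first-order edge stencils
print(err)                                # ≈ 0 up to discretization error
```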

Example using an integrating factor
$$x^3y''-xy'+2y=0$$

Divide throughout by $x^{3}$: $$y''-\frac{1}{x^2}y'+\frac{2}{x^3}y=0$$

Multiplying throughout by an integrating factor of $$\mu(x) =\exp\left(\int -\frac{dx}{x^2}\right)=e^{{1}/{x}},$$ gives $$e^{{1}/{x}}y''-\frac{e^{{1}/{x}}}{x^2} y'+ \frac{2 e^{{1}/{x}}}{x^3} y = 0$$ which can be easily put into Sturm–Liouville form since $$\frac{d}{dx} e^{{1}/{x}} = -\frac{e^{{1}/{x}}}{x^2} $$ so the differential equation is equivalent to $$\left (e^{{1}/{x}}y' \right )'+\frac{2 e^{{1}/{x}}}{x^3} y = 0.$$

Integrating factor for general second-order homogeneous equation
$$P(x)y'' + Q(x)y' + R(x)y=0$$

Multiplying through by the integrating factor $$\mu(x) = \frac 1 {P(x)} \exp \left(\int \frac{Q(x)}{P(x)} \, dx\right),$$ and then collecting gives the Sturm–Liouville form: $$\frac{d}{dx} \left(\mu(x)P(x)y'\right) + \mu(x)R(x)y = 0,$$ or, explicitly: $$\frac{d}{dx} \left(\exp\left (\int \frac{Q(x)}{P(x)} \,dx\right)y' \right )+\frac{R(x)}{P(x)} \exp \left(\int \frac{Q(x)}{P(x)}\, dx\right) y = 0.$$
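
The key identity behind this rewriting is $(\mu P)' = \mu Q$, which is what lets the first two terms collapse into a single derivative. A small numerical check (a sketch using the earlier example $P = x^3$, $Q = -x$, on an interval away from the singularity at $0$):

```python
import numpy as np

# Verify (μP)' = μQ for μ(x) = (1/P) exp(∫ Q/P dx) with P = x³, Q = -x.
# Here ∫ Q/P dx = ∫ -dx/x² = 1/x, so μ = e^{1/x}/x³ and μP = e^{1/x}.
x = np.linspace(1.0, 2.0, 100001)
P, Q = x**3, -x
mu = np.exp(1.0 / x) / x**3

lhs = np.gradient(mu * P, x)              # (μP)'
rhs = mu * Q
err = np.max(np.abs((lhs - rhs)[1:-1]))   # interior points: central differences
print(err)                                # ≈ 0 up to finite-difference error
```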

Sturm–Liouville equations as self-adjoint differential operators
The mapping defined by: $$Lu = -\frac{1}{w(x)} \left(\frac{d}{dx}\left[p(x)\,\frac{du}{dx}\right]+q(x)u \right)$$ can be viewed as a linear operator $L$ mapping a function $u$ to another function $Lu$, and it can be studied in the context of functional analysis. In fact, the Sturm–Liouville equation can be written as $$Lu = \lambda u.$$

This is precisely the eigenvalue problem; that is, one seeks eigenvalues $λ_{1}, λ_{2}, λ_{3},...$ and the corresponding eigenvectors $u_{1}, u_{2}, u_{3},...$ of the operator $L$. The proper setting for this problem is the Hilbert space $L^2([a,b],w(x)\,dx)$ with scalar product $$ \langle f, g\rangle = \int_a^b \overline{f(x)} g(x)w(x)\, dx.$$

In this space $L$ is defined on sufficiently smooth functions which satisfy the above regular boundary conditions. Moreover, $L$ is a self-adjoint operator: $$ \langle L f, g \rangle = \langle f, L g \rangle .$$

This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of $L$ corresponding to different eigenvalues are orthogonal. However, this operator is unbounded and hence existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem, one looks at the resolvent $$\left (L - z\right)^{-1}, \qquad z \in \Reals,$$ where $z$ is not an eigenvalue. Then, computing the resolvent amounts to solving a nonhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem, this integral operator is compact, and existence of a sequence of eigenvalues $\alpha_n$ which converge to 0 and eigenfunctions which form an orthonormal basis follows from the spectral theorem for compact operators. Finally, note that $$\left(L-z\right)^{-1} u = \alpha u, \qquad L u = \left(z+\alpha^{-1}\right) u,$$ are equivalent, so we may take $$\lambda = z+\alpha^{-1}$$ with the same eigenfunctions.
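
These spectral properties can be illustrated numerically (a sketch, not a proof): a symmetric finite-difference discretization of $L = -\mathrm{d}^2/\mathrm{d}x^2$ on $[0,\pi]$ with Dirichlet conditions is a real symmetric matrix, mirroring the self-adjointness $\langle Lf, g\rangle = \langle f, Lg\rangle$, so its eigenvalues are real (approximating $\lambda_n = n^2$) and its eigenvectors are mutually orthogonal:

```python
import numpy as np

# Standard three-point discretization of L = -d²/dx² on [0, π]
# with Dirichlet boundary conditions: a real symmetric tridiagonal matrix.
N = 800
h = np.pi / N
main = 2.0 * np.ones(N - 1) / h**2
off = -1.0 * np.ones(N - 2) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals, evecs = np.linalg.eigh(A)          # symmetric => real spectrum
print(evals[:4])                          # ≈ [1, 4, 9, 16]
# eigh returns orthonormal eigenvectors: evecs.T @ evecs ≈ identity
```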

If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls $L$ singular. In this case, the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of a Sturm–Liouville equation.

Application to inhomogeneous second-order boundary value problems
Consider a general inhomogeneous second-order linear differential equation $$P(x)y'' + Q(x)y' +R(x)y = f$$ for given functions $$P(x), Q(x), R(x),f(x)$$. As before, this can be reduced to the Sturm–Liouville form $$Ly = f$$: writing a general Sturm–Liouville operator as: $$Lu = \frac{p}{w(x)}u'' + \frac{p'}{w(x)}u' + \frac{q}{w(x)}u,$$ one solves the system: $$p = Pw,\quad p' = Qw,\quad q = Rw.$$

It suffices to solve the first two equations, which amounts to solving $(Pw)′ = Qw$, or $$ w' = \frac{Q-P'}{P}w:= \alpha w.$$

A solution is:

$$w = \exp\left(\int\alpha \, dx\right), \quad p = P \exp\left(\int\alpha \, dx\right), \quad q = R \exp\left(\int\alpha \, dx\right).$$

Given this transformation, one is left to solve: $$Ly = f.$$

In general, if initial conditions at some point are specified, for example $y(a) = 0$ and $y'(a) = 0$, a second-order differential equation can be solved using ordinary methods, and the Picard–Lindelöf theorem ensures that the differential equation has a unique solution in a neighbourhood of the point where the initial conditions have been specified.

But if in place of specifying initial values at a single point, it is desired to specify values at two different points (so-called boundary values), e.g. $y(a) = 0$ and $y(b) = 1$, the problem turns out to be much more difficult. Notice that by adding a suitable known differentiable function to $y$, whose values at $a$ and $b$ satisfy the desired boundary conditions, and injecting it inside the proposed differential equation, it can be assumed without loss of generality that the boundary conditions are of the form $y(a) = 0$ and $y(b) = 0$.

Here, the Sturm–Liouville theory comes in play: indeed, a large class of functions $f$ can be expanded in terms of a series of orthonormal eigenfunctions $u_i$ of the associated Liouville operator with corresponding eigenvalues $λ_i$: $$f(x) = \sum_i \alpha_i u_i(x), \quad \alpha_i \in {\mathbb R}.$$

Then a solution to the proposed equation is evidently: $$ y = \sum_i \frac{\alpha_i}{\lambda_i} u_i.$$

This solution will be valid only over the open interval $a < x < b$, and may fail at the boundaries.

Example: Fourier series
Consider the Sturm–Liouville problem: $$Lu = -\frac{\mathrm{d}^2u}{\mathrm{d}x^2} = \lambda u$$

for the unknowns $λ$ and $u(x)$. For boundary conditions, we take for example: $$ u(0) = u(\pi) = 0.$$

Observe that if $k$ is any integer, then the function $$ u_k(x) = \sin kx$$ is a solution with eigenvalue $λ = k^{2}$. We know that the solutions of a Sturm–Liouville problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition) we conclude that the Sturm–Liouville problem in this case has no other eigenvectors.

Given the preceding, let us now solve the inhomogeneous problem $$Ly = x, \qquad x\in(0,\pi)$$ with the same boundary conditions $$y(0) = y(\pi) = 0$$. In this case, we must expand $f(x) = x$ as a Fourier series. The reader may check, either by integrating $\int e^{ikx}x\,\mathrm{d}x$ or by consulting a table of Fourier transforms, that we thus obtain $$Ly = \sum_{k=1}^\infty -2\frac{\left(-1\right)^k} k \sin kx.$$

This particular Fourier series is troublesome because of its poor convergence properties: it is not clear a priori whether the series converges pointwise. However, since the Fourier coefficients are "square-summable", the Fourier series converges in $L^{2}$, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability and, at jump points (the function $x$, considered as a periodic function, has a jump at $\pi$), converge to the average of the left and right limits (see convergence of Fourier series).

Therefore, by using the expansion $y = \sum_i \frac{\alpha_i}{\lambda_i} u_i$ from above, we obtain the solution: $$y=\sum_{k=1}^\infty \frac{-2(-1)^k}{k^3}\sin kx= \tfrac 1 6 (\pi^2 x - x^3).$$
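
The eigenfunction-expansion solution can be checked numerically (a sketch; the partial sums converge quickly because the coefficients decay like $1/k^3$):

```python
import numpy as np

# Check the expansion solution of L y = -y'' = x, y(0) = y(π) = 0:
# with α_k = -2(-1)^k/k (the sine coefficients of x) and λ_k = k²,
# the series Σ (α_k/λ_k) sin kx should match the closed form (π²x - x³)/6.
x = np.linspace(0.0, np.pi, 2001)
k = np.arange(1, 201)
coeffs = -2.0 * (-1.0) ** k / k**3        # α_k / λ_k
series = coeffs @ np.sin(np.outer(k, x))  # 200-term partial sum
closed = (np.pi**2 * x - x**3) / 6.0
err = np.max(np.abs(series - closed))
print(err)                                # small: tail is O(1/K²)
```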

In this case, we could have found the answer using antidifferentiation, but this is no longer useful in most cases when the differential equation is in many variables.

Normal modes
Certain partial differential equations can be solved with the help of Sturm–Liouville theory. Suppose we are interested in the vibrational modes of a thin membrane, held in a rectangular frame, $0 ≤ x ≤ L_{1}$, $0 ≤ y ≤ L_{2}$. The equation of motion for the membrane's vertical displacement, $W(x,y,t)$, is given by the wave equation: $$\frac{\partial^2W}{\partial x^2}+\frac{\partial^2W}{\partial y^2} = \frac 1 {c^2} \frac{\partial^2W}{\partial t^2}.$$

The method of separation of variables suggests looking first for solutions of the simple form $W = X(x) \times Y(y) \times T(t)$. For such a function $W$ the partial differential equation becomes $\frac{X''}{X} + \frac{Y''}{Y} = \frac{1}{c^2}\frac{T''}{T}$. Since the three terms of this equation are functions of $x, y, t$ separately, they must be constants. For example, the first term gives $X'' = \lambda X$ for a constant $λ$. The boundary conditions ("held in a rectangular frame") are $W = 0$ when $x = 0$, $L_{1}$ or $y = 0$, $L_{2}$ and define the simplest possible Sturm–Liouville eigenvalue problems as in the example, yielding the "normal mode solutions" for $W$ with harmonic time dependence, $$W_{mn}(x,y,t) = A_{mn} \sin\left(\frac{m\pi x}{L_1}\right) \sin\left(\frac{n\pi y}{L_2}\right)\cos\left(\omega_{mn}t\right)$$ where $m$ and $n$ are non-zero integers, $A_{mn}$ are arbitrary constants, and $$\omega^2_{mn} = c^2 \left(\frac{m^2\pi^2}{L_1^2}+\frac{n^2\pi^2}{L_2^2}\right).$$
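
A normal mode can be verified numerically (a sketch with illustrative values of $L_1$, $L_2$, $c$, $m$, $n$): at any fixed time $W_{tt} = -\omega_{mn}^2 W$, so the wave equation reduces to checking $W_{xx} + W_{yy} \approx -(\omega_{mn}^2/c^2)\,W$:

```python
import numpy as np

# Check that W_mn(x, y, t) = sin(mπx/L1) sin(nπy/L2) cos(ω t) solves the
# membrane wave equation, with ω² = c²(m²π²/L1² + n²π²/L2²).
L1, L2, c, m, n, t = 2.0, 3.0, 1.5, 2, 3, 0.4
h = 0.005
x = np.arange(0.0, L1 + h / 2, h)
y = np.arange(0.0, L2 + h / 2, h)
X, Y = np.meshgrid(x, y, indexing="ij")
omega2 = c**2 * ((m * np.pi / L1)**2 + (n * np.pi / L2)**2)
W = np.sin(m * np.pi * X / L1) * np.sin(n * np.pi * Y / L2) * np.cos(np.sqrt(omega2) * t)

# second central differences in the interior of the grid
Wxx = (W[2:, 1:-1] - 2 * W[1:-1, 1:-1] + W[:-2, 1:-1]) / h**2
Wyy = (W[1:-1, 2:] - 2 * W[1:-1, 1:-1] + W[1:-1, :-2]) / h**2
residual = Wxx + Wyy + (omega2 / c**2) * W[1:-1, 1:-1]
print(np.max(np.abs(residual)))           # O(h²) discretization error only
```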

The functions $W_{mn}$ form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution $W$ can be decomposed into a sum of these modes, which vibrate at their individual frequencies $\omega_{mn}$. This representation may require a convergent infinite sum.

Second-order linear equation
Consider a linear second-order differential equation in one spatial dimension and first-order in time of the form: $$ f(x) \frac{\partial^2 u}{\partial x^2} + g(x) \frac{\partial u}{\partial x} + h(x) u= \frac{\partial u}{\partial t} + k(t) u,$$ $$ u(a,t)=u(b,t)=0, \qquad u(x,0)=s(x). $$

Separating variables, we assume that $$u(x,t) = X(x) T(t). $$ Then our above partial differential equation may be written as: $$\frac{\hat{L} X(x)}{X(x)} = \frac{\hat{M} T(t)}{T(t)}$$ where $$ \hat{L}=f(x) \frac{d^2}{dx^2}+g(x) \frac{d}{dx}+h(x), \qquad \hat{M} = \frac{d}{dt} + k(t).$$

Since, by definition, $\hat{L}$ and $X(x)$ are independent of time $t$, and $\hat{M}$ and $T(t)$ are independent of position $x$, both sides of the above equation must be equal to a constant: $$ \hat{L} X(x) =\lambda X(x),\qquad X(a)=X(b)=0,\qquad \hat{M} T(t) =\lambda T(t).$$

The first of these equations must be solved as a Sturm–Liouville problem in terms of the eigenfunctions $X_{n}(x)$ and eigenvalues $λ_{n}$. The second of these equations can be analytically solved once the eigenvalues are known.

$$ \frac{d}{dt} T_n (t)= \bigl(\lambda_n -k(t)\bigr) T_n (t) $$ $$ T_n (t) = a_n \exp \left(\lambda_n t -\int_0^t k(\tau) \, d\tau\right) $$ $$ u(x,t) =\sum_n a_n X_n (x) \exp \left(\lambda_n t -\int_0^t k(\tau) \, d\tau\right) $$ $$ a_n =\frac{\bigl\langle X_n (x), s(x)\bigr\rangle}{\bigl\langle X_n(x),X_n (x)\bigr\rangle}$$

where $$ \bigl\langle y(x),z(x)\bigr\rangle = \int_a^b y(x) z(x) w(x) \, dx, $$ $$ w(x)= \frac{\exp \left(\int \frac{g(x)}{f(x)} \, dx\right)}{f(x)}. $$
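
As a concrete instance (a sketch, not from the original text), take $f = 1$, $g = h = 0$, $k(t) = 0$, so that the equation is the heat equation $u_t = u_{xx}$ on $(0,\pi)$ with weight $w(x) = 1$; the eigenpairs are $X_n = \sin nx$, $\lambda_n = -n^2$, and the recipe above can be checked against an exact solution:

```python
import numpy as np

# Eigenfunction-expansion solution of u_t = u_xx on (0, π),
# u(0,t) = u(π,t) = 0, u(x,0) = s(x).  Here X_n = sin(nx), λ_n = -n²,
# so T_n(t) = a_n exp(λ_n t) and a_n = <X_n, s> / <X_n, X_n> with w = 1.
x = np.linspace(0.0, np.pi, 4001)
s = np.sin(x) + 0.5 * np.sin(3 * x)       # initial condition

def inner(f, g):                          # w(x) = 1 inner product (trapezoid)
    d = f * g
    return float(np.sum((d[:-1] + d[1:]) / 2.0) * (x[1] - x[0]))

t = 0.25
u = np.zeros_like(x)
for n in range(1, 20):
    Xn = np.sin(n * x)
    a_n = inner(Xn, s) / inner(Xn, Xn)
    u += a_n * Xn * np.exp(-n**2 * t)

exact = np.exp(-t) * np.sin(x) + 0.5 * np.exp(-9 * t) * np.sin(3 * x)
print(np.max(np.abs(u - exact)))          # ≈ 0 up to quadrature error
```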

Representation of solutions and numerical calculation
The Sturm–Liouville differential equation with boundary conditions may be solved analytically, either exactly or to a controlled approximation, by the Rayleigh–Ritz method, or by the matrix-variational method of Gerck et al.

Numerically, a variety of methods are also available. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.


 * Shooting methods
 * Finite difference method
 * Spectral parameter power series method

Shooting methods
Shooting methods proceed by guessing a value of $λ$, solving an initial value problem defined by the boundary conditions at one endpoint, say $a$, of the interval $[a,b]$, comparing the value this solution takes at the other endpoint $b$ with the other desired boundary condition, and finally increasing or decreasing $λ$ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues.
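
A minimal shooting computation (a sketch for the model problem $-y'' = \lambda y$, $y(0) = y(\pi) = 0$, whose smallest eigenvalue is $\lambda_1 = 1$; the RK4 integrator, bracket and iteration count are illustrative choices):

```python
import numpy as np

# Shooting for the smallest eigenvalue of -y'' = λy, y(0) = y(π) = 0.
# For a trial λ, integrate the IVP y(0) = 0, y'(0) = 1 with classical RK4,
# then bisect on the endpoint mismatch y(π; λ).
def shoot(lam, steps=2000):
    h = np.pi / steps
    y, v = 0.0, 1.0                       # y(0) = 0, any nonzero slope
    f = lambda y, v: (v, -lam * y)        # y' = v, v' = -λ y
    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + h/2 * k1[0], v + h/2 * k1[1])
        k3 = f(y + h/2 * k2[0], v + h/2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y                              # value at the right endpoint

lo, hi = 0.5, 1.5                         # y(π; λ) changes sign across λ_1 = 1
for _ in range(50):
    mid = (lo + hi) / 2.0
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam1 = (lo + hi) / 2.0
print(lam1)                               # ≈ 1.0
```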

Spectral parameter power series method
The spectral parameter power series (SPPS) method makes use of a generalization of the following fact about homogeneous second-order linear ordinary differential equations: if $y$ is a solution of the Sturm–Liouville equation that does not vanish at any point of $[a,b]$, then the function $$ y(x) \int_a^x \frac{dt}{p(t)y(t)^2} $$ is a solution of the same equation and is linearly independent from $y$. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value $λ_0^*$ (often $λ_0^* = 0$; it does not need to be an eigenvalue) and any solution $y_{0}$ of the equation with $λ = λ_0^*$ which does not vanish on $[a,b]$. (Ways to find appropriate $y_{0}$ and $λ_0^*$ are discussed below.) Two sequences of functions $X^{(n)}(t)$, $\tilde X^{(n)}(t)$ on $[a,b]$, referred to as iterated integrals, are defined recursively as follows. First when $n = 0$, they are taken to be identically equal to 1 on $[a,b]$. To obtain the next functions they are multiplied alternately by $\frac{1}{py_0^2}$ and $wy_0^2$ and integrated, specifically, for $n > 0$ (with the sign convention matching the equation $\left(py'\right)' + qy = -\lambda wy$): $$X^{(n)}(t) = \begin{cases} \displaystyle\int_a^t X^{(n-1)}(s)\,\frac{\mathrm{d}s}{p(s)\,y_0(s)^2} & n \text{ odd},\\ \displaystyle -\int_a^t X^{(n-1)}(s)\, y_0(s)^2\, w(s)\,\mathrm{d}s & n \text{ even},\end{cases} \qquad \tilde X^{(n)}(t) = \begin{cases} \displaystyle -\int_a^t \tilde X^{(n-1)}(s)\, y_0(s)^2\, w(s)\,\mathrm{d}s & n \text{ odd},\\ \displaystyle\int_a^t \tilde X^{(n-1)}(s)\,\frac{\mathrm{d}s}{p(s)\,y_0(s)^2} & n \text{ even}.\end{cases}$$
The resulting iterated integrals are now applied as coefficients in the following two power series in $λ$: $$ u_0 = y_0 \sum_{k=0}^\infty \left (\lambda-\lambda_0^* \right )^k \tilde X^{(2k)},$$ $$ u_1 = y_0 \sum_{k=0}^\infty \left (\lambda-\lambda_0^* \right )^k X^{(2k+1)}.$$ Then for any $λ$ (real or complex), $u_{0}$ and $u_{1}$ are linearly independent solutions of the corresponding equation. (The functions $p(x)$ and $q(x)$ take part in this construction through their influence on the choice of $y_{0}$.)

Next one chooses coefficients $c_{0}$ and $c_{1}$ so that the combination $y = c_{0}u_{0} + c_{1}u_{1}$ satisfies the first boundary condition. This is simple to do since $X^{(n)}(a) = 0$ and $\tilde X^{(n)}(a) = 0$, for $n > 0$. The values of $X^{(n)}(b)$ and $\tilde X^{(n)}(b)$ provide the values of $u_{0}(b)$ and $u_{1}(b)$ and the derivatives $u_{0}'(b)$ and $u_{1}'(b)$, so the second boundary condition becomes an equation in a power series in $λ$. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in $λ$ whose roots are approximations of the sought-after eigenvalues.

When $λ = λ_0^*$, this reduces to the original construction described above for a solution linearly independent from a given one. The representations of $u_0$ and $u_1$ above also have theoretical applications in Sturm–Liouville theory.
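
A minimal numerical sketch of the SPPS recipe for the model problem $-y'' = \lambda y$ on $[0,\pi]$, $y(0) = y(\pi) = 0$ (so $p = w = 1$, $q = 0$, and $y_0 \equiv 1$ is a nonvanishing solution for $\lambda_0^* = 0$); the truncation order and grid size are illustrative:

```python
import numpy as np

# SPPS for (p y')' + q y = -λ w y with p = w = 1, q = 0 on [0, π].
# With λ*_0 = 0 and y0 ≡ 1, the iterated integrals alternate the factors
# 1/(p y0²) = 1 and w y0² = 1 (the minus sign encodes the -λ w y convention).
# Since u1(a) = 0, the condition y(0) = 0 selects u1, and roots of the
# truncated power series u1(π; λ) approximate the eigenvalues (λ_1 = 1).
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]

def cumint(f):                            # cumulative trapezoid integral from 0
    out = np.zeros_like(f)
    out[1:] = np.cumsum((f[:-1] + f[1:]) / 2.0) * h
    return out

K = 12                                    # truncation order in λ
Xn = np.ones_like(x)                      # X^(0) ≡ 1
odd_X = []                                # stores X^(1), X^(3), ..., X^(2K+1)
for n in range(1, 2 * K + 2):
    if n % 2 == 1:
        Xn = cumint(Xn)                   # factor 1/(p y0²) = 1, then integrate
        odd_X.append(Xn)
    else:
        Xn = -cumint(Xn)                  # factor -w y0² = -1, then integrate

def u1_end(lam):                          # truncated series for u1(π; λ)
    return sum(lam**k * odd_X[k][-1] for k in range(K + 1))

lo, hi = 0.5, 1.5                         # u1(π; λ) changes sign across λ_1 = 1
for _ in range(60):
    mid = (lo + hi) / 2.0
    if u1_end(lo) * u1_end(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam1 = (lo + hi) / 2.0
print(lam1)                               # ≈ 1.0
```

For this problem the iterated integrals reproduce $X^{(2k+1)} = (-1)^k x^{2k+1}/(2k+1)!$, so the truncated series is just the Taylor polynomial of $\sin(\sqrt{\lambda}\,x)/\sqrt{\lambda}$, whose first root in $\lambda$ is the first eigenvalue.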

Construction of a nonvanishing solution
The SPPS method can, itself, be used to find a starting solution $y_{0}$. Consider the equation $(py')' = \mu qy$; i.e., $q$, $w$, and $λ$ are replaced by $0$, $-q$, and $\mu$ respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue $μ_{0} = 0$. While there is no guarantee that $u_{0}$ or $u_{1}$ will not vanish, the complex function $y_{0} = u_{0} + iu_{1}$ will never vanish, because two linearly-independent solutions of a regular Sturm–Liouville equation cannot vanish simultaneously, as a consequence of the Sturm separation theorem. This trick gives a solution $y_{0}$ of the original equation for the value $λ_0^* = 0$. In practice, if the original equation has real coefficients, the solutions based on $y_{0}$ will have very small imaginary parts which must be discarded.