Talk:Variation of parameters

Example
In the first example, it says "So, we obtain u1=e^(-2x), and u2=xe^(-2x)." This is in my text too, but I never understood why. Could someone add a short comment saying why (e.g. "So, we obtain u1=e^(-2x) (see blahs_rule) and we introduce an x term (see blah2s_rule) as well to yield u2=xe^(-2x)"). 75.128.252.106 (talk) 03:50, 8 February 2008 (UTC)

Using this for first-order ODEs
Can't this also be used for 1st order ODE's? Perhaps I'm thinking of a different method, but I was just looking at my Differential Equations Text, and the method seems the same, except for first order equations. Any thoughts? Gershwinrb 06:34, 1 February 2006 (UTC)


 * Any first-order linear ODE can be solved with little fuss (see Ordinary differential equation), so techniques like the method of variation of parameters are unnecessary. Ruakh 15:29, 1 February 2006 (UTC)


 * It should be noted that variation of parameters works for first-order ODEs just as it does for second-order and higher. But you are correct that conceptually easier techniques are available there. 163.118.103.199 (talk) 18:34, 5 March 2012 (UTC)
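 * The "conceptually easier" first-order route (the integrating factor) can be sketched in sympy; note the equation y' + 2y = x is my illustrative choice, not one taken from this discussion:

```python
import sympy as sp

# Illustrative sketch (my choice of equation): solve y' + 2y = x with an
# integrating factor, the simpler first-order route mentioned above,
# rather than variation of parameters.
x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) + 2*y(x), x)
sol = sp.dsolve(ode, y(x))  # sympy applies the first-order linear formula

# The same answer by hand: multiply by e^(2x), integrate, divide back out.
mu = sp.exp(2*x)                        # integrating factor
particular = sp.integrate(x*mu, x)/mu   # (1/mu) * Integral(x*mu dx)
print(sp.simplify(sol.rhs))             # homogeneous part plus particular part
print(sp.simplify(particular))          # x/2 - 1/4
```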

Should we also have examples for systems of equations and/or higher order ODEs? jleto

u is homogeneous solution?
In the beginning of the Technique section, the article says $$u_1$$ and $$u_2$$ are "solutions" to the equation. It really means solutions to the homogeneous equation, right? If not, I'm totally confused. This should be changed and made clear. Lavaka 05:32, 15 September 2006 (UTC)


 * Good call. I've fixed the article now. Ruakh 13:30, 15 September 2006 (UTC)


 * Thanks! Lavaka 01:55, 20 September 2006 (UTC)

Copy from Ordinary differential equation
I deleted the following text from the Ordinary differential equation where it consumed far too much space. I copied it here in case anyone is able to salvage some parts and integrate them into this article. MathMartin 20:24, 11 December 2006 (UTC)

Method of variation of parameters


As explained above, the general solution to a non-homogeneous, linear differential equation $$y''(x) + p(x) y'(x) + q(x) y(x) = g(x)$$ can be expressed as the sum of the general solution $$y_h(x)$$ to the corresponding homogeneous, linear differential equation $$y''(x) + p(x) y'(x) + q(x) y(x) = 0$$ and any one solution $$y_p(x)$$ to $$y''(x) + p(x) y'(x) + q(x) y(x) = g(x)$$.

Like the method of undetermined coefficients, described above, the method of variation of parameters is a method for finding one solution to $$y''(x) + p(x) y'(x) + q(x) y(x) = g(x)$$, having already found the general solution to $$y''(x) + p(x) y'(x) + q(x) y(x) = 0$$. Unlike the method of undetermined coefficients, which fails except with certain specific forms of g(x), the method of variation of parameters will always work; however, it is significantly more difficult to use.

For a second-order equation, the method of variation of parameters makes use of the following fact:

Fact
Let p(x), q(x), and g(x) be functions, and let $$y_1(x)$$ and $$y_2(x)$$ be solutions to the homogeneous, linear differential equation $$y''(x) + p(x) y'(x) + q(x) y(x) = 0$$. Further, let u(x) and v(x) be functions such that $$u'(x) y_1(x) + v'(x) y_2(x) = 0$$ and $$u'(x) y_1'(x) + v'(x) y_2'(x) = g(x)$$ for all x, and define $$y_p(x) = u(x) y_1(x) + v(x) y_2(x)$$. Then $$y_p(x)$$ is a solution to the non-homogeneous, linear differential equation $$y''(x) + p(x) y'(x) + q(x) y(x) = g(x)$$.

Proof
$$y_p(x) = u(x) y_1(x) + v(x) y_2(x)$$

Since $$u'(x) y_1(x) + v'(x) y_2(x) = 0$$, the first derivative is $$y_p'(x) = u(x) y_1'(x) + v(x) y_2'(x)$$, and then, using the second constraint $$u'(x) y_1'(x) + v'(x) y_2'(x) = g(x)$$,

$$y_p''(x) = g(x) + u(x) y_1''(x) + v(x) y_2''(x)$$

Therefore

$$y_p''(x) + p(x) y_p'(x) + q(x) y_p(x) = g(x) + u(x) y_1''(x) + v(x) y_2''(x) + p(x) u(x) y_1'(x) + p(x) v(x) y_2'(x) + q(x) u(x) y_1(x) + q(x) v(x) y_2(x) $$

$$ = g(x) + u(x) (y_1''(x) + p(x) y_1'(x) + q(x) y_1(x)) + v(x) (y_2''(x) + p(x) y_2'(x) + q(x) y_2(x)) = g(x) + 0 + 0 = g(x)$$

Usage
To solve the second-order, non-homogeneous, linear differential equation $$y''(x) + p(x) y'(x) + q(x) y(x) = g(x)$$ using the method of variation of parameters, use the following steps:


 * 1) Find the general solution to the corresponding homogeneous equation $$y''(x) + p(x) y'(x) + q(x) y(x) = 0$$. Specifically, find two linearly independent solutions $$y_1(x)$$ and $$y_2(x)$$.
 * 2) Since $$y_1(x)$$ and $$y_2(x)$$ are linearly independent solutions, their Wronskian $$y_1(x) y_2'(x) - y_1'(x) y_2(x)$$ is nonzero, so we can compute $$-(g(x) y_2(x))/({y_1(x) y_2'(x) - y_1'(x) y_2(x)})$$ and $$({g(x) y_1(x)})/({y_1(x) y_2'(x) - y_1'(x) y_2(x)})$$. If the former is equal to u'(x) and the latter to v'(x), then u and v satisfy the two constraints given above: that $$u'(x) y_1(x) + v'(x) y_2(x) = 0$$ and that $$u'(x) y_1'(x) + v'(x) y_2'(x) = g(x)$$. We can tell this after multiplying by the denominator and comparing coefficients.
 * 3) Integrate $$-(g(x) y_2(x))/({y_1(x) y_2'(x) - y_1'(x) y_2(x)})$$ and $$({g(x) y_1(x)})/({y_1(x) y_2'(x) - y_1'(x) y_2(x)})$$ to obtain u(x) and v(x), respectively. (Note that we only need one choice of u and v, so there is no need for constants of integration.)
 * 4) Compute $$y_p(x) = u(x) y_1(x) + v(x) y_2(x)$$. The function $$y_p$$ is one solution of $$y''(x) + p(x) y'(x) + q(x) y(x) = g(x)$$.
 * 5) The general solution is $$c_1 y_1(x) + c_2 y_2(x) + y_p(x)$$, where $$c_1$$ and $$c_2$$ are arbitrary constants.
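The five numbered steps above can be traced in sympy; the test equation y'' + y = sec(x) is my illustrative choice (it happens to match the worked example elsewhere on this page):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

# Steps 1-5 above, for the concrete (illustrative) equation y'' + y = sec(x).
g = sp.sec(x)
y1, y2 = sp.cos(x), sp.sin(x)          # step 1: homogeneous solutions

W = y1*y2.diff(x) - y1.diff(x)*y2      # step 2: Wronskian (here W = 1)
up = sp.simplify(-g*y2/W)              #         u'(x) = -tan(x)
vp = sp.simplify(g*y1/W)               #         v'(x) = 1

u = sp.integrate(up, x)                # step 3: integrate
v = sp.integrate(vp, x)                #         (no integration constants needed)

yp = sp.simplify(u*y1 + v*y2)          # step 4: particular solution
yg = C1*y1 + C2*y2 + yp                # step 5: general solution

# Verify that y_p'' + y_p = sec(x)
assert sp.simplify(yp.diff(x, 2) + yp - g) == 0
print(yp)
```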

Higher-order equations
The method of variation of parameters can also be used with higher-order equations. For example, if $$y_1(x)$$, $$y_2(x)$$, and $$y_3(x)$$ are linearly independent solutions to $$y'''(x) + p(x) y''(x) + q(x) y'(x) + r(x) y(x) = 0$$, then there exist functions u(x), v(x), and w(x) such that $$u'(x) y_1(x) + v'(x) y_2(x) + w'(x) y_3(x) = 0$$, $$u'(x) y_1'(x) + v'(x) y_2'(x) + w'(x) y_3'(x) = 0$$, and $$u'(x) y_1''(x) + v'(x) y_2''(x) + w'(x) y_3''(x) = g(x)$$. Having found such functions (by solving algebraically for u'(x), v'(x), and w'(x), then integrating each), we have $$y_p(x) = u(x) y_1(x) + v(x) y_2(x) + w(x) y_3(x)$$, one solution to the equation $$y'''(x) + p(x) y''(x) + q(x) y'(x) + r(x) y(x) = g(x)$$.
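The algebraic step of the higher-order version, solving the three constraint equations for u'(x), v'(x), and w'(x), can be sketched in sympy; the third-order equation y''' + y' = g, with homogeneous solutions 1, cos x, sin x, is my illustrative choice:

```python
import sympy as sp

x = sp.symbols('x')
up, vp, wp = sp.symbols('up vp wp')   # stand-ins for u'(x), v'(x), w'(x)

# Illustrative third-order equation (my choice): y''' + y' = g, whose
# homogeneous solutions are y1 = 1, y2 = cos(x), y3 = sin(x).
g = sp.Function('g')(x)
y1, y2, y3 = sp.Integer(1), sp.cos(x), sp.sin(x)

# The three constraints from the text: rows of y_i, y_i', y_i''.
eqs = [
    sp.Eq(up*y1 + vp*y2 + wp*y3, 0),
    sp.Eq(up*y1.diff(x) + vp*y2.diff(x) + wp*y3.diff(x), 0),
    sp.Eq(up*y1.diff(x, 2) + vp*y2.diff(x, 2) + wp*y3.diff(x, 2), g),
]
sol = sp.solve(eqs, [up, vp, wp])
print(sp.simplify(sol[up]))  # g(x)           (the Wronskian here is 1)
print(sp.simplify(sol[vp]))  # -g(x)*cos(x)
print(sp.simplify(sol[wp]))  # -g(x)*sin(x)
```

Integrating these three expressions (for a concrete g) then gives u, v, w and hence the particular solution.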

Example
Solve the previous example, $$y'' + y = \sec x$$. Recall $$\sec x = \frac{1}{\cos x} = f$$. From the technique learned in 3.1, the LHS has roots $$r = \pm i$$ that yield $$y_c = C_1 \cos x + C_2 \sin x$$ (so $$y_1 = \cos x$$, $$y_2 = \sin x$$) and its derivatives
 * $$\left\{ \begin{matrix} \dot u = \frac{-y_2 f}{W} = \frac{-\sin x \sec x}{1} = -\tan x \\ \dot v = \frac{y_1 f}{W} = \frac{\cos x \sec x}{1} = 1 \\ \end{matrix} \right.$$ where the Wronskian
 * $$W\left( {y_1,y_2 :x} \right) = \left| {\begin{matrix} {\cos x} & {\sin x} \\ { - \sin x} & {\cos x} \\ \end{matrix}} \right| = 1$$ was computed in order to solve for those derivatives.

Upon integration,
 * $$\left\{ \begin{matrix} u = \int { - \tan x\,dx} = - \ln \left| {\sec x} \right| + C = \ln \left| {\cos x} \right| + C \\ v = \int {1\,dx} = x + C \\ \end{matrix} \right.$$ Computing $$y_p$$ and $$y_G$$:
 * $$\begin{matrix} y_p = uy_1 + vy_2 = \cos x\ln \left| {\cos x} \right| + x\sin x \\ y_G = y_c + y_p = C_1 \cos x + C_2 \sin x + x\sin x + \cos x\ln \left| {\cos x} \right| \\ \end{matrix}$$
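As a quick sanity check (my addition, not part of the original example), the particular solution y_p = cos(x)·ln|cos(x)| + x·sin(x) can be verified numerically with a finite-difference second derivative:

```python
import math

# Numeric spot check that y_p = cos(x)*ln|cos(x)| + x*sin(x)
# satisfies y'' + y = sec(x), using a central difference for y_p''.
def yp(x):
    return math.cos(x) * math.log(abs(math.cos(x))) + x * math.sin(x)

h = 1e-5
for x in (0.3, 0.7, 1.1):
    ypp = (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2   # approximate y_p''
    residual = ypp + yp(x) - 1/math.cos(x)
    assert abs(residual) < 1e-4
print("y_p'' + y_p matches sec(x) at the sample points")
```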

Wrong Equation
In the fifth equation of the section Method_of_variation_of_parameters it should be "b(x)" rather than "-b(x)". But I am not completely sure. —The preceding unsigned comment was added by 141.35.186.111 (talk • contribs).

Integrals and dummy variables
I think the integrals should be changed to use dummy variables -- as written now, they are misleading. The current format is $$\int \cos(x)\, dx = \sin(x) $$ but I'd much rather see a dummy variable, e.g. $$\int_0^x \cos(s)\, ds = \sin(x) - \sin(0) $$ or at least $$\int \cos(x)\, ds = \sin(x) $$ Anyone interested in redoing this? --Lavaka 18:20, 17 April 2007 (UTC)


 * and I think
 * $$c_i^'(x) = \frac{b(y) W_i(x)}{W(y)} \, \mathrm{,} \quad i=1,\ldots,n$$
 * should be
 * $$c_i^'(x) = \frac{b(y) W_i(x)}{W(x)} \, \mathrm{,} \quad i=1,\ldots,n$$
 * as well, no? --Lavaka 18:27, 17 April 2007 (UTC)


 * actually, the $$W(y)$$ is a constant, no? This should be made clear, and written $$W$$ --Lavaka 18:47, 17 April 2007 (UTC)


 * The current format is just the typical representation of an indefinite integral. I can't see any problem with it. 129.94.223.121 (talk) —Preceding undated comment added 01:18, 24 August 2009 (UTC).

Mistake in the First Example
The first example calculation involves an integral containing $$f(x)$$,
 * $$A(x) = - \int {1\over W} u_2(x) f(x)\,dx,\; B(x) = \int {1 \over W} u_1(x)f(x)\,dx$$

Nowhere up to this point is $$f(x)$$ defined; in fact, the function on the right-hand side of the ODE (which $$f(x)$$ evidently refers to) has previously been called $$b(x)$$. —Preceding unsigned comment added by 68.49.223.78 (talk) 14:57, 8 March 2010 (UTC)

Annotation regarding coefficient-functions
I am not sure, whether the statement in parenthesis is 100% correct.
 * $$c_i(x)$$ are continuous functions which satisfy the equations
 * $$\sum_{i=1}^{n} c_i^'(x) y_i^{(j)}(x) = 0 \, \mathrm{,} \quad j = 0,\ldots, n-2$$    (iv) "(results from substitution of (iii) into the homogeneous case (ii); )"

How is one supposed to conclude this from the described substitution?? —Preceding unsigned comment added by 93.104.136.99 (talk) 14:47, 19 September 2010 (UTC)


 * You are right, that part is assumed, not computed. I changed it.
 * 99.126.180.28 (talk) 22:36, 2 February 2011 (UTC)

Why?
Why is equation 3 true? Please tell, if you know and have some time. Eq. 2 looks like the definition of a null space, and the $$y_i$$ form a basis of the null space, a subspace of the vector space of — Preceding unsigned comment added by 108.236.198.181 (talk) 23:34, 2 June 2013 (UTC)

Newton's law and variation of parameters
It seems to me that someone should write a brief motivation (for second order ODEs) of the method via Newton's law. The left-hand side of a nonhomogeneous second-order ODE
 * $$mx'' + \mu x' + k x = b(t)$$

is the net force on a mass attached to a damped spring. The right-hand side is the external force applied to the spring. The method of variation of parameters is nothing more than cumulatively adding to the solution during each time interval $$[t,t+dt]$$ the effect of imparting an additional momentum $$b(t)dt$$ to the mass. It would be nice if someone could track down a reference for this (and hopefully a clearer explanation) and add it to the article. The same essential idea applies more generally to Duhamel's principle. Sławomir Biały (talk) 18:42, 24 June 2013 (UTC)
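The impulse picture described here can be checked numerically; a sketch of my own construction, taking m = 1, no damping, k = 1, and constant force b ≡ 1 (for which the zero-initial-condition solution is known to be 1 − cos t):

```python
import math

# Sketch (my construction): build the response of x'' + x = b(t),
# x(0) = x'(0) = 0, by summing the impulse contributions b(s)*ds,
# each evolving as sin(t - s), as described in the comment above.
# With b(t) = 1 the exact solution is x(t) = 1 - cos(t).
def impulse_superposition(t, b, n=100000):
    ds = t / n
    # each slice [s, s+ds] imparts momentum b(s)*ds to the unit mass
    return sum(b(i*ds) * math.sin(t - i*ds) * ds for i in range(n))

t = 2.0
approx = impulse_superposition(t, lambda s: 1.0)
exact = 1 - math.cos(t)
assert abs(approx - exact) < 1e-3
print(approx, exact)
```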

Also the matrix formulation of variation of parameters is conspicuously absent from the article. Sławomir Biały (talk) 18:45, 24 June 2013 (UTC)

Merge content from Duhamel's principle
All the references I could find that mentioned both variation of parameters and Duhamel's principle said they were equivalent. Even the variation of parameters article says that they're related. I tried tracking down the source of "Duhamel's principle" and they just mention the superposition principle (again something done in variation of parameters). Variation of parameters seems like the more common term in overall usage. --Mathnerd314159 (talk) 05:50, 7 April 2017 (UTC)

No; in ODEs, Variation of Parameters refers to a specific method that finds the coefficients of the particular solution to a differential equation. This is not the same thing as Duhamel's Principle at all. --Review day in Math 240 — Preceding unsigned comment added by 216.125.152.241 (talk) 19:42, 7 November 2017 (UTC)

Origin and Intuition Behind the Method
Hello,

I've studied mathematics throughout my life and I find it hard to remember something which is true but feels like the answer came from the sky. Most of the time great mathematical results can be proved and summed up in three lines, but then no trace is left of the days/weeks/months/years/letter exchanges between mathematicians which led up to those brilliant short truths. I feel variation of parameters is one of these things where the proof is relatively easy to understand, but it looks like a recipe someone pulled out of his/her hat. I appreciate the history section of the article about the study of planets, but it sheds little light (for me) on how someone thought of trying to replace a constant with a function. Did someone try it by accident? Does it have something to do with "perturbed orbits"? I understand it can be hard to sum up in an article a long history of research and discovery, but I just hoped I could find somewhere more explanations about how and why somebody came up with this idea. Anyway, I'm sorry if I come off as complaining and am not offering solutions here, but if anyone has a link which could enlighten me on how someone thought of this method, I'd be very happy to click on it! Thank you very much.--ByteMe666 (talk) 19:45, 17 May 2018 (UTC)


 * There is an intuitive explanation. Consider the example of a forced spring, $$x''+x=F$$, where F is the external applied force.  We can solve this equation by superposing the solutions of the homogeneous problems $$x''+x=0$$, with conditions at time t=s, x(s)=0 and x'(s)=F(s)ds (the net impulse added to the solution during the time interval between t=s and t=s+ds).  The solution to this homogeneous problem is easily seen to be $$x(t) = F(s)\sin(t-s)ds $$.   These solutions are superposed via the integral: $$\int_0^t F(s)\sin(t-s)\,ds$$, exactly as in the variation of parameters.  I'd say it was likely known to Newton or Hooke in some form, not certain.  It's actually more conceptual to build up the solution in this way, rather than writing down the differential equation first.  That is, the action of an applied force to a spring is exactly as if we added to the motion the impulse F(s)ds at each time t=s.  Then the differential equation comes out in the wash: $$x''+x=F$$.  So I'd expect this to be a known method, as soon as one starts thinking about second order equations in physics (basically at the very beginning of things).  Refinements to the main idea were no doubt known to Euler, Lagrange, Wroński, Duhamel, etc.  I am very skeptical about the account given in the article.   Sławomir Biały  (talk) 20:35, 17 May 2018 (UTC)
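 * The superposition integral in the comment above can be checked symbolically; the forcing F(s) = s is my illustrative choice:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Sketch (my addition) verifying the superposition formula symbolically:
# x(t) = Integral_0^t F(s)*sin(t - s) ds should solve x'' + x = F(t)
# with x(0) = x'(0) = 0.  Illustrative choice of forcing: F(s) = s.
F = s
x = sp.integrate(F * sp.sin(t - s), (s, 0, t))

assert sp.simplify(x.diff(t, 2) + x - t) == 0    # solves x'' + x = F(t)
assert x.subs(t, 0) == 0                          # x(0) = 0
assert sp.simplify(x.diff(t).subs(t, 0)) == 0     # x'(0) = 0
print(sp.simplify(x))  # t - sin(t)
```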


 * Hi !
 * Thank you for your reply. It does clear things up a little bit more for me. But I still haven't fully understood exactly how someone came up with variation of parameters, but now I get a sense of the general idea, and it doesn't seem completely ad hoc anymore. I've read what you have replied and read some more elsewhere:
 * https://math.stackexchange.com/questions/1183954/intuition-behind-variation-of-parameters-method-for-solving-differential-equatio
 * https://en.wikipedia.org/wiki/Green%27s_function
 * And I understand the full answer will come much later when I learn about distributions and Green functions. Though I think your answer is also really good because I believe solutions to first and second order linear differential equations and techniques such as the variation of parameters have been discovered before the full fledged theory on distributions was born. I cannot dwell too much longer on this topic, but I hope I can get back to this page later and offer some form of improvement later. Thank you very much.--ByteMe666 (talk) 16:21, 19 May 2018 (UTC)


 * Here's a way to come up with the idea on your own. Say you have the differential equation $$ y' + py = q $$, and you already know how to solve $$ y' + py = 0$$.  Then proceed as you would anyways: $$ y'/y = q/y - p \to y = c e^{\int q/y} e^{-\int p} = C(x) e^{-\int p}$$ where $$C(x)$$ is some unknown function of $$ x $$.  Then, plug this form into the original equation to solve for $$ C(x) $$.  The more I think about "variation of parameters" the less I like the idea.  I much prefer to think of it as a technique which may arise in a number of natural algebraic ways, than to think "of course here I should just let the parameter depend on $$ x $$," which doesn't seem to be natural (or really make sense) in all cases.  I really like the idea of summing a large number of impulses as described above (and in the article), but it's hard for me to conceptually jump from "adding a large number of impulses" to "let the constants vary with $$x$$," but it might be because I'm missing something. --Nathan.s.chappell (talk) 06:01, 18 October 2019 (UTC)
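 * The "plug this form into the original equation" step described above can be carried out in sympy; the equation y' + y = e^x is my illustrative choice:

```python
import sympy as sp

x = sp.symbols('x')
C = sp.Function('C')

# Sketch (my addition) of the "let the constant vary" step described above,
# for the illustrative first-order equation y' + y = exp(x).
# Homogeneous solution: y_h = c*exp(-x).  Ansatz: y = C(x)*exp(-x).
y = C(x) * sp.exp(-x)
residual = y.diff(x) + y - sp.exp(x)             # plug the ansatz into the ODE
Cprime = sp.solve(sp.Eq(residual, 0), C(x).diff(x))[0]
print(sp.simplify(Cprime))                       # C'(x) = exp(2*x)

yp = sp.integrate(Cprime, x) * sp.exp(-x)        # C(x) = exp(2*x)/2
assert sp.simplify(yp.diff(x) + yp - sp.exp(x)) == 0
print(sp.simplify(yp))                           # exp(x)/2
```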
 * Dear Slawomir,
 * I have just finished typing here a detailed explanation of the physical and geometric meaning of VOP. It would be advisable to use some of that material here. Definitely, Newman's example.
 * Cheers,
 * Michael 214.9.101.6 (talk) 02:48, 11 January 2023 (UTC)