Wikipedia:Reference desk/Archives/Mathematics/2008 March 18

= March 18 =

Proof of Chain Rule
Let

where $$f(u)$$ and $$g(x)$$ are both differentiable functions. Then

Treat the arrow $$\rightarrow$$ as an equals sign $$=$$. We can perform the same operation on both sides of the arrow without changing the relationship

Function $$g(x)$$ is continuous since it is differentiable. Apply $$g(\cdot)$$ to both sides of (8)

Let

Replace (11) into (10)

Therefore

Replace (2) into (11)

Replace (2), (11), (13) and (14) into (6)

Q.E.D.

Is the proof of chain rule above correct and rigorous? - Justin545 (talk) 06:25, 18 March 2008 (UTC)
 * There are some questionable details. First, if we want a proof we can consider "rigorous", we would want to avoid treating functions as quantities (e.g., u instead of $$g(x)$$) and using Leibniz notation ($$\tfrac{du}{dx}$$). So as a first step you should try formulating the proof without using u or y, only f, g and their composition $$h = f \circ g$$ (equivalently, $$h(x) = f(g(x))$$). Second, the limit notation, $$\lim_{x \to x_0}f(x)=L$$, is one unit. You shouldn't take out the $$x \to x_0$$ and treat it as something that stands on its own. This would be acceptable for a handwaving proof, but not for a rigorous one. -- Meni Rosenfeld (talk) 07:37, 18 March 2008 (UTC)


 * >> "the limit notation, $$\lim_{x \to x_0}f(x)=L$$, is one unit. You shouldn't take out the $$x \to x_0$$ and treat it as something that stands on its own."
 * I think you mean the result of (13) is incorrect or not rigorous. Does it mean the whole proof should be re-derived in a completely different way, or can we somehow fix the problem so that we don't have to re-derive the whole proof? If (13) is not rigorous, is there an example which refutes it? Thanks! - Justin545 (talk) 09:00, 18 March 2008 (UTC)
 * (13) and the derivations that lead to it are "not even wrong" in the sense that in the standard framework of calculus they are pretty much meaningless - if you look at the standard rigorous definitions of limits, you will see that they do not allow a function to be used as a variable. It is "correct" in the sense that intuitively, the limit of a function "when" the variable approaches some value is equal to the limit when some other function approaches its appropriate limit value. However, this "when" business lacks a rigorous interpretation and is haunted by Bishop Berkeley's ghosts.
 * I have thought about how one might amend the proof, and realized that you also have a mistake much earlier. Step (5), dividing and multiplying by $$g(x+\Delta x)-g(x)$$, is only valid if $$g(x+\Delta x)-g(x) \neq 0$$, but there is no reason to assume that should be the case. Take, for example
 * $$g(x)=\left\{\begin{array}{ll}x^2\sin\tfrac1x&x\neq0\\0&x=0\end{array}\right.$$
 * - a perfectly differentiable function at 0, and yet $$g(x)=0=g(0)$$ infinitely many times in any neighborhood of 0. Thus your proof will not work for it. Those kinds of pathological counterexamples are one of the things that separates rigorous proofs from not-so-rigorous ones. -- Meni Rosenfeld (talk) 10:58, 18 March 2008 (UTC)
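The counterexample can be checked numerically; a minimal Python sketch, using the g defined above (the check at x = 1/(nπ) anticipates the explanation given later in the thread):

```python
import math

# g(x) = x^2 * sin(1/x) for x != 0, and g(0) = 0, as defined above.
def g(x):
    return x**2 * math.sin(1/x) if x != 0 else 0.0

# g vanishes at x = 1/(n*pi) for every non-zero integer n, so
# g(0 + dx) - g(0) = 0 for arbitrarily small dx (up to floating-point error).
for n in (1, 10, 1000):
    dx = 1 / (n * math.pi)
    print(dx, g(dx) - g(0))
```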
 * >> "Step (5), dividing and multiplying by $$g(x+\Delta x)-g(x)$$, is only valid if $$g(x+\Delta x)-g(x) \neq 0$$, but there is no reason to assume that should be the case. Take, for example..."
 * I think $$g(x+\Delta x)-g(x)$$ will never be zero since $$\Delta x$$ "is not zero"; $$\Delta x$$ is just a value that is "very close to zero". Thus, $$g(x+\Delta x)-g(x)$$ will only be close to zero but will not be zero, and I believe step (5) would still be correct. As for your example, we may first need to evaluate


 * But what will $$\lim_{\Delta x\rightarrow 0}\sin\frac{1}{\Delta x}$$ evaluate to? I'm not sure... - Justin545 (talk) 01:41, 19 March 2008 (UTC)
 * You've made two mistakes here. First, $$g(x+\Delta x)-g(x)$$ can be zero for arbitrarily small values of $$\Delta x$$. That's what Meni's example shows. Your (18)=(19) is also mistaken: it would be valid if both limits in (19) existed, but as it happens the second one doesn't. Btw, your error at step (5) is a reasonably common one: IIRC, it occurs in the first few editions of G H Hardy's A Course of Pure Mathematics. Though there are other ways round it, perhaps the best is to avoid division at all in the proof. This has the advantage that your proof immediately generalises to the multi-dimensional case. Algebraist 02:36, 19 March 2008 (UTC)
 * >> "First, $$g(x+\Delta x)-g(x)$$ can be zero for arbitrarily small values of $$\Delta x$$. That's what Meni's example shows."
 * It is not obvious to me from Meni's example why $$g(x+\Delta x)-g(x)=0$$ where

{{NumBlk|:::::::|$$g(x)=\left\{\begin{array}{ll}x^2\sin\tfrac1x&x\neq0\\0&x=0\end{array}\right.$$|20}}
 * Could you provide more explanation for it? Or could you tell me what theorem supports that $$g(x+\Delta x)-g(x)$$ could be exactly zero?
 * >> "perhaps the best is to avoid division at all in the proof."
 * Division could be avoided altogether, but it is "intuitive" since the definition of the derivative involves division. Besides, I think even this proof involves division. If it does involve division, the proof would be considered non-rigorous. - Justin545 (talk) 03:08, 19 March 2008 (UTC)
 * >> "(13) and the derivations that lead to it are "not even wrong" in the sense that in the standard framework of calculus they are pretty much meaningless - if you look at the standard rigorous definitions of limits, you will see that they do not allow a function to be used as a variable."
 * I'm afraid I don't understand why "rigorous definitions of limits do not allow a function to be used as a variable", or why the derivation leading to (13) is meaningless. - Justin545 (talk) 03:37, 19 March 2008 (UTC)
 * A better question is how are they not meaningless. Where in your textbook did anyone mention taking the $$\Delta x \to 0$$ notation, treating it as a formula on its own, and doing manipulations on it? -- Meni Rosenfeld (talk) 16:48, 19 March 2008 (UTC)
 * This proof is really from my textbook, except that the steps from (7) to (14) are missing. The missing steps are my creation, since I have no idea how step (6) becomes step (15). I want to know, in detail, how step (6) becomes step (15), so I added those steps and started this discussion to see whether they are correct. - Justin545 (talk) 05:30, 20 March 2008 (UTC)
 * In this case, the proof in your book is wrong (that happens too). Step 5 cannot be justified without more assumptions on g. Your steps 7-12 describe intuitively correct ideas but are far from being rigorous. If g is "ordinary" enough for step 5 to hold, it is possible to justify the leap from (6) to (15), but if you want it to be rigorous you need to rely only on the definition of limits, not on your intuitive ideas of what they mean. -- Meni Rosenfeld (talk) 12:03, 20 March 2008 (UTC)
 * If you want a similar proof that really works, one way would be to apply the mean value theorem to f at (4). This allows you to replace $$ f(g(x+h)) - f(g(x)) $$ 163.1.148.158 (talk) 12:54, 18 March 2008 (UTC)
 * The mean value theorem I found is
 * $$f'(c)=\frac{f(b)-f(a)}{b-a}$$
 * where $$c\in(a,b)$$. But I have no idea how to apply it to f at (4), or why $$ f(g(x+h)) - f(g(x)) $$ needs to be replaced. Thanks! - Justin545 (talk) 02:38, 19 March 2008 (UTC)
 * To the OP: it is not necessary to avoid division to make the proof rigorous, but it is one way of doing it. I meant division specifically by values of the domain or codomain of f and g (since these are the things that become vectors when you generalise), but I see I failed to say it. Apologies. The definition of the derivative need not involve such division (the one lectured to me didn't, for example), and one could argue that it shouldn't. Not sure if one would be right, mind. To your specific question, Meni's function is zero whenever x is 1/(nπ) (n a non-zero integer). Thus we have g(x)=0 for arbitrarily small x. Algebraist 03:34, 19 March 2008 (UTC)
 * >> "The definition of the derivative need not involve such division (the one lectured to me didn't, for example), and one could argue that it shouldn't."
 * The familiar definition of derivative is


 * It seems you were saying that (21) is not a "rigorous" definition. It sounds pretty odd to me. I thought (21) was the only way of defining the derivative. Many lemmas and theorems about derivatives in my textbook originate from (21). It's not easy to imagine that there are other definitions without division. - Justin545 (talk) 05:13, 19 March 2008 (UTC)
 * No, that's not what he was saying. He said that you can define the derivative without division, not that you should. Definition (21) (at least the $$f'(x)=\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$$ part) is rigorous and is indeed the standard definition. There is nothing wrong with division, except for division by zero. The main flaw in your proof is dividing by $$g(x+\Delta x)-g(x)$$ which may be zero. Just because $$\Delta x \neq 0$$ doesn't mean that $$g(x+\Delta x) \neq g(x)$$. This is just common sense, you don't need my complicated example for that. -- Meni Rosenfeld (talk) 16:48, 19 March 2008 (UTC)
 * >> "To your specific question, Meni's function is zero whenever x is 1/(nπ) (n a non-zero integer). Thus we have g(x)=0 for arbitrarily small x."
 * I'm afraid I'm not able to prove (20) is zero when $$x\in\left\{\frac{1}{n\pi}\Bigg| n\in\mathbb{Z}\land n\ne 0\right\}$$. But I think $$g(x+\Delta x)-g(x)$$ will be zero when $$g(x)=a$$ where $$a$$ is any fixed constant. (Edit: which means I was ridiculously wrong. Apologies.) - Justin545 (talk) 05:42, 19 March 2008 (UTC)


 * x2sin(1/x) is zero whenever sin(1/x) is zero, which happens whenever 1/x is a multiple of pi, which happens whenever x = 1/npi for some integer n. You know, you're not really all that wrong. You have the right idea, you just don't have the tools to implement it. Here's roughly how my analysis textbook solves the problem. First, you define a new function h(y). I'll skip the details about intervals and mappings, and just say that it's focused on f and ignoring g, and assumes some interesting value c has been chosen. Let h(y) = (f(y)-f(g(c)))/(y-g(c)) if y does not equal g(c), and let h(y) = f'(y) if y=g(c). All that should be possible by assumption. Since g is differentiable at c, g is continuous at c, so h of g is continuous at c, so lim x->c (hog)(x)=h(g(c))=f'(g(c)). By the definition of h, f(y)-f(g(c))=h(y)(y-g(c)) for all y, so ((fog)(x)-(fog)(c)) = (hog(x))(g(x)-g(c)), so for x not equal to c we have ((fog)(x)-(fog)(c))/(x-c) = (hog(x))(g(x)-g(c))/(x-c). Taking the limit of both sides as x->c, then (fog)'(c)=lim x->c ((fog)(x)-(fog)(c))/(x-c) = (lim x->c hog(x))(lim x->c (g(x)-g(c))/(x-c)) = f'(g(c))g'(c). Black Carrot (talk) 06:36, 19 March 2008 (UTC)
 * >> "x2sin(1/x) is zero whenever sin(1/x) is zero, which happens whenever 1/x is a multiple of pi, which happens whenever x = 1/npi for some integer n."
 * Thanks! Now I understand it.
 * >> "You know, you're not really all that wrong. You have the right idea, ..."
 * Excuse my rewriting of your response for readability:
 * Here's roughly how my analysis textbook solves the problem. First, you define a new function $$h(y)$$. I'll skip the details about intervals and mappings, and just say that it's focused on $$f$$ and ignoring $$g$$, and assumes some interesting value $$c$$ has been chosen. Let
 * $$h(y)=\begin{cases}\frac{f(y)-f(g(c))}{y-g(c)},&y\ne g(c)\\f'(y),&y=g(c)\end{cases}$$
 * All that should be possible by assumption. Since $$g$$ is differentiable at $$c$$, $$g$$ is continuous at $$c$$, so $$h$$ of $$g$$ is continuous at $$c$$, so
 * $$\lim_{x\rightarrow c}h(g(x))=h(g(c))=f'(g(c))$$.
 * By the definition of $$h$$,
 * $$f(y)-f(g(c))=h(y)[y-g(c)]$$, $$\forall y\ne g(c)$$,
 * so
 * $$f(g(x))-f(g(c))=h(g(x))[g(x)-g(c)]$$,
 * so for $$x$$ not equal to $$c$$ we have
 * $$\frac{f(g(x))-f(g(c))}{x-c}=\frac{h(g(x))[g(x)-g(c)]}{x-c}$$.
 * Taking the limit of both sides as $$x\rightarrow c$$, then
 * $$(f\circ g)'(c)=\lim_{x\rightarrow c}\frac{f(g(x))-f(g(c))}{x-c}=\left[\lim_{x\rightarrow c}h(g(x))\right]\left[\lim_{x\rightarrow c}\frac{g(x)-g(c)}{x-c}\right]=f'(g(c))g'(c)$$.
 * Did I misunderstand your response? Thanks! - Justin545 (talk) 09:01, 19 March 2008 (UTC)
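The conclusion $$(f\circ g)'(c)=f'(g(c))g'(c)$$ can be sanity-checked numerically; a minimal Python sketch, with f, g and c chosen arbitrarily for illustration:

```python
import math

# Arbitrary illustrative choices; any differentiable f and g would do.
f = math.exp
g = math.sin
c = 0.7
eps = 1e-6

# Central difference quotient approximating (f o g)'(c).
lhs = (f(g(c + eps)) - f(g(c - eps))) / (2 * eps)
# Exact chain-rule value f'(g(c)) * g'(c) for these choices.
rhs = math.exp(math.sin(c)) * math.cos(c)
print(lhs, rhs)  # the two agree to several decimal places
```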


 * After "By the definition of h", it should be for all y. If y=g(c), both sides are equal to zero, and the equality still holds. That one line is pretty much the goal of the whole thing, finding a way to get that conclusion without dividing by zero anywhere. Black Carrot (talk) 16:02, 19 March 2008 (UTC)


 * But I think it should be


 * by definition of $$h$$.
 * By the way, I think derivatives of composite functions should be able to be rewritten in Leibniz notation as below


 * Justin545 (talk) 07:02, 20 March 2008 (UTC)

Calculating inflation by consumer price index statistics
Hello,

I'm currently working on HMAS Melbourne (R21), which is currently at Featured Article Candidates. One of the reviewers has requested that more recent financial figures be provided for the various sums of money (listed at User talk:Saberwyn/HMAS Melbourne (R21)) mentioned in the history of the ship. I've been pointed to Australian consumer price index statistics (available from 1969 to 2007, and from 1949 to 1997 - both open directly as Excel spreadsheets).

I've looked at them, and realise I don't have the first idea as to using these statistics to convert, for example, a AU$1.4 million figure from 1985 to a AU$ figure in 2007. Any help, at least a point in the right direction to an online tutorial or something, would be muchly appreciated. -- saberwyn 10:32, 18 March 2008 (UTC)


 * Well, the table says that AU$69.7 in 06/1985 are equivalent to AU$157.5 in 06/2007 in terms of overall purchasing power, so 1.4m in 1985 AU$ would roughly amount to 1.4m*157.5/69.7 = 3.2m in 2007 AU$. Bikasuishin (talk) 12:21, 18 March 2008 (UTC)


 * You calculate the percentage of inflation from Year 1 to Year 2 by using the formula $$\text{Inflation}=\frac{\text{CPI in Year 2}-\text{CPI in Year 1}}{\text{CPI in Year 1}}\times 100$$
 * For example, according to the table, CPI in March 1988 was 87 and in March 1998 it was 120.3. Thus, by using the above formula, we find that prices in this period inflated by $$\frac{120.3-87}{87}\times 100\approx 38.28\%$$. Therefore, AU$ 1,000 in March 1988 was equivalent to AU$ $$1000+1000\times 0.3828=1382.8$$ in March 1998. Cheers,   A R  TYOM    17:14, 18 March 2008 (UTC)
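The two calculations above can be sketched in a few lines of Python (the CPI figures are those quoted from the tables; `adjust_for_inflation` is an illustrative helper, not a standard library function):

```python
def adjust_for_inflation(amount, cpi_start, cpi_end):
    """Scale a monetary amount by the ratio of the two CPI values."""
    return amount * cpi_end / cpi_start

# AU$1.4m in 06/1985 (CPI 69.7) expressed in 06/2007 dollars (CPI 157.5)
print(adjust_for_inflation(1_400_000, 69.7, 157.5))  # roughly 3.16 million

# AU$1,000 in March 1988 (CPI 87) expressed in March 1998 dollars (CPI 120.3)
print(adjust_for_inflation(1_000, 87, 120.3))  # roughly 1382.8
```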


 * Thank you for your assistance, it has been a great help!! -- saberwyn 00:12, 19 March 2008 (UTC)

Area Between Curves
I'm having a little trouble with an area-between-curves problem. The functions are f(x)=x^3-x and g(x)=3x. What I'm trying to figure out is if I should add the two areas (the one below and the one above the curve) or if I should subtract the area from underneath the X axis from the one above it. Thanks RedStateV (talk) 14:28, 18 March 2008 (UTC)


 * The distance between two points is the absolute value of one minus the other. So long as you keep your signs straight, it doesn't matter where either curve is in relation to the x-axis or to the other. &mdash; Lomn 14:42, 18 March 2008 (UTC)

So should I be getting an answer of 8 then? RedStateV (talk) 14:52, 18 March 2008 (UTC)
 * That's not what I get, but either one of us could have got the arithmetic wrong. I've done it twice and got the same answer both times, do you want to try doing it again and see if you still get 8? --Tango (talk) 15:17, 18 March 2008 (UTC)
 * My silicon master confirms that the answer is indeed 8. -- Meni Rosenfeld (talk) 15:35, 18 March 2008 (UTC)
 * Yeah, I can't integrate, it's definitely 8... sorry. --Tango (talk) 17:31, 18 March 2008 (UTC)
 * Are you going from x=-2 to x=2 ? If so, you can take advantage of the symmetry of the curves - take the area between the curves from x=0 to x=2 and then double it. I also get 8. Gandalf61 (talk) 15:49, 18 March 2008 (UTC)

Ok, so the value of the area below the X axis is not negative then. Good, thanks. RedStateV (talk) 16:03, 18 March 2008 (UTC)
 * Area is, in fact, always positive (at least in the usual settings). What would negative area (or length, or volume) even mean? &mdash; Lomn 16:11, 18 March 2008 (UTC)
 * To answer this question we need to go back to the source. Lengths, areas etc. are measures given to sets in, say, a Euclidean space. To this extent they cannot be negative. But we can generalize to a sort of multiset in which elements can appear several times or even a negative number of times, and define measures on these in the natural way. Then the area of a set which only has negative points will be negative. In analytic geometry, there are formulae for area that can give negative results if an absolute value is not taken explicitly, such as $$S=\frac{1}{2}\det\begin{pmatrix}x_A & x_B & x_C \\ y_A & y_B & y_C \\ 1 & 1 & 1\end{pmatrix}$$ for a triangle. The sign is traditionally taken to represent the orientation, with only the absolute value representing the "true" area. But the signed number can be taken to be the area, if a negatively oriented triangle is seen as containing its points negatively. The same goes for calculating length by simply subtracting two coordinates, or area under a function by evaluating its integral. -- Meni Rosenfeld (talk) 17:21, 18 March 2008 (UTC)

If in doubt - plot the graph. Always check for places where the curve crosses y=0; if you don't notice these crossings, they can mess up your areas when you are using integration. 87.102.47.176 (talk) 16:56, 18 March 2008 (UTC)
 * If you're calculating the area between a curve and the x-axis, then the curve crossing the axis is significant, however this question is about the area between 2 curves, so it's where the curves cross each other that's significant, not where they cross the x-axis. --Tango (talk) 15:45, 19 March 2008 (UTC)

Rational function conjecture
If $$f : \mathbb{R} \rightarrow \mathbb{R}$$ is an analytic function such that $$x \in \mathbb{Q}$$ implies $$f(x) \in \mathbb{Q}$$, must $$f$$ be a rational function? I expect it's false, so what's a counterexample? —Keenan Pepper 19:02, 18 March 2008 (UTC)


 * How about f(x)=2^(lg(x+1))?, where lg is log base 2. GromXXVII (talk) 20:08, 18 March 2008 (UTC)
 * Silly me...bad domain. GromXXVII (talk) 20:10, 18 March 2008 (UTC)
 * And it equals x+1, which is rational. Nice try, though! I can't think of a counterexample. It wouldn't surprise me if it were true, but I can't prove it. --Tango (talk) 20:13, 18 March 2008 (UTC)

Any ideas? Or search keywords? —Keenan Pepper 15:26, 19 March 2008 (UTC)


 * I believe it's true, based on the following thinking. The question here is really, can we construct an irrational number with a finite number of rational parts? If we can then your conjecture is false, but if not then it is true. And since by definition an irrational number cannot be constructed by a finite number of rational components the conjecture is true. A math-wiki (talk) 03:15, 20 March 2008 (UTC)


 * I don't follow this at all. Are you sure you're talking about my question and not its converse? It's easy to show that every rational function whose denominator has no real zeros is analytic on the real line and maps rationals to rationals. My question was about going the other direction: does being analytic and mapping rationals to rationals guarantee that something is a rational function? —Keenan Pepper 05:19, 20 March 2008 (UTC)

Lovely question. I don't know the answer. Here's one approach that occurs to me: Let's try and figure out whether, if f is represented by a power series that converges everywhere on R, and f takes rationals to rationals, then f must be a polynomial. If that can be proved then it doesn't finish off the conjecture but might be a large step towards a general proof; a counterexample disposes of the question.

So consider all sequences of reals $$\langle a_0, a_1, \ldots \rangle\in\mathbb{R}^\omega$$ such that the corresponding power series converges everywhere (if "converges everywhere" is too messy, we could simplify it a bit by putting bounds on the $$a_i$$ that guarantee the power series converges via the ratio test or something). That's a nice topological space (a Polish space) in which we might be able to work some magic. Now enumerate the rationals as $$q_0, q_1, \ldots$$. For what values of $$\langle a_0, a_1, \ldots\rangle$$ is $$a_0+a_1q_0+a_2q_0^2+\ldots$$ a rational number? It'll be a reasonably simple set, maybe F-sigma or something (have to think about this) -- can we find a big chunk of this set that's compact? Because then we pull the same trick with $$q_1\,$$ and try to get a nonempty compact set that works for both $$q_0\,$$ and $$q_1\,$$ and keep going. If we can do this forever then we have a descending sequence of compact sets, so it has a nonempty intersection, and we've got our counterexample. But there's a lot of ifs; I don't know whether it can be made to work. --Trovatore (talk) 03:42, 20 March 2008 (UTC)


 * Oh, I didn't make explicit why a counterexample to my version answers the original question. That's because the power series is assumed to converge everywhere. That means it extends to the whole complex plane giving an entire function -- no poles. Therefore, if it were a rational function, it would have to be a polynomial. --Trovatore (talk) 03:50, 20 March 2008 (UTC)


 * That's true if the function is analytic over the whole complex plane, but not if it only has to be analytic on the real line. Counterexample: $$f(x) = 1/(x^2+1)$$ (it has two poles, but they're both imaginary). —Keenan Pepper 05:19, 20 March 2008 (UTC)
 * But that wasn't the hypothesis -- the hypothesis is that the function has a power series that converges everywhere on the real line. That isn't true for 1/(x^2+1). You have to use different power series with different centers and glue them together. --Trovatore (talk) 07:17, 20 March 2008 (UTC)
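Trovatore's point can be illustrated numerically: the Maclaurin series of 1/(1+x^2), namely $$\sum_{n\ge 0}(-1)^n x^{2n}$$, converges only for |x| < 1, so no single power series for this function converges on the whole real line. A minimal Python sketch:

```python
# Partial sums of the Maclaurin series of 1/(1+x^2).
def partial_sum(x, terms):
    return sum((-1)**n * x**(2*n) for n in range(terms))

# Inside the radius of convergence the series matches the function...
print(partial_sum(0.5, 50), 1/(1 + 0.5**2))
# ...but outside it the partial sums grow without bound.
print(abs(partial_sum(2.0, 10)), abs(partial_sum(2.0, 20)))
```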
 * Ah yes, you're right. —Keenan Pepper 13:24, 20 March 2008 (UTC)
 * Analytic *doesn't* mean that there's a power series convergent everywhere --- see our article, where $$1/(1+x^2)$$ is given as an example. Analyticity is a local property: there must be a power series representation for f valid in a neighbourhood of every point. 79.67.177.9 (talk) 10:30, 20 March 2008 (UTC)


 * I think Keenan requires the function ƒ to be meromorphic but not necessarily holomorphic. Gandalf61 (talk) 11:05, 20 March 2008 (UTC)
 * Trovatore asked a different question than Keenan. Keenan's hypothesis is that the function is analytic (by which I think he means holomorphic on the real line, but not necessarily elsewhere). Trovatore's hypothesis is that the function has a power series which converges everywhere. The two questions are related in that a counterexample to Trovatore's is also a counterexample to Keenan's. Read his argument more carefully to understand why. -- Meni Rosenfeld (talk) 12:11, 20 March 2008 (UTC)

One idea on different lines: for a counterexample, it's enough to show there exist analytic f that map rationals to rationals and grow faster than any polynomial. 163.1.148.158 (talk) 11:24, 20 March 2008 (UTC)


 * I'll clarify my argument a bit, Keenan; you're right in some ways, I'm talking about the converse of your hypothesis. But what I was pointing to was to try to show that if your hypothesis was false, then the converse is also false, which as we know would be a contradiction. The key to my line of thought is that f is really an operator on x, and a rational operator will have $$f : \mathbb{Q} \rightarrow \mathbb{Q}$$, but an irrational operator must, for some rational input, give an irrational output, or for some irrational input, give a rational output (e.g. sin and its inverse arcsin). Therefore f is a rational operator. A math-wiki (talk) 22:39, 20 March 2008 (UTC)
 * Both a statement and its converse can be false. Are you confusing converse with negation? --Tango (talk) 22:46, 20 March 2008 (UTC)


 * Tango, I think you slightly misread my last post; I was suggesting he try to show that IF his hypothesis is false, THEN the converse (which is true) would have to be false also. A math-wiki (talk) 23:03, 20 March 2008 (UTC)