Wikipedia:Reference desk/Archives/Mathematics/2010 November 13

= November 13 =

Infinitely differentiable implies polynomial
Hello everyone,

I am attempting to prove the following:

Let $$f:\mathbb{R} \to \mathbb{R}$$ be infinitely differentiable such that $$\forall x \in \mathbb{R}\, \exists \, n\, s.t. \, f^{(m)}(x)=0 \, \forall \, m \, \geq \, n$$ - then f must be a polynomial.

How would I get started on this - is it best to try to prove it directly, or to use some more advanced tools? I'm comfortable with the majority of analysis, certainly up to undergraduate level at least (Banach spaces etc. are all fine!). I have spent quite a while on this but can't see anything obvious - proof by contradiction, for example? There doesn't appear to be any obvious solution to me!

Any help or guidance would be hugely appreciated, thanks! Spalton232 (talk) 02:15, 13 November 2010 (UTC)


 * Can't you just integrate f^(n) n times? Is there something I'm overlooking? 67.158.43.41 (talk) 03:53, 13 November 2010 (UTC)


 * How about using Taylor expansions? Since f is infinitely differentiable, it has a Taylor expansion.  Since after a finite n the derivatives are all zero, the Taylor expansion is finite, which means that it is a polynomial (of degree n). -Looking for Wisdom and Insight! (talk) 03:56, 13 November 2010 (UTC)
 * Yeah, you've proved that every Taylor series of the function is a polynomial... That doesn't really seem to help much with respect to the original function, though, since there's no assumption that it's analytic.
 * One thing that's clearly crucial here is the fact that you have a finite Taylor expansion around every x; if it was just around zero, then the function $$f(x) = e^{\frac{1}{x^2}}$$ for nonzero x, 0 for x=0, would be a counterexample. --COVIZAPIBETEFOKY (talk) 04:30, 13 November 2010 (UTC)
 * Obviously COVIZAPIBETEFOKY meant to write $$f(x) = e^{-\frac{1}{x^2}}$$. Bo Jacoby (talk) 10:16, 13 November 2010 (UTC).
 * Erm, yes. Sorry, got a bit bogged down by the symbols; I haven't written formatted text on wp in a while... --COVIZAPIBETEFOKY (talk) 14:11, 13 November 2010 (UTC)
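As an aside, the flatness of the corrected counterexample $$e^{-1/x^2}$$ at the origin can be checked symbolically. Here is a small illustrative sketch (the choice of sympy is ours, not something from the thread): every derivative tends to 0 at x = 0, even though the function is nonzero everywhere else, so extending by f(0) = 0 gives a smooth non-polynomial with all derivatives vanishing at one point.

```python
# Check that the first few derivatives of exp(-1/x^2) vanish as x -> 0,
# so the extension with f(0) = 0 is smooth and "flat" at the origin.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

for n in range(1, 4):
    d = sp.diff(f, x, n)
    # Each derivative is exp(-1/x^2) times a rational function of x;
    # the exponential decay wins, so the limit at 0 is 0 from both sides.
    assert sp.limit(d, x, 0, dir='+') == 0
    assert sp.limit(d, x, 0, dir='-') == 0

print("all checked derivatives vanish at 0")
```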
 * Certainly if the quantifiers were reversed, this would be simple; just integrate n times. So the goal is to reverse the quantifiers.  My first instinct is the Baire category theorem.  Let $$Z_n$$ be the set of $$x$$ such that $$(\forall m \geq n) f^{(m)}(x) = 0$$.  Since the reals are a Baire space (as is every open subset of the reals), one of the $$Z_n$$ has non-empty interior.  In fact, this holds on every neighborhood.  So we have densely many intervals on which the function is a polynomial.  Unfortunately, I don't see how to put this together to get the final result.--203.97.79.114 (talk) 04:37, 13 November 2010 (UTC)
 * Nevermind, I was overdoing it. I will use the following fact: if two polynomials agree on an infinite number of points, they are the same polynomial.  From this, it follows that it suffices to prove the result for arbitrary closed intervals.  So fix one.


 * Let $$C_n$$ be the set of $$x$$ such that $$|f^{(n)}(x)| < 1$$. Then $$C_n$$ is an open set, and by hypothesis $$\bigcup_{n > N} C_n$$ covers the interval for any $$N$$.  By compactness, some finite collection of the $$C_n$$ does.  This lets us show that the remainder term of the finite Taylor series goes to zero.  So $$f$$ is analytic on the interval.  As noted above, this suffices.--203.97.79.114 (talk) 05:11, 13 November 2010 (UTC)
 * Gah, forget I said anything. Not sure what I was thinking when I said we could show the remainder term gets small.--203.97.79.114 (talk) 05:19, 13 November 2010 (UTC)


 * I'd iterate backwards: show that if f'(x) is always zero then f(x) is a constant, then that if the first derivative is always a constant then the function is linear, if the derivative is linear then the function is quadratic, etc. Dmcq (talk) 09:45, 13 November 2010 (UTC)
 * You're missing that the quantifiers are reversed. There isn't (a priori) a single n for all x.  Rather, every x has an n.  So we can't assume some derivative is identically 0.--203.97.79.114 (talk) 10:42, 13 November 2010 (UTC)
 * Sorry yes I see you pointed that out again above, I should really read things properly. Okay it needs considerably more thought. Dmcq (talk) 10:49, 13 November 2010 (UTC)
 * As above for any closed finite interval n must be bounded and therefore we can get a polynomial there. The whole line can be covered by overlapping intervals. Where the intervals overlap the polynomials must be the same. Therefore there is one polynomial covering the whole line. Hope I haven't missed out something this time. Dmcq (talk) 10:59, 13 November 2010 (UTC)
 * It's not obvious why, on any finite interval, we should be able to find a fixed n such that the nth derivative is zero. The best you can easily do is what 203.97.79.114 did above: use Baire to show that there is some interval on which this holds. It's not clear how to extend this. Algebraist 11:04, 13 November 2010 (UTC)
 * The 'closed' is important, there can be no limit point in a closed interval where n goes to infinity if it is bounded at each point. Dmcq (talk) 11:10, 13 November 2010 (UTC)
 * Except n need not be infinite at the limit point. For appropriate $$a_n$$, the limit of $$a_nx^n$$ is 0 in every derivative.--203.97.79.114 (talk) 11:14, 13 November 2010 (UTC)
 * Thanks, now that's why I should have thought about it a bit more. :) And one could have a limit point in every finite interval which is nasty. Dmcq (talk) 11:29, 13 November 2010 (UTC)
 * Now you see why I struggled! I have a feeling Baire Category theorem is very much along the lines of the approach we're meant to use though, as it's something we only learned recently - could the Stone-Weierstrass theorem be of use, perhaps? Or maybe the Tietze extension theorem? These are the two other results which look relevant to me: particularly the former, though again I can't see how to apply it directly to achieve the result - though hopefully one of you might be able to! It certainly looks pertinent... Spalton232 (talk) 14:00, 13 November 2010 (UTC)


 * My intuition is that if you take an interval [a, b] on which f^(m) (x) = 0 for m >= n, where the interval is as large as possible for fixed n, some point left of a (but close to a) or right of b (but close to b) must have non-zero kth derivative for infinitely many k. This would be a contradiction, so that the interval must have been R, which causes the result to follow as above. A non-analytic smooth function enjoys this property, at least. The existence of an interval [a, b] follows from Baire as noted above, which also lends plausibility. Regardless, I wouldn't suggest my intuition if there seemed to be fruitful approaches at hand. 67.158.43.41 (talk) 02:08, 14 November 2010 (UTC)


 * What is the problem with my Taylor polynomial approach above? Yes, I have shown that every Taylor expansion is a polynomial, but this function is analytic, as the remainder term goes to zero as n goes to infinity because after a finite n all of the derivatives are zero.  So the function is equal to its Taylor expansion.  Right?  -Looking for Wisdom and Insight! (talk) 05:19, 15 November 2010 (UTC)
 * The derivative forms of the Taylor remainder all involve the (n+1)th derivative of f at some point close to x, depending on n. So there's no reason the remainder should go to zero unless there's an n such that the nth derivative is 0 in a neighbourhood of x, which is the same problem again. Algebraist 11:40, 15 November 2010 (UTC)


 * If you look at Smooth function you'll see a function which is infinitely differentiable and where all the derivatives at one point are zero - yet it is not a polynomial. The bump functions are like this. Dmcq (talk) 08:26, 15 November 2010 (UTC)


 * By the way I was just looking at Sard's theorem. It implies the value of the last term before the zeroes must form a set of measure zero, so if that value of this last term changes between two points there must be higher order polynomials in between. The problem with going down this route is as above - one can always approach infinity without necessarily getting there. Dmcq (talk) 08:46, 15 November 2010 (UTC)
 * From my perspective, the strongest result we have here so far (this seems to be a collaborative effort now!) is that there is an interval on which, for some n, m > n implies the m-th derivative is identically 0, so f must behave like a polynomial on that interval; and outside that interval there must be some nearby point at which the derivatives are not identically 0 after some sufficiently large m. Does this necessarily lead us to a contradiction? Spalton232 (talk) 01:21, 16 November 2010 (UTC)
 * I'm trying to imagine a counterexample, and where it must go wrong, but I can't tell, nor can I come up with an explicit formula of a function with these properties. Maybe someone can see why this function can't actually exist:  Consider a sort of bump function f with support [0,1], which is identically 1 on [1/3,2/3], so then f' has support [0,1/3] ∪ [2/3,1] and let it be equal to some positive constant on the interval [1/9,2/9] and some negative constant [7/9,8/9].  Then f" has support [0,1/9] ∪ [2/9,1/3] ∪ [2/3,7/9] ∪ [8/9,1] and let it be some positive constant on [1/27,2/27] and [25/27,26/27] and a negative constant on [7/27,8/27] and [19/27,20/27], etc. Rckrone (talk) 04:28, 16 November 2010 (UTC)


 * I believe you're getting at my "intuition" above. I wasn't able to find an obvious contradiction but then again I haven't been able to construct a counterexample function, i.e. one with nth derivative identically zero on a closed interval and each point off that interval nonzero at only finitely many derivatives. I almost suspect the question is false, the counterexample is awful, and the quantifiers are just reversed; then again maybe we're all just missing something. 67.158.43.41 (talk) 04:35, 16 November 2010 (UTC)


 * Ok, I've got some actual values now to define this function piece-wise. Let $$c_0 = 1$$, and $$c_n = (3^{n+1}/2)\,c_{n-1}$$ for n > 0.  Then $$f^{(n)}(x) = \pm c_n$$ on the intervals chosen how I did before (so $$f^{(0)} = 1$$ on [1/3,2/3], $$f^{(1)} = 9/2$$ on [1/9,2/9] and $$f^{(1)} = -9/2$$ on [7/9,8/9], etc).  I guess you could describe the intervals as $$[0.a_1 \ldots a_n 1,\, 0.a_1 \ldots a_n 2]$$ where these are ternary representations with all $$a_i$$ equal to 0 or 2, and then the sign there corresponds to the number of 2s.  In other words the intervals you remove at each step in constructing the Cantor set (except closed rather than open).  The gaps are filled in by the higher derivatives.  I think that should work out, and I think it's a counterexample. Rckrone (talk) 05:47, 16 November 2010 (UTC)
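Rckrone's intervals and constants can be tabulated explicitly. This is a small sketch (the function names are ours) that enumerates the step-n intervals with their signs and computes the constants $$c_n$$:

```python
# Enumerate the closed middle-third intervals [0.a1...an1, 0.a1...an2]
# (ternary, each a_i in {0, 2}) used at step n of the construction above,
# together with the sign (-1)^(number of 2s), and the constants c_n.
from fractions import Fraction
from itertools import product

def step_intervals(n):
    """Intervals on which |f^(n)| = c_n, as (left, right, sign) triples."""
    out = []
    for digits in product((0, 2), repeat=n):
        base = sum(Fraction(d, 3**(i + 1)) for i, d in enumerate(digits))
        left = base + Fraction(1, 3**(n + 1))
        right = base + Fraction(2, 3**(n + 1))
        out.append((left, right, (-1) ** digits.count(2)))
    return out

def c(n):
    """c_0 = 1, c_n = (3^(n+1)/2) * c_{n-1}."""
    val = Fraction(1)
    for k in range(1, n + 1):
        val *= Fraction(3**(k + 1), 2)
    return val

print(step_intervals(0))  # [1/3, 2/3] with sign +, where f = 1
print(step_intervals(1))  # [1/9, 2/9] with sign +, [7/9, 8/9] with sign -
print(c(1))               # 9/2, matching f' = ±9/2 above
```

Running this reproduces the values quoted in the comment, which at least confirms the bookkeeping of the construction (not, of course, the smoothness question raised below).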


 * I tried to construct a counter-example using a similar approach, but there's a problem: you never make any definition on elements of the Cantor set which are not endpoints of intervals -- the majority of the Cantor set. Since you have already defined it on a dense set, you can extend the function to the unique continuous extension, but now you need to prove that all the derivatives exist at these new points and that the derivatives are eventually 0.--203.97.79.114 (talk) 07:41, 16 November 2010 (UTC)


 * f(x) is infinitely differentiable. In particular, that implies that all derivatives change smoothly.  If there exists a finite N such that $$f^{(m)}(x)=0 \, \forall \, m \, \geq \, N, x \in [a, b]$$ for any continuous interval [a, b] of non-zero measure then it implies $$f^{(m \geq N)}(x)=0 \, \forall \, x \in \mathbb{R}$$.  To do otherwise would imply a kink in at least one of the higher order derivatives of f(x), and hence a point that was not infinitely differentiable.


 * Hence, either there exists a finite bound N everywhere, or there exists an arbitrarily large n in every interval no matter how small. The latter possibility can be excluded because f(x) is infinitely differentiable.  Therefore n is bounded.  Knowing that an upper bound N exists everywhere such that $$f^{(N)}(x)=0 \, \forall \, x$$, it follows immediately that $$f^{(N-1)}(x)=\text{constant} \, \forall \, x$$, etc., which allows one to conclude that f(x) is a polynomial.  Dragons flight (talk) 08:31, 16 November 2010 (UTC)


 * Bump functions are a counter-example to your first paragraph.--203.97.79.114 (talk) 11:29, 16 November 2010 (UTC)


 * No, the bump transition requires an infinite series of non-zero derivatives. Sorry if this was too implicit.  If you have an interval [a, b] where all derivatives m > N equal zero, but in a neighborhood adjacent to [a, b] the derivative m* > N is changing, then continuity and smoothness imply that all derivatives m > m* must also start changing over that neighborhood.  This implies at least an interval adjacent to [a, b] where the derivative tower is everywhere unbounded, which contradicts the assumptions of the problem.  You can't have a bump function with a bounded derivative tower.  Dragons flight (talk) 19:05, 16 November 2010 (UTC)
 * How do continuity and smoothness imply that? That would solve the problem, but I can't immediately see the implication. Algebraist 19:11, 16 November 2010 (UTC)
 * Neither do I. It's impossible to have two different polynomials on adjacent intervals such that their union is infinitely differentiable, that much is clear. Hence if b is chosen so that $$f^{(m)} = 0$$ on [a,b], but not on any larger interval, then for every k, b is a limit point of the set $$\{x > b : f^{(k)}(x) \ne 0\}$$. However, I do not see why this should imply the existence of a single x such that $$f^{(k)}(x) \ne 0$$ for infinitely many k.—Emil J. 19:39, 16 November 2010 (UTC)


 * Let [a,b] be an interval such that $$f^{(m > N)}(x) = 0, \forall x \in [a,b]$$. Let c not in [a, b] and m* > N be such that $$f^{(m^*)}(c) \ne 0$$.


 * By virtue of smoothness and without loss of generality, we can choose c such that for all x in (b, c], $$f^{(m^*)}(x) \ne 0$$. Since $$f^{(m^*)}(x)$$ is changing, it implies $$f^{(m^*+1)}(x)$$ is non-zero over at least some sub interval (b, d] within (b, c].  The observation that this latter subinterval must also start at b is a consequence of the fact that $$f^{(m^*+1)}(b) = 0$$.


 * This allows one to build a tower of intervals


 * $$C_m = (b, c_m] \, s.t. \, \forall m \geq m^*, \forall x \in C_m , f^{(m)}(x) \ne 0$$


 * Further we can choose $$c_m$$ such that $$ C_{m+1} \subset C_m, \forall m$$.


 * The infinite intersection of a nested series of non-empty intervals must be non-empty (containing at least one point, in general). Therefore, $$\exists x \in \cap C_m, s.t. f^{(m)}(x) \ne 0, \forall m \geq m^*$$.  Which is our contradiction.  Dragons flight (talk) 19:59, 16 November 2010 (UTC)


 * Hmmm, brain fart. The infinite intersection of open nested sets can be empty.  For example if $$A_n = (0, 1/n]$$.  Dragons flight (talk) 20:14, 16 November 2010 (UTC)


 * It's also not obvious to me why you can choose a c as claimed.--130.195.5.7 (talk) 22:21, 16 November 2010 (UTC)


 * The Baire category theorem isn't something I've studied, I must have a look at it, but above it is asserted that it says there must be an open set where n is bounded, and therefore the function is polynomial on some open set somewhere. Am I reading that right? It sounds a very strong result. However if the end points of such a set are not polynomial you'd have a discontinuity, so any such set can be closed. If we construct a function by collapsing all these closed sets where we have a polynomial and join up all the end points, then we should end up with another smooth function where every point has a finite number of non-zero derivatives - but which has no open sets where we have a polynomial, or they only occupy a finite length; either way it looks like a contradiction with the original business of always being able to find an open set. Is that application of the Baire category theorem right? If so then you're about there I believe. Dmcq (talk) 12:01, 16 November 2010 (UTC)
 * It's true that the Baire theorem implies that there's a positive-length interval on which f is a polynomial. In fact, it implies that every open interval has a positive-length subinterval on which f is a polynomial. It's clear that the set of points where f matches some given polynomial is closed, so these intervals can always be taken to be closed and unextendable. I don't understand the rest of your argument. Algebraist 12:05, 16 November 2010 (UTC)
 * Possibly for the very good reason that what I was thinking of was not well thought through, and was wrong or not quite there. You'd have to adjust the other end by the slope and height etc., and with a dense set of such polynomials you'd have no guarantee that removing a bunch of them wouldn't send the other end off to infinity. Dmcq (talk) 12:14, 16 November 2010 (UTC)
 * Ulp, it has struck me one can even take away positive length intervals from every single little interval without having the remainder of measure zero. Dmcq (talk) 13:56, 16 November 2010 (UTC)
 * Yes. There are open dense subsets of the real line of arbitrarily small positive measure. Algebraist 15:16, 16 November 2010 (UTC)

Define n(x) to be the smallest integer such that $$f^{(m)}(x)=0$$ for $$ m\geq n(x)$$. Then, on any closed interval the function n(x) has a maximum. So, f(x) is a polynomial on any closed interval. Count Iblis (talk) 16:49, 16 November 2010 (UTC)
 * Why should n have a maximum on any closed interval? Algebraist 16:50, 16 November 2010 (UTC)


 * Ah, I see the problem :) . I don't have a good answer yet. I thought about the following: Suppose that n(x) doesn't have an upper bound. Then there exists a converging sequence $$x_{j}$$ such that $$n\left(x_{j+1}\right)>n\left(x_{j}\right)$$. Call the limit point y. Then consider the sequence of functions $$g_{j}(x) = f^{(n(x_{j})-1)}(x)$$. There then exist intervals $$I_{j}$$ containing the point $$x_{j}$$ such that $$g_{j}(x)$$ is nonzero on the interval. I was then thinking about cooking up a contradiction between the $$g'_{j}(x_{j})$$ and all higher derivatives being zero, and the intervals $$I_{j}$$ shrinking in size faster than $$\left | x_{j+1}-x_{j}\right |$$. 19:28, 16 November 2010 (UTC)


 * Suppose you get an upper bound in any closed interval. What's to say those upper bounds don't tend to infinity as the size of the interval increases?
 * Actually I think I just solved that one. Take the interval [0, 1], get a least upper bound, so f is a degree-N polynomial on [0, 1]. Do the same to [-1, 0], giving a degree-M polynomial. For the two polynomials' derivatives to agree at 0, N = M, which also forces them to be the same polynomial. Extend the new interval [-1, 1] in a similar manner inductively to show that f is a degree-N polynomial. 67.158.43.41 (talk) 22:18, 16 November 2010 (UTC)
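The matching step in this gluing argument can be spelled out; a short sketch (the notation p, q is ours): write p for the polynomial on [0, 1] and q for the one on [-1, 0].

```latex
% Smoothness of f at 0 forces every one-sided derivative to agree:
\[
  p^{(k)}(0) = f^{(k)}(0) = q^{(k)}(0) \qquad (k \ge 0).
\]
% Expanding each polynomial around 0,
\[
  p(x) = \sum_{k=0}^{N} \frac{p^{(k)}(0)}{k!}\,x^k,
  \qquad
  q(x) = \sum_{k=0}^{M} \frac{q^{(k)}(0)}{k!}\,x^k,
\]
% all coefficients agree, so p = q as polynomials and in particular N = M.
```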


 * Ok, so only the upper bound on n(x) needs to be proved. Let's try something different. Define $$V_{n}$$ to be the support of the function $$f^{(n)}(x)$$. If f(x) is not a polynomial, then all the $$V_{n}$$ are dense in R. Let U be the intersection of all the $$V_{n}$$. Baire's theorem says that the intersection of any countable collection of dense open sets is dense, so U is dense in R. Then consider n(x) for some x in U. It was assumed in the problem specification that n(x) is a finite number for all x in R. However, x being in U implies that n(x) has to be larger than any finite number. Count Iblis (talk) 00:46, 17 November 2010 (UTC)


 * You'll need to justify why f(x) not being a polynomial implies the $$V_n$$ are dense.--130.195.5.7 (talk) 00:59, 17 November 2010 (UTC)
 * Yes, I was getting ahead of the argument way too fast again :). Let's try this. Suppose for some n a $$V_{n}$$ is not dense in R. Then there exists an open set on which the nth derivative of f is zero. So, f is a polynomial on that open set. Then consider the complement of that open set. Either all the $$V_{n}$$ are dense in that set (in which case we get a contradiction per the above argument using Baire's theorem and f is a polynomial on the entire set), or there again exists an open set where you can represent f as a polynomial. So, this suggests that a transfinite induction argument can be set up to show that for any x there exists an open set V such that x is in the closure of V and such that f is a polynomial on V. Then we're done because n(x) is bounded on all these open sets as f is a polynomial on them. So, Dr. IP's argument above applies to join them together. Count Iblis (talk) 18:32, 17 November 2010 (UTC)
 * I'm not totally convinced the intervals that your argument supplies have to be nice enough to be glued together the way Dr. IP is suggesting to cover the whole real line. For example, if we start with the interval [-1,0], and we have the intervals [1/(n+1),1/n] to choose from, how can we extend [-1,0] past 0? Rckrone (talk) 21:02, 17 November 2010 (UTC)

I think this can be dealt with using transfinite induction. We know that every x in R is in an interval on which f is a polynomial (for the argument we need to drop the fact that we can choose these to be closed intervals). We can define a partial ordering on the set of all such representations of f by declaring that representation R1 is larger than or equal to representation R2 if all the intervals used by R1 are unions of those of R2, or closures of unions of subsets. Then there exists a maximal totally ordered subset of the set of all the representations, which we know is nonempty. From any totally ordered set of representations, we can construct a representation as follows. For any point x we take the union of all the sets that have x as a member from the representations in the totally ordered set. So, we then have an unambiguous prescription to partition R into intervals on which f is a polynomial.

If we then take the maximal totally ordered subset and construct the representation of f from that, we get a representation that is larger or equal to all the elements in the totally ordered subset. Since this subset is maximal, it is an element of this set that cannot be extended. That then implies that it contains R as its only interval. Count Iblis (talk) 18:20, 18 November 2010 (UTC)


 * "We know that every x in R is in an interval on which f is a polynomial " How do we know this?  Once we have that fact, the result follows immediately from the fact that the reals are not a countable union of closed intervals, except in the trivial fashion.--203.97.79.114 (talk) 21:55, 18 November 2010 (UTC)
 * countable union of disjoint closed intervals--203.97.79.114 (talk) 21:57, 18 November 2010 (UTC)

That statement was not derived rigorously and I see now that it isn't actually true. Let's start right from the start again and do everything rigorously. Let V be some arbitrary open interval and C its closure. We put $$X = \mathbb{R} - \{0\}$$ Then define the sets $$W_{n}=\left(f^{(n)}(x)\right)^{-1}\left(X\right)$$. The closures of these sets are the supports of the $$f^{(n)}(x)$$, but we don't want to take the closure. Now X is an open set, and it follows from the continuity of the $$f^{(n)}(x)$$ that the $$W_{n}$$ are open sets. Define $$V_{n} = V\cap W_{n}$$, which are clearly open sets. Then suppose that all the $$V_{n}$$ are dense in C. Then Baire's theorem says that the union of all the $$V_{n}$$ is also dense in C (Baire's theorem applies because all the $$V_{n}$$ are open and because C is a complete metric space). But then we get a contradiction because at a point x in the union of all the $$V_{n}$$, all the derivatives of f would be nonzero, which contradicts the problem statement.

So, we conclude that given an arbitrary open interval V, there always exists an n such that $$V_{n}$$ is not dense in V. This means that any arbitrary open interval V contains an open interval on which $$f^{(n)}(x) = 0$$, so f is a polynomial there. Then given an arbitrary point x, any neighborhood of x will contain such an interval, so either x is inside such an interval or it is arbitrarily close to one.

Let's define the set S to be the set of all these intervals. The union of all the elements of this set is dense in $$\mathbb{R}$$. There can, of course, be different such sets of intervals that represent the function as polynomials on each interval. We shall call such a set a representation of f. We can define a partial ordering on the set of all such representations of f by declaring that the representation S1 is larger than or equal to representation S2, if all the intervals in S1 are unions of those of S2, or closures of unions. Then there exists a maximal totally ordered subset of the set of all the representations of f, which we know is nonempty. From any totally ordered set of representations, we can construct a representation as follows. For any point x we take the union of all the sets that have x as a member from the representations in the totally ordered set. So, we then have an unambiguous representation of f.

If we then take the maximal totally ordered subset and construct the representation of f from that, we get a representation that is larger or equal to all the elements in the totally ordered subset. Since this subset is maximal, it is an element of this set that cannot be extended. That then implies that it contains $$\mathbb{R}$$ as its only interval. Count Iblis (talk) 01:56, 19 November 2010 (UTC)


 * When you apply Baire's theorem, you meant "intersection" instead of "union", but that's a minor thing. In your maximal sets argument, I don't understand what you mean by "representation"--a set of open sets where on a particular open set f is 0 at some derivative? You seem to be using Zorn's lemma. To be clear, it just provides an element which is maximal amongst all comparable elements--not necessarily amongst all elements, since some may not be comparable to it. This allows multiple maximal elements. I also don't understand the reasoning in "Since this subset is maximal, it is an element of this set that cannot be extended. That then imples that it contains $$\mathbb{R}$$ as its only interval." 67.158.43.41 (talk) 07:33, 19 November 2010 (UTC)


 * Yes, union should be intersection; you can tell that it has been a long time since I passed the functional analysis exam, and I haven't been doing this kind of stuff since that time :) . I was first using the idea that you can repeatedly use the Baire argument so that, starting with $$\mathbb{R}$$ and removing open intervals, you end up with a set where you can't find any open intervals anymore. This may require a separate transfinite induction argument to justify rigorously. What you then have is a collection of open intervals, the union of which is dense in $$\mathbb{R}$$ and such that on each open interval in this collection, f is a polynomial. I was then using Hausdorff's maximality theorem to set up a transfinite induction argument (so, if we have a partially ordered set, then this will contain a totally ordered subset which is maximal). A representation of f is any collection of intervals, the union of which is dense in $$\mathbb{R}$$, such that on each interval f is a polynomial. So, the set of all representations is nonempty.


 * Then we define a partial ordering on this set of representations. If one representation S1 can be obtained from another representation S2 by merging intervals in S2, then we say that S1 > S2. Then we consider a maximal totally ordered subset of all representations. On any representation, one may attempt to use your argument to combine intervals to define the function as a single polynomial on a larger interval, which would lead to a representation which is larger in the sense of the defined partial ordering. But this cannot work for the maximal element (that maximal element needs to be constructed from the maximal totally ordered subset), otherwise the totally ordered subset of representations was not maximal. I then claim that this "maximal representation" cannot contain an interval of finite length, but perhaps this needs to be justified rigorously... Count Iblis (talk) 16:51, 19 November 2010 (UTC)


 * You're working way too hard to establish this maximal set. f equaling a polynomial is a closed property, so any interval on which f equals a polynomial is contained in a maximal such interval.  Further, if two such intervals intersect, their union is such an interval.  So any interval on which f equals a polynomial is contained in a unique maximal such interval.  So take the collection of these maximal intervals.  This is the unique maximal element from your partial order.
 * Next, you're trying to argue that if this collection isn't simply $$\{\mathbb{R}\}$$, you can contradict maximality. Why should this be so?--203.97.79.114 (talk) 01:28, 20 November 2010 (UTC)
 * I'm applying the maximality argument to the set of intervals (which cover some dense subset of $$\mathbb{R}$$), not to the intervals themselves. Otherwise we bump into Rckrone's counterexample given above. The intervals on the right of the origin are contained in the maximal interval ]0,1], so after having found the maximal intervals, we wouldn't be done. Then while the maximal intervals in this case can be combined in a single step, note that Rckrone's argument can just as well be used to produce an infinite sequence of infinite sequences of ever shrinking intervals, so that the set of maximal elements you end up with contains an infinite sequence of shrinking intervals. But, in my case, maximality refers to the set of intervals, and it thus means that there are no intervals at all that can be extended. Count Iblis (talk) 16:09, 20 November 2010 (UTC)

If n(x) is constant, then the result is trivially a polynomial everywhere. Suppose $$n(x_1) < n(x_2)$$; that implies that $$f^{(n(x_2)-1)}(x)$$ changed value at some point in the interval $$[x_1, x_2]$$. But that implies its derivative $$f^{(n(x_2))}(x)$$ also changed value at some point in the interval. Which means $$\exists x_3 \in (x_1, x_2) \, s.t. \, n(x_3) > n(x_2)$$. At which point one can repeat the same argument for $$[x_1, x_3]$$, $$[x_3, x_2]$$, and get even higher n, etc. n(x) can be constant over finite intervals (for example by having intervals where f(x) is a polynomial), but between those intervals would seem to be dense regions of arbitrarily large n. I'm pretty sure that implies a contradiction with the assumptions of smoothness, but I'm not yet sure how to finish it. Dragons flight (talk) 00:57, 17 November 2010 (UTC)
 * More to the point, there have to be arbitrarily large derivatives in those intervals, which I'd have thought must conflict with the idea of having a derivative there - but I can't quite get there either. Dmcq (talk) 01:07, 17 November 2010 (UTC)

Calculus in a nutshell
Can you please give me a very nice definition of calculus? Say that I was on an ambulance stretcher and only had a minute or two to live, but instead of calling a priest for my last rites, I called someone from the wikipedia math desk, because my last dying wish was to understand the concept of calculus. How would you explain it to me?

Also, is calculus similar to thought patterns, or rhetoric? Can knowing calculus really make your logos argument better? AdbMonkey (talk) 23:26, 13 November 2010 (UTC)
 * Two minutes? Can't be done.  I'd tell you that calculus is fields of Elysium, with soft breezes and harp music, and you should now go there to rest. --Trovatore (talk) 23:45, 13 November 2010 (UTC)


 * Do you have any confusion with the first paragraph of the article Calculus? "Calculus is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series. ... It has two major branches, differential calculus and integral calculus ... Calculus is the study of change" Or the definition from Wiktionary "Differential calculus and integral calculus considered as a single subject". Differential calculus is "concerned with the study of the rates at which quantities change", (e.g. the slope of (curved) lines), and integral calculus is concerned with the area/volume enclosed by curves. The Fundamental theorem of calculus says that these two different techniques, differentiation (slope-finding) and integration (area-finding), are inverses of each other. I don't really understand your final question, except to say that there isn't anything particular about the thought patterns involved in calculus which aren't also involved in other fields of advanced mathematics. -- 174.21.243.119 (talk) 00:02, 14 November 2010 (UTC)
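The inverse relationship stated by the fundamental theorem can be seen numerically; here is a small sketch (the function names and the choice of f(x) = cos x are ours, purely for illustration): the slope of the "area so far" function recovers the original function.

```python
# Numeric illustration of the fundamental theorem of calculus:
# differentiation (slope-finding) undoes integration (area-finding).
import math

def area_under(f, a, b, steps=20000):
    """Midpoint-rule approximation of the area under f from a to b."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def slope(F, x, h=1e-5):
    """Central-difference approximation of the slope of F at x."""
    return (F(x + h) - F(x - h)) / (2 * h)

f = math.cos
F = lambda x: area_under(f, 0.0, x)  # F(x) = area under cos from 0 to x

# The slope of the area function at x matches f(x) itself:
for x in (0.5, 1.0, 2.0):
    assert abs(slope(F, x) - f(x)) < 1e-3
print("slope of the integral matches the integrand")
```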


 * How about this: "Calculus is the practical solution to Zeno's paradox, extended to arbitrary dimensions."   Calculus is closer to geometry than to logic, and good visualization skills will serve you better than abstract logic in most cases. -- Ludwigs 2  00:09, 14 November 2010 (UTC)


 * If I run at 3 miles per hour for 1 hour, I will have gone 3 miles. If instead I start at 0 miles per hour and speed up to 6 miles per hour smoothly, by the end of 1 hour how far will I have gone? Calculus is the branch of math that answers this question and many other related questions. (You seem to have accepted that the definition will be inadequate, so I give an inadequate one that I think captures the start of calculus.) I'm not really qualified to say if your "logos argument" will be better from knowing calculus. I strongly believe studying math makes your reasoning sharper in general. I would suggest studying real analysis, since "calculus" often means a non-rigorous approach, and many arguments from analysis are very subtle, requiring absolutely flawless reasoning to overcome a lack of intuition. 67.158.43.41 (talk) 02:29, 14 November 2010 (UTC)
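For the speeding-up runner, the answer works out to 3 miles: the speed averages to 3 mph over the hour. A quick numeric check in Python illustrates the integration idea of summing speed over many tiny time steps (the step count is an arbitrary choice):

```python
# Speed increases smoothly from 0 to 6 mph over 1 hour: v(t) = 6t.
# Distance is the area under the speed curve, approximated by
# summing (speed at the midpoint of each tiny step) * (step duration).
def distance(steps=100000):
    dt = 1.0 / steps            # hours per step
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt      # midpoint of this step
        total += 6 * t * dt     # speed at t times the tiny duration
    return total

print(round(distance(), 6))     # ≈ 3.0 miles, matching the average-speed answer
```

As the steps get smaller, the sum settles on the exact area under the speed curve, which is what the definite integral computes.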

Yes, unsigned number, I have a lot of confusion about calculus, which is why I asked the question. I was hoping someone could dumb it down for a dummy like me, because my brain starts to aneurysm when I read the wiki article on calculus. Now, if you'll excuse me, I have to finish eating my bowl of paintchips. AdbMonkey (talk) 02:47, 14 November 2010 (UTC)
 * I once had a book years ago that described the essence of it in an understandable way in three or four pages, but as so many books have similar titles I cannot specifically identify it. There's also Calculus Made Easy available online for free, and I found this: http://www-math.mit.edu/~djk/calculus_beginners/ 92.15.7.155 (talk) 16:43, 14 November 2010 (UTC)
 * Well, it's fine to recognize your confusion and try to address it, but you won't get there by trying to put things in a nutshell that don't fit in a nutshell. Find a calculus book and start working through it, ideally one that emphasizes physical intuition.  Avoid like the plague the texts that are mainly aimed at teaching you algorithms.  --Trovatore (talk) 03:00, 14 November 2010 (UTC)
 * If you want to get more into the meat of what sort of issues calculus deals with here on wikipedia, you might check out limit (mathematics) and derivative. Rckrone (talk) 03:24, 14 November 2010 (UTC)
 * Might I recommend Spivak's Calculus to the OP? 24.92.78.167 (talk) 03:26, 14 November 2010 (UTC)
 * ... or, for a more informal and discursive presentation, you could try A Tour of the Calculus by David Berlinski (but be warned that Berlinski's idiosyncratic style of writing is not to everyone's taste). Gandalf61 (talk) 12:42, 14 November 2010 (UTC)


 * Calculus is all about calculating. The things that are trivial to calculate involve adding up and/or multiplying finitely many numbers. This is what we learn in primary school. Calculus focuses on finding answers to sums that would involve an infinite number of elementary operations if evaluated directly. Count Iblis (talk) 04:24, 14 November 2010 (UTC)


 * Calculus in a nutshell. A Boeing engineer submits specifications stating that an airplane can decelerate continuously from its maximum landing speed of 100 miles per hour to 0, by applying its maximum safe braking, in 15 seconds. Is that fast enough to stop by the end of the runway? This is a calculus question, because the facts you know are about acceleration (change in speed), but the facts you want are about distance. This is what calculus is used for: to get from speed to distance, from acceleration to speed, from deceleration to distance. If you ever do engineering, it is important. I, as a computer engineer, have never once used calculus. I know this for a fact, because I failed it and didn't learn anything in that class, so I couldn't possibly have used it afterwards. It's just not important enough for me, and, being mildly retarded and dyslexic, I have a lot of trouble with symbols and abstract math. This is why I program in visual basic most of the time, which does not require it. 213.219.143.113 (talk) 08:39, 14 November 2010 (UTC)
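The braking question above has a concrete answer if the deceleration is assumed uniform (my assumption, not stated in the comment): the speed averages 50 mph over the 15 seconds, and distance is average speed times time. A small Python sketch of the unit bookkeeping:

```python
# Uniform deceleration from 100 mph to 0 over 15 seconds:
# the average speed is 50 mph. Convert to feet per second and
# multiply by the stopping time to get the stopping distance.
v0_mph = 100.0
t_stop = 15.0                            # seconds
avg_fps = (v0_mph / 2) * 5280 / 3600     # 50 mph expressed in feet per second
stopping_distance = avg_fps * t_stop
print(round(stopping_distance, 1))       # 1100.0 feet
```

So the plane needs about 1100 feet of runway after touchdown, and the engineer can compare that against the runway length. For non-uniform braking, the same calculation becomes an integral of the speed curve.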


 * Try #2: At its core, calculus deals with problems by breaking them into an infinite number of infinitely small steps. Derivatives (slope-finding) work by breaking a curve into an infinite number of infinitely small straight lines, each of which you can find the slope of. Integration (area-finding) works by breaking a curved area into an infinite number of infinitely small rectangles, each of which you can find the area of. Calculus tells us that infinite sums (infinite series) can sum to finite numbers, if the terms in the series eventually get infinitely small (e.g. 1/2 + 1/4 + 1/8 + 1/16 .... = 1). -- 174.21.243.119 (talk) 19:30, 14 November 2010 (UTC)
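The series 1/2 + 1/4 + 1/8 + ... = 1 mentioned above is easy to watch converge numerically; a minimal Python sketch:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... creep up toward 1:
# each new term is half the previous one, so the sum stays finite
# even though there are infinitely many terms.
total = 0.0
for k in range(1, 51):
    total += 1 / 2**k
print(total)   # ≈ 1 (the partial sums never exceed 1)
```

After 50 terms the sum differs from 1 by less than one part in a quadrillion, which is the sense in which the infinite sum "equals" 1.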


 * Newton described his calculus in terms of fluxions (derivatives) and fluents (integrals). You may be interested in glancing at Method_of_Fluxions, and the full-text links within. Newton was very motivated by physics, but calculus is much more general, and physics examples strike many as dry. One (very informal, graphical) way of thinking of the fundamental theorem of calculus is that differentiation is akin to 'zooming in' on a curve, until it appears as a straight line. Integration is like zooming out until the filled area under a curve looks like a rectangle. Differentiation is focusing/sharpening, integration is blurring/averaging. As such, they are opposite and inverse actions. The basic concepts of calculus can be explained very informally, but we need rigor and formality if we want to use it to make airplanes. SemanticMantis (talk) 20:06, 14 November 2010 (UTC)


 * I don't understand taking integration as zooming out, differentiation as focusing/sharpening, or integration as blurring/averaging. Would you mind explaining a bit more? It might be interesting. I've always thought of the fundamental theorem of calculus as "where you are is the sum of where you've been heading", but that probably requires more explanation to be very descriptive. 67.158.43.41 (talk) 20:15, 14 November 2010 (UTC)
 * First, I think your quote is in the spirit of the position being the integral of a velocity function. This is not the fundamental theorem, but an aspect of the integral alone. To even paraphrase the fundamental theorem, you have to identify the inverse operation. As for my analogy, the definite integral of a given function on [a,b] divided by |b-a| is an average value for f on that domain. Thus, it's fair to say loosely that integration is an averaging process. Perhaps smoothing is a better word than blurring. Consider a function F defined as the integral of some continuous function f. F is smoother than f, in the sense that it has at least as many continuous derivatives (and often more in practice). In particular, convolution integrals are a specific way of using one function to smooth out another, via averaging them together in local neighborhoods. SemanticMantis (talk) 23:23, 14 November 2010 (UTC)
 * Blurring/averaging for integration I think I buy, and your explanation is what I expected, though I was somehow hoping for a more direct analogy. You didn't explain the others. I don't understand your criticism of my statement. The sum is taken in the limit of arbitrarily small intervals, and the summands are the y-distance traveled in each interval, given by derivatives. This is precisely the second part of the fundamental theorem of calculus. 67.158.43.41 (talk) 01:40, 15 November 2010 (UTC)
 * I may have misunderstood your phrasing. This is why calculus is best treated formally. I just meant that the FTC essentially states that the antiderivative is the definite integral. It establishes an identity between these two formally distinct entities, and shows that integration and differentiation are inverse operations. To me, this is the key, and even loose descriptions should attempt to convey it. At the core, the derivative says something about a function at a point, while the definite integral describes behavior over a region. This is the basis of the focus/blur analogy. As for my other analogies, that the derivative is a 'zoom in' process to a linear approximation is fairly evident from the limit definition of the derivative, and well-founded. Integral as 'zoom out' is not as good, but I think it works at a stretch. Picture sin(x)+10. In a smaller viewing window, it looks wiggly. Zoom out a bit, and the shaded area below the curve begins to resemble a rectangle, whose area can be computed without calculus. Thus, we can think of finding slopes and areas as zooming in and out, which are obvious inverse processes. Shaky, I know. Just trying to give some indication of what it's all about. Hope this helps. For a truly correct understanding, there's no substitute for a rigorous and formal treatment. SemanticMantis (talk) 02:13, 15 November 2010 (UTC)
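The 'zoom in' picture can be made concrete with numbers: over smaller and smaller windows, the secant slope of a curve settles down to a single value, the derivative. A short Python sketch (the function sin and the point x = 1 are arbitrary choices for illustration):

```python
import math

# "Zooming in" on sin(x) at x = 1: as the window h shrinks, the
# secant slope (f(x+h) - f(x)) / h approaches the derivative cos(1),
# i.e. the curve looks more and more like a straight line.
x = 1.0
for h in [0.1, 0.01, 0.001]:
    slope = (math.sin(x + h) - math.sin(x)) / h
    print(h, round(slope, 4))
print(round(math.cos(x), 4))   # the limiting slope, 0.5403
```

Each tenfold zoom brings the secant slope roughly ten times closer to cos(1), which is the limit definition of the derivative in action.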
 * I can't quite agree on the zoom in/zoom out idea, though I certainly understand your desire to describe the derivative and integral symmetrically as physical inverses. I also agree formalizing these concepts is a fantastic idea for a "true" understanding, though I've known people whose only real problem with math was that they couldn't translate from "math speak" to their usual thought processes. For such people writing the symbols down is far less helpful than appealing directly to their intuition. 67.158.43.41 (talk) 05:12, 15 November 2010 (UTC)

The wordiness kind of throws me off, but I think I see. So calculus is no big deal and it's just about calculating things. Things that can help build things. I thought it would be somehow more impressive than that, but I guess that's all there is to it. All right, thanks. AdbMonkey (talk) 03:04, 15 November 2010 (UTC)


 * In my experience, everything is less impressive when you understand it, from TV show plots to calculus. This helps explain why children are so much more excited than adults. 67.158.43.41 (talk) 04:15, 15 November 2010 (UTC)


 * From a practical, "engineering" standpoint, calculus is just about calculating things to help build things, as you said. From a conceptual, "mathematical" standpoint, calculus gives us the tools to "do arithmetic involving infinity," roughly, which can mean infinitely big values, infinitely small values, or infinitely many steps in the calculation. These conceptual breakthroughs are what make calculus such a big deal in the mathematics world. —Bkell (talk) 05:08, 15 November 2010 (UTC)

Thanks, but I'm not reading any of those big sophisticated books that are more for people who easily understand the subject. I was looking more for a nice kindergarten version of what calculus is, as if it were in a children's book. I doubt, since no one said it helps improve your rhetorical style, logical fallacies and whatnot, that it would be useful to me. I was thinking logic= logos? Connection, right? :P Oh well. Thanks for assuring me that there is absolutely no way that knowing calculus will help improve your logical reasoning. This really makes me feel like I'm not missing anything important in life. AdbMonkey (talk) 03:05, 16 November 2010 (UTC)


 * "Thanks for assuring me that there is absolutely no way that knowing calculus will help improve your logical reasoning." I strongly disagree. From your statements, I think a little first-order logic would be much more useful to you than calculus in this regard. It seems like all of the content you're really after. Some of that stuff actually can be explained adequately to a kindergartner too, IMO. 67.158.43.41 (talk) 03:47, 16 November 2010 (UTC)

THAT can be explained to a kindergartner? :( Thanks for the effort anyway. AdbMonkey (talk) 08:39, 16 November 2010 (UTC)


 * Lewis Carroll wrote a book called Symbolic Logic that he thought was appropriate for young children. On the calculus side, there's also a book called Calculus for Cats that might appeal to you. —Bkell (talk) 15:54, 16 November 2010 (UTC)
 * Formal treatment of calculus is known for requiring a particularly rigorous way of thought. It has many counterintuitive phenomena, and concepts which are transformed completely if you change a single detail. So learning calculus certainly can improve your logical reasoning, even if not directly.
 * Calculus is the way the universe works. In particular, the most fundamental laws of physics are particular differential equations. I'd say that being completely oblivious to it is a pretty important thing to miss in life. -- Meni Rosenfeld (talk) 16:13, 16 November 2010 (UTC)


 * To be clear, the first order logic page I linked is written formally. Informally, you could start describing first order logic by asking "if I have a chocolate in one of my hands and I show you it's not in my left hand, which hand is it in? How do you know?" 67.158.43.41 (talk) 22:06, 16 November 2010 (UTC)
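The chocolate example is a one-step deduction (a disjunctive syllogism: it's in one hand or the other, it's not in the left, so it's in the right). For the curious, that kind of reasoning can even be checked mechanically by enumerating the possibilities; a tiny sketch (the encoding is my own, purely illustrative):

```python
# Premise 1: the chocolate is in exactly one hand (the possible "worlds").
# Premise 2: it is not in the left hand.
# Keep only the worlds consistent with premise 2; in every one that
# remains, the chocolate is in the right hand - the conclusion follows.
worlds = [
    {"left": True,  "right": False},
    {"left": False, "right": True},
]
consistent = [w for w in worlds if not w["left"]]      # apply premise 2
print("in the right hand:", all(w["right"] for w in consistent))
```

Checking every consistent possibility like this is exactly what a formal proof guarantees without having to enumerate anything.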

I like the way that you explain it, .41. This is good palatable stuff. Do you have any other stuff like your examples? Your examples only please. Not a scary book. AdbMonkey (talk) 22:16, 16 November 2010 (UTC)


 * I'm sorry, I'm making them up off the top of my head. Similar examples would be models for easy first order logic proofs. To make more, one could translate the formal logic of an easy proof into English (a little like this section of the Boolean logic article). Above all the point is to avoid jargon. There must be books out there with a treatment of logic you'd like. A brief search found Logic for Dummies. I don't necessarily endorse it since I haven't read it, but I glanced through it on Google Books and from that brief inspection it seems like what you might be interested in. I do strongly recommend formal logic if you're interested in making good arguments: "practice makes perfect". 67.158.43.41 (talk) 21:11, 17 November 2010 (UTC)

Thank you very much. That's a lot of helpful advice. Thanks for answering. :) AdbMonkey (talk) 20:39, 18 November 2010 (UTC)