Talk:L'Hôpital's rule/Archive 1

Location
Stupid question time again... Should this be here, or at L'Hôpital's rule...? -- Oliver P. 02:30 Feb 26, 2003 (UTC)
 * I'm not sure if it's written somewhere, but the policy seems to be that the content should be placed under the title with no accents, umlauts, circumflexes, etc. and redirect titles with those characters to the main article. -- Minesweeper 06:33 Mar 9, 2003 (UTC)
 * Huh?? Since when?? What about Kurt Gödel and Paul Erdős?  There are thousands of these and I've never heard anyone say a single word against them. Michael Hardy (talk) 02:37, 20 April 2008 (UTC)

Edit
I have to wonder: as I made my first major edit, I expected my proof of l'Hopital's rule to be torn apart and totally rewritten. I have not yet received a single edit after mine. What's up? Was the edit good and the proof correct? Or does nobody care about this corner of Wikipedia? --Sverdrup 16:19 Nov 18, 2003 (UTC)

Actually, the history shows there have been edits after yours. I'm guessing L'Hôpital's rule probably isn't one of the most frequently visited pages in Wikipedia though, so it's likely an edit that isn't vandalism would go unnoticed for a while in something as specialised as this. My only thought was - is there a point to the strange purple box? Angela 19:47, 20 Nov 2003 (UTC)

I looked at another page displaying a proof, and that article had a purple box around it. It may not be convention, and it may not be beautiful, but I'll let someone else decide. Sverdrup 16:24, 21 Nov 2003 (UTC)

l'Hopital's rule should be cross-referenced with the Stolz-Cesàro theorem, because they are similar in flavour. It is natural to think about a' for sequences as $$a'_n = \frac{a_{n+1} - a_n}{1}$$.
 * I've added a link to that from this article. Michael Hardy 04:08, 2 April 2007 (UTC)

Use examples that avoid the three naive rules
Here are three naive rules that are erroneous and could plausibly be believed by anyone who might be struggling to understand this topic:

(1) The numerator approaches 0; therefore the quotient approaches 0.

(2) The denominator approaches 0; therefore the quotient approaches ∞ or −∞.

(3) The numerator and denominator both approach the same number (0); therefore the quotient approaches 1.

To avoid reinforcing errors, the earliest examples involving the indeterminate form 0/0 should therefore be cases in which the limit is neither 0 nor 1 nor ∞ nor −∞.

Does anyone have a good example of this involving the indeterminate form ∞/∞? Michael Hardy 00:01, 13 Sep 2004 (UTC)
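To make the point above concrete, here is a small numerical sketch (the functions are my own illustrative choices, not from the article): a 0/0 form whose limit is 2 and an ∞/∞ form whose limit is 3, so the limit is neither 0 nor 1 nor ±∞ in either case.

```python
import math

# 0/0 form: (e^(2x) - 1) / x -> 2 as x -> 0
# (numerator and denominator both -> 0, but the quotient -> 2)
def q1(x):
    return (math.exp(2 * x) - 1) / x

# oo/oo form: (3x^2 + x) / (x^2 + 5) -> 3 as x -> oo
# (numerator and denominator both -> oo, but the quotient -> 3)
def q2(x):
    return (3 * x**2 + x) / (x**2 + 5)

print(q1(1e-8))   # close to 2
print(q2(1e8))    # close to 3
```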


 * I added an example of 0/0 where the limit is 69.5. By the way, I think the first two rules above are correct if you meant "Only the numerator" and "Only the denominator". Eric119 01:03, 13 Sep 2004 (UTC)


 * Unfortunately, the example you added was one that could be done so easily without L'Hopital's rule that it's not really a good example. And besides, using L'Hopital's rule on problems of that kind often runs the risk of the same kind of circular reasoning that is referred to in the article.  (Of course, the proposed "naive rules" are correct with the qualifications you mention; that is the reason people may be tempted to follow them.) Michael Hardy 20:20, 13 Sep 2004 (UTC)

the limit of sin(6x)/5x as x approaches 0 is 6/5, is that the kind of example you're looking for? --67.103.161.210 (talk) 17:08, 4 October 2008 (UTC)
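For what it's worth, a numeric check of that suggestion (a sketch of mine): the quotient sin(6x)/(5x) and the quotient of derivatives 6·cos(6x)/5 both approach 6/5 = 1.2 as x approaches 0.

```python
import math

def quotient(x):
    return math.sin(6 * x) / (5 * x)        # the 0/0 form at x = 0

def derivative_quotient(x):
    return 6 * math.cos(6 * x) / 5          # ratio of the derivatives

for x in (0.1, 0.01, 0.001):
    print(x, quotient(x), derivative_quotient(x))   # both approach 1.2
```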

Other proofs
I don't understand the proof by the linearity argument (I don't even know what it means). Can someone please expand it (or explain it to me on this talk page) so that it's understandable?


 * Maybe I'll add something there at some point... Michael Hardy 19:09, 20 Jan 2005 (UTC)

Additionally, the second sentence assumes f(x) tends to infinity if g(x) tends to infinity,


 * I don't see how you get that from the second sentence in that proof. Michael Hardy 19:09, 20 Jan 2005 (UTC)

which there is no reason to assume (or it must be somehow explained if there is some). --hhanke 21:57, 19 Jan 2005 (UTC)


 * Local linearity: a differentiable function is "locally linear," meaning that it is practically linear if a sufficiently small part of it is viewed.


 * The argument is that if $$\lim_{x \to 0} \frac{f(x)}{g(x)}$$ tends to 0/0, then $$\lim_{x \to 0} f(x) = \lim_{x \to 0} g(x) = 0$$. Then, provided the functions are differentiable at 0, each function can be approximated locally:  e.g., for f(x), it can be approximated near 0 by the line with slope $$f'(0)$$ that passes through (0,0).


 * So we have $$f(x) \sim xf'(x)$$ and $$g(x) \sim xg'(x)$$, both near $$x=0$$.


 * Hence $$\lim_{x \to 0} \frac{f(x)}{g(x)} = \lim_{x \to 0} \frac{xf'(x)}{xg'(x)} = \lim_{x \to 0}\frac{f'(x)}{g'(x)}$$.


 * I hope that works for a quick explanation. -Rjyanco 19:17, 20 Jan 2005 (UTC)
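To supplement the explanation above, here is a quick numeric sketch (my own choice of functions: f(x) = e^(3x) − 1 and g(x) = sin x, both vanishing at 0), showing the quotient approaching f'(0)/g'(0) = 3 as the linearity argument predicts.

```python
import math

# f(x) = e^(3x) - 1 and g(x) = sin(x) both vanish at 0.
# Near 0:  f(x) ~ x f'(0) = 3x  and  g(x) ~ x g'(0) = x,
# so f/g should approach f'(0)/g'(0) = 3.
def f(x): return math.exp(3 * x) - 1
def g(x): return math.sin(x)

for x in (0.1, 0.01, 0.001):
    print(x, f(x) / g(x))   # tends to 3
```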

Error in example?
I was looking at the example for lim {x->0+} (x ln x) which is the next to last example, and I was wondering if it has some errors in it:

first, I don't see how lim {x->0+} (x ln x) = lim (x->0+) ((ln x)/x)? I've tried doing the math and don't see how x ln x = (ln x)/x directly or by its derivatives

second, assuming the first equation is true, then after applying l'hopital's we get lim {x->0+} ((1/x)/1) = 0. Well, lim {x->0+} ((1/x)/1) = lim {x->0+} (1/x) and for all intensive purposes I thought lim {x->0+} (1/x) = oo+ (positive infinity)? It seems to me someone applied l'hopital's another time and got lim 0/1 = 0, but you can't do that since we don't have a 0/0 or OO/OO condition. If you graph 1/x you clearly see it going to oo+ as x->0+. [stux]


 * I think you're right. I believe I have fixed it now. Eric119 22:55, 8 Feb 2005 (UTC)

(Since non-native-English-speaking people may be reading this, please note: "for all intents and purposes" is a standard locution; "for all intensive purposes", appearing above, is a jocular parody.) Michael Hardy 22:47, 9 Feb 2005 (UTC)
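For reference, the corrected computation (as I understand the fix) rewrites x ln x with the reciprocal in the denominator rather than dividing by x:

```latex
\lim_{x\to 0^+} x\ln x
  = \lim_{x\to 0^+} \frac{\ln x}{1/x}      % a  {-\infty}/{\infty}  form
  = \lim_{x\to 0^+} \frac{1/x}{-1/x^2}     % differentiate top and bottom
  = \lim_{x\to 0^+} (-x) = 0.
```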

definition too ridiculous
The definition of L'Hopital's rule is too ridiculous. In other words, it's not accessible. The formal definition either needs to be toned way down, or come later. I will try to do the latter for now. Fresheneesz 01:44, 5 June 2006 (UTC)

it seems as though your proof relies on lim f(x) = f(a) as x approaches a, but this would only be true if the theorem required continuity on [a,b], which it doesn't.

More examples of other indeterminate forms would be helpful.
I came here to see examples for the forms $$1^\infty$$, $$\infty^0$$, $$\infty-\infty$$ and $$0^0$$. Found none.. —The preceding unsigned comment was added by 84.108.154.59 (talk • contribs).


 * Those are not forms that L'Hopital's Rule is explicitly set up to solve. You can usually solve them using natural logarithms to turn them into one of the other standard forms. Anyone think the article should have this sort of trick added to it? -- dcclark (talk) 18:15, 14 July 2006 (UTC)


 * Not I .  --Bob K 18:35, 14 July 2006 (UTC)
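As a sketch of the logarithm trick described above (with an example of my own choosing): the 1^∞ form (1 + 1/x)^x becomes a 0/0 form after taking logs, whose limit is 1, so the original limit is e.

```python
import math

# (1 + 1/x)^x is a 1^oo form as x -> oo.
# Taking logs: x * ln(1 + 1/x) = ln(1 + t)/t with t = 1/x, a 0/0 form.
# l'Hopital (or the derivative of ln at 1) gives limit 1, so the
# original limit is e^1 = e.
def log_form(t):
    return math.log(1 + t) / t        # -> 1 as t -> 0+

t = 1e-6
print(math.exp(log_form(t)))          # close to e = 2.71828...
```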

Story of the creator
My university professor told us that the original author sold the rule to Hospital, because he was apparently poor and Hospital wanted to be remembered. As I have no citation or source, I wonder if anyone else has heard something like that.. Lovok 12:16, 15 August 2006 (UTC)

I did


 * MathWorld says that l'Hôpital published it in 1696, but acknowledged the Bernoulli brothers. A letter by John Bernoulli gives the rule and a proof, so it seems likely he discovered it. Tmartin 21:00, 28 November 2006 (UTC)

I think this is great. I consider it an important early example of "the purchaser overshadowing the creator". I happen to have a text by Professor Earl D. Rainville called Unified Calculus and Analytic Geometry from 1961. It's interesting that he presents this as "Theorem 36" on page 262, rather than L'Hopital's Rule. (He also presents it late, after the student has already seen significant material. More modern texts seem to hurry to put it out there within the first 30 pages of the presentation of Derivatives.) TaoPhoenix 06:57, 21 August 2007 (UTC)

Local Linearity Proof, Pending
There is a section in the article, which goes as follows:

Other proofs
There are more intuitive proofs of the rule. If
 * $$\lim_{x \to a} \frac{f(x)}{g(x)}$$

tends to the indeterminate form 0/0, then the rule can be proven with a local linearity argument. If it tends to the indeterminate form $${\infty}/{\infty}$$, then this can be converted to 0/0 form using the identity :$$\frac{f(x)}{g(x)}=\frac{1/g(x)}{1/f(x)}.$$ By assuming this limit equals L, and taking the derivative of the numerator and denominator, it can be proven that $$L=\lim_{x \to a}\frac{f'(x)}{g'(x)}$$.


 * What follows is a local linearity proof that, in a sense, is something that I came up with while thinking about the problem. If it works and you (reader) have no objections to it, I'll include it soon. And I quote:

Proof by local linearity
Suppose that functions $$f(x)$$ and $$g(x)$$ are continuous and differentiable on the interval $$(a, b)$$. In addition, there is a function $$h(x)$$ that is equal to the ratio of $$f(x)$$ and $$g(x)$$. In other words,
 * $$h(x)={{f(x)}\over{g(x)}}$$

When these functions are evaluated at a real number $$c$$ on the interval $$(a, b)$$, their ratio is
 * $$h(c)={{f(c)}\over{g(c)}}$$

For example, if $$f(x)=x^3+1$$ and $$g(x) = \sin \left(\frac{\pi x}{4}\right)$$, then $$h(x)$$ evaluated at 1 is
 * $$h(1)={{f(1)}\over{g(1)}}={{1^3+1}\over{\sin \left(\frac{\pi \cdot 1}{4}\right)}}={{2}\over{1 / \sqrt{2}}}=2 \sqrt{2}$$

However, suppose that $$f(c)=g(c)=0 \,$$, where $$c$$ is again a value on the interval $$(a, b)$$. Then
 * $$h(c)={{f(c)}\over{g(c)}}={0 \over 0}$$

0 divided by 0 is indeterminate. This means that $$h(c)$$ could be any real number or infinity, depending on the context. However, since $$f(x)$$ and $$g(x)$$ are continuous and differentiable at $$x = c$$, they can be approximated by lines (because of local linearity).

The linear approximation (or linearization) of $$f(x)$$ and $$g(x)$$ around $$x = c$$ is, respectively, $$y=f(c)+f'(c)\,(x-c)$$ and $$y=g(c)+g'(c)\,(x-c)$$. The function $$h(x)$$ near $$x=c$$, then, can be approximated by
 * $$h(c)={\lim_{x\rightarrow c}{{\mbox{Linearization of } f\left(x\right)}\over{\mbox{Linearization of } g\left(x\right)}}}$$

Using the mathematical expression for the functions' linearizations,
 * $$h(c)={\lim_{x\rightarrow c}{{f\left(c\right)+f'\left(c\right)\,\left(x-c\right)} \over{g\left(c\right)+g'\left(c\right)\,\left(x-c\right)}}}$$

Since $$f(c)$$ and $$g(c)$$ are 0 regardless of the value of $$x$$,
 * $$h(c)={\lim_{x\rightarrow c}{{f'\left(c\right)\,\left(x-c\right)} \over{g'\left(c\right)\,\left(x-c\right)}}}$$

There is a factor of $$(x - c)$$ in both the numerator and denominator, which can be canceled, revealing
 * $$h(c)={\lim_{x\rightarrow c}{{f'\left(c\right)} \over{g'\left(c\right)}}}$$

If $$f'(c)$$ and $$g'(c)$$ are both real and not both zero, then
 * $$h(c)={{f'\left(c\right)} \over{g'\left(c\right)}}$$

This concludes the proof. If a 0 over 0 form occurs again, L'Hôpital's rule can be used again (unless $$f(x)$$ and $$g(x)$$ resemble the zero function $$y = 0$$ around $$c$$).

The $$\infty$$ over $$\infty$$ form can be treated similarly, when $$f(c)=g(c)=\infty$$. Since
 * $$\frac{f(x)}{g(x)}=\frac{1/g(x)}{1/f(x)}$$

the $$\infty$$ over $$\infty$$ form is transformed into a 0 over 0 form, which can then be evaluated with L'Hôpital's rule.


 * (end section) All right. That's the proof. If no one disagrees, and please disagree or suggest changes if you don't like it, I'll put it on the L'Hôpital's rule page soon. -- Grace notes T § 17:01, 28 October 2006 (UTC)


 * The key statement here is: The function $$h(x)$$ near $$x=c$$, then, can be approximated by
 * $$h(c)={\lim_{x\rightarrow c}{{\mbox{Linearization of } f\left(x\right)}\over{\mbox{Linearization of } g\left(x\right)}}}$$

This is equivalent to L'Hopital's rule, and it's not proved at all. Septentrionalis 23:20, 30 October 2006 (UTC)

Name spelling
"l'Hôpital's rule (alternatively, and quite incorrectly, l'Hospital's rule)"


 * Sources for this? My understanding is that the two are almost equivalent in French and, in any case, the latter is a transliteration. This is like saying it's incorrect to spell Paul Erdős as Paul Erdos. In 'Mathematical Methods for Science Students' l'Hospital is used, presumably for a lack of a circonflexe at the printer. I don't see anything objectionable to the spelling and I think it's certainly a bit excessive to say that it's used "quite incorrectly". It may not be how l'Hôpital would spell his own name, but it's an acceptable transliteration of which I'm sure he would approve Tmartin 21:00, 28 November 2006 (UTC)


 * The circumflex character in the word "hôpital" denotes an unpronounced letter s. If the s is not pronounced in French (hence the circumflex), why should it be added back in when transcribed into English, such that English speakers will pronounce it? Most native-English speakers do not know what a circumflex is, let alone that the s it replaces is not pronounced. There's no legitimate reason not to render it with the circumflex--Wikipedia is full of diacritical marks, and IPA pronunciation similarly abounds. His name should be spelled using the French character set (i.e., l'Hôpital). 216.145.255.2 11:31, 5 January 2007 (UTC)

Well, as the article on Guillaume de l'Hôpital himself says:
 * l'Hôpital is commonly spelled as both "l'Hospital" and "l'Hôpital." The Marquis spelled his name with an 's';  however, the French language has since dropped the 's' (it was silent anyway) and added a circumflex to the preceding vowel.

In other words, the modern French spelling is Hôpital, and the old French spelling was Hospital. The latter spelling may be older, but it was still current in l'Hôpital's time. So really, while it seems perfectly fair to say that "Hôpital" should be the preferred spelling in modern English, it is absurd to call Hospital incorrect. --Iustinus 00:41, 31 January 2007 (UTC)

I agree. In fact, both spellings are common in the mathematical literature, and from a pure language point of view I think you can argue for both versions too (common spelling vs. unchanged spelling of names). So I took the liberty of removing the "incorrectly" phrase.

Problem in proof that L'Hôpital's can be applied to infinity over infinity
In the proof that L'Hôpital's theorem can be applied to infinity over infinity, the theorem is applied to

$$ \frac{1/g(x)}{1/f(x)}$$

but nowhere was it assumed that $$({1 \over f(x)})'$$ is non-zero throughout as needed in order to apply the theorem for the case $$0\over 0$$

Micha
 * It is required. It follows from the existence of the limit. 80.178.152.155 20:51, 18 April 2007 (UTC)
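For concreteness (my own note, not part of the original exchange): by the chain rule, assuming f is differentiable and nonvanishing near the point,

```latex
\left(\frac{1}{f(x)}\right)' = -\frac{f'(x)}{f(x)^{2}},
```

so the derivative of 1/f is nonzero exactly where f' is nonzero; this is the nonvanishing that the existence-of-the-limit argument above is said to supply.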

You Divided by zero
0/0? You can't divide by zero; if you do, it gives you all sorts of weird answers. —The preceding unsigned comment was added by 71.246.231.88 (talk) 23:04, 2 January 2007 (UTC).


 * Hence l'Hôpital's rule, my friend. -216.145.255.2 11:32, 5 January 2007 (UTC)


 * We're talking about limits here. We can go as close as we want to 0 without actually touching it. 130.234.198.85 23:43, 18 September 2007 (UTC)

How to pronounce the name
I guess most of our readers are not fluent French speakers, so it would be cool to have instructions in the article. Both an IPA version and an English approximation would improve the article. 130.234.198.85 23:57, 18 September 2007 (UTC)


 * The Guillaume de l'Hôpital page discusses the pronunciation, so I'm not sure it needs to be on this page, too. --David Marcus (talk) 01:58, 26 December 2008 (UTC)


 * A math student viewing this page would care little to look up the guy who made the theorem. The biography of a 17th-century academic is pretty useless to a reader of this article. I would argue that important mathematics named for people with non-English names should list Anglicized pronunciations, multiple ones if necessary, denoting how professors and researchers usually pronounce them. SamuelRiv (talk) 03:50, 22 February 2011 (UTC)
 * Addendum: for people like Cauchy and Bernoulli, who have a million things named after them and fairly non-scary-looking names, we can just keep it on the bio page. But for common rules with less-common and crazy-looking-to-English-eyes names, I stand by my statement that pronunciation should be included. SamuelRiv (talk) 04:12, 22 February 2011 (UTC)

Other applications example is poor example of L'Hopital's rule
The example given under "other applications" as an example of an indeterminate inf/inf form is a poor example of L'Hopital's rule for 2 reasons: 1. It is more easily solved without using L'Hopital's rule. 2. You can take the derivative of the denominator repeatedly and it will remain an inf/inf indeterminate form; it doesn't terminate. This is represented by the term (2x-1)/(2*sqrt(x^2-x)), whose lim as x -> inf is declared to be 1. [How? By factoring an x out of the numerator and denominator to yield (2x(1-1/x))/(2x(sqrt(1-1/x))), canceling the 2x yields lim x -> inf (1-1/x)/sqrt(1-1/x), which is 1.] Referring back to item (1), if you apply this very same step ('pulling' an x^2 out of the sqrt and canceling) before applying L'Hopital's rule, you get the same answer. L'Hopital's rule isn't needed AND doesn't get you any closer to the solution. It just complicates the problem. —Preceding unsigned comment added by 71.212.216.121 (talk) 22:13, 1 November 2007 (UTC)
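A quick numerical check of the factoring argument quoted above (a sketch of my own):

```python
import math

# The term discussed above: (2x - 1) / (2 * sqrt(x^2 - x)) -> 1 as x -> oo.
# Factoring x out of top and bottom gives (2 - 1/x) / (2 * sqrt(1 - 1/x)).
def term(x):
    return (2 * x - 1) / (2 * math.sqrt(x**2 - x))

for x in (1e2, 1e4, 1e6):
    print(x, term(x))   # tends to 1
```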


 * It is a poor example. I've replaced it with a different example. --David Marcus (talk) 21:45, 25 December 2008 (UTC)

Important error

 * $$\lim_{x \to c}f(x)=\lim_{x \to c}g(x)=\pm\infty,$$

the limit of f(x) does not need to equal the limit of g(x)

--Perfection (talk) 17:38, 12 January 2008 (UTC)


 * I was about to write the same thing. According to L'Hospital's rule, if $$\lim_{x \to c}g(x)=\pm\infty,$$ then it doesn't matter what the limit of f(x) is. Of course, if the limit of f is a real number then L'Hospital's rule isn't needed, but in any other case, such that the limit of f(x) is undefined, plus/minus infinity, or not a real number (e.g. an imaginary number), L'Hospital's rule still holds! Yomach (talk) 14:48, 18 May 2010 (UTC)

explanation of revert
The edit by user:69.230.107.87 was good. The first algebraic manipulation is so clear that there is no need to spend a line doing it. The explanation stating that cos is continuous is precise, simple, correct and the established way to make such an argument. Writing cos(1/∞) is too informal. Oded (talk) 16:18, 1 May 2008 (UTC)


 * That was me (I apparently didn't notice I wasn't logged in at the time). It drives me crazy when people say $$ {1 \over 0} = \infty$$ or that $$ {1 \over \infty} = 0 $$. The limit as x approaches zero is infinity and the limit as x approaches infinity (meaning that x just gets bigger and bigger) is zero, but it is important to remember that it is impossible to actually "get" to zero or infinity, but we can get arbitrarily close. miromodo (talk) 02:33, 6 May 2008 (UTC)

L'Hospital Proofs
Not sure about the proofs of L'Hospital's rule (forgive the mis-spelling). I changed the 0/0 case, which previously used the same epsilon in both the numerator and denominator, as that usage can be contradicted by the curves x^2 and x^3. I'm also not sure as to the validity of the second proof; proofs I have seen in textbooks are far more complex and involve formal definitions of limits.

Cosec(x) (talk) 04:00, 20 May 2008 (UTC)


 * Could you clarify "can be contradicted by the curves x^2 and x^3"? Furthermore, your changes haven't helped.  The final step:



$$ \frac{\lim_{a\to c}f'(a)}{\lim_{b\to c}g'(b)} = \lim_{x\to c}\frac{f(x)}{g(x)}, $$


 * is meaningless. By definition:



$$ \frac{\lim_{a\to c}f'(a)}{\lim_{b\to c}g'(b)} = \frac{f'(c)}{g'(c)} $$


 * Therefore, I've reverted the changes again. Oli Filth(talk) 19:52, 21 May 2008 (UTC)

Technical Correction
Most calculus students don't know this, but the only condition required for the "infinity" case is that g(x) goes to ∞ or −∞. In particular, nothing about the limit of f(x) needs to be assumed. See Rudin's "Principles of Mathematical Analysis" or any other advanced analysis book for a proof. —Preceding unsigned comment added by 98.223.172.153 (talk) 00:34, 3 August 2008 (UTC)


 * If lim x->infinity of (x+sin(x))/x isn't cos(x)+1, how is it automatically 1? srn347 —Preceding unsigned comment added by 68.7.25.121 (talk) 21:18, 5 November 2008 (UTC)


 * This is where one must be careful in what L'Hopital's Rule actually says. One of the hypotheses is that lim f'(x)/g'(x) exists (including the case of +/- infinity). In your example, that hypothesis fails, and so L'Hopital's rule can not be applied. See Rudin's book for the proof. —Preceding unsigned comment added by 129.79.147.41 (talk) 16:36, 6 November 2008 (UTC)


 * Yeah, that part I get, but how do you compute it, and why is lim x->infinity of (x+sin(x))/x equal to 1 then? And I have never heard of Rudin or his book, so I wouldn't know where to look. srn347 —Preceding unsigned comment added by 68.7.25.121 (talk) 17:58, 10 November 2008 (UTC)


 * So, you're asking about how to evaluate the limit of (x + sin x)/x as x-> infinity? My only point is that you can't evaluate it using L'Hopital's Rule. As far as how to actually do it, I would split up the fraction (x + sin x)/x = 1 + (sin(x)/x) and then show that limit of sin(x)/x as x->infinity is 0. How do you do that? Well, |sin(x)| <= 1 is a bounded function, and 1/x -> 0 as x-> infinity, thus |sin(x)/x| <= 1/x -> 0 as x-> infinity. Therefore, lim_{x->infinity} sin(x)/x = 0. —Preceding unsigned comment added by 98.228.32.76 (talk) 12:07, 19 November 2008 (UTC)
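The squeeze argument above can be checked numerically (a small sketch of mine):

```python
import math

# (x + sin x)/x = 1 + sin(x)/x, and |sin(x)/x| <= 1/x -> 0 as x -> oo,
# so the quotient tends to 1 even though l'Hopital's rule does not apply.
def quotient(x):
    return (x + math.sin(x)) / x

for x in (1e2, 1e4, 1e6):
    assert abs(quotient(x) - 1) <= 1 / x   # the bound from the squeeze
    print(x, quotient(x))
```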


 * Thank you for explaining that, so I guess the sine (or cosine for that matter) is between 1 and -1 but can't be directly taken for infinity (yet). Maybe eventually it will become defined. srn347 —Preceding unsigned comment added by 68.7.25.121 (talk) 18:36, 24 November 2008 (UTC)


 * What is Rudin's proof? The two books I have that prove the infinity case use the limit of f being infinity in the proof. --David Marcus (talk) 21:59, 25 December 2008 (UTC)


 * It's in Rudin's Principles of Mathematical Analysis (p. 109-110 in the international edition), he proves l'Hopital's rule in 2 cases -- (1) f(x) -> 0, g(x) -> 0 as x -> a, and (2) g(x) -> infinity as x -> a (positive or negative infinity). To prove these, he assumes f'(x)/g'(x) -> A as x -> a where A is real or positive or negative infinity and x in (a, b).


 * For -infinity <= A < +infinity, he chooses q and r such that A < r < q; then the f'(x)/g'(x) assumption allows there to exist a c in (a, b) such that f'(x)/g'(x) < r whenever a < x < c. For a < x < y < c, Cauchy's mean value theorem shows there exists a point t in (x, y) such that (f(x) - f(y))/(g(x) - g(y)) = f'(t)/g'(t) < r (*).  For case (2) with g(x) -> infinity, he fixes y in (*), then chooses a point c1 in (a, y) such that g(x) > g(y) and g(x) > 0 when a < x < c1.  He multiplies (*) by (g(x) - g(y))/g(x) to get:


 * f(x)/g(x) < r - rg(y)/g(x) + f(y)/g(x) for a < x < c1. Then he lets x -> a, so by the limit assumption for (2), there is a point c2 in (a, c1) such that f(x)/g(x) < q.  Thus, for any q > A, there exists a point c2 such that f(x)/g(x) < q whenever a < x < c2.  Similarly, for A = +infinity, he finds a lower bound for f(x)/g(x) under a similarly defined requirement on x.  In both cases, therefore, we get f(x)/g(x) -> A as x -> a.  —Preceding unsigned comment added by 68.202.40.194 (talk) 08:44, 18 October 2009 (UTC)

f vs. f(x)
The article sometimes confuses a function with its value:

"In simple cases, l'Hôpital's rule states that for functions f(x) and g(x)" should be "In simple cases, l'Hôpital's rule states that for functions f and g".

"When determining the limit of a quotient ƒ(x)/g(x) when both ƒ and g approach 0, or g approaches infinity, l'Hôpital's rule states that if f'(x) / g'(x) converges, then ƒ(x)/g(x) converges" should be "When determining the limit of a quotient ƒ/g when both ƒ and g approach 0, or g approaches infinity, l'Hôpital's rule states that if f'/g' converges, then ƒ/g converges".

"since 1/e^{sin(x)} fluctuates between e^{−1} and e" should be "since 1/e^{sin} fluctuates between e^{−1} and e" or "since 1/e^{sin(x)} fluctuates between e^{−1} and e as x->infty".

"However, it is simpler to observe that this limit is just the definition of the derivative of sin(x) at x = 0. In fact this particular limit is needed in the most usual proof that the derivative of sin(x) is cos(x), but we cannot use l'Hôpital's rule to do this, as it would produce a circular argument." should be "However, it is simpler to observe that this limit is just the definition of the derivative of sin at zero. In fact this particular limit is needed in the most usual proof that the derivative of sin is cos, but we cannot use l'Hôpital's rule to do this, as it would produce a circular argument."

--David Marcus (talk) 03:50, 6 December 2008 (UTC)

ofek says the last example should be explained
yes! —Preceding unsigned comment added by 84.229.217.114 (talk) 20:01, 16 April 2009 (UTC)

In the last example: L'Hôpital's rule can not be applied
Please consider the following example on the page:

"Sometimes L'Hôpital's rule is invoked in a tricky way: suppose $$f(x) + f'(x)$$ converges as $$x \to \infty$$. It follows: $$\lim_{x \to \infty} f(x) = \lim_{x \to \infty} {e^x f(x) \over e^x} = \lim_{x \to \infty} {e^x (f(x) + f'(x)) \over e^x} = \lim_{x \to \infty} (f(x) + f'(x))$$
 * and so $$\lim_{x \to \infty} f(x)$$ exists and $$\lim_{x \to \infty} f'(x) = 0.$$"

In the second equality, you seem to use the rule, which you cannot in general: $$f(x)=0$$ or $$f(x)={\sin(x) \over e^x}$$ are simple counterexamples, because for these functions $$e^x f(x) \over e^x$$ is not of the form $$0/0$$ or $$\infty/\infty$$.

On the other hand, the conclusion of the example is true in these cases, but clearly, your proof via L'Hôpital's rule is invalid.

In view of these counterexamples to the "proof", is the claim of the example true at all?

89.134.198.147 (talk) 01:41, 27 December 2009 (UTC)

Having read the "Technical Correction" above in this discussion, I see that the example is true, but some explanation in the main article seems necessary, since here we use the "generalized" version of the rule. 89.134.198.147 (talk) 00:28, 5 January 2010 (UTC)
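A numerical look at the counterexample candidate f(x) = sin(x)/e^x mentioned above (my own check): both f + f' and f do tend to 0, consistent with the example's conclusion.

```python
import math

# f(x) = sin(x) * e^(-x):  f + f' = cos(x) * e^(-x) -> 0 as x -> oo,
# and indeed f(x) -> 0 and f'(x) -> 0, as the example's conclusion asserts.
def f(x):  return math.sin(x) * math.exp(-x)
def fp(x): return (math.cos(x) - math.sin(x)) * math.exp(-x)

for x in (5.0, 10.0, 20.0):
    print(x, f(x) + fp(x), f(x))   # both columns tend to 0
```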

The very definition of "limit" ?
Let me add some comments to the "g'(x) ≠ 0 hypothesis and corresponding example" subsection by Davidjmarcus on this discussion page, which also affect the main hypotheses of the present article about L'Hopital's Rule.

Davidjmarcus says

In the article, the example of f(x) = x + cos(x) sin(x) has an error. It is not true that...

...

The limit on the left does not exist because cos(x) is zero in any neighborhood of infinity and thus the expression is undefined at those points. The limit on the right is zero. The two limits are not equal. You can't cancel a factor of zero from the numerator and denominator of a fraction.

...

It is true that the derivative of g cannot vanish throughout such an interval, but this is implied by the limit existing, so need not be included in the hypotheses of the theorem. The reason that L'Hopital's Rule does not apply to the example is that lim_{x->a} f'(x)/g'(x) does not exist.

In this article you seem to understand the concept of the limit of a function in a way that the domain of definition of the function you take the limit of is a whole interval (or possibly punctured in the middle). But for several visitors who read this article this might be misleading: it is known (see for example http://eom.springer.de/L/l058820.htm) that $$\lim_{x \to a} F(x)$$ can also be defined if the point $$a$$ is only an accumulation point of the domain of definition of $$F$$. In this broader sense of limit, $$F$$ is not necessarily defined on a punctured interval: in many more points $$F$$ can be undefined, still the limit is well-defined.

I'm sure Spivak's book, which Davidjmarcus is referring to, uses this broader sense; that is why he can "cancel a factor of zero from the numerator and denominator of a fraction" and why the example f(x) = x + cos(x) sin(x) is not incorrect.

In this broader sense, existence of the limit $$\lim_{x\to a} f'(x)/g'(x)$$ does not imply $$g'\ne 0$$ throughout the interval.

So my conclusion is that the article about L'Hopital's Rule should indeed include explicitly the appropriate condition $$g'\ne 0$$. —Preceding unsigned comment added by 89.134.198.147 (talk) 02:31, 27 December 2009 (UTC)

More multimedia content
Multimedia content on the topic can be found here: http://ocw.mit.edu/OcwWeb/Mathematics/18-01Fall-2006/VideoLectures/detail/embed35.htm

194.151.76.231 (talk) 15:27, 18 January 2010 (UTC)

Demonstration
I personally think that it is necessary to add a demonstration (proof) of this rule. 190.231.115.4 (talk) 00:14, 14 May 2010 (UTC)

Notation
I haven't seen anywhere on the page the notation used when the rule is applied. While I don't know the typical way to write it in English-speaking countries, it's common in Poland to write it like this:

$$\lim_{x\to 0}{\frac{e^x-1}{2x}} \overset{\left[\frac{0}{0}\right]}{\underset{\mathrm{H}}{=}} \lim_{x\to 0}{\frac{e^x}{2}}={\frac{1}{2}} $$

I think that this should be mentioned. Tomato86 (talk) 00:21, 24 June 2010 (UTC)

Bernoulli's rule?
The current version contains an unsourced claim to the effect that the rule is sometimes called Bernoulli's rule. Unless this can be sourced, the claim should be removed. Tkuvho (talk) 01:47, 25 July 2010 (UTC)

"Requirement that the limit exist(s)": grammar?
The phrase "The requirement that the limit ... exist(s)" has been edited a few times now. "Exist" in this context is the subjunctive form of the verb if I understand correctly; "exists" is informal but not necessarily wrong. I don't think it's a big deal which one we use (I have just a mild preference for the subjunctive), but it would be nice to form a consensus rather than have it keep flipping back and forth. Does anyone have a strong opinion on this? Jowa fan (talk) 00:56, 5 June 2011 (UTC)


 * Perhaps a more mathematically significant ambiguity is whether "exist(s)" incorporates infinite limits. It is common in many American textbooks to consider a limit as existing only if it is finite.  But L'Hospital's rule is still correct in the case of infinite limits, so these books have to use a circumlocution such as "then the following equality holds if the right hand side limit (with derivatives) exists, is infinity, or is negative infinity".--DudeOnTheStreet (talk) 00:07, 7 June 2011 (UTC)

This slight generalization should be in the article
The case "infinity over infinity" is too strong; it is necessary only that the denominator tends to infinity. See here: http://math.stackexchange.com/questions/62916/how-to-show-that-lim-limits-x-to-infty-fx-0-implies-lim-limits-x/62931#62931 — Preceding unsigned comment added by 201.231.234.10 (talk) 21:24, 8 September 2011 (UTC)

Circularity
One of the sections uses L'Hospital's rule to prove that $$\lim_{x\rightarrow0}\frac{\sin \pi x}{\pi x}=1$$. However, this is circular, as the derivative of sin x is proved using $$\lim_{x\rightarrow0}\frac{\sin x}{x}=1$$. Unsigned comment from 220.255.2.173
 * It depends which definition of sin is used. Some textbooks define it in terms of differential equations or as a series.  If either of these definitions is used, then there's no circularity here. Jowa fan (talk) 03:35, 7 November 2011 (UTC)

Infinity/infinity proof
As someone mentioned before, I thought
 * $$\lim_{x\rightarrow c}\frac{f(x)}{g(x)}=\lim_{x\rightarrow c}\frac{\frac{1}{g(x)}}{\frac{1}{f(x)}}$$

was the standard trick for converting an infinity/infinity case into a 0/0 case. Currently there seems to be a complicated incomplete proof for this case, and no mention of the above trick. Is there a fatal flaw in the above trick for reducing the infinity/infinity case to 0/0? Rschwieb (talk) 18:32, 1 December 2011 (UTC) OK, n/m, I see now that it's not just that such a problem can be reduced to the 0/0 case; it's actually that L'H's rule is still valid for the indeterminate form ∞/∞. That's stronger. Rschwieb (talk) 03:50, 3 December 2011 (UTC)
 * The proof that's there right now is totally lacking, without even a citation of where it came from, and must go. I have developed in my sandbox a thorough and rigorous proof, perhaps too long, but convincing, that I would like your comments on, and on whether we should put it here once it passes every measure of rigor appropriate for WP. Please comment on it in my Discussion page. Toolnut (talk) 05:10, 4 December 2011 (UTC)
 * Presumably the reference is Spivak, mentioned at the opening of that section. Also keep in mind NOR and NOTTEXT. Generally proofs are avoided in favor of references to reliable sources. Any notion of "passing measures of rigor for proofs" in Wikipedia seems misguided. Rschwieb (talk) 01:42, 9 December 2011 (UTC)
 * We all agree, though, that it must be verifiable, if not through citation, then through logic. The current form fails both tests and itself admits that "steps are missing." Many contributions to WP are not word-for-word copies of other reputable professionals' works, and this one is only a proof, supposed to be convincing; it is hard to imagine WP being nothing more than a repository for reputable sources' works, without bringing up copyright concerns; it is and remains a volunteer effort of contributors. If a proof is illogical or unconvincing, it achieves nothing and has no purpose being there. Until a proof that meets WP guidelines is found, the interested reader should be directed to where a pertinent discussion on this matter is taking place. Toolnut (talk) 23:08, 9 December 2011 (UTC)
 * If it has numerous problems, I agree it should be removed. Talk of providing "verifiability through logic" is noble, however I have to gently remind you that it is misguided and runs afoul of Wikipedia's guidelines. WP:NOR is very clear on this. Best of all is a cited sketch version of a reliable resource. The only reason this is not removed now is because it might be reproduced from Spivak (which I haven't verified). Fabricating an original proof, no matter if it is correct, is OR and it will be removed. Rschwieb (talk) 01:57, 10 December 2011 (UTC)

That's not how I understand it: if there is no controversy about the fact that l'Hôpital's rule works for the ∞/∞ case, because it is said to have been proved somewhere, and if the proof, in a contributor's own words, supports that without violating the laws of mathematics, I don't see where the objection would lie in WP:NOR. Pertinently, it says: "must be attributable to a reliable published source, even if not actually attributed ... That 'Paris is the capital of France' needs no source, because no one is likely to object to it and we know that sources exist for it ... Articles should be written in your own words while substantially retaining the meaning of the source material." My textbook is a reliable published source that states that the rule works, without providing the proof; other textbooks may leave it to the reader as an exercise. Even if we find the proof in a reliable published source, we don't have to quote it word for word, yet the proof has to be convincing and complete in order to serve any educational purpose. And how do we know if the source is reliable in the first place? I have found many mistakes in my textbooks, some obvious typos, some not so, that require proof, or reasoning, to contradict: some authors encourage their readers to contact them to point these out. The fact of the matter is that no one bothers to question the sources of proofs, let alone their accuracy, due to their abstruse nature, and this is the second instance of an uncited proof I have found. Toolnut (talk) 04:50, 10 December 2011 (UTC)
 * If you do not see that WP:NOR applies, then you have not read carefully. The Paris quotation that you have misinterpreted provides an excellent contrast to this situation. Any reader of the Paris article knows where to find a source to verify Paris is the capital of France. NOT every reader knows where to verify a proof of a special case of L'Hopital's rule. As any experienced editor will tell you the threshold for inclusion is verifiability and not truth. I'm just saying that what you are doing is insisting on including it because it is true, however WP has a stronger standard: verifiability in a reliable source. This policy is intended to safeguard the reliability of WP's contents by ensuring there are published resources that concur. A single editor's opinion on what is true or not is a poor substitute for verifiability. Rschwieb (talk) 14:58, 10 December 2011 (UTC)


 * Rschwieb is right, of course, as is Boris Tsirelson who said on Toolnut's talk page "WP cannot accept original ideas, not even too original presentation of well-known ideas". Duoduoduo (talk) 17:24, 10 December 2011 (UTC)


 * Thomas and Finney, Calculus and Analytic Geometry, 5th edition, p.164 says the validity of l'Hopital's Rule in the infinity over infinity case is proved "in more advanced textbooks" [more advanced than Thomas and Finney!]. In a footnote they say: "There is also a proof of l'Hopital's Rule for both forms 0/0 and infinity/infinity in the article "L'Hopital's Rule," by A. E. Taylor, American Mathematical Monthly 59, 20-24 (1952)." Someone who can access it and follow it may want to use it to replace the incomplete proof that is currently here. (Or maybe they are identical except that a few steps from there need to be inserted here.)  Duoduoduo (talk) 21:49, 10 December 2011 (UTC)


 * I've already downloaded the proof you speak of: good luck to whoever takes on the task of rewriting it in their own words. My point is this: it is not original research to present a proof using existing, incontrovertible laws of mathematics and logic, just as it is not original research to say the same thing differently, in your own words and without violating the rules of grammar. Also, a proof can only be verified by a reader familiar with the underlying laws of mathematics: the only thing that needs to be verified, for those who don't wish to take the trouble to convince themselves, is the foregoing hypothesis of the proof, the theorem. There is a reason why you can't patent a proof. Toolnut (talk) 14:29, 11 December 2011 (UTC)
 * I just rewrote the proof of the infinity/infinity case. It is not a copyright violation, since I wrote the proof myself. It is also not original research, since it is closely based on a proof I saw in a mathematics lecture a few years ago. I hope it satisfies the Wikipedia standards. Have fun with it. --130.83.2.27 (talk) 16:18, 12 December 2011 (UTC)
 * Copyvio is not usually the problem with mathematical proofs. We shouldn't copy them word for word, but the further we stray from the reference, the closer to OR it is. I'm planning on incorporating the unified proof suggested by Duoduoduo. For one thing, it will eliminate the need for two separate proofs. It does not seem particularly difficult, either. Rschwieb (talk) 17:08, 12 December 2011 (UTC)

OK, here is the draft proof of Taylor's proof adapted for WP. His proof is nice since it only appeals to fundamental limit ideas (glb, lub, liminf, limsup) and the Mean Value and Squeeze theorems, and does not need any epsilons. It does both cases simultaneously in almost the same way. Full citations will be provided. Rschwieb (talk) 15:34, 13 December 2011 (UTC)

Draft replacement proof
In case the above conversation is escaping people's notice, this section should bring you up to date. The problems with the current proof are that it makes too big a deal about doing two cases, and it omits the case when the denominator diverges at a real number, and overall does not seem to follow a verifiable source. To clean this up and eliminate OR concerns, I've adapted an elementary proof by A.E. Taylor which takes care of these problems. READ the draft version here Thanks to Duoduoduo for finding this source. Full citations will be included if this draft makes it into mainspace. Rschwieb (talk) 15:47, 13 December 2011 (UTC)


 * After fixing my typos (thanks Toolnut) I think the aforementioned proof is ready to go. My only reservation is that it looks like 130.83.2.27 may not have been paying attention to this, and is still tweaking the section proposed for replacement. Are there any objections to THIS draft? Rschwieb (talk) 15:11, 14 December 2011 (UTC)


 * That proof looks very nice and is of course much better than my epsilon-proof. The only reason I added my proof to the page was that I didn't like the old version at all and wanted to improve the article. Also, I did not think that one simple proof for both cases was even possible. :-) --130.83.2.27 (talk) 13:27, 15 December 2011 (UTC)


 * Hi! It's a relief that you are on board! New proof should go in soon. Rschwieb (talk) 19:19, 15 December 2011 (UTC)

Convergence to infinity?
Some edits today have used the terminology "converges to infinity". I've never heard that terminology before. My understanding has been that "converge" always means "converge to a finite limit", and "diverge" means "go to infinity". See for example the article Convergent series, which says


 * A series is convergent if the sequence of its partial sums $$\left \{ S_1,\ S_2,\ S_3,\dots \right \}$$ converges. In more formal language, a series converges if there exists a limit $$\ell$$ such that for any arbitrarily small positive number $$\varepsilon > 0$$, there is a large integer $$N$$ such that for all $$n \ge \ N$$,


 * $$\left | S_n - \ell \right \vert \le \ \varepsilon.$$

Since it is impossible to get within $$\varepsilon$$ of infinity, it is impossible to converge to infinity. Duoduoduo (talk) 16:07, 16 December 2011 (UTC)

Colloquially people use "converges to infinity" and "diverges to infinity" interchangeably, but probably inaccurately. Doesn't "divergent" mean "not convergent"? Since limits approaching infinity technically "do not exist", we should say "diverges to infinity" to describe the behavior. However, people thinking in terms of the extended real numbers would say there is nothing inconsistent about "converging to infinity", because infinite limits exist there. It's probably best to say "f diverges to infinity", but I didn't find an article on WP backing it. Do correct it if you have a good bluelink.

You're right that f can't approach infinity the way it could a finite number. We say "f approaches infinity" when, for all n>0, there exists an interval I around c such that f(I)>n. Said another way, if the limit is finite number L, we need to ensure that we can choose an open interval I around c so that f(I) gets into a target open interval around L. For L=infinity, the target open interval is replaced by an unbounded interval (n,∞). Rschwieb (talk) 17:06, 16 December 2011 (UTC)
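The interval description above can be put in quantifier form (a standard restatement for a two-sided limit at a finite point c):

```latex
\lim_{x\to c}f(x)=\infty
\quad\Longleftrightarrow\quad
\forall\, n>0\ \exists\,\delta>0:\ 0<|x-c|<\delta\implies f(x)>n,
```

so the unbounded target interval (n, ∞) plays the role that (L−ε, L+ε) plays for a finite limit L.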

Circular example?
The Examples section contains the following:


 * One can also use l'Hôpital's rule to prove the following theorem. If $$f''$$ is continuous at $$x$$, then



$$\begin{align} \lim_{h \to 0} \frac{f(x + h) + f(x - h) - 2f(x)}{h^2} & = \lim_{h \to 0} \frac{f'(x + h) - f'(x - h)}{2h} \\ & = f''(x). \end{align}$$

But later in the article, the section Logical Circularity says


 * In some cases it may constitute circular reasoning to use l'Hôpital's rule to evaluate a limit. Consider


 * $$\lim_{h\to 0} \frac{(x+h)^n-x^n}{h}.$$


 * If the purpose of evaluating this limit is to prove that if $f(x) = x^{n}$, then


 * $$f'(x)=nx^{n-1},\,$$


 * and one uses l'Hôpital's rule and this same fact to evaluate the limit, then the argument uses the conclusion as an assumption (i.e., begging the question) and is therefore fallacious (even though the conclusion is true).

Doesn't this argument also imply that the first example above involves circular reasoning? The example assumes that l'Hopital's rule is true, but isn't the last step in the example essentially used to prove l'Hopital's rule? Duoduoduo (talk) 16:27, 19 December 2011 (UTC)


 * I think the $$x^n$$ example is misleading. It's circular reasoning, but l'Hôpital's rule is not really the main culprit. In this situation, you want to prove the differentiation rule $$\lim_{h\to 0}\frac{(x+h)^n-x^n}{h}=nx^{n-1}$$. Of course, anyone who tried to apply l'Hôpital's rule to the expression inside the limit would have to know how to take the derivative of the numerator, which is not known at the outset of the problem. The step where the numerator is differentiated is the step where circular reasoning comes into play. I'm going to remove it until there's a better example.
 * The reasoning in the $$f''(x)$$ example that you cited does not seem circular. Do you see somewhere where the conclusion is used in the proof? Rschwieb (talk) 17:46, 19 December 2011 (UTC)


 * I admit I find this one a little confusing, but I do think there's circularity in



$$\begin{align} \lim_{h \to 0} \frac{f(x + h) + f(x - h) - 2f(x)}{h^2} & = \lim_{h \to 0} \frac{f'(x + h) - f'(x - h)}{2h} \\ & = f''(x). \end{align}$$


 * It assumes that l'Hopital's rule is true, which I suspect requires (?) the assumption that
 * $$\lim_{h \to 0} \frac{\phi(x + h) - \phi(x - h)}{2h} = \phi'(x)$$
 * which for $$\phi \equiv f'$$ is equivalent to $$\lim_{h \to 0} \frac{f'(x + h) - f'(x - h)}{2h}  = f''(x).$$
 * So I think this last equality, which is the conclusion of the argument, is also (?) an assumption of the argument. Duoduoduo (talk) 20:40, 19 December 2011 (UTC)
 * I don't see the conclusion anywhere in the hypotheses. All I see is a proof that a particular limit yields f''(x) in a way other than its definition. What is wrong with assuming l'Hôpital's rule is true? Rschwieb (talk) 21:04, 19 December 2011 (UTC)

Error in general proof
Let us begin this discussion by referring to the foregoing discussion started in User talk:Rschwieb/Sandbox.

Rschwieb was at first swayed by my many arguments about the fact that the referenced Taylor's proof has a minor typo in it:
 * $$m(x)\leq \liminf_{x \to c} \frac{f(y)}{g(y)} \leq \limsup_{x \to c} \frac{f(y)}{g(y)}\leq M(x)$$

should be changed to
 * $$m(x)\leq \liminf_{y \to c} \frac{f(y)}{g(y)} \leq \limsup_{y \to c} \frac{f(y)}{g(y)}\leq M(x),$$

for three reasons:
 * 1) Any limit of a function, say f(x,y,...) as x→c, whether inferior or superior, or just plain limit (if it exists), is not a function of the "limit variable", in this case, x:
 * $$\lim_{x\to c} f(x,y,...)=h(f,c,y,...),$$
 * which is not a function of x: x drops out of the evaluated limit.
 * 2) The nearest definition on WP of liminf (and, similarly, of limsup), not involving discrete sequences, is given to be ("metric space to metric space")
 * $$\liminf_{x\to c} f(x) = \lim_{\delta\to 0} (\inf \{ f(x) : x \in E \cap B(c;\delta) - \{c\} \} ) ,$$
 * which for the one-sided limit of a continuous one-dimensional function translates to:
 * $$\liminf_{x\to c^-} f(x) =\lim_{\delta\to0}\,\inf_{x\in(c-\delta,c)} f(x)=\lim_{x\to c^-}\inf_{\xi\in(x,c)} f(\xi),$$
 * again, not a function of the limit variable x, nor of ξ. In this case, it evaluates into a constant, a function of only the function definition "f " and of c. Rschwieb wants to modify this definition so it is a function of x.
 * 3) We are supposed to be taking the limit not of f(y)/g(y) (an intermediate limit only achieved under the assumption that y→c first) but of
 * $$\frac{f(y)-f(x)}{g(y)-g(x)}$$
 * and, as I pointed out earlier, the dependence on x vanishes when we make y go to c first, and so does y, the limit variable: it becomes a constant, sandwiched between two functions of x, m & M. If we make them go to c at the same time, we must specify how we do so, an unnecessary level of complication. Clearly, Taylor did not intend it that way, as he first says "Now let y→c ..." and, later, "We then let x→c ..." Toolnut (talk) 12:14, 20 December 2011 (UTC)


 * Again, it is not really an error but a difference of notation style. Hopefully this helps you: as x approaches c, the set $$S_x=\{y\mid y \text{ between } x \text{ and } c\}$$ is shrinking, therefore $$p_x=\liminf\{h(y)\mid y\in S_x\}$$ is increasing. Analogously, $$q_x=\limsup\{h(y)\mid y\in S_x\}$$ is decreasing. Taylor is concluding $$m(x)\leq p_x\leq q_x \leq M(x)$$ for all x in the interval. I think I can adapt the above into the notation of the article, without disrupting Taylor's original proof too much.
 * Yes, he is ridding us of the variable y, but not in the way you are supposing. Yes, his notation does not exactly match the liminf wiki article (but that is hardly his fault). Rschwieb (talk) 14:26, 20 December 2011 (UTC)

Your proof is OR, if ever there was one, and makes us none the wiser for reading it. You are putting far too many words into Taylor's treatment without justification.
 * 1) You have yet to explain what would be wrong in adopting the simple correction I suggested, so anyone can understand it without an advanced degree in math. The purpose of the proof is to educate, not to intimidate the reader. The proof has no value if the reader can't follow it enough to be convinced of the truthfulness of the theorem. Taylor does none of the shenanigans you suggest (show me!): he puts it in my simple terms: "Now let y→c ..." and, following the statement which needs correction, he says "We then let x→c ...," implying x has not yet done so.
 * 2) You have yet to explain why you let y approach c, before x, in eliminating $$\frac{f(x)}{g(y)}$$ and $$\frac{g(x)}{g(y)}$$ from $$h(x,y)=\frac{\frac{f(y)}{g(y)}-\frac{f(x)}{g(y)}}{1-\frac{g(x)}{g(y)}}$$, while you treat the remaining expression, $$\frac{f(y)}{g(y)}$$, as though that hasn't yet occurred. To be consistent you would have to put the bigger expression inside the liminf and limsup, but that is clearly not what Taylor has done.
 * 3) Based on the context, in your line
 * $$m(x)\le\liminf_{y\in S_x} \frac{f(y)}{g(y)} \leq \limsup_{y \in S_x} \frac{f(y)}{g(y)}\leq M(x),$$


 * (where the limit x→c has not yet been taken) $$\liminf_{y\in S_x} \frac{f(y)}{g(y)}$$ cannot be a limit inferior but an infimum, $$\inf_{y\in S_x} \frac{f(y)}{g(y)}$$, which is, indeed, a function of x (same thing goes for your limsup). To be fully correct, there should have been no assumption made that y had already gone to c, prior to this, and this line should then be corrected to:
 * $$m(x)\le\inf_{y\in S_x}h(x,y)\le\sup_{y\in S_x}h(x,y)\le M(x).$$


 * When we take the limit as x→c,
 * $$\lim_{x\to c}m(x)\le\lim_{x\to c}\inf_{y\in S_x}h(x,y)\le\lim_{x\to c}\sup_{y\in S_x}h(x,y)\le\lim_{x\to c}M(x).$$

In your response, please avoid ad hominem attacks, given that your competence in dealing with this particular problem was just as limited when I first brought it to your attention. Just address my issues, one by one, which you haven't done in this last round. I've given this more thought, care, and time than it may appear to you. Toolnut (talk) 12:03, 21 December 2011 (UTC)
 * The question remains: what, for one, is $$\lim_{x\to c}\inf_{y\in S_x}h(x,y)$$? Which goes to c first, y, or y and x together? If y (as it should be), then how is that implied in the expression?
 * Your absurd behavior has now exhausted my patience for the last time. We will be talking if you decide to infringe upon WP policy again, but I hope this does not happen. Goodbye. Rschwieb (talk) 13:40, 21 December 2011 (UTC)

In the proof, the notation $$\liminf_{y\in S_x}$$ is confusing. It should be $$\liminf_{y\to c, y\in S_x}$$, or (my preference) just $$\liminf_{y\to c}$$ with a note in the text that the limit is taken for $$y\in S_x$$. Sławomir Biały (talk) 13:34, 22 December 2011 (UTC)

Even if no info about the existence of the limit of f is given?
The end of the proof says
 * So, l'Hôpital's rule provides a powerful tool for handling the ±∞/±∞ case, and moreover it can even decide the situation if no information about the existence of the limit of f is given.

But suppose that f has undampened oscillations, so that it has no finite or infinite limit. Then neither does f ', so l'Hopital's rule cannot decide the situation.

For example, consider $$\lim_{x \to \infty}\frac{f(x)}{g(x)} = \lim_{x \to \infty}\frac{\sin(x)}{x^{1/2}}$$. An application of the rule gives $$\lim_{x \to \infty}\frac{\cos(x)}{\tfrac{1}{2}x^{-1/2}}.$$ Here the numerator fluctuates between positive and negative, while the denominator goes to zero, so the ratio fluctuates between increasingly large positive and negative numbers, thus not deciding the situation.

So I think the last part of the above quote needs to be replaced with


 * So, ..., and moreover it can even find the limit of $$\frac{f(x)}{g(x)}$$ when $$f(x)$$ is known to have a limit but it is not known if that limit is finite or infinite.

Duoduoduo (talk) 22:57, 21 December 2011 (UTC)


 * In your example, you are correctly noting that $$\lim_{x\rightarrow c}\frac{f'(x)}{g'(x)}$$ does not exist. (The graph is also convincing of this fact.) The existence of this limit is a hypothesis of l'Hopital's rule, so this choice of f and g doesn't qualify :) Rschwieb (talk) 23:23, 21 December 2011 (UTC)


 * You missed my point while restating it. The existence of the limit of f ' / g ' is a hypothesis of the rule, and the existence of this limit is ruled out when f itself has no limit. Therefore it is wrong to say that l'Hopital's rule can decide the situation if no information about the existence of the limit of f is given. In fact, if f has no limit, then l'Hopital's rule can never decide the situation.


 * Can you give a single example in which l'Hopital's rule can decide the situation when f has no limit?  Duoduoduo (talk) 23:36, 21 December 2011 (UTC)
 * No, the statement "f'/g' has a limit implies f has a limit" is false. There are lots of examples. Let x go to infinity for f(x)=sin(x) and g(x)=x^2. The squeeze theorem confirms that both the limit of f/g and the limit of f'/g' are zero. I might have missed your point, but to me it looked like you were trying to apply l'Hôpital's rule to a pair of functions that does not satisfy the hypotheses of the theorem. Rschwieb (talk) 00:44, 22 December 2011 (UTC)
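Spelling out the squeeze computations for this pair, with f(x) = sin x and g(x) = x^2 as x → ∞:

```latex
\left|\frac{\sin x}{x^{2}}\right|\le\frac{1}{x^{2}}\to 0,
\qquad
\left|\frac{(\sin x)'}{(x^{2})'}\right|=\frac{|\cos x|}{2x}\le\frac{1}{2x}\to 0,
```

so both quotients tend to 0, even though f itself has no limit at infinity.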


 * Okay, right, lots of examples exist. Sorry I was careless in suggesting otherwise. But that doesn't alter my basic point. The passage in the article says


 * it can even decide the situation if no information about the existence of the limit of f is given.


 * This implies that it can always decide the situation, which is not true as my example showed. I'm going to change it to insert the word sometimes. Duoduoduo (talk) 16:20, 22 December 2011 (UTC)


 * I'm still not sure what you believe your example shows. It doesn't satisfy the assumptions, so it is not a counterexample to what was written. Under all the hypotheses (including lim g being infinite as the paragraph asserts), no additional assumption about lim f is necessary, and l'Hopital will always decide the situation.


 * It would certainly be bad if the reader suddenly forgot about the hypotheses, so I like your most recent edit. It's definitely worth reminding the reader how critical it is for the limit of f'/g' to exist. Rschwieb (talk) 17:33, 22 December 2011 (UTC)

What was written was not that under the assumptions of the rule, the rule will decide the situation. What was written was that the rule will (implicitly always) decide the situation.

We just have differing interpretations of the original wording -- you read it as implying "under the assumptions of the rule", and I read it as implying "always". But we both like the current wording, so it doesn't really matter what the best interpretation of the previous wording is. Duoduoduo (talk) 18:13, 22 December 2011 (UTC)
 * Yes, that seems to be the source of the misunderstanding :) If it is possible for a reader to forget that hypothesis, then to be consistent there are still lots of missing hypotheses (like differentiability) that would need to be mentioned. I'm going to give it one more shot. Rschwieb (talk) 18:28, 22 December 2011 (UTC)

Conversion
I suggest adding such a section to the main article. Let's edit and discuss it here. Sorry for my poor English; feel free to point out grammar mistakes and fix them.

There is an interesting problem with the converse of l'Hôpital's rule. Let's confine ourselves to the case $$ x \to \infty$$.

Functions f and g are called comparable if there exists $$k \in \mathbb{R}$$ such that $$f(x) \sim k g(x)$$. Informally, l'Hôpital's rule states that if the derivatives of two functions are comparable, then the functions themselves are also comparable.

The converse of this statement is not true in general: one may take, for example, $$ f(x) = x,\ g(x) = x + \sin x, \quad f\sim g,\quad\text{but}\quad f'(x) = 1 \not\sim g'(x) = 1 + \cos x$$. Nevertheless, if we impose additional restrictions on the functions that rule out such oscillations, then the reverse implication holds. Good concepts for stating such restrictions were discovered by G. H. Hardy. Indeed, if both f and g lie in some Hardy field and are comparable, then their derivatives are also comparable.

Ab konst (talk) 08:52, 27 January 2012 (UTC)
 * Crap. "∼" is an undefined equivalence relation on functions, likely even less topical than Hardy fields. L'Hôpital's rule is a local statement, and does not require function spaces – only sufficient smoothness at one point. Incnis Mrsi (talk) 17:22, 27 January 2012 (UTC)
 * Mrsi, please choose your phrasing. Elements of a Hardy field are germs (i.e., also local objects), so the statement itself is not ridiculous at all. As for references, I would have a look at Hardy's book "Divergent series". Sasha (talk) 22:02, 27 January 2012 (UTC)
 * Here $$f\sim g$$ means the standard asymptotic equivalence of two functions. Sorry, I thought that it was obvious from context. Besides, the main result I have in mind can be stated without Hardy fields: see below. Ab konst (talk) 17:16, 28 January 2012 (UTC)


 * More to the point, is there a reference for this material? This article popped up when I Googled it but I don't have access so I'm not sure it's related.--RDBury (talk) 17:55, 27 January 2012 (UTC)


 * I think that the Hardy field results are too technical for a general article. However, the Hardy–Littlewood tauberian theorem is perhaps worth mentioning as an example where the converse is true:

If f is differentiable in [0, 1), and $$f(x)\sim C(1-x)^{-a}$$, where C, a > 0 and f' is increasing, then $$f'(x)\sim Ca(1-x)^{-a-1}$$
 * Sasha (talk) 23:02, 27 January 2012 (UTC)
 * Certainly, the proof of this property of germs from a Hardy field is rather complex. I suggest putting here only the statement of the result, without its proof. By the way, your example is quite interesting also. Ab konst (talk) 17:16, 28 January 2012 (UTC)
 * I know this result from the 4th book of Bourbaki. Ab konst (talk) 17:16, 28 January 2012 (UTC)
 * Well, since "Hardy field" may be confusing as a reference to very hard mathematics, I suggest this variant.

Let's call a function f "very smooth" near infinity if each of its higher derivatives $$f^{(n)}$$ exists and is either strictly positive, strictly negative, or zero in some neighborhood of infinity. Now, the converse of l'Hôpital's rule is true for two "very smooth" functions.
 * Ab konst (talk) 17:18, 28 January 2012 (UTC)
 * Sorry, I got lost in the quantifiers. You (or Bourbaki) say that if f and g are such that for every n the nth derivative of both of them does not change sign infinitely often in a neighbourhood of infinity, then the converse holds? Isn't it sufficient to assume this for one fixed n? Sasha (talk) 17:50, 28 January 2012 (UTC)
 * To the first question: you are quite right. To the second: I'll try to find an answer tomorrow. Ab konst (talk) 19:20, 28 January 2012 (UTC)
 * I'm afraid now that my last sentence is not true at all. It seems very significant that both f and g lie in the same Hardy field. My property is certainly necessary for functions in a Hardy field, but I don't know whether it is proven anywhere that it is sufficient for belonging to some Hardy field. If it is true and I prove it, it would be original research. Ab konst (talk) 19:16, 29 January 2012 (UTC)

g'(x) <> 0 hypothesis and corresponding example

 * Part I

In the article, the formal statement for L'Hopital's Rule includes the condition that g'(x) <> 0 for x <> c in an open interval containing c. This condition is superfluous since it is implied by the condition that the limit of f'(x)/g'(x) exists. This is because if g'(x) = 0, then f'(x)/g'(x) is undefined and, in particular, |f'(x)/g'(x) - L| cannot be less than epsilon.

The book Calculus, third edition, by Michael Spivak (Publish or Perish, Inc., Houston, Texas, 1994, 0-914098-89-6) states L'Hopital's Rule (Theorem 9. Chapter 11) as follows:

"Suppose that lim_{x->a} f(x) = 0 and lim_{x->a} g(x) = 0, and suppose also that lim_{x->a} f'(x)/g'(x) exists. Then lim_{x->a} f(x)/g(x) exists and lim_{x->a} f(x)/g(x) = lim_{x->a} f'(x)/g'(x)."

In the article, the example of f(x) = x + cos(x) sin(x) has an error. It is not true that

$$\lim_{x\to\infty} \frac{2\cos^2 x}{e^{\sin x}\cos x\,(x + \sin x\cos x + 2\cos x)} = \lim_{x\to\infty} \frac{2\cos x}{e^{\sin x}(x + \sin x\cos x + 2\cos x)}.$$

The limit on the left does not exist because cos(x) is zero in any neighborhood of infinity and thus the expression is undefined at those points. The limit on the right is zero. The two limits are not equal. You can't cancel a factor of zero from the numerator and denominator of a fraction.

The following statements preceding the example are incorrect:

"In practice one often applies l'Hôpital's rule and presumes that it was legitimate to use l'Hôpital's rule if the limit of the derivatives exists, even though that is not a strictly proper application of the rule. Note also the requirement that the derivative of g not vanish throughout an entire interval containing the point c. Without such a hypothesis, the conclusion is false. Thus one must not use l'Hôpital's rule if the denominator oscillates wildly near the point where one is trying to find the limit."

It is true that the derivative of g cannot vanish throughout such an interval, but this is implied by the limit existing, so need not be included in the hypotheses of the theorem. The reason that L'Hopital's Rule does not apply to the example is that lim_{x->a} f'(x)/g'(x) does not exist.

--Davidjmarcus (talk) 03:45, 5 December 2008 (UTC)


 * The reasoning in the first paragraph for removing the g '(x)≠0 hypothesis is faulty. The existence of the limit of a ratio of functions at c does not imply that its denominator is nonzero at c: e.g. lim sin(x)/sin(x) as x approaches 0 from either side is obviously 1, but the denominator is zero at 0. See correction below. Every textbook I have seen so far, and a few scholarly papers to boot, has included the g'(x)≠0 hypothesis. Reinstating the hypothesis. Rschwieb (talk) 15:26, 12 December 2011 (UTC)


 * I agree with Davidjmarcus: the existence of the limit of f/g at c implies g!=0 on a punctured open interval containing c, by definition of the limit. Can you please elaborate on why you think his reasoning is faulty ? Matsw (talk) 18:50, 23 February 2012 (UTC)


 * Sorry, I somehow botched the counterexample I intended. Hopefully I don't do it a second time. His claim amounts to: "if the limit approaching c of a ratio of functions exists, then the denominator function is nonzero on some interval with endpoint c". The following counterexample shows this is false. Set f(x)=x*sin(1/x) and g(x)=sin(1/x). Lim f(x)/g(x)=0 as x approaches 0, but g(x) visits zero in every interval-around-0-with-0-deleted. In short, the existence of the ratio limit does not prevent the denominator from visiting 0 infinitely many times in every interval leading up to c. The bad behavior of the denominator is nullified by the bad behavior of the numerator. Davidjmarcus' assertion caught my attention since it flew in the face of the first three calculus texts I picked up. The handwaving in the original post does not constitute a proof. Rschwieb (talk) 21:20, 23 February 2012 (UTC)
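A quick numeric sanity check of this counterexample (a sketch; the sample points x = 1/(nπ + π/4) are chosen only so that sin(1/x) = ±√2/2 ≠ 0, i.e. so that each ratio is actually defined):

```python
import math

def f(x):
    # numerator of the counterexample: x * sin(1/x)
    return x * math.sin(1.0 / x)

def g(x):
    # denominator: sin(1/x), which is zero infinitely often near 0
    return math.sin(1.0 / x)

# Evaluate f/g at points approaching 0 where g is nonzero:
# x = 1/(n*pi + pi/4) gives sin(1/x) = +/- sin(pi/4) != 0.
ratios = []
for n in (10, 100, 1000, 10000):
    x = 1.0 / (n * math.pi + math.pi / 4)
    ratios.append(f(x) / g(x))

# On its domain the ratio f(x)/g(x) simplifies to x, so it shrinks
# toward 0 even though g vanishes at infinitely many nearby points.
print(ratios)
```

The printed ratios decrease toward 0, consistent with the claim that f/g has limit 0 on its domain.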


 * I disagree with your counterexample: you simplify the fraction x sin(1/x) / sin(1/x) by canceling sin(1/x), leaving just x, but you are only allowed to do that if you know sin(1/x) to be nonzero, which you don't. The domain of the function f/g in your example excludes infinitely many points, namely the values of x for which sin(1/x) = 0. Moreover, the definition of the limit (see http://en.wikipedia.org/wiki/(ε,_δ)-definition_of_limit) starts like this "Let ƒ be a function defined on an open interval containing c (except possibly at c) and let L be a real number." but your f/g does not satisfy that requirement because its domain has infinitely many holes in any open interval containing 0. In short: $$\lim_{x\to 0} \frac{x \sin\frac{1}{x}}{\sin\frac{1}{x}}$$ does not exist, but $$\lim_{x\to 0} x$$ does.Matsw (talk) 23:11, 23 February 2012 (UTC)


 * Contrary to your presumption, I'm well aware that it's undefined at the points where the denominator is zero. I'm pretty sure that the wiki ε-δ definition is written the way it is for simplicity, but topologically f/g still has limit 0. If you want to prove to the contrary that the limit does not exist, show me an ε>0 such that for every δ>0, there is a point x, |x|<δ but |f(x)/g(x)|>ε. You see, the undefined points don't cause problems.
 * Now if you took my example function f/g and defined it to be 1 at every point where it was previously undefined, then I would agree that the limit does not exist, because this new function continuously jumps back up to 1.
 * Possibly you may still find this unsatisfactory, but I will again point out that even if you don't believe me, numerous reputable analysis textbook authors (every one I checked) insist on using the g'(x) nonzero hypothesis in question. We can surmise they are doing it for a reason, and not out of foolishness. Even if you don't believe them, we should still match the textbooks to remain encyclopedic. Rschwieb (talk) 03:29, 24 February 2012 (UTC)
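 * For concreteness, here is a sketch (not from the original thread; standard-library Python only) of the ε-δ claim above: for any ε > 0, taking δ = ε works, because every x in the domain of f/g with 0 < |x| < δ satisfies |f(x)/g(x)| = |x| < ε. Points where sin(1/x) = 0 are outside the domain and are skipped rather than counted against the limit.

```python
import math
import random

# f(x)/g(x) with f(x) = x*sin(1/x), g(x) = sin(1/x);
# None marks points outside the domain (where the denominator is 0).
def ratio(x):
    s = math.sin(1 / x)
    return (x * s) / s if s != 0 else None

random.seed(0)
for eps in [1e-1, 1e-3, 1e-5]:
    delta = eps                     # the delta that witnesses the limit
    for _ in range(1000):
        x = random.uniform(-delta, delta)
        if x != 0 and ratio(x) is not None:
            # small slack for floating-point rounding in (x*s)/s
            assert abs(ratio(x)) < eps * 1.000001
print("spot-check passed")
```

This only spot-checks sample points, of course; the analytic argument is that the ratio equals x on its whole domain.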
 * Update After having slept on it I think I can see all sides now. If you demand that the limiting function is defined on a connected interval around c, except maybe at c, then Davidjmarcus' claim is OK. However, as my example shows, that is not possible for the domains of some functions from the reals to the reals. I think the reason the g'(x) hypothesis is used is to allow l'Hopital's rule to continue to work for subspaces such as the domain of my example. Without a doubt, my example has a limit at 0 if you are looking at it as a function from its domain into the reals. In any case, I'm still maintaining that we should leave the hypothesis in, to match texts. Rschwieb (talk) 12:05, 24 February 2012 (UTC)
 * Update To be completely clear, I concede that under that particular interpretation of limit (that limiting functions must be defined on a connected interval before c), you and Davidjmarcus are correct. However, in general that interpretation is probably too strict. By leaving the g'(x) hypothesis in, we may cover more exotic cases. This is probably why the hypothesis appears in so many texts. Rschwieb (talk) 16:08, 24 February 2012 (UTC)
 * It seems plausible that there may be a "meta" issue at work here. In particular, it may be that the definition of a limit is amenable to change to allow more general applicability, in such a way that almost every existing theorem remains true with the substituted definition.  Should such a generalization be possible, the nonzero condition could become necessary.  A theorem (such as l'Hôpital's rule without the nonzero premise) that depends so sensitively on the precise definition of the limit would probably be a source of discomfort, and this may be sufficient reason for most authors to retain the explicit nonzero premise.  — Quondum☏✎ 07:34, 25 February 2012 (UTC)
 * It's an interesting discussion. I think we agree that it depends on the definition of limit that is used. If we use the topological definition, then we need the $$g'(x)\neq 0$$ hypothesis, but if we use the "classical" definition, then we don't. Matsw (talk) 08:45, 27 February 2012 (UTC)
 * Oboy, I'm learning too. Well spotted. A careful reading of the topological limit in WP and of your EoM reference matches your interpretation. I'd missed this before. In particular, the topological limit is defined to be over the domain of the function, and not over the space in which the domain resides (in this case, R). I can think of another way that uses the classic definition of a limit that would still allow g' to become zero infinitely often and still yield a limit: since we are dealing with derivatives, we are inherently dealing formally with functions and not simply with real values, so we can (somewhat unconventionally) define the ratio of two functions in such a way as to fill in removable discontinuities wherever both functions are continuous. So either of two apparently sensible and very minor modifications of unrelated definitions is sufficient to break the implication that g'≠0 throughout some neighbourhood. Rschwieb's original intuition seems to have been spot on if one uses the topological definition of a limit. This does not mean that g'≠0 is a necessary condition for l'Hôpital's rule to be valid, just that under some reasonable definitions the existence of the limit does not imply g'≠0. It would be interesting to see whether l'Hôpital's rule remains valid with definitional changes such as these. — Quondum☏✎ 11:50, 27 February 2012 (UTC)


 * I think we all benefited from this. I don't recall encountering the quirk before, and it's good now that we have a broader picture. Clearly, most references use the g'(x) nonzero hypothesis. Occasionally an author (one or two that I found) seems to require the stronger limit sense, so that they can "prove" g'(x) is nonzero there. And finally, a few authors simplify by requiring both functions to be differentiable on an entire open interval containing the point we're limiting to (which is of course an even stronger requirement). Perhaps I should stick a sentence in about this quirk in the literature? Rschwieb (talk) 14:22, 27 February 2012 (UTC)
 * Some statement in the article to cover this seems appropriate, at least on the treatment in the literature, as well as the distinction between the two types of limit. It would seem that some mention of the type of limit (classical or topological) generally used would be appropriate; I've a feeling the classical is generally intended; topological seems dangerous and would need a careful review (it can cause limits to exist where before they didn't, and if the existence of a limit is a premise, things can go awry). Such a statement might involve checking the choices made by the sources. — Quondum☏✎ 15:47, 27 February 2012 (UTC)

I'll try to describe what happened as simply as possible. Rschwieb (talk) 15:58, 27 February 2012 (UTC)


 * Part II
 * "$$g'(x)\neq 0$$ on an open interval containing $$c$$" is too strong a requirement because it means that the rule cannot be used recursively. It should be replaced with "$$g'(x)\neq 0$$ on a punctured open interval containing $$c$$". Matsw (talk) 08:36, 23 January 2012 (UTC)


 * Modified the intro statement accordingly. (The statement in the body of the article was OK.) Rschwieb (talk) 16:41, 23 January 2012 (UTC)


 * The same holds for the differentiability of $$f$$ and $$g$$: the functions need not be differentiable at $$c$$. Matsw (talk) —Preceding undated comment added 13:18, 25 January 2012 (UTC).
 * It was probably that way since that section refers to the "simplest form" and not the most general form, given later. Rschwieb (talk) 21:20, 23 February 2012 (UTC)