Talk:Characterizations of the exponential function

Revamp
Give me a sec, I think the "intuitive" proof actually links defs 1 and 3, not 1 and 2, and might actually be able to replace the more clunky Havil proof I have here. But, I'm pooped and will return later. Revolver 01:21, 24 Jul 2004 (UTC)

Okay, I'm going to revamp this article a bit. In particular, I think that 2 of the arguments (the "technical" and "intuitive" ones) can be used to show the equivalent ways of defining the exponential function, not just the number e. Then, the equivalent definitions of e follow trivially by taking x = 1. Hopefully, this works! So, I'm going to change the title of the article to "definitions of the exponential function". Revolver 07:12, 24 Jul 2004 (UTC)

What would people think of moving this to characterizations of the exponential function? Michael Hardy 20:08, 24 Jul 2004 (UTC)

PS: see characterization (mathematics).


 * I don't really care, to be honest. Call it what you want. Revolver 22:42, 24 Jul 2004 (UTC)

How come the self-derivative characterization isn't included? It's always been the characteristic that jerked my knee... Kwantus 19:44, 4 Sep 2004 (UTC)
 * The main problem that pops into my mind is that it's not immediately obvious why such a function exists, i.e. why there is y s.t. y' = y and y(0) = 1. Without falling back on one of the others defs, I mean. If anyone knows of a nice way to prove this existence independent of the other 3 defs, I'd be happy to include it. Revolver 08:35, 20 Sep 2004 (UTC)
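For what it's worth, the existence of such a function can be made plausible constructively, without leaning on the other definitions: Picard iteration for y' = y, y(0) = 1 rebuilds the power-series coefficients directly. A small Python sketch of my own (an illustration, not a proof):

```python
import math

def picard_step(coeffs):
    # One Picard iteration: y_{k+1}(x) = 1 + integral from 0 to x of y_k(t) dt,
    # with y_k stored as polynomial coefficients [a_0, a_1, ...]:
    # integrate term by term, then add the constant 1.
    return [1.0] + [a / (j + 1) for j, a in enumerate(coeffs)]

y = [1.0]              # y_0(x) = 1
for _ in range(10):
    y = picard_step(y)

# The coefficients converge to 1/k!, i.e. the series of characterization 2.
print([round(c * math.factorial(k)) for k, c in enumerate(y)])
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

Each iteration is forced by the differential equation alone, so the series emerges rather than being assumed.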

This is really a beautiful article. I like it very much. :) MathKnight 01:20, 1 Oct 2004 (UTC)

Characterization 4
Sorry, we have this problem on the German article Exponentialfunktion. Maybe someone knows the exact proof. I have only a German account. So, greetings, Roomsixhu --83.176.135.233 02:50, 31 May 2005 (UTC)

Got it! How do I prove this? roomsixhu --83.176.135.227 19:02, 31 May 2005 (UTC)

Take characterization 4, which is:
 * $$y'=y,\quad y(0)=1.$$

Divide both sides by y and integrate. Get characterization 3, which is:
 * $$\int_{1}^{y} \frac{dt}{t} = x.$$
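As a quick numerical sanity check of this direction (a sketch of my own, not part of any proof): for y = e^x computed the usual way, the integral of dt/t from 1 to y should come out to x. Using a simple midpoint rule:

```python
import math

def integral_recip(a, b, steps=100_000):
    # Midpoint-rule approximation of the integral of 1/t from a to b.
    h = (b - a) / steps
    return sum(h / (a + (i + 0.5) * h) for i in range(steps))

for x in [0.5, 1.0, 2.0]:
    print(x, integral_recip(1.0, math.exp(x)))  # second column ≈ x
```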

''is an increasing sequence which is bounded above. Since every bounded, increasing sequence of real numbers converges to a unique real number, this characterization makes sense.''
 * Do you call that a proof? It's not much more than a tautology. It should be explained why it is increasing and why it is bounded.--Army1987 20:31, 19 July 2005 (UTC)

What did I miss?
The proof of the equivalence of characterizations 1 and 3 isn't actually a proof; it's a circular argument, and I've no idea why it's there. It claims that:

$$\lim_{n\to\infty}\ln\left(1+\frac{x}{n}\right)^n=\lim_{n\to\infty}n\ln \left(1+\frac{x}{n}\right).$$

which surely should only hold if we were dealing with a logarithm; but that's exactly what we're trying to prove: that ln x is the same thing as $$\log_e(x)$$.


 * That's easy to prove; the integral of dt/t from 1 to x^n, by the substitution u^n = t, becomes the integral of n u^(n-1)/u^n du = n du/u for u from 1 to x. Any good calculus book should have this proof. User:Ben Standeven as 70.249.214.16 03:05, 14 October 2005 (UTC)

Continuity required for characterization 5?
Is there an example of a discontinuous function $$f(x)$$ ($$\mathbb{R} \to \mathbb{R}$$) satisfying $$f(x+y)=f(x)f(y)$$ and $$f(0)=1$$, but for which $$f(x)\neq e^{cx}$$ for all $$c$$?

Of course $$f(x) = e^{cx}$$ if $$f(x)$$ is continuous, as proved in the article (I believe this is a homework assignment in Rudin), but I'm curious to see a counterexample showing that continuity is necessary. (And if it's not a necessary assumption, it would be good to note this in the article and to give a reference.)

—Steven G. Johnson 04:57, 22 November 2006 (UTC)


 * I answered my own question, finally: continuity is required, as one can prove the existence of a discontinuous counter-example. I'm not aware of any constructive counter-example; the existence proof seems to require the axiom of choice.


 * I came up with the proof of a discontinuous $$f(x)$$ satisfying $$f(x+y)=f(x)f(y)$$ on my own after quite a bit of puzzling. Then a friend of mine pointed out that this is also shown in Hewitt and Stromberg, Real and Abstract Analysis (exercise 18.46)...it turns out to be easy to do once you have proved the existence of a Hamel basis for R over Q.


 * —Steven G. Johnson 00:30, 23 February 2007 (UTC)


 * Monotonicity implies continuity: Use monotonicity to show that $$\sup\left\{f(q):q\leq x, q\in\mathbb{Q}\right\}\leq f(x)\leq \inf\left\{f(q):q\geq x, q\in\mathbb{Q}\right\}$$. Then suppose that $$\sup\left\{f(q):q\leq x, q\in\mathbb{Q}\right\}< \inf\left\{f(q):q\geq x, q\in\mathbb{Q}\right\}$$ and show you can find a rational $$q_0$$ such that $$\sup\left\{f(q):q\leq x, q\in\mathbb{Q}\right\}< f(q_0) < \inf\left\{f(q):q\geq x, q\in\mathbb{Q}\right\}$$ which is contradictory. You can then define for $$x\in\mathbb{R}\backslash\mathbb{Q}$$, $$f(x)=f(1)^x:=\sup\left\{f(q):q\leq x, q\in\mathbb{Q}\right\}= \inf\left\{f(q):q\geq x, q\in\mathbb{Q}\right\}$$.


 * To conclude, suppose $$f$$ is not continuous in $$x_0\in\mathbb{R}$$. Then $$\exists\varepsilon_0 > 0,\forall y_1\in f(]-\infty, x_0]), \forall y_2\in f(]x_0,+\infty[), |y_1-y_2|\geq \varepsilon_0$$. This implies that $$\inf\left\{f(q):q\geq x_0, q\in\mathbb{Q}\right\}\neq\sup\left\{f(q):q\leq x_0, q\in\mathbb{Q}\right\}$$, which is not true. Therefore $$f$$ is continuous everywhere.
 * —Deimos 28 06:23, 11 September 2007 (UTC)
 * Hello. I checked exercise 18.46 of Hewitt and Stromberg, Real and Abstract Analysis, but it concludes that f is $$\exp(i\alpha x)$$ for some $$\alpha$$, with an imaginary unit in the exponent. I think that we should remove Lebesgue integrability from the list of alternative conditions because the exercise is different. What do you think? Caph1993 (talk) 18:06, 3 April 2024 (UTC)

Equivalence of 1 and 3
Hi,

The point raised on the discussion page about the equivalence of 1 and 3 is worthy of being addressed in the main article, I believe, as a perceptive reader will see that one is using a result whose prior establishment is not clear. (Indeed, I thought there was a problem and, having satisfied myself that there wasn't, inserted an edit to clear it up, before I had read the point on the discussion page. [Sorry for not checking first]) I hope the edit is still deemed worthy of retention.

Hugh McManus 14:56, 26 February 2007 (UTC)Hugh McManus

Equivalence of characterizations 1 and 2
The proof given only works for $$x \geq 0$$. In line 6, you are arguing $$(1-1/n)(1-2/n)\leq 1$$ implies that $$(x^3/3!)(1-1/n)(1-2/n)\leq x^3/3!$$ and this requires non-negative $$x$$.

Indeed, the statement in Characterization 1 that $$a_n=(1+x/n)^n$$ is an increasing sequence is not valid if $$x<0$$. Fathead99 (talk) 14:24, 30 January 2008 (UTC)
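A quick numerical illustration of this point (a sketch of my own): for x = −5 the sequence oscillates in sign before settling down, so it is certainly not increasing.

```python
# The sequence a_n = (1 + x/n)^n for x = -5 is not monotone:
x = -5.0
terms = [(1 + x / n) ** n for n in range(1, 8)]
print(terms)  # -4.0, 2.25, -0.296..., 0.0039..., 0.0, 2.1e-05, 1.6e-04, ...
```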

Taylor series.
From the limit (characterization 1) the differential equation (characterization 4) follows by differentiation, and then the Taylor series (characterization 2).

It is unclear whether the article talks about the real or the complex exponential function. The reference to positive values of x indicates real exponentials, but the results are important for complex exponentials.

The formulation "in several cases can be extended to any Banach algebra" is confusing; in which cases can it be extended, and in which cases can it not be extended? Is differentiation defined for any Banach algebra? The Banach algebra article didn't tell.

Bo Jacoby 07:29, 23 July 2007 (UTC).

Real versus complex
For some of the characterizations given here (e.g. power series) one may take the domain to be the whole complex plane, but for others (monotonicity plus the functional equation) one must assume the domain is the real line. (There do exist continuous functions on the complex plane that satisfy the functional equation and the initial condition but are not equal to the natural exponential function; they are nowhere (complex-)differentiable. For example, observe that
 * $$ g(x + iy) = e^x(\cos(2y) + i\sin(2y))\, $$

for x and y real, is continuous everywhere in the complex plane and satisfies the functional equation
 * $$ g(z + w) = g(z)g(w)\, $$

for z and w complex, and the initial condition
 * $$ g(1) = e\,$$

but not the Cauchy-Riemann equations, and is therefore nowhere complex-differentiable.) Michael Hardy 13:16, 26 August 2007 (UTC)
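The counterexample above is easy to probe numerically (a sketch of my own, using Python's complex arithmetic):

```python
import cmath, math

def g(z):
    # g(x + iy) = e^x (cos 2y + i sin 2y), as defined above.
    return math.exp(z.real) * complex(math.cos(2 * z.imag), math.sin(2 * z.imag))

z, w = 1 + 2j, -0.5 + 1j
print(abs(g(z + w) - g(z) * g(w)))   # ~0: the functional equation holds
print(g(1 + 0j), cmath.exp(1))       # agree on the real axis: both e
print(g(1j), cmath.exp(1j))          # disagree off the real axis
```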

Motivating the definitions?
I think at least definitions 1., 2., and 5. may seem to a beginning student like rabbits pulled out of a hat. Among all possible limits, why study $$\lim_{n\to\infty} \left(1+\frac{x}{n}\right)^n$$? Among all possible power series, why study $$\sum_{n=0}^\infty {x^n \over n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$$? Among all measurable functions f satisfying f(x+y)=f(x)f(y), why study the one satisfying f(1)=e? I will try to answer these questions in what follows.

Even definitions 3. and 4. (which are really two sides of the same coin, the relation dy=ydx first when x is considered a function of y and then when y is considered a function of x) may need some motivation. In 3., why start the integral at 1? In 4., why choose y(0)=1? The answer is that otherwise we do not get the properties log(xy)=log(x)+log(y) and $$e^{(x+y)}=e^xe^y$$.

The differential equation y'=y is sufficiently well-behaved to have a power series solution. From 4. we thus get 2. Of course, 3. also leads to a power series: $$\log(1+x)=x-x^2/2+x^3/3-x^4/4+\cdots$$. Perhaps this power series should also be included among the definitions.

Although it is commonplace, I find the choice of the variable name "n" in 1. unfortunate. It suggests an integer variable when any complex number is perfectly permissible. I also prefer to use a variable that approaches 0 to one that approaches infinity, so rather than motivating 1., I will motivate: $$e^x = \lim_{h \to 0} {(1+xh)}^{1/h}$$.

If you try to differentiate $$a^x$$ with respect to x in the most straightforward of ways, you get $$\lim_{h\to0}\frac{a^h-1}{h} a^x$$. Since we know that the derivative should be $$log(a)a^x$$, and since a can be any positive real number, we must have come across an expression for the function log. Since the exponential function is just the inverse of log, it is natural to wonder whether you can turn the expression for log into one for the exponential function. You can, and what you get is exactly $$e^x = \lim_{h \to 0} (1+xh)^{1/h}$$. (And once we have this expression it is a good idea to use the binomial series to expand it; this constitutes an alternative path to the power series 2.)

As for 5., it is really incomplete as a definition since it presupposes that you already have a definition of e. It seems to me that what we do have is an interesting characterisation of the function $$f(x)=a^x$$. It is determined once you know: I. that $$f(x+y)={f(x)}{f(y)}$$, II. that f is measurable at one point, and III. the value of f(x) for any value of x except 0. Mattias Wikstrom (talk) 18:02, 8 February 2008 (UTC)
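The variant limit $$e^x = \lim_{h \to 0} (1+xh)^{1/h}$$ motivated above is easy to probe numerically (a sketch of my own):

```python
import math

x = 1.5
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, (1 + x * h) ** (1 / h))   # approaches e^1.5 ≈ 4.4817
```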

Why characterization 1 makes sense: not valid explanation
I came here looking for exactly why characterization 1 is true. This site generally is good about having formal proofs, rather than intuitive explanations. In this "explanation," the only thing seen is that the sequence is convergent, no information is given about what the limit is. There is not sufficient information to prove that the sequence in characterization 1 converges to the exponential function. —Preceding unsigned comment added by 71.154.242.50 (talk) 16:28, 28 June 2008 (UTC)

The idea is that any of these characterizations could be taken as a definition of the exponential function. It is later proved that all the characterizations are equivalent, but you can't prove anything about the exponential function without first defining it. Argoreham (talk) 19:50, 18 July 2008 (UTC)

There's a reason that we don't have sufficient information to prove that the sequence in characterization 1 converges. The text claims, "It can be shown that the sequence [removed] is an increasing sequence which is bounded above." My calculator, on the other hand, claims that the sequence that approaches exp(-5) is 1, -4, 2.25, -0.296296, 0.003906, 0, etc, which clearly does not increase monotonically. We don't prove our proposition because it isn't true. 209.216.175.60 (talk) 01:29, 27 September 2008 (UTC)


 * I've changed what it says.
 * Do you really need a calculator to see that? Or was that just rhetoric? Michael Hardy (talk) 22:04, 27 September 2008 (UTC)

Missing Characterization?
Maybe I'm missing something here, but how come none of the characterizations are the very simple elementary/middle school definition of an exponent? exp(2) = e*e, exp(3) = e*e*e, etc. This can then be extended to all numbers 1/n using integer roots, and from there to all rational numbers, and from there to all real numbers using limits. It seems to me that this should be the very first characterization. Maybe it isn't the most analytically useful, but it is I believe the first definition historically, and certainly the first thing that comes to mind when one thinks "exponential function." 75.83.25.40 (talk) 01:18, 13 November 2008 (UTC)LFStokols


 * This is essentially characterization 5 (which may - like much of the material on this page - be too technical to be accessible to beginners).
 * One could argue, though, that this type of characterisation is incomplete, since it requires one to already have a definition of the number e (and I am afraid that the most natural way of defining e is to say that e=exp(1), where exp has been defined without mentioning e). I think the best way of making them complete is to replace the requirement that exp(1)=e with the requirement that exp'(0)=1, where exp' is the derivative of exp. Such a characterisation may not be one of the "most common", but I do not see why this page should deal only with the "most common" characterizations. Mattias Wikstrom (talk) 10:58, 15 January 2009 (UTC)

Equiv. of 1 & 2
Hi, As someone with training in science and engineering, but not a mathematician, I might lack the sophistication to understand why the laborious proof of the equivalence of characterizations 1 and 2 is needed. To my lesser-trained eye, it would seem that something like the following is sufficient:

$$ \begin{align} R & \equiv (1+\frac{x}{n})^n \\ & = \sum_{k=0}^{n}{n \choose k} \left( \frac{x}{n} \right) ^k \\ & = \sum_{k=0}^{n}\frac{n!}{k! (n-k)!} \left( \frac{x}{n} \right) ^k \\ \end{align} $$

where we used the binomial theorem to arrive at the sum, and the definition of "n choose k".

Now, note that there are as many factors (viz. k of them) in $$ \frac{n!}{(n-k)!} $$ as there are in $$x^k$$, so we'll group them, and then group the other factors as $$\frac{x^k}{k!}$$

$$ \begin{align} & = \sum_{k=0}^{n}  \left[  \frac{n}{n}\frac{n-1}{n} \cdots \frac{n-k+1}{n} \right]  \frac{x^k}{k!} \\ & = \sum_{k=0}^{n}  \left[  \prod_{j=1}^{k-1} \left( \frac{n-j}{n} \right)  \right]  \frac{x^k}{k!} \\ & = \sum_{k=0}^{n}  \left[  \prod_{j=1}^{k-1} \left( 1-\frac{j}{n} \right)  \right]  \frac{x^k}{k!} \\ \end{align} $$

Now, take the limit $$n\to\infty$$. In that case, $$\frac{j}{n}\to 0$$, and the product thus approaches 1, giving

$$ \begin{align} \lim_{n\to\infty} R & = \sum_{k=0}^{\infty}   \frac{x^k}{k!} \\ \end{align} $$

and we have arrived at characterization 2.

What are the nuances that make this proof fail?

Thanks! --Filiusfob (talk) 13:25, 16 July 2010 (UTC)


 * The final bit, taking the limit as n->infinity and asserting that the products
 * $$\prod_{j=1}^{k-1} \left( 1-\frac{j}{n} \right)$$
 * go to 1, isn't strictly rigorous. More important, though, is that we need to figure out how far off the partial sums are from the usual Maclaurin series. Each term in the sum contributes some error--do they add up to a finite amount? That is, does each product above go to 1 quickly enough so that the total error actually decreases as more terms are added? It should be possible to construct a counterexample, replacing j/n with a more slowly decreasing function--maybe j/n^(1/j)--which satisfies all the requirements of your proof without satisfying the conclusion.
 * That said, your proof is pretty intuitive and would be what I'd expect to see in non-math books discussing this topic. Physics texts are especially bad about using "as x->A, f(x)->B" arguments without really justifying them, in my experience. I think they fundamentally rely on mathematicians pointing out errors over the years if their proofs can't actually be made rigorous. That, or physical intuition to confirm their conclusion and, circularly, their reasoning. 67.158.43.41 (talk) 09:43, 4 November 2010 (UTC)

You don't prove anything
"Equivalence of characterizations 1 and 3

Here, we define the natural logarithm function in terms of a definite integral as above. By the fundamental theorem of calculus,


 * $$\frac{d}{dx}\left( \ln x \right) = \frac{1}{x}.$$

Now, let x be any fixed real number, and let


 * $$y=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n.$$

We will show that ln(y) = x, which implies that y = e^x, where e^x is in the sense of definition 3. We have


 * $$\ln y=\ln\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n=\lim_{n\to\infty}\ln\left(1+\frac{x}{n}\right)^n.$$

Here, we have used the continuity of ln(y), which follows from the continuity of 1/t:


 * $$\ln y=\lim_{n\to\infty}n\ln \left(1+\frac{x}{n}\right)=\lim_{n\to\infty}\frac{x\ln\left(1+(x/n)\right)}{(x/n)}.$$

Here, we have used the result $$\ln a^n = n\ln a$$. This result can be established for n a natural number by induction, or using integration by substitution. (The extension to real powers must wait until ln and exp have been established as inverses of each other, so that $$a^b$$ can be defined for real b as $$e^{b\ln a}$$.)


 * $$=x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)}{h} \quad \mbox{ where }h=\frac{x}{n}$$


 * $$=x\cdot\frac{d}{dt}\left( \ln t\right) \quad \mbox{ at }t=1$$


 * $$=x\cdot\frac{1}{t} \quad \mbox{ at }t=1$$


 * $$\!\, = x.$$"

This should prove why $$\int \frac{1}{x} dx= \ln x+C,$$ but it doesn't prove anything. It only proves that, at t=1, $$\ln y=x$$:
 * $$\ln y=\ln (e^x)=x\cdot\frac{1}{t} \quad \mbox{ at }t=1,$$
 * $$\ln y=\ln e^x= x.$$

And I don't see any relation in this equation:
 * $$ x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)}{h}=x\cdot\frac{d}{dt}\left( \ln t\right).$$  —Preceding unsigned comment added by 84.240.9.58 (talk) 10:10, 24 December 2010 (UTC)

"Alternative characterizations

Other characterizations of e are also possible: one is as the limit of a sequence, another is as the sum of an infinite series, and still others rely on integral calculus. So far, the following two (equivalent) properties have been introduced:

1. The number e is the unique positive real number such that
 * $$\frac{d}{dt}e^t = e^t.$$

2. The number e is the unique positive real number such that
 * $$\frac{d}{dt} \log_e t = \frac{1}{t}.$$

The following three characterizations can be proven equivalent:

3. The number e is the limit
 * $$e = \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n$$

Similarly:
 * $$e = \lim_{x\to 0} \left( 1 + x \right)^{1/x}$$

4. The number e is the sum of the infinite series
 * $$e = \sum_{n = 0}^\infty \frac{1}{n!} = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots$$

where n! is the factorial of n.

5. The number e is the unique positive real number such that
 * $$\int_{1}^{e} \frac{1}{t} \, dt = {1}$$."

It should prove why $$e = \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n,$$ but it doesn't prove anything.


 * Could you summarize please what your point is thanks. Dmcq (talk) 12:05, 24 December 2010 (UTC)

One more thing, $$\lim_{h\to 0}\ln(1+h)^{1\over h}=\lim_{h\to 0}\frac{\ln(1+h)}{h}=1.$$ And $$\lim_{h\to 0}(1+h)^{1\over h}=e.$$ So $$\ln e=1$$. —Preceding unsigned comment added by 84.240.9.58 (talk) 09:44, 27 December 2010 (UTC)


 * Could you summarize please what your point is thanks. Rather than just writing more equations say which section you think should be changed and why. Dmcq (talk) 09:59, 27 December 2010 (UTC)

To summarize, let's prove that $$\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n=e.$$ Then $$\ln(e)$$ and $$\ln\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$ should give the same answer, 1, because $$\ln(e)=1$$. So we need to prove that $$\ln\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n =1.$$ So $$y=e$$ and $$y=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n.$$ Let's begin the proof:
 * $$y=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n,$$
 * $$\ln y=\ln\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n=\lim_{n\to\infty}\ln\left(1+\frac{1}{n}\right)^n=\lim_{n\to\infty}n\ln \left(1+\frac{1}{n}\right)=\lim_{n\to\infty}\frac{1\cdot\ln\left(1+(1/n)\right)}{(1/n)}=$$
 * $$=1\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)}{h}=1\cdot \ln(\lim_{h\to 0}(1+h)^{1\over h})=1\cdot \ln e =1\cdot 1=1 \quad \mbox{ where }h=\frac{1}{n}.$$ —Preceding unsigned comment added by 84.240.9.58 (talk) 10:13, 27 December 2010 (UTC)

To summarize, let's prove that $$\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n=e^x.$$ Then $$\ln(e^x)$$ and $$\ln\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n$$ should give the same answer, x, because $$\ln(e^x)=x$$. So we need to prove that $$\ln\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n =x.$$ So $$y=e^x$$ and $$y=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n.$$ Let's begin the proof:
 * $$y=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n,$$
 * $$\ln y=\ln\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n=\lim_{n\to\infty}\ln\left(1+\frac{x}{n}\right)^n=\lim_{n\to\infty}n\ln \left(1+\frac{x}{n}\right)=\lim_{n\to\infty}\frac{x\ln\left(1+(x/n)\right)}{(x/n)}=$$
 * $$=x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)}{h}=x\cdot \ln(\lim_{h\to 0}(1+h)^{1\over h})=x\cdot \ln e =x\cdot 1=x \quad \mbox{ where }h=\frac{x}{n}.$$
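The chain above is easy to sanity-check numerically (a sketch of my own, using the built-in logarithm):

```python
import math

x = 2.0
for n in [10, 1_000, 100_000]:
    print(n, math.log((1 + x / n) ** n))   # approaches x = 2
```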


 * You obviously have a problem just pointing out where you think there is a problem. I think, reading through all this, the problem you have is going between the integral and the logarithm. In fact the integral is used as a definition of the logarithm, and the logarithm rule can be proved as follows:


 * $$ \ln(tu) = \int_1^{tu} \frac{1}{x} \, dx \ \stackrel {(1)} = \int_1^{t} \frac{1}{x} \, dx + \int_t^{tu} \frac{1}{x} \, dx \ \stackrel {(2)} = \ln(t) + \int_1^u \frac{1}{w} \, dw = \ln(t) + \ln(u). $$


 * Is this what was worrying you, and what you'd like included, or what? 12:33, 27 December 2010 (UTC)
 * You forgot to say that this is at t=1, because this:
 * $$ \ln(tu) = \int_1^{tu} \frac{1}{x} \, dx \ \stackrel {(1)} = \int_1^{t} \frac{1}{x} \, dx + \int_t^{tu} \frac{1}{x} \, dx \ \stackrel {(2)} = \ln(t) + \int_1^u \frac{1}{w} \, dw = \ln(t) + \ln(u), $$
 * can be rewritten as this:
 * $$ \ln(tu) = \int_1^{tu} \frac{1}{x} \, dx \ \stackrel {(1)} = \int_1^{1} \frac{1}{x} \, dx + \int_t^{tu} \frac{1}{x} \, dx \ \stackrel {(2)} = (\ln(1)-\ln(1)) + \int_1^u \frac{1}{w} \, dw = 0 + \ln(u), $$
 * so you are still moving in circles and not proving anything. The proof should be about why $$\frac{d}{dx}\left( \ln x \right) = \frac{1}{x}$$ and not about why $$\ln (c\cdot d)=\ln(c)+\ln(d).$$ 16:37, 6 January 2011 (UTC)
 * The integral is the definition of log. Therefore the derivative is 1/x by the fundamental theorem of calculus. One doesn't need to prove it is 1/x - that's the definition. I was showing the definition worked to give a log-type function. The log is not defined as the inverse function of exponentiation. Dmcq (talk) 17:09, 6 January 2011 (UTC)
 * See Logarithm where it says 'the right hand side can serve as a definition of the natural logarithm'. That is exactly what is done here. Dmcq (talk) 17:12, 6 January 2011 (UTC)
 * So it's just a definition, like the rest of the integrals, and guessing that $$\int\frac{1}{x}\; dx=\log_e x.$$ But one thing you still cannot do by the rules of math in this equation:
 * $$ \ln(tu) = \int_1^{tu} \frac{1}{x} \, dx \ \stackrel {(1)} = \int_1^{t} \frac{1}{x} \, dx + \int_t^{tu} \frac{1}{x} \, dx \ \stackrel {(2)} = \ln(t) + \int_1^u \frac{1}{w} \, dw = \ln(t) + \ln(u), $$
 * you can't say that $$\int_t^{tu} \frac{1}{x} \, dx=\int_1^u \frac{1}{w} \, dw.$$ You can only claim that $$\int_t^{tu} \frac{1}{x} \, dx=\ln(tu)-\ln(t)=\ln(t)+\ln(u)-\ln(t)=\ln(u).$$
 * And this means moving in a circle, repeating that $$\ln(t\cdot u)=\ln(t)+\ln(u).$$ See for yourself: if w=x/t, then assume this equation is correct:
 * $$ \ln(tu) = \int_1^{tu} \frac{1}{x} \, dx \ \stackrel {(1)} = \int_1^{t} \frac{1}{x} \, dx + \int_t^{tu} \frac{1}{x} \, dx, $$

but I think you can't do this (summing up two parts of integrals) by the rules of math. But let's assume it; then:
 * $$ \ln(tu) = \int_1^{tu} \frac{1}{x} \, dx \ \stackrel {(1)} = \int_1^{t} \frac{1}{x} \, dx + \int_t^{tu} \frac{1}{x} \, dx \ \stackrel {(2)} = \ln(t)-\ln(1) + \int_t^{tu} \frac{t}{x} \, d(x/t) = \ln(t)-0 + t\ln(tu)-t\ln(t)=\ln(t)+t\ln(u)+t\ln(t)-t\ln(t)=\ln(t)+t\ln(u), $$

where dw=d(x/t)=dx/t; dx=t*dw.


 * If you looked at that section I pointed you at you'd see that the equality of those two integrals was shown by the change of variables substitution w = x/t. Dmcq (talk) 01:27, 7 January 2011 (UTC)
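The substitution w = x/t can itself be checked numerically (a sketch of my own): the two integrals equated in step (2) really do agree.

```python
def integral_recip(a, b, steps=100_000):
    # Midpoint-rule approximation of the integral of 1/x from a to b.
    h = (b - a) / steps
    return sum(h / (a + (i + 0.5) * h) for i in range(steps))

t, u = 3.0, 2.5
print(integral_recip(t, t * u))   # integral of dx/x from t to tu
print(integral_recip(1.0, u))     # integral of dw/w from 1 to u: same value
```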

Missing proof
Looking through this, I notice that there is a proof that 5 implies 1, but not that 1 (or one of the others) implies 5. Should be easy to fix (a proof should be in Rudin or any other standard textbook). — Steven G. Johnson (talk) 18:09, 11 December 2011 (UTC)

Characterization 4 again
Am I missing something or does the article never link characterization 4 to anything else? Dmcq (talk) 04:49, 8 February 2012 (UTC)


 * Plugging the Taylor series (characterization 2) into characterization 4 quickly shows the relation. About characterization 5, even though it is a nice characterization, it is far less common than the other four characterizations. It should probably be not placed on the same footing.  Perhaps the main list should have only 4 characterizations (we can keep the 5'th, writing something like:  Besides these, there are other characterizations as well, for example: .. #5).  71.229.28.197 (talk) 17:38, 22 February 2014 (UTC)MvH


 * Perhaps Dmcq's point is that we should have a subsection Equivalence of characterizations 2 and 4, rather than leaving it to the reader? —Quondum 18:13, 22 February 2014 (UTC)

Error in the series for the exponential
This article contains an erroneous series for the exponential function
 * $$e^x = \sum_{n=0}^\infty {x^n \over n!}$$

But it's clear that for $$x=0$$ the first term $${0^0 \over 0!}$$ is nonsense, because $$0^0$$ isn't a correct mathematical expression; see Indeterminate form.

I modified the text to correct the syntax:
 * $$e^x = 1 + \sum_{n=1}^\infty {x^n \over n!}$$

but my modification was undone with the comment «Unsourced claim of error in edit summary, and atypical presentation». No source is needed for the obvious failure. LGB (talk) 10:40, 14 February 2014 (UTC)


 * You're confusing the indeterminate form $$0^0$$ with the expression $$0^0$$. Talk:Exponentiation and several archives on the topic should make it clear that it is not so simple. —Quondum 06:29, 15 February 2014 (UTC)


 * (1) The section «the expression $$0^0$$» has no reference to sources. (2) The same Exponentiation section warns honestly: «Not all sources define $$0^0$$ to be 1». (3) You have no basis to suppose that all Wikipedia readers know and support such a strange convention — even without any commentary and reference to the Exponentiation section. My modified formula is clearly better for a reader, and I hope you'll not insist upon an equivocal and confusing formula. Otherwise, let Wikipedia talk:WikiProject Mathematics discuss the subject. LGB (talk) 11:22, 15 February 2014 (UTC)


 * You're welcome to discuss the matter there, but be aware that this topic has been discussed at length on Wikipedia, at times quite heatedly. It would make sense to first acquaint yourself with the existing consensus (or lack thereof!) by finding and examining the extensive past discussions on the topic, such as the example to which I linked, before reopening the discussion, as many will have tired of it already. If the rationale behind your edit were to be accepted, it would imply a similar revision to many other articles, including Exponentiation, Exponential function, Taylor series, Laurent series, Binomial theorem, Derivative and at a guess no less than hundreds of others. There is no shortage of sources that use the expression $$x^0$$ to mean 1 in exactly the way that you object to. To find a source that carefully avoids its use with this meaning might be a challenge, though. —Quondum 16:29, 15 February 2014 (UTC)


 * I agree with LGB that it is wrong to say on one page that 0^0 is undefined, while assuming on another page that 0^0 will be evaluated to 1. Unfortunately, that is precisely what many calculus textbooks do. The mathematical solution to this problem is very easy (accept the empty product rule, which makes 0! and 0^0 equal to 1). The goal on Wikipedia is to say what the textbooks say, so if the textbooks contradict themselves, then Wikipedia will do likewise. This means that the only way to fix this contradiction on Wikipedia is to convince authors of calculus books that 0^0 is 1. MvH (talk) 18:51, 21 February 2014 (UTC)MvH


 * This article says nothing of the sort. Perhaps you're confusing an expression with a limiting form? The shorthand $$0^0$$ for the limit $$\lim_{x\to 0^+,y\to 0}x^y$$ looking like the expression might be confusing, but that is all it is: a shorthand. —Quondum 02:38, 22 February 2014 (UTC)


 * Quondum, yes, I agree that one can do this correctly (0^0 = 1, 0/0 = undefined, while "0^0" and "0/0" refer to limits that require more than direct substitution). Widespread confusion arises because there are many calculus textbooks that leave 0^0 undefined, even though the same textbooks expect the reader to evaluate 0^0 to 1 in many formulas.  It's not the readers fault when they are confused. In the past, 1 was considered a prime number, until it was found that this was inconvenient in an increasingly large number of situations.  The same will happen with the view that 0^0 is undefined, and for the same reason.  The option "undefined" is inconvenient in an increasingly large number of places, so eventually people will settle on "1". In research papers, 0^0=1 is already the status quo. Calculus textbooks will likely be the last ones to adopt this.  71.229.28.197 (talk) 16:03, 22 February 2014 (UTC)MvH
 * I agree with you about problems relating to any (lack of) definition of $$0^0$$ and the consequent confusion, but that was not my point, so I guess I was unclear. I'm saying that since "$$0^0$$" and "the indeterminate form $$0^0$$" refer to distinct concepts, one must take care to use the full phrase in the latter case. Conflating the two distinct concepts seems to have been underlying LGB's assertion. The eventual resolution may occur differently from your prediction. I would like to see notation arising that distinguishes between the two fundamentally different types of exponentiation, which will allow $$0^0=1$$ without those in analysis objecting (other than to the change of notation). —Quondum 17:50, 22 February 2014 (UTC)

As LGB knows that $$\sum_{n=0}^\infty {x^n \over n!}= 1 + \sum_{n=1}^\infty {x^n \over n!}$$ he obviously knows that $${x^0 \over 0!}= 1$$ for all values of x. People who deny that $$0^0=1$$ are spreading confusion. Bo Jacoby (talk) 12:24, 17 February 2014 (UTC).
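For the record, Python (like most programming languages) already evaluates 0**0 by the empty-product convention, so the series works at x = 0 as written; a small sketch of my own:

```python
import math

def exp_series(x, terms=30):
    # Direct evaluation of the sum of x^n/n!; the n = 0 term relies on the
    # convention x**0 == 1, which Python follows even when x == 0.
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(0 ** 0, 0.0 ** 0)           # 1 1.0
print(exp_series(0.0))            # 1.0, as expected for e^0
print(exp_series(1.0), math.e)    # both ≈ 2.71828...
```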

Missing characterization
One characterization I've seen used in calc textbooks (e.g. Briggs, Cochran, Gillett): let $$e$$ be the number such that $$\lim_{h\to 0}\frac{e^h-1}{h}=1.$$ Then $$e^x$$ is the exponential function with that base. This should also be shown to be equivalent. -lethe talk 04:29, 21 October 2015 (UTC)
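A quick numerical look at this characterization (a sketch of my own):

```python
import math

for h in [0.1, 0.001, 1e-6]:
    print(h, (math.exp(h) - 1) / h)   # tends to 1 as h -> 0
```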

More general proof that 1 implies 2 using the Dominated Convergence Theorem
The generalisation to negative values of $$x$$ is done in an ad-hoc way. It raises the question of how to do the same for the complex numbers.

I think that using the Dominated convergence theorem allows for a generalisation to all unital Banach algebras.

--Svennik (talk) 21:50, 30 September 2019 (UTC)
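In the matrix case (the simplest Banach algebra beyond the scalars) the agreement of characterizations 1 and 2 is easy to observe numerically. A pure-Python sketch of my own, using the rotation generator A = [[0, 1], [-1, 0]], whose exponential is known to be [[cos 1, sin 1], [-sin 1, cos 1]]:

```python
import math

def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    # Binary exponentiation of a 2x2 matrix.
    R = [[1.0, 0.0], [0.0, 1.0]]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

A = [[0.0, 1.0], [-1.0, 0.0]]

# Characterization 1: (I + A/n)^n for large n.
n = 1_000_000
lim = mat_pow([[(1.0 if i == j else 0.0) + A[i][j] / n for j in range(2)]
               for i in range(2)], n)

# Characterization 2: partial sums of A^k / k!.
series = [[1.0, 0.0], [0.0, 1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 20):
    P = mat_mul(P, A)
    series = [[series[i][j] + P[i][j] / math.factorial(k) for j in range(2)]
              for i in range(2)]

print(lim)     # ≈ [[cos 1, sin 1], [-sin 1, cos 1]]
print(series)  # same, to within the O(1/n) error of the limit
```

This is only a numerical observation, of course; the dominated convergence argument would be what makes it a proof.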