Talk:Euler's formula/Archive 1

Interesting ideas
I have some interesting ideas that might be worth adding to this article. Suppose a and b are real; then:

$$ b^i = e^{i\ln(b)} = \cos(\ln(b))+i\sin(\ln(b))$$

Suppose $$ \ln(i)=z=a+ib$$ then

$$ e^z=i=e^{a+ib}=e^a \cdot e^{ib}=e^a(\cos(b)+i\sin(b))=e^a \cos(b) + ie^a \sin(b)$$

So $$ e^a \cos(b) = 0 $$ and $$ e^a \sin(b) = 1 $$. The only solution is $$b=\pi/2+2\pi k$$ and $$a=0$$, where k is an integer.

So $$ \ln(i)=i(\pi/2+2\pi k)$$. Now $$ i^i = e^{i\ln(i)} = e^{-(\pi/2+2\pi k)}$$. Notice that this is a multifunction taking only real values. I find this quite amazing!

150.203.208.200 02:28, 28 May 2007 (UTC)Dmitry Kamenetksy
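The claim above is easy to check numerically. Here is a short sketch in Python (standard library only; the tolerances are arbitrary, and Python's `cmath` fixes the principal branch, i.e. the k = 0 case):

```python
import cmath, math

# Python's cmath uses the principal branch, i.e. ln(i) = i*pi/2 (the k = 0 case)
principal = (1j) ** 1j
assert abs(principal.imag) < 1e-12                       # purely real
assert abs(principal.real - math.exp(-math.pi / 2)) < 1e-12

# the full multifunction: with ln(i) = i(pi/2 + 2*pi*k), i^i = e^{-(pi/2 + 2*pi*k)}
for k in range(-2, 3):
    value = cmath.exp(1j * (1j * (math.pi / 2 + 2 * math.pi * k)))
    assert abs(value.imag) < 1e-9                        # real for every branch
    assert abs(value.real - math.exp(-(math.pi / 2 + 2 * math.pi * k))) < 1e-9
```

Each branch of the multifunction indeed comes out real, as the comment says.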

Another way of demonstrating the formula
Can you show a proof of Euler's equation?

There is another way of demonstrating the formula

which I find to be more beautiful:

Let z = cos t + i sin t

then dz = (-sin t + i cos t) dt = i (cos t + i sin t) dt = i z dt.

Integrating:

int dz/z = int i dt

or

ln z = i t.

Exponentiating:

z = exp i t.


 * let $$\ln z = it + C_1$$


 * $$ z = e^{it +C_1} = e^{C_1}\cdot e^{it}$$
 * $$ C = e^{C_1} \rightarrow z = Ce^{it}$$

— Preceding unsigned comment added by Vaizata (talk • contribs) 10:18, 12 October 2003‎ (UTC)
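The key differential relation in the derivation above, dz = i z dt, can be verified numerically with a central difference (a Python sketch; the function name `z` is just the parametrization from the comment, and the sample points are arbitrary):

```python
import math

def z(t):
    # z(t) = cos t + i sin t, the parametrization used in the derivation above
    return complex(math.cos(t), math.sin(t))

h = 1e-6
for t in (0.0, 0.7, 2.0, -1.3):
    dz = (z(t + h) - z(t - h)) / (2 * h)   # central-difference estimate of dz/dt
    assert abs(dz - 1j * z(t)) < 1e-8      # dz/dt = i z, as claimed
```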

Proof using Taylor series is silly
The proof using Taylor series is silly! If one is allowed to assume the Taylor expansions of exp(x), sin(x) and cos(x), then just add the series for cos x + i sin x and note that it is the same as the series for exp(i x). --zero 09:38, 12 Oct 2003 (UTC)

You have an error anyway in your proof: i(-sin t + i cos t) = - (cos t + i sin t) = -z. I don't think you can differentiate like you're doing in any case since z is a complex variable (I could be wrong, I haven't done any complex analysis stuff for a while). Dysprosia 10:03, 12 Oct 2003 (UTC)

No, that part of the proof is fine. The only problematic step is the integration, since it really gives ln z = i t + C for a constant C. One then has to find an argument that C=0. --zero 12:46, 12 October 2003 (UTC)

The argument that C=0 can be easily found by substituting t=0 and evaluating. --Komp, 10th Sept 2004.

Taylor Series for e^x
I'm a little confused about one thing for the e^ix = cosx + (i)sinx derivation. It looks like the Taylor series of e^ix is expanded around the point a = 0. Wouldn't that mean the proof is only valid near x = 0?

The series is valid for all x.

Charles Matthews 09:42, 18 Dec 2003 (UTC)


 * Radius of convergence of exp x is infinite, btw. Dysprosia 09:48, 18 Dec 2003 (UTC)

That explains it, thanks a lot!


 * You could expand it about any point, and as long as you took all (an infinite number of) the elements, it would still work. If you're only going to use a few terms you should expand it about whatever local operating point you're using. moink 05:12, 13 Jan 2004 (UTC)

Using the definition of an entire function or the radius of convergence with a Taylor series expansion does not mean much except to mathematicians. The proof would then require additional definitions if one wanted to present the information to someone who is not familiar with the field. I suggest presenting an algebraic proof instead of the function-analysis-style proof.

Let us consider f(x)=e^x. We know from its TSE about any point a that,

$$e^x = e^a\left(1 + (x-a) + {\frac{(x-a)^2}{2!}} + {\frac{(x-a)^3}{3!}} + ... \right)$$

Let x=a+b, with no conditions on the value that b can take. This gives,

$$e^{a+b} = e^a\left(1 + b + {\frac{b^2}{2!}} + {\frac{b^3}{3!}} + {\frac{b^4}{4!}}+... \right)$$

The above implies that, $$e^{b} = 1 + b + {\frac{b^2}{2!}} + {\frac{b^3}{3!}} + {\frac{b^4}{4!}}+... \,\,\,\,\,\,\forall b$$

This is the proof that the exponential is an entire function.

Now, let us consider the cases when $$f(x)=\cos{x}$$ and $$\sin{x}$$. We know from their TSE about a that,

$$\cos{x} = \cos{a} - \sin{a} (x-a) - \cos{a} \frac{(x-a)^2}{2!} + \sin{a} \frac{(x-a)^3}{3!} + \cos{a} \frac{(x-a)^4}{4!} - \sin{a} \frac{(x-a)^5}{5!}+ ...$$ $$ \sin{x} = \sin{a} + \cos{a} (x-a) - \sin{a} \frac{(x-a)^2}{2!} - \cos{a} \frac{(x-a)^3}{3!} + \sin{a} \frac{(x-a)^4}{4!} + \cos{a} \frac{(x-a)^5}{5!}+ ...$$

Let $$x=a+b$$, with no conditions on the value that b can take. This gives, $$\cos{(a+b)} = \cos{a} - b\sin{a} - \cos{a} \left(\frac{b^2}{2!}\right) + \sin{a} \left(\frac{b^3}{3!}\right) + \cos{a} \left(\frac{b^4}{4!}\right) - \sin{a} \left(\frac{b^5}{5!}\right)+ ... $$ $$\sin{(a+b)} = \sin{a} + b\cos{a} - \sin{a} \left(\frac{b^2}{2!}\right) - \cos{a} \left(\frac{b^3}{3!}\right) + \sin{a} \left(\frac{b^4}{4!}\right) + \cos{a} \left(\frac{b^5}{5!}\right)+ ...$$

Now separating the $$\cos{a}$$ and $$\sin{a}$$ terms we get,

$$\cos{(a+b)} = \cos{a}\left[1 - {\frac{b^2}{2!}} \!+\! {\frac{b^4}{4!}} \!-\! {\frac{b^6}{6!}} \!+\! ...\right] - \sin{a}\left[b \!-\! {\frac{b^3}{3!}} \!+\! {\frac{b^5}{5!}} \!-\!...\right]$$

$$\sin{(a+b)} = \sin{a}\left[1 - {\frac{b^2}{2!}} \!+\! {\frac{b^4}{4!}} \!-\! {\frac{b^6}{6!}} \!+\! ...\right] + \cos{a}\left[b \!-\! {\frac{b^3}{3!}} \!+\! {\frac{b^5}{5!}} \!-\!...\right]$$

Let $$p=1 - {\frac{b^2}{2!}} \!+\! {\frac{b^4}{4!}} \!-\! {\frac{b^6}{6!}} \!+\! ...$$ and $$q=b \!-\! {\frac{b^3}{3!}} \!+\! {\frac{b^5}{5!}} \!-\!...$$. This gives,

$$\cos (a) \cos (b) - \sin (a) \sin (b) = p\cos{(a)}	- q\sin{(a)}$$

$$\sin (a) \cos (b) + \cos (a) \sin (b) = p\sin{(a)}	+ q\cos{(a)}$$

Solving the above for p and q we get, $$p=\cos{b}$$ and $$q=\sin{b}$$. Thus,

$$\cos{b} = 1 - {\frac{b^2}{2!}} + {\frac{b^4}{4!}} - {\frac{b^6}{6!}} + {\frac{b^8}{8!}} - ...,\,\,\,\,\,\forall b$$

$$\sin{b} = b - {\frac{b^3}{3!}} + {\frac{b^5}{5!}} - {\frac{b^7}{7!}} + {\frac{b^9}{9!}} -...,\,\,\,\,\,\forall b$$

This proves that the sine and cosine functions are also entire functions. This shows that the Maclaurin series of the exponential, sine and cosine are also their TSE's. This proof should be easier to understand and self-contained. --128.2.48.75 (talk) 12:09, 7 April 2010 (UTC)

Well, scratch the whole thing above. The fallacy there is of self-reference. The TSE's are only valid by definition if b is very small, so as to limit a+b to be within the neighborhood of a itself. So what I derived there is still the Maclaurin expressions. Bah! --128.2.48.75 (talk) 12:27, 7 April 2010 (UTC)

An easy proof is to apply the supposition that $$f(x)=e^x = \sum^{\infty}_{m=0}p_m x^m$$ and use the condition (which is also one of the definitions of the exponential) that $$f'(x)=f(x)$$ to derive the series expression for the exponential. It is not very satisfying, but it works to show that the Maclaurin form of the exponential is also the Taylor form. This proof does not use the radius of convergence.

The same can be done for the sine and cosine functions. Use $$f(x)=g(x)+h(x) = \sum^{\infty}_{m=0}p_m x^m$$, and the supposition that if g and h are to represent cosine and sine, then the conditions which have to be satisfied are $$g(-x)=g(x)$$, $$h(-x)=-h(x)$$, $$g'(x)=-h(x)$$ and $$h'(x)=g(x)$$. Using arguments about linear independence, this allows the derivation of the series forms of sine and cosine, which are true for all values of x. —Preceding unsigned comment added by 128.2.48.75 (talk) 12:46, 8 April 2010 (UTC)
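The recurrence idea in the comment above can be sketched numerically for the exponential: the coefficients p_m come from f'(x) = f(x) alone (a Python sketch; the variable names and term counts are illustrative choices, not part of the original comment):

```python
import math

# p_0 = 1 comes from f(0) = 1; termwise differentiation of sum(p_m x^m)
# turns f'(x) = f(x) into the recurrence (m + 1) p_{m+1} = p_m
p = [1.0]
for m in range(30):
    p.append(p[-1] / (m + 1))

# the recurrence forces p_m = 1/m!, i.e. the Maclaurin coefficients of e^x
assert all(abs(p[m] - 1 / math.factorial(m)) < 1e-15 for m in range(20))

# and the resulting (truncated) series really does sum to the exponential
x = 1.5
assert abs(sum(c * x ** m for m, c in enumerate(p)) - math.exp(x)) < 1e-10
```

Termwise differentiation turns the functional condition into a one-line recurrence, which is why this route avoids any appeal to the radius of convergence.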

Move complex analysis to the top
I would like to suggest moving the complex analysis to the top, above the other one. In my experience it's much more common. moink 05:12, 13 January 2004 (UTC)

How about more -- split this into 2 articles; the two results have almost nothing to do with each other. They don't belong together.

Definition of proof
I think it should be more emphasized in the article that before you can prove the theorem you need a definition of what e^(ix) is. The first proof does give such a definition in passing, but that is all.

I propose replacing the e^ix = cosx + (i)sinx derivation by the following simplified version.

It is known that exp(x), sin(x), and cos(x) have Taylor series which converge for all complex x:
 * $$ \exp(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$
 * $$ \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots $$
 * $$ \cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots $$

Adding the series for cos(x) to i times the series for sin(x) gives the series for exp(ix).


 * It misses some parts of the proof though; the periodicity where i^2 = -1 ; i^3 = -i ; i^4 = 1. &#9999; Sverdrup 14:52, 6 May 2004 (UTC)
 * Sverdrup is right, but I think the notation in the current proof is more a hindrance than a help. It's much easier to see visually what's going on by writing "dot-dot-dot"s and collecting terms than by using a jillion sigma notations.


 * I suggest we use the proof on top of this talk page to motivate the formula, and keep the current taylor series proof as the proof. We need to be accurate, and we are also elegant if the math is done right with summation etc. &#9999; Sverdrup 22:27, 6 May 2004 (UTC)
 * I'm not sure what you mean by "the" proof -- most results have multiple proofs, and this is no exception. The proof using "dz/z = i dt" is good motivation, yes, but it's also a completely rigorous proof as well, so by including it as a "real" proof, we would not lose any accuracy. I still maintain that the Taylor series proof is much easier to understand without sigma notation, without losing any rigor -- "dot-dot-dot"s are fully rigorous, as long as it's obvious what is intended, which is the case here if enough terms are spelled out. Having four different summations with "4n", "4n+1", "4n+2", and "4n+3" is only going to confuse people who aren't used to the notation for partitioning integers into congruence classes -- they will have to spell out what the sums say for themselves, so why not do it for them? (BTW, in case you wonder why the "dz/z = i dt" proof is rigorous, it comes down to this. We are basically dealing with the analytic continuation of the real exponential to the entire complex plane -- this is known to exist because the Taylor series at z = 0, say, has infinite radius of convergence. So, we can define the exponential as exp(z) = Taylor series. It's pretty trivial to show that d/dz(exp(z)) = exp(z) for all z, everything is absolutely/uniformly convergent, etc. By the chain rule, d/dz(exp(iz)) = i*exp(iz), i.e. exp(iz) satisfies the diff eq w' = iw. Now, note that if w = cos(z) + isin(z), then this w also satisfies the equation; this means w = C*exp(iz) for some constant C; z = 0 gives 1 = w = C, so w = exp(iz) = cos(z) + isin(z). Now, just take z = x to be real. This is basically what is going on with the shorthand notation "dz/z = i dt". The shorthand proof glosses over a couple of these details, but then again, a lot of proofs at wikipedia are really just "sketches of a proof".) Let me put a copy of what I would have as my Taylor series proof here, so people can see it and compare.

Here is my proposal to replace the current Taylor series proof:

Derivation
Here is a derivation of Euler's formula using Taylor series expansions as well as basic facts about the powers of i:



$$ i^0 = 1, \ i^1 = i, \ i^2 = -1, \ i^3 = -i, \ i^4 = 1, \ i^5 = i, \ i^6 = -1, \ i^7 = -i, \ i^8 = 1, \dots $$

The functions e^x, cos(x) and sin(x) (assuming x is real) can be written as:


 * $$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$


 * $$ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $$


 * $$ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots $$

and for complex z we define each of these function by its series. This is possible because the radius of convergence of each series is infinite.

Now, take z = ix, where x is real, and note that


 * $$e^{ix} = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \frac{(ix)^8}{8!} + \cdots$$


 * $$= 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \frac{x^8}{8!} + \cdots$$


 * $$= \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} + \cdots \right) + i\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right) $$


 * $$= \cos (x) + i\sin (x) $$

The rearrangement of terms is justified because each series is absolutely convergent.

QED

I think people will find it much easier to follow this proof.
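The regrouping step in the derivation above is easy to confirm numerically with truncated series (a Python sketch using only the standard library; the sample point, term count and tolerances are illustrative choices):

```python
import math

x = 1.3   # arbitrary sample point
N = 30    # number of series terms; plenty for |x| around 1

# truncated Maclaurin series for exp(ix), cos(x) and sin(x)
exp_ix = sum((1j * x) ** n / math.factorial(n) for n in range(N))
cos_x = sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(N // 2))
sin_x = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(N // 2))

# the cos series plus i times the sin series reproduces the exp(ix) series...
assert abs(exp_ix - (cos_x + 1j * sin_x)) < 1e-12
# ...and both agree with cos(x) + i sin(x)
assert abs(exp_ix - complex(math.cos(x), math.sin(x))) < 1e-12
```

The first assertion is exactly the collect-terms step (powers of i sorting even terms into the real part and odd terms into the imaginary part); the second is Euler's formula itself at the sample point.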


 * One problem is that you wrote "z = ix" but z is not defined and it does not appear elsewhere. Also, the proof works for all complex x but you limited it to real x.  --Zero 09:27, 7 May 2004 (UTC)
 * I see how it might not be entirely clear -- actually, I did say what z is, when I said, "for complex z, we define each of these functions by these series". Also, yes, the proof works for all complex z, but "Euler's formula" is usually taken to mean the case when z is purely imaginary, primarily for historical reasons (Euler "derived" it for ix, not z); also, when most people say "Euler's formula", they are usually alluding to the periodic nature of exp around the unit circle. But, it's certainly true for any z, and I can add this.


 * If z = a + bi, then e^z = e^a e^{bi}, so there is no problem with complex x. &#9999; Sverdrup 13:12, 7 May 2004 (UTC)


 * I'm convinced, this looks very good. &#9999; Sverdrup 13:12, 7 May 2004 (UTC)


 * The "dz/z = i dt" argument can be made even more legit for most folks by taking the 2nd-order linear diff eq, w'' = -w, gotten by iterating the 1st-order one; then everything is real and you don't have to think about analytic continuation, etc.


 * It would be interesting to note how Euler actually "discovered" this. The way he "proved" it is completely backwards from how it is usually presented in modern form -- he assumed DeMoivre's identity, and did some clever fooling around with i's (infinitely small numbers) and w's (infinitely large numbers), treating them as ordinary numbers. Of course, not rigorous at all, but historically very interesting, providing insight into Euler's brain circuitry.

Moved Euler characteristic material
I moved the material about the euler characteristic to the Euler_characteristic page. My reasons were
 * elimination of duplicated material
 * article should only focus on one topic and not on two totally unrelated topics.

For people who are looking for the euler characteristic on this page I have put a short note at the top.

I have tried to fix all broken links but I probably forgot some. MathMartin 22:18, 2 August 2004 (UTC)

Minor Move
OK, it's probably not as important as the discussions on the proofs, but I have moved the "see also" section so that it is ahead of the references and external links. Purely cosmetic: I think it looks better to have the internal links ahead of the external ones. I hope no one minds. Help plz 16:19, 25 June 2006 (UTC)

Original proof
Does anyone know how Euler's original proof went? Also, are we certain that Euler's "proof" really was a proof? For one thing, there would not have been any commonly accepted definition of e^(ix), I imagine.
 * Euler was actually not the first to prove the inaptly named "Euler's formula." I imagine there was at least one mathematically legitimate definition for $$e^{z}$$ to use for the proof. However, that doesn't mean Euler's proof was rigorous, seeing as he seemed to have plenty of proofs that were not at all rigorous. Eebster the Great (talk) 02:00, 23 February 2009 (UTC)

Problems on some proofs
In some of the proofs it is not clearly defined what e^(ix) means; you can only prove this formula after that.

In this sense, arguments like the ones in "using calculus" and "using differential equations" are incomplete (or should state that they are heuristic); without at least a clear definition, any argument becomes a heuristic argument.

I know of three ways of defining e^(ix).

(1) From the complex series of e^(z) defined as sum of z^n/n!

(2) as the limit of n to infinity of (1 + ix/n)^n


 * but are these the root definitions of e^z for z real? i don't think so.  e^z is the inverse function of log(z) where that is defined to be such a function that log(zw) = log(z) + log(w) and is scaled so that the derivative is 1/z.  now you can analytically extend this meaning from real z to complex z, but you have to worry about all the same things (like where it is analytic) that you do for any complex function of a complex variable.  the only other "assumption" you make is that the imaginary unit is constant and that when you square it, that square is -1 (which means it cannot be a real number).  that is sufficient for definition.  you don't need any of these (1) (2) or (3) as axiomatic, but they follow. r b-j 03:33, 29 March 2007 (UTC)


 * (2) can be used in the real case since it is equivalent to the Taylor series one, and it is easy to show exponential properties by it. In my POV the exponential is more basic than the log; introducing the log first "may be" easier from a rigorous point of view, but that is arguable in both the real and the complex case. Where would you prove the properties for log? Seeing the properties of log as a consequence of the properties of the exponential seems more interesting to me, since properties of the exponential like e^3*e^2=e^5 are more intuitive. But feel free to put that as a possible definition.


 * How are you doing this analytic continuation? Is it not easier to define e^z as a Taylor series and prove it is analytic everywhere? Ricardo sandoval 19:56, 29 March 2007 (UTC)


 * i agree that the exponential function of an unspecified base is more fundamental than the log, and from the fundamental property that a^{x+y} = a^x a^y, this can be easily extended to any real and rational x and y. if we allow ourselves to avoid going into the nasties justifying the extension to the irrationals, then, from the definition of the derivative, we can show that (d/dx)(a^x) = A a^x and that there is only one choice of base a such that A becomes 1: a = e.  the proofs here (which someone called "pseudo-proofs", which i disagree with) assume as axioms the algebraic and calculus properties of the exponential function, that the imaginary unit i is a constant and squares to -1, and that, although i is not real, we define the exponential function just as it has been before, so that its properties do not depend on that difference.  we only need to ensure that i is a constant and, as an algebraic symbol, can be treated just like any other constant (can be added, multiplied, distributed, etc.), but when you see an i^2, you replace it with -1.  the Maclaurin series for e^x is not a definition, but it is a consequence of the definition.  this is true whether x is real or not. r b-j 21:01, 29 March 2007 (UTC)


 * When you define something by its properties you have to ensure that something with that properties actually exists (since maybe the properties that you are asking are logically incompatible). How can you take the derivative of a function if you don't know its values? Asking for its properties before it is properly constructed is a healthy exercise but it is not logically closed.


 * By the way I changed the entry on the proofs part.


 * The construction that you referred to for the real case is a good one, defining by limits of rationals. For complex exponentiation I don't know if a basic approach like that could work. 2^i should have modulus 1, since 2^i*2^(-i) should be 1 and they should be conjugates, and 2^(2i) should have double the angle; but what about 2^(i/2)? What angle should it have? And after that you argue that angles are proportional to the imaginary part of the exponent. You still have to find that "e" is the constant that makes the angle the same as the imaginary part of the exponent. I never saw this line of argument completed, and after all these shoulds you still have to clearly define everything.

(3) or directly as cos(x) + i sin(x)

Either way, only after defining e^(ix) should you show that it has the properties you would expect it to have, like e^(ia)*e^(ib)=e^(i(a+b)) or (e^(ix))'=ie^(ix); some writers used them fearlessly.

Number (1) is already represented; I think number (2) would be a nice thing to cite, since it is analogous to the real case and can also be interpreted geometrically (Richard Feynman used this one as a reference). But using (3) is totally misleading, because it doesn't show why it should be true.

That is why I think heuristic arguments are needed to provide "a reason" for us to believe that such a thing should be true. Using circular motion (which was erased) seems to me much simpler and much less 'out of the blue' than, for example, the "using calculus" approach.

The circular motion approach that was posted by me uses the same ideas as in "simpler differential-equation proof" in the discussion page.

While an encyclopedia is not a textbook, for a full discussion of the formula it should be reliable, avoiding circular arguments and imprecisions. And it should state explicitly when it is using a heuristic argument.

Ricardo sandoval 22:48, 27 March 2007 (UTC)
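Definition (2) above is easy to illustrate numerically: (1 + ix/n)^n approaches cos x + i sin x as n grows (a Python sketch; the sample angle x = 2 and the tolerances are arbitrary choices):

```python
import math

x = 2.0   # arbitrary sample angle
target = complex(math.cos(x), math.sin(x))   # cos x + i sin x

# (1 + ix/n)^n: n small near-rotations composed together
errors = [abs((1 + 1j * x / n) ** n - target) for n in (100, 10_000, 1_000_000)]

assert errors[0] > errors[1] > errors[2]   # the error shrinks as n grows
assert errors[-1] < 1e-5
```

The geometric reading is the one attributed to Feynman above: each factor (1 + ix/n) is a tiny rotate-and-stretch, and the stretch dies out as n grows.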

I guess that is only a POV, but anyway, in the "using calculus" proof one should say 'assuming' that (e^(ix))' = ie^(ix) and e^(ix)*e^(-ix) = e^0 = 1, since you cannot use any properties of e^(ix) before defining e^(ix) (and proving them).


 * i don't get it. we define e^{ix} as the usual e^z, but evaluated with the argument ix, and use the definition of the imaginary unit to be what it is (some "imaginary" number that squares to be -1).  is adding the fact that i is constant something to be proved? r b-j 03:39, 29 March 2007 (UTC)

Something similar goes for the "differential equations proof". And to make a clearer article one should define e^(ix) explicitly.

After the Taylor series demonstration I am planning to put all the other demonstrations together in an "alternative proofs" section, commenting on the possible definitions and the properties needed.


 * You are right that a definition of e^{ix} can be done that way, but nothing is said in the article itself, or at least cited somewhere; we could show first that (e^z)' = e^z and the chain rule takes care of the rest. By this definition it is obvious that e^(i0)=1, but you still need e^(ix)*e^(-ix)=e^0=1; you could avoid it entirely in the "calculus" approach by using e^(ix)*(cos x - i sin x) instead, but I am not sure if it would be better. I retracted my previous "alternative proofs" idea. But something is needed anyway.


 * By the way, a rigorous definition of "i" is kind of tricky, because you want it to have consistency, and it is not clear you have it if you just define it out of the blue; no wonder early mathematicians were suspicious of them. My favorite ways to define them come from fixed-origin similarity operations on the plane, or certain matrices as in the complex number article, because the angle-addition properties just fall in your lap, with no need to use the trigonometric identities. In fact you also prove them in an elegant way.


 * The new "direct integration" proof is nice, but it assumes a lot of the reader; if you have already proved that the integral of 1/z is log z in the complex plane, generally you will have already seen Euler's formula. Here it is nastier because you cannot just use Taylor series, since no series can cover a neighborhood of the origin, where there is a singularity. By defining log z as log(modulus) + i(angle) you just exchanged one problem for another. If you don't say how you defined something, here things can become very circular. I still prefer the "circle proof". Ricardo sandoval 19:25, 29 March 2007 (UTC)


 * Why was the previous demonstration using differential equations taken out? And the introduction to the proofs? Before proving any properties of e^{z} or e^{ix} you must define it (there was a definition by Taylor series in the applications part, but I think it should be in the proofs part). Your argument that i is a constant doesn't follow; let me try to explain better. When defining e^{x} in the real case you only show properties that work for reals. It makes a lot of sense to say that (e^{ix})'=ie^{ix}, but you still have to prove it somehow, and for that you will need a definition.


 * The alternative definition of e^{z} using limits is really used; see exponentiation, or the algebra section of the Lectures on Physics by Richard Feynman. The previous demonstration using differential equations was already pointed out by someone other than me; please explain better why it was taken out. —The preceding unsigned comment was added by Ricardo sandoval (talk • contribs) 21:55, 11 April 2007 (UTC).

Independently discovered by Ramanujan?
What is the source for the claim: The formula was independently discovered by the Indian mathematician Srinivasa Ramanujan at the age of 11 (circa 1898-99).? Paul August &#9742; 21:42, 28 December 2005 (UTC)
 * Good point. I would remove that from the text anyway. It is known that half or so of Ramanujan's results were not new, as while he was a mathematical genius, he did most of his work in isolation (at least until getting to Britain, anyway). So, are we now to go visit all the articles for which Ramanujan rediscovered a given concept and mention that? If the article has a history section, and one can fit this observation alongside the original discoverer and other info, I am fine. Otherwise I would be against it. Oleg Alexandrov (talk) 21:52, 28 December 2005 (UTC)
 * Actually, this article does have a history section. So back to Paul's original question. :) Oleg Alexandrov (talk) 21:53, 28 December 2005 (UTC)


 * Just to set the record straight, and as a CYA, please note that I did not add this comment to the article, and I have no knowledge of its authenticity or lack thereof. But I did make some edits after the claim was added for the reasons noted in the history.  I also suspected that this discussion would result.  I don't know who added the sentence or what his source is.  But I felt it was important to make the changes that I did in the meantime.  -- Metacomet 22:01, 28 December 2005 (UTC)

I don't think it is worth mentioning. Probably it has been "rediscovered" many times. --Zero 22:18, 28 December 2005 (UTC)


 * I am not an expert on this topic, but I agree with Zero and Oleg. Even if it is true, I don't think it is important enough to merit a mention in this article.  If nobody objects, I will remove it from the article within the next few days or so.  -- Metacomet 18:31, 29 December 2005 (UTC)

Well unlike Zero, I doubt that it has been "rediscovered" many times. So if true I think it would be reasonably significant, so I'm not opposed to it being in the article &mdash; but of course it needs a source. Without one it should definitely be removed. &mdash; Paul August &#9742; 19:32, 29 December 2005 (UTC)


 * I myself rediscovered Euler's formula at the age of 17, right after my high school calculus teacher wrote it on the blackboard. ;-)  Sorry, I couldn't resist a little humor (okay, very little).  -- Metacomet 19:39, 29 December 2005 (UTC)


 * As you may have noticed, I just removed the sentence from the article for the reasons mentioned in the revision history. If someone does eventually find a verifiable source for this claim, I would recommend adding the sentence to the article on Srinivasa Ramanujan  with a link to this article, but not including the sentence here.  -- Metacomet 00:04, 2 January 2006 (UTC)

About absolute values
Let $$ x=\ln(y) $$

So $$ y = e^{x} $$

$$ \frac{dy}{dx} = e^{x} $$

$$ \frac{dx}{dy} = \frac{1}{e^{x}} $$

$$ \frac{dx}{dy} = \frac{1}{y} $$

$$ dx = \frac{1}{y}\, dy $$

Integrating both sides

$$ \int 1\, dx = \int \frac{1}{y}\, dy $$

$$ x = \int \frac{1}{y}\, dy $$

$$ \ln(y) = \int \frac{1}{y}\, dy $$

Although the function may not be defined for some values, I don’t think an absolute value is necessary in this case. --Sav chris13 12:43, 25 July 2006 (UTC)
 * I removed this section (again). There is no complex-differentiable function "ln" on all of C×, so it would be necessary to explain what is meant by "ln" and why it does not matter which branch is chosen and so on. Much too complicated IMHO.--gwaihir 08:04, 27 July 2006 (UTC)

I removed the proof. We have enough proofs, and this proof is not very correct. You are using the integrating factor, but it does not work for complex variables. It can be fixed, but things are subtle in complex analysis, see antiderivative (complex analysis). Oleg Alexandrov (talk) 02:46, 28 July 2006 (UTC)

Generalization of e^(a+bi)?
I found a formula for $$x^{a+bi}$$, which is a generalization of $$e^{a+bi}$$. The formula is as follows: $$x^{a+bi}=x^a(\cos(\ln(x)\cdot b)+ i \sin(\ln( x)\cdot b)) \quad \mbox{if}\quad x>0$$

Is this new? Has this already been discovered before? --WiiStation360 22:28, 25 January 2007 (UTC)


 * It's a rather basic consequence of Euler's formula. Fredrik Johansson 07:31, 26 January 2007 (UTC)

Ok, thanks. Is there a version of this formula for when x<0?--WiiStation360 21:11, 26 January 2007 (UTC)
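For what it's worth, the x > 0 formula above is straightforward to check numerically against built-in complex exponentiation (a Python sketch; the helper name `power` and the sample triples are illustrative):

```python
import math

def power(x, a, b):
    """x^(a+bi) via the formula above; valid for x > 0 (illustrative helper)."""
    return x ** a * complex(math.cos(math.log(x) * b), math.sin(math.log(x) * b))

# compare against Python's built-in complex power (principal branch)
for x, a, b in ((2.0, 0.5, 1.0), (10.0, -1.0, 3.0), (math.e, 2.0, -0.7)):
    assert abs(power(x, a, b) - x ** complex(a, b)) < 1e-12
```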

How does Euler's formula prove double angles
I am not sure how Euler's formula proves this. → sin(x±y) = sin(x)cos(y)±cos(x)sin(y)

Perhaps it is because I am not familiar with the complex number line/rules

As far as I know complex numbers derive from $$sqrt(-1)$$

If someone could explain that would be really helpful —The preceding unsigned comment was added by 207.228.140.159 (talk) 18:31, 17 February 2007 (UTC). 207.228.140.159 02:51, 18 February 2007 (UTC)


 * In the first place, that's not a double-angle formula; rather the double-angle formula is a corollary of that statement. In the second place, please note the difference between
 * $$sqrt(-1)\,$$
 * and
 * $$\sqrt{-1}.\,$$
 * Now observe:
 * $$\begin{align} &{}\qquad \cos (x + y) + i\sin(x+y) = e^{i(x+y)} = e^{ix}e^{iy} \\ &{}= (\cos x + i\sin x)(\cos y + i\sin y) \\ &{}= (\cos x\cos y - \sin x\sin y) + i(\cos x\sin y + \sin x\cos y). \end{align}$$
 * The very last equality comes from the usual multiplication of complex numbers. Now recall that complex numbers are equal only if their real parts are equal and their imaginary parts are equal.  That gives you the two identities. Michael Hardy 01:44, 19 February 2007 (UTC)

Thank you for clarifying it    oh...and in my title I meant compound angles 207.228.142.47 16:37, 4 March 2007 (UTC)
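The bookkeeping in the explanation above (expand e^{i(x+y)} two ways, then match real and imaginary parts) can be replayed numerically (a Python sketch with arbitrary sample angles and tolerances):

```python
import cmath, math

# expand e^{i(x+y)} = e^{ix} e^{iy} and match real/imaginary parts
for x, y in ((0.3, 1.1), (2.0, -0.5), (-1.2, 2.7)):
    assert abs(cmath.exp(1j * (x + y)) - cmath.exp(1j * x) * cmath.exp(1j * y)) < 1e-12
    # imaginary parts give the sine compound-angle identity...
    assert abs(math.sin(x + y) - (math.sin(x) * math.cos(y) + math.cos(x) * math.sin(y))) < 1e-12
    # ...and real parts give the cosine one
    assert abs(math.cos(x + y) - (math.cos(x) * math.cos(y) - math.sin(x) * math.sin(y))) < 1e-12
```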

Antonio Gutierrez links
I'm going to remove these if no one objects--the first one is hardly informative compared to the material presented in the article itself. The second is a puzzle and obviously has no place in an encyclopedia. They both smack of personally inserted links to me. —The preceding unsigned comment was added by 130.15.126.81 (talk) 03:55, 24 February 2007 (UTC).

Definition section
I think we need a definition section for $$ e^{ix} \,$$ or for $$ e^{z} \,$$, since different (equivalent) definitions are used in different sources. Maybe a new section is not needed, but we certainly need to cite those definitions! The ones that I can recall (and intend to find sources for) are:

As the Taylor series:


 * $$ e^{ix}= 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \cdots \, $$

As the limit:



 * $$ e^{ix}=\lim_{n\rightarrow\infty}\left(1+\frac{ix}{n}\right)^n\, $$

As the unique solution of the differential equation:


 * $$ f'(x)=if(x) \,$$ with $$ f(0)=1 \,$$

Directly (and I guess deceptively) as


 * $$ e^{ix}=\cos(x)+i\sin(x) \, $$

Or by first defining $$ \log(z) \,.$$

I will try to find sources (books) for each one (if they exist). Could someone help with that?

Once one chooses one definition, one needs to show the others as consequences, along with Euler's formula itself and the "exponents" property $$ f(x+y)=f(x)f(y) \,$$. So there is no way to avoid the "heart" of the matter.

Ricardo sandoval 14:25, 14 April 2007 (UTC)


 * the problem, Ricardo, is that normally, when one is showing or proving a fact, one is not allowed to define such a fact as true at the outset. that leaves little left to prove. defining
 * $$ e^{iy} = \cos(y) + i\sin(y) \, $$
 * is such an example. what we have to work with going into this proof is what we obtain, out of calculus and algebra, regarding the exponential function, the sinusoidal functions, various results of differentiation (like the chain rule or the quotient rule which have been used in the existing proofs), or the Maclaurin series for e^x, cos(x), sin(x) (used in the first of the proofs), and finally what we already know (from algebra) of imaginary and complex numbers, particularly the imaginary unit.  that's it. that's all we have to work with.
 * if you can come up with an otherwise self-contained proof that begins with this result from calculus:


 * $$ e^x = \lim_{n \rightarrow \infty} \left(1 + \frac{x}{n} \right)^n \ $$


 * and leads us to Euler's result when x is purely imaginary, that would be a valuable addition to the article. perhaps you would need similar limits for the cos(x) and sin(x) functions to complete the proof, but if you do, those results must be obtainable for real variables preceding any appeal to complex numbers. r b-j 22:58, 15 April 2007 (UTC)

Just did a little research: most authors I saw defined e^z by the power series (e.g. Curtiss (1978), Polya (1974), Courant (1965), Rudin (1966)), and so did most books I saw, with some exceptions.

"Directly" as e^{x+iy}=e^{x}(cos(y)+i sin(y)) (Ahlfors "Complex Analysis" (1953), Robert B. Ash "Complex Variables" (1971), Anthony B. Holland "Complex Function Theory" (1980), Greene/Krantz "Function Theory of One Complex Variable" (2002), T. Gamelin "Complex Analysis" (2001))

By the lim (1+z/n)^n (E. Townsend "Functions of a complex variable" 1915)

By first defining log(z) (Hardy "Course of Pure mathematics" (1908))

By the unique solution of f'(z)=f(z), f(0)=1 (Lars V. Ahlfors "Complex analysis" (1966)) Ricardo sandoval 01:33, 16 April 2007 (UTC)


 * references are useful but defining e^(x+iy) = e^x(cos(y) + i sin(y)) will not prove Euler's equation. (tautologies, being vacuous truths, are true, but do not say very much.) i dunno what is missing from our other discussion, but exponential functions (not base-e) have meaning before calculus, derivatives, or Taylor series.  it takes calculus to give meaning to the natural logarithm and exponential and also to derive a Taylor series for it.  this exists for real x.  no mention of complex or imaginary numbers at all.  it is fine (i would not call it a definition, though) to begin with the Taylor or Maclaurin series (for both the exponential and the sinusoidal functions), the definition of i  (which is essentially that i^2 = -1) and come up with Euler's formula.  that's perfectly legitimate, i think it's how Euler did it himself.  now, keep in mind that the Maclaurin series for all three functions are derived from knowledge of the functions and their derivatives at x=0.


 * it is also perfectly legitimate to skip the intermediate step of equating the power series and derive Euler's formula straight from the properties of the exponential and sinusoidal functions. those properties would be knowing the functions and derivatives.  that is what is used in the other two proofs.  again, i am not sure how using the fact that
 * $$ e^x = \lim_{n \rightarrow \infty} \left(1 + \frac{x}{n} \right)^n \ $$
 * leads to Euler's formula when iy is substituted for x (using i^2 = -1), but if you have a proof doing that, Ricardo, please add it to the other three in the article. r b-j 02:27, 16 April 2007 (UTC)


 * just fiddling a little:


 * $$ (a+b)^n=\sum_{k=0}^n \frac{n!}{k!\,(n-k)!} a^{n-k}b^k \ $$
 * combined with
 * $$ e^x = \lim_{n \rightarrow \infty} \left(1 + \frac{x}{n} \right)^n \ $$


 * is
 * $$ e^x = \lim_{n \rightarrow \infty} \sum_{k=0}^n \frac{n!}{k!\,(n-k)!} \frac{x^k}{n^k} \ $$


 * as n gets large, the early terms of the summation (where k<<n) become


 * $$ e^x = \lim_{n \rightarrow \infty} \sum_{k=0}^n \frac{(n)(n-1)(n-2)...(n-(k-1))}{k!} \frac{x^k}{n^k} \ $$


 * $$ = \lim_{n \rightarrow \infty} \sum_{k=0}^n \frac{(n)(n-1)(n-2)...(n-(k-1))}{n^k} \frac{x^k}{k!} \ $$


 * $$ = \lim_{n \rightarrow \infty} \sum_{k=0}^n (1) \left(1-\frac{1}{n}\right) \left(1-\frac{2}{n}\right)...\left(1-\frac{k-1}{n}\right) \frac{x^k}{k!} \ $$


 * $$ \rightarrow \sum_{k=0}^{\infty} \frac{x^k}{k!} \ $$


 * which gets us nothing more than starting with the Maclaurin series, which has already been done. what's another way to use this fact (analytically extending from real x to imaginary x):


 * $$ e^{iy} = \lim_{n \rightarrow \infty} \left(1 + \frac{iy}{n} \right)^n \ $$
 * to get, in the limit, to
 * $$ e^{iy} = \cos(y) + i \sin(y) \ $$ ??


 * do you have a good idea to get there (without repeating the Taylor series proof), Ricardo? r b-j 02:54, 16 April 2007 (UTC)
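Before anyone formalizes it, the limit route can at least be checked numerically; a throwaway Python sketch (the sample value y = 2 is arbitrary):

```python
import math

y = 2.0
target = complex(math.cos(y), math.sin(y))  # cos(y) + i sin(y)
for n in (10, 1000, 100000):
    approx = (1 + 1j * y / n) ** n
    print(n, abs(approx - target))  # the error shrinks roughly like 1/n
```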

I included some more books above: the reference Townsend (which you can see at Google Books) has one proof using the limit. It is messy, but being a little more relaxed there is:

$$ \cos(x)+i\sin(x)= (\cos(\frac{x}{n})+i\sin(\frac{x}{n}))^n \approx (1+i\frac{x}{n})^n $$ for big n. So we should have

$$ \cos(x)+i\sin(x)= \lim_{n \rightarrow \infty} (1+i\frac{x}{n})^n $$

Which I found in some oriental version of this page. Feynman's "Lectures on Physics", from what I remember, uses somewhat the reverse order, with the license physicists have. From this limit definition we can also prove e^(z)e^(w)=e^(z+w).

I don't like the "Direct" definition, but when you prove all the other properties from it there is nothing to complain about from the logical point of view (check http://www.math.gatech.edu/~cain/winter99/ch3.pdf if you don't believe me). Some of them give motivations for it (Ahlfors (1953) makes a decent point).

So we have some variation in the literature and I don't see a reason not to represent that here. Ricardo sandoval 04:08, 16 April 2007 (UTC)

By the way I am not implying that we should put the proof above here since it is heuristic and tricky to formalize. Ricardo sandoval 04:47, 16 April 2007 (UTC)


 * i know that the equality


 * $$ \cos(x)+i\sin(x)= \left(\cos\left(\frac{x}{n}\right)+i\sin\left(\frac{x}{n}\right)\right)^n \ $$


 * is true, but i know that only because i know of Euler's formula. without making a circular appeal to Euler, how is that known to be true?  i suppose, for integer n, you can apply the binomial theorem, but i can't see on the surface, how that gets us any closer to showing this equality. r b-j 04:51, 16 April 2007 (UTC)


 * you can prove
 * $$ (\cos(x)+i\sin(x)) (\cos(y)+i\sin(y)) = \cos(x+y)+i\sin(x+y) \ $$
 * by trigonometric identities, then
 * $$ (\cos(x)+i\sin(x))^n = \cos(nx)+i\sin(nx) \ $$
 * by induction or by multiple application of the last one.


 * When n is big, x/n is small, so cos(x/n) is almost 1 and sin(x/n) almost x/n. So that is the logic. Making it fully rigorous is painful; that is why, I guess, authors don't commonly use that kind of approach. Can we move on to other issues? Ricardo sandoval 06:43, 16 April 2007 (UTC)
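The two steps in that logic — de Moivre applied exactly, then the small-angle replacement — can be separated numerically; a Python sketch (sample values arbitrary, offered only as illustration):

```python
import math

x, n = 1.0, 64
target = complex(math.cos(x), math.sin(x))

# de Moivre: (cos(x/n) + i sin(x/n))^n equals cos(x) + i sin(x) exactly
demoivre = (math.cos(x / n) + 1j * math.sin(x / n)) ** n

# replacing cos(x/n) + i sin(x/n) by 1 + ix/n is the heuristic step
heuristic = (1 + 1j * x / n) ** n

print(abs(demoivre - target))   # ~1e-15: exact up to roundoff
print(abs(heuristic - target))  # ~8e-3: the approximation's error at n = 64, vanishing as n grows
```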


 * actually, Ricardo, i think that this can be made into a pretty good proof. the sorta-kinda anal-retentive aspects regarding what allows for analytic extension of operations (like reversing limits and summations) from real arguments to imaginary arguments is not the most critical thing here, as far as i can see.  i know the second part of the proof involves cos(x/n) going to 1 and tan(x/n) going to x/n in the limit.  this can be a proof that approaches this from another angle that is informative and that's what encyclopedia articles are good for.  Wikipedia is not a junior or senior level textbook in Complex Variables.  the two serve different purposes. r b-j 05:51, 17 April 2007 (UTC)

Reply to rbj above

1) If you want to make a proof out of the idea above I am in full support.

2) Even if wikipedia is not a textbook we should certainly point to credible references and explain concisely how they handle the problems at hand, right? Doing otherwise would deceive students that want to dig deeper in the literature.

3) Maybe wikipedia is also the place for insightful/informative explanations, and that is exactly why I posted the other differential equation proof in the first place. To my mind an insightful/informative proof also comes out of it, and certainly there is a proof in that direction.

4) I guess I see why you like the proofs that are on the article right now (other than the Taylor one), but we should be responsible in explaining somewhere what kind of rigor they provide; and since there is no definition of e^(ix) in the article (again, outside of the Taylor proof), the rigor is not complete. Ricardo sandoval 15:34, 17 April 2007 (UTC)

Picture
As far as I understood, exponentiation with imaginary arguments gives us a helix, right?

You have the Cartesian product of the complex plane and the imaginary line, with a helix along the imaginary line starting at point <1,0> in the complex plane and period 2π.

Could someone plot this and upload it into the article?

It would give an instant visualization of the whole concept... and you know that a picture is worth more than a thousand words.

I am gonna put this observation into the article "helix" too. —The preceding unsigned comment was added by 200.164.220.194 (talk) 01:26, 17 April 2007 (UTC).


 * it's a picture that would be nice. maybe i can figure out how to get Octave to draw it.  i dunno.  it isn't critical in my opinion, but it would look nice and is a parametric equation with a trace that can be viewed in three dimensions. r b-j 05:51, 17 April 2007 (UTC)


 * It's not critical but it would certainly make it easier to grasp (although I have to say the article is pretty great as it is). As an example of how conventions (such as the depiction of this function as a moving point on a plane, rather than a helix in 3d) can hinder comprehension, one user came to my talk page saying that e^(ix) is not a helix, which is 3d by definition, but rather a point trapped in a plane. I am having a hard time explaining to him that this function takes 1-tuples but returns 2-tuples, thus being 3d, but he seems hung up on this picture we have, which is actually only the range of e^(ix). You see, people get hung up on these conventions without even noticing, and that may sometimes hinder comprehension. That's why I think this picture would be of great help. So if you could plot this, I would be very thankful to you. :-) —The preceding unsigned comment was added by 200.164.220.194 (talk) 03:19, 18 April 2007 (UTC).

You're not stating your point precisely. He could reasonably misunderstand because of your vagueness. The graph of the function whose argument is the real number x and whose value is the complex number e^{ix} is a helix. Michael Hardy 20:59, 27 October 2007 (UTC)
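In case it helps whoever makes the picture, here is one way it could be generated — a sketch assuming matplotlib is available (the filename and sampling density are arbitrary choices):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

x = np.linspace(0, 4 * np.pi, 400)
y, z = np.cos(x), np.sin(x)  # real and imaginary parts of e^{ix}

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(x, y, z)
ax.set_xlabel("x")
ax.set_ylabel("Re e^{ix}")
ax.set_zlabel("Im e^{ix}")
fig.savefig("euler_helix.png")
```

Every point of the curve satisfies y² + z² = 1, so it lies on the unit cylinder: viewed down the x axis it is the unit circle, and from the sides it projects to the cosine and sine graphs.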

Real or complex?
The article says:
 * Euler's formula states that, for any real number x,
 * $$e^{ix} = \cos(x) + i\sin(x) \!$$

If it had said "complex number" rather than "real number", the identity would of course still be correct. The question is whether that statement ought to be called "Euler's formula"? Michael Hardy (talk) 17:06, 19 January 2008 (UTC)

polar form
Regarding this excerpt:


 * "The polar form reduces the number of terms from two to one, which greatly simplifies the mathematics."

What about operations like $$+, -, \mathrm{Re},\ \mathrm{Im}\,$$ ?

--Bob K (talk) 09:14, 17 February 2008 (UTC)

Phi or x?
I notice that the formulae and images aren't consistent with respect to their use of x or φ as the variable for the angle. Which should be used in the article? I lean in favor of using φ. SharkD (talk) 07:23, 20 February 2008 (UTC)

Image in Application to Trigonometry
I think I would like to suggest we remove the image in this section. The content of the image may be nice to add, but the image:
 * is awkwardly large on my display;
 * takes several minutes to play through on my computer;
 * cycles, so you're never sure if you're at the beginning, the end, or the middle (unless it is the first thing you look at on the page);
 * tries to pack in so much info that it is not clear at any point what it is trying to explain.
What do other people think? Thenub314 (talk) 14:32, 5 January 2009 (UTC)


 * I agree with you. And I would add that its purpose has nothing to do with trigonometry.  Its purpose is to explain what a graph of function $$e^{ix}$$ would look like.
 * --Bob K (talk) 15:43, 5 January 2009 (UTC)


 * That makes me smile! Thenub314 (talk) 16:22, 5 January 2009 (UTC)


 * I linked to it as a "See Also". That OK? --Steve (talk) 19:47, 5 January 2009 (UTC)


 * I have no problem with that. Thenub314 (talk) 20:21, 5 January 2009 (UTC)

That's better, but if we're going to reference it, then I have a few more issues:
 * 1) All it really does is trace the 3D vector [x, cos(x), sin(x)] in [x,y,z] space, which could also have been done long before Euler discovered his formula. Thus it doesn't actually require Euler's insight.  It's more appropriate to an article about helixes, in my opinion.
 * 2) The image description says: "Explaining the sine wave [is?] as geometrically fundamental as the circle." I think there's a typo.  And if "explaining the sine wave" is the objective, why are we doing that here?
 * 3) The image description says: "The sine function is the orthogonal projection of the rotated unit circle." rotated unit circle???  I don't think that will help anyone who actually needs help.
 * 4) The image description says: "In three dimensions, the unit circle, sine and cosine are the unit helix as viewed from each axis." No, I would say the locus of vector [x, cos(x), sin(x)] is a helix.  The unit circle concept in 3 dimensions is a sphere.  And is "unit helix" a valid mathematical term?

--Bob K (talk) 20:30, 5 January 2009 (UTC)

Re #1, I think the y and z dimensions are meant to be the real and imaginary axes of a complex plane. Re #4, I think he/she means that you look at the helix projected on the yz plane, it's a circle, projected on the xz plane it's a sine, and projected on the xy plane it's a cosine. Or something. Anyway, I didn't make the movie and I'm not about to argue that it's perfect and can't be improved. You should probably discuss this on the talk page of the movie's creator, or leave a note there directing that person to this talk page. :-) --Steve (talk) 22:36, 5 January 2009 (UTC)


 * I'd rather just not reference it, because that's easier than fixing all its problems, and I don't think it adds value to this article. I'm not planning on spending any more time on this now (or in the foreseeable future).
 * --Bob K (talk) 23:56, 5 January 2009 (UTC)

general form?
212.93.97.181 says: Recast in a general form, the formula can be written


 * $$a^{ix}=\cos(x\ln(a))+i\sin(x\ln(a))\!$$

where a is any positive real number and ln is the natural logarithm. That is not "a general form", because you can derive it from Euler's:


 * $$a^{ix} = e^{ix\ln(a)}\,$$        (obvious by taking ln of both sides)


 * $$= \cos(x\ln(a))+i\sin(x\ln(a))\,$$        (by Euler's formula)

--Bob K (talk) 20:48, 14 February 2009 (UTC)
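Bob K's two-line derivation can be spot-checked numerically — Python's complex power already computes a^{ix} via the principal branch e^{ix ln a} (the values of a and x below are arbitrary samples):

```python
import math

a, x = 2.0, 0.75
lhs = a ** (1j * x)  # float ** complex: evaluated as e^{(ix) ln a}, principal branch
rhs = complex(math.cos(x * math.log(a)), math.sin(x * math.log(a)))
print(abs(lhs - rhs))  # ~1e-16
```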

My intention was to highlight that any number can be raised to the power of i. Notice that
 * $$a^{ix}\!$$

revolves around the circle more quickly if a>e and vice versa. —Preceding unsigned comment added by 212.93.97.181 (talk) 21:40, 14 February 2009 (UTC)


 * The standard form for that is:
 * $$e^{i\omega t} = \cos(\omega t) + i \sin(\omega t)\,$$
 * All you are doing is redefining one constant, $$e^{\omega}\,$$ as another constant, $$a\,$$.
 * It's a trivial point.
 * --Bob K (talk) 09:52, 15 February 2009 (UTC)

Multiplicative property definition?
Unless I'm missing something, that section can't be right as it stands. Doesn't the function f(z) = eRe(z) satisfy the conditions in that definition? ciphergoth (talk)
 * You're quite right. The statement is only true for real numbers. One has to use analytic continuation to extend this to complex numbers. Dmcq (talk) 13:27, 20 February 2009 (UTC)
 * At this point, it seems heavily redundant with the other "definitions" (e.g. the "analytic continuation definition"). I went ahead and deleted it, let me know if anyone objects. --Steve (talk) 23:26, 21 February 2009 (UTC)
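ciphergoth's counterexample can be confirmed concretely: f(z) = e^{Re(z)} is multiplicative and agrees with e^x on the reals, yet is not the complex exponential (illustrative Python; the sample points are arbitrary):

```python
import cmath
import math

def f(z):
    """e^{Re(z)}: multiplicative, but not the complex exponential."""
    return math.exp(z.real)

z, w = 1 + 2j, 0.5 - 1j
assert math.isclose(f(z + w), f(z) * f(w))               # multiplicative property holds
assert math.isclose(f(complex(1.5, 0)), math.exp(1.5))   # agrees with e^x on the real axis
print(abs(f(1j * math.pi) - cmath.exp(1j * math.pi)))    # ≈ 2.0: differs off the real axis
```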

Roger Cotes original proof?
Was Roger Cotes original proof that $$\ln(\cos x + i \sin x) = ix\,$$ lost? I did a Google-books search, I found nothing mentioning how he did it. Albmont (talk) 23:16, 18 March 2009 (UTC)
 * It is only equal mod 2πi. Dmcq (talk) 10:56, 19 March 2009 (UTC)
 * I'm interested in History, not in perfectionism. Cotes worked with Newton, and it seems that he was aware of Calculus. It's not hard to conclude that for an infinitesimal x, cos x = 1 and sin x = x, so it makes sense that ln(cos x + i sin x) = ln(1 + i x) = ix, but what is the leap of illogic that passes from an infinitesimal x to any x? Albmont (talk) 13:31, 26 March 2009 (UTC)
 * Actually the whole history part has a lot of loose ends. How far did Bernoulli and Euler reach? Simple algebraic manipulation gives $$ \tan(x) = \frac{1}{i}\frac{e^{2ix} - 1}{e^{2ix} + 1}$$, under the assumption they knew that $$\int \frac{dx}{1+x^2}=\arctan(x) + C$$. But did they know it by that time?
 * And regarding Cotes, did he come up with his equation by integration or differentiation? Because I agree that the formula seems to come from nowhere; it's picked out of context. —Preceding unsigned comment added by 80.216.137.161 (talk) 11:54, 14 February 2010 (UTC)


 * The reference for unity says Cotes stated this without proof. There is some speculation he derived it somehow while finding the nth roots of unity. Dmcq (talk) 12:24, 14 February 2010 (UTC)
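As an aside, the tan(x) identity behind the manipulation above, tan(x) = (1/i)·(e^{2ix} − 1)/(e^{2ix} + 1) (note the plus sign in the denominator), checks out numerically — a throwaway Python spot-check at an arbitrary point:

```python
import cmath
import math

x = 0.6
lhs = math.tan(x)
rhs = (1 / 1j) * (cmath.exp(2j * x) - 1) / (cmath.exp(2j * x) + 1)
print(abs(lhs - rhs))  # ~1e-16: the imaginary part of rhs cancels to roundoff
```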

Stupid question
Okay... Here's a stupid question, and I'm not gonna hide that I'm not in college yet and that I'm a flat idiot in terms of advanced mathematics. Sorry if this question is disturbing in any way. According to Euler's formula, $$\ln (-1)=i \pi$$; and according to what little I know, $$2 \ln (-1)=\ln{(-1)^2}=\ln 1=0$$, thus $$0 = 2i \pi$$. This could be a common misunderstanding of the formula for beginners, so could someone explain it here and maybe in the article too (since encyclopedias are meant to educate the public)? Thanks in advance. Wyvernoid (talk) 05:59, 5 June 2009 (UTC)


 * There is a longer description at Exponentiation, but basically the complex logarithm doesn't just have a single value: adding any integer multiple of 2πi also gives a valid value. Both 0 and 2πi are valid logarithms of 1. It is exactly the same as sin⁻¹ 0 being either 0 or π or 2π or in fact any multiple of π. Dmcq (talk) 10:39, 5 June 2009 (UTC)
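A concrete illustration of the branch issue, using Python's cmath (which always returns principal values; the specific numbers are just samples):

```python
import cmath
import math

# cmath uses the principal branch of the complex logarithm
print(cmath.log(-1))       # 3.141592653589793j, i.e. iπ
print(2 * cmath.log(-1))   # 6.283185307179586j, i.e. 2πi — also a valid logarithm of 1
print(cmath.log(1))        # 0j — but the principal logarithm of 1 is 0

# exponentiating collapses the ambiguity: e^{2πi} = 1, so no contradiction
print(abs(cmath.exp(2 * cmath.log(-1)) - 1))  # ≈ 0 (roundoff)
```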


 * Ohh thanks a lot. Maybe a link to that page should be included in the article? Wyvernoid (talk) 13:00, 5 June 2009 (UTC)


 * [In the spirit of sofixit:] Feel free to insert a link into the article. AGK 12:36, 18 October 2009 (UTC)

History Help? dx/??
In the history section, it shows these 2 equations:


 * $$\frac{dx}{1+x^2}=\frac{1}{2} \left(\frac{dx}{1-ix}+\frac{dx}{1+ix} \right).$$


 * $$\int \frac{dx}{1+x}=\ln(1+x),$$

While this is probably a stupid question, how is it possible to have a dx at the top with no 'd' with respect to something at the bottom? Isn't it differentiation in Leibniz's notation that uses a d/dx?

If no concise answer could be given, could someone at least redirect me to a page where I can find out about this? I can't seem to find anything on this. —Preceding unsigned comment added by 218.186.9.242 (talk) 13:40, 18 January 2010 (UTC)


 * You can think of it as a Differential (infinitesimal) but it's probably easier just to remove it. I'll do that and make the connection to the integral clearer by multiplying x by a constant. Dmcq (talk) 14:07, 18 January 2010 (UTC)
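For what it's worth, the partial-fraction identity quoted above is easy to spot-check numerically (plain Python; the sample point is arbitrary):

```python
x = 0.37  # any real sample point
lhs = 1 / (1 + x**2)
rhs = 0.5 * (1 / (1 - 1j * x) + 1 / (1 + 1j * x))
print(abs(lhs - rhs))  # ~1e-17: the imaginary parts of the two fractions cancel
```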

atan2
Really? This notation is not at all standard -- try finding a calculus book (or even complex analysis book) that defines/uses atan2. —Preceding unsigned comment added by 128.97.41.120 (talk) 19:00, 15 March 2010 (UTC)


 * And the funny usage of tan⁻¹ for the same purpose causes innumerable mistakes. Swings and roundabouts, but I think thanks to whoever stuck that in there rather than keeping up the old stupidity even if it is more common. Dmcq (talk) 19:09, 15 March 2010 (UTC)


 * atan2 is a very standard function in the physical sciences. It has been a normal part of numerical computation at least since the early 1970s and probably earlier. Zerotalk 22:26, 15 March 2010 (UTC)


 * I had never heard of atan2 before today. But I'm very happy to have learned about it! It's the perfect function to use here.
 * In my experience it's very common on wikipedia to refer to things by a more specific name than is common in literature, because textbooks and papers can use slightly-vague terms and have it be clear from context, but an encyclopedia article often can't. --Steve (talk) 23:57, 15 March 2010 (UTC)
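To make the atan2-versus-tan⁻¹ point concrete (Python's math.atan2 follows the usual numerical-library convention; the sample point is arbitrary):

```python
import math

# arg of z = x + iy: atan2(y, x) resolves all four quadrants; atan(y/x) cannot
z = complex(-1, -1)                # third quadrant; arg(z) = -3π/4
print(math.atan2(z.imag, z.real))  # -2.356…  (correct quadrant)
print(math.atan(z.imag / z.real))  #  0.785…  (atan alone loses the quadrant)
```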

Issues with definition and proofs
I have some issues with the definitions and proofs in this page. I plan to make changes in accordance with these issues in about a week if no one responds.

First, the whole discussion about raising e to real number powers (starting with integers, then rationals, then irrationals) at the beginning of the discussion section is not necessary. The function $$ e^x $$ for real x is usually defined by either a series, as the inverse of the ln (which is defined as an integral), or the unique solution of an initial value problem (see exponential function).

Second, the differential equation definition is really a property of the complex exponential function (not a definition). Note the definition presupposes that the function being defined is analytic (i.e. you can take the complex derivative), and so it's really the same as the analytic continuation definition together with an initial value problem definition of $$ e^x $$ for real x. Also note that if you interpret the derivative in this definition as $$ (d/dz)\, f(z) = 1/2(d/dx - i d/dy)\,f(x+iy) $$, as might be natural if you don't wish to include as part of the definition that f must be analytic, then you lose uniqueness. Thus the fact that $$ d/dz(e^z) = e^z $$ for complex z is really a property which should be proved. Actually I think the series definition should be stressed the most (and put first) since this is the one most accessible to the target audience of this article (IMHO). The fact that this is an analytic continuation should merely be mentioned after the series definition.

Third, both of the calculus proofs are difficult to understand and, I think, slightly less than rigorous as currently written. They can be made rigorous, but the way they are phrased now it is not even clear which definition is being used for $$ e^{ix} $$. I think the key property on which both of them rely is that $$ d/dx(e^{ix}) = i e^{ix} $$, and this is not proven. This follows either from the chain rule for holomorphic functions, or from term by term differentiation of the infinite series definition. I would prefer to mention the latter since the former requires an appeal to more advanced complex analysis. Holmansf (talk) 14:48, 9 April 2010 (UTC)
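The key property named above, d/dx(e^{ix}) = i e^{ix}, can at least be sanity-checked numerically while the proofs are being rewritten — a central-difference sketch in Python (step size and sample point chosen arbitrarily; this verifies, it does not prove):

```python
import cmath

def g(x):
    # the function whose derivative is at issue
    return cmath.exp(1j * x)

x, h = 0.8, 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)  # central difference quotient
claimed = 1j * g(x)                        # the claimed derivative i·e^{ix}
print(abs(numeric - claimed))              # ~1e-10
```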


 * Yes I can't see what all that stuff about the exponential function itself is in there for. A reference to Characterizations of the exponential function would cover that I think. And there is no need for three proofs of the formula especially as none of them has a citation. Dmcq (talk) 14:55, 9 April 2010 (UTC)


 * As I recall, I put in the "differential equation definition" because the second and third proofs were implicitly using it. That's the only reason it's there. For sure you could remove both those proofs together with the associated definition if you want. --Steve (talk) 15:45, 9 April 2010 (UTC)


 * I agree that the basic stuff about the exponential function can be taken out (even though I worked on it a little, just to make it better than it was before). But I think the other two alternative proofs should be left in along with the Taylor series proof.  The reason is that people who have just learned some calculus (and I mean first year, not "Advanced Calculus" or "Real Analysis") know about some properties of the natural exponential (like it's the derivative of itself) and are comfortable with that, but the Taylor series might be less familiar. There are at least 3 reasons for why e^{ix} = cos(x) + i sin(x) and those 3 reasons should be shown for their educational value.  And they all depend upon analytic extension; whatever properties e^z has for real z, it should also have for imaginary or complex z. 96.252.13.17 (talk) 05:07, 10 April 2010 (UTC)


 * I don't necessarily think those two calculus proofs should both be removed. However, as I said above I think it should be explained why (using the notation of the second proof) $$ g'(x) = i e^{ix} $$ from one of the given definitions. 108.10.102.151 (talk) 00:02, 11 April 2010 (UTC)

"Not essential, deep."
User:Dmcq [asserts] that this wording:


 * Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that demonstrates the deep relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x,


 * $$e^{ix} = \cos x + i\sin x \!$$

is better than this wording:


 * Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the essential relationship between the trigonometric functions and the exponential function when one or both have a complex argument. Euler's formula states that, for any real number x,


 * $$e^{ix} = \cos x + i\sin x \!$$

I really think that such an assertion needs to be defended. It certainly is not more accurate prima facie. First of all, IP 65.34.191.97 (who is not me, BTW) is correct. "deep" is just too subjective. It's not about some guru and his om. A word like "profound" or "fundamental" might work. So also might "intrinsically" or "inherent". "Essential" just says there is some common "essence" in the relationship between the trig and exp functions. And Euler's identity establishes that connection. It doesn't "demonstrate" anything, but applications of Euler's theorem demonstrate certain facts or properties. There are also other wording differences where Dmcq's preferred version is just not as accurate (it's too specific).

Dmcq, can you defend your summary judgment a bit? 64.223.106.222 (talk) 00:58, 24 October 2010 (UTC)


 * The word 'deep' is specifically used in relation to Euler's formula in the literature, for instance
 * Mathematical Intelligencer vol 12 No 3 'Are these the most beautiful' by David Wells
 * Euler's gem: the polyhedron formula and the birth of topology by David Scott Richeson pages X and 9
 * De Moivre's Formula to the Rescue by Bella Wiener and Joseph Wiener, page 1
 * Essential is not used that I know of. Demonstrates is perfectly okay but I'll stick in establishes since you prefer it. Dmcq (talk) 23:02, 24 October 2010 (UTC)

Feynman
"Richard Feynman called Euler's formula "our jewel" and "the most remarkable formula in mathematics" (Feynman, p. 22-10)."

If I'm correct, Feynman was referring to Eulers identity in particular, not the formula. Should this be changed? -- He Who Is[ Talk ] 12:47, 10 July 2006 (UTC)

In addition, that citation needs to refer to which work of Feynman's that is from. Then maybe we can look it up to see which Euler's he was talking about. I say it goes. Lizz612 02:22, 1 August 2006 (UTC)

Me, I am wondering why we should listen to the opinion of a mere physicist :-)

Agree. This is completely irrelevant. —Preceding unsigned comment added by 68.103.205.129 (talk) 04:04, 21 September 2007 (UTC)

Well, I tried to get rid of it because it's totally irrelevant, but my changes were undone by Oleg Alexandrov with little explanation. I guess that's wikipedia. Spacefem 20:52, 27 October 2007 (UTC)

The formula is extremely important in Electrical Engineering and Physics in general. This is why Feynman's comment is interesting and relevant. I think he was actually referring to the formula, but am not sure. Would be interesting to check. Sergivs-en (talk) 03:31, 19 May 2010 (UTC)
 * Just confirmed it, he clearly means the formula. Sergivs-en (talk) 05:47, 19 May 2010 (UTC)

If anyone cares, the Feynman citation is from The Feynman Lectures on Physics, Vol. I, Chapter 22, "Algebra." I don't have the reference in front of me but I'm pretty sure Feynman was talking about the relation e^(i*pi) + 1 = 0 Alan Canon (talk) 23:40, 21 February 2011 (UTC)

Bernoulli's Help
OK, so it says that Bernoulli was the first one who got an inkling of Euler's formula, but it doesn't say which one. Both Jakob and Johann were still active, and I'm curious whether the two of them worked together on this (a rare occurrence if they did). —Preceding unsigned comment added by 141.216.1.4 (talk) 17:07, 17 March 2010 (UTC)


 * I already asked User:99c, who added the paragraph, for more references. If he cannot substantiate it, we can rightfully remove the paragraph. (About 2 weeks later) --Octra Bond (talk) 03:47, 9 August 2011 (UTC)


 * OK. This is solved. It was Johann Bernoulli, as he later said. --Octra Bond (talk) 14:08, 11 August 2011 (UTC)

The "by calculus" proof
Hi, I would really like to understand this proof. I understand everything, up to the part that it says that:

integral of (dz/z) = integral of (i).

This is ok, but then the continuation is that:

ln z = ix + c.

The right side of the equation is understood, but why does the integral of (dz/z) equal ln z?

I know that the integral of (1/z) is ln z, but this is not the case; the case is (dz/z), and dz is not equal to 1. So how come you can say that the integral of (z'/z) is like the integral of (1/z)?

I'd really be grateful for an explanation.


 * Whenever integrating, you need to specify which variable you are working with. The integral of (dz/z) is just a simplified way of writing the integral of (1/z) with respect to z (the 'dz').  I'm not sure how to use the formulas on Wikipedia, so I made an image of it and put it on my talk page.  Hope this helps.  timrem 03:22, 21 March 2006 (UTC)
 * I think I figured this out...
 * $$\int\frac{1}{z}\,dz=\int\frac{dz}{z}$$ timrem 21:00, 30 March 2006 (UTC)

Calculus method oversight
For some real-valued variable x, $$\int \frac{dx}{x} = \ln{|x|} + C$$. I'm not well-informed on how complex numbers affect integration rules, but is there any justification for dropping the absolute value when the variable is complex, as the calculus method does? -- anon
 * Things are much more complex for complex variables. |x| is no longer +/-x, and the log, at least its principal branch, is no longer defined for z real and negative. I could offer a longer explanation, but the short answer is that the log in the complex plane is a very different function than the log on the real line (for example, log(ab)=log(a)+log(b) may not hold. Oleg Alexandrov (talk) 18:57, 18 May 2006 (UTC)
 * This calculus method is strange anyway. Why not just show that $$\frac{\cos x+i\sin x}{e^{ix}}$$ has vanishing derivative?--gwaihir 13:06, 18 May 2006 (UTC)

that would only show that $$e^{ix} = k(\cos x+i\sin x)$$ where k is a constant

Another proof using calculus (under construction)
I hope this makes things clearer.

I intended to show full working for this proof, should I remove some intermediate steps?

What do you mean "There is no complex-differentiable function "ln"...."? The natural logarithm is defined for complex arguments and its derivative is 1/x. Or do you mean something else?

Anyhow the point is moot. This method is verifiable, see the following sources:

http://mathworld.wolfram.com/EulerFormula.html

http://www.answers.com/topic/euler-s-formula

http://everything2.com/index.pl?node_id=138398

http://mathforum.org/dr.math/faq/faq.euler.equation.html

http://www-structmed.cimr.cam.ac.uk/Course/Adv_diff1/Euler.html --Sav chris13 13:41, 27 July 2006 (UTC)

Let Z be a complex number
 * $$ Z = \cos(\phi) + i \cdot \sin(\phi)$$

Where $$\begin{matrix} \phi \end{matrix}$$ is the angle Z makes with the real axis (see the above diagram). So $$ \phi = \arg(Z) $$
 * Differentiate with respect to $$\begin{matrix} \phi \end{matrix}$$
 * $$ \frac{dZ}{d\phi} = -\sin(\phi) + i \cdot \cos(\phi)$$
 * $$ \frac{dZ}{d\phi} = i \cdot i \cdot \sin(\phi) + i \cdot \cos(\phi)$$
 * $$ \frac{dZ}{d\phi} = i [i \cdot \sin(\phi) + \cos(\phi)]$$
 * $$ \frac{dZ}{d\phi} = i [\cos(\phi) + i \cdot \sin(\phi)]$$

Now remember $$ Z = \cos(\phi) + i \cdot \sin(\phi)$$

So


 * $$ \frac{dZ}{d\phi} = iZ $$
 * $$ dZ = i \cdot Z d\phi $$
 * $$ \frac{1}{Z}dZ = i d\phi $$

Integrating both sides
 * $$ \int \frac{1}{Z}dZ = \int i d\phi $$
 * $$\begin{matrix} \ln (Z) & = & i \phi + C \end{matrix}$$
 * $$\begin{matrix} Z = e^{i \phi + C} \end{matrix}$$

To find the C value, consider that when Z=1, $$ \phi = \arg(1) = 0 $$
 * $$\begin{matrix} 1 = e^{i \cdot 0 + C} \end{matrix}$$
 * $$\begin{matrix} 1 = e^{C} \end{matrix}$$
 * $$\begin{matrix} C=0 \end{matrix}$$

Therefore
 * $$\begin{matrix} Z = e^{i \phi} \end{matrix}$$

Recall that
 * $$\begin{matrix} Z = \cos(\phi) + i \cdot \sin(\phi) \end{matrix}$$
 * $$ e^{i \phi} = \cos(\phi) + i \cdot \sin(\phi) $$

Q.E.D.
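The identity derived above is easy to spot-check numerically. This is just an illustrative sketch using Python's standard cmath and math modules, not part of the proof itself:

```python
import cmath
import math

# Compare e^{i*phi} against cos(phi) + i*sin(phi) at several sample angles.
for phi in (0.0, math.pi / 6, math.pi / 2, 2.0, -3.5):
    lhs = cmath.exp(1j * phi)
    rhs = complex(math.cos(phi), math.sin(phi))
    assert abs(lhs - rhs) < 1e-12, (phi, lhs, rhs)
```

A numerical check is not a proof, but it quickly catches sign or argument errors of the kind discussed on this page.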


 * Good method on how to get the C value. But who can fix the posted proof? If


 * $$f'(x)\,$$


 * $$= \displaystyle\frac{(-\sin x+i\cos x)\cdot e^{ix} - (\cos x+i\sin x)\cdot i\cdot e^{ix}}{(e^{ix})^2} \ $$
 * $$= \displaystyle\frac{-\sin x-i^2\sin x}{e^{ix}} \ $$
 * $$= 0 \ $$


 * Therefore, $$f $$ must be a constant function. Thus,




 * $$f(x)=f(0)=\frac{\cos 0 + i \sin 0}{e^0}=1$$
 * But where did this come from? If I had


 * $$f(x)=f(2 \pi / 3)=\frac{\cos 2 \pi / 3 + i \sin 2 \pi / 3}{e^{2 \pi / 3}} \ne 1$$
 * or


 * $$f(x)=f(\pi)=\frac{\cos \pi + i \sin \pi}{e^{\pi}} \ne 1$$
 * instead of


 * $$f(x)=f(0)=\frac{\cos 0 + i \sin 0}{e^0}$$ ?
 * Please explain... I think the correct expression should be


 * $$f(x) = \frac{\cos x+i\sin x}{e^{ix}} \ = C$$
 * and then you will just find an argument that C = 1. Please help... --Kevin philippines 12:00, 9 September 2006 (UTC)

i disagree with your two inequalities. why do you say that the result is not equal to 1? - oh, i see, you dropped two $$ i\ $$ factors:




 * $$f(x) = f(2 \pi / 3)=\frac{\cos 2 \pi / 3 + i \sin 2 \pi / 3}{e^{i 2 \pi / 3}} = 1$$




 * $$f(x) = f(\pi) = \frac{\cos \pi + i \sin \pi}{e^{i \pi}} = 1$$

r b-j 01:34, 12 November 2006 (UTC)
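r b-j's point about the dropped i factors can be verified numerically: with the i kept in the denominator's exponent, both quotients come out to 1. An illustrative sketch using Python's standard cmath and math modules:

```python
import cmath
import math

def f(x):
    # The quotient from the "by calculus" proof: (cos x + i sin x) / e^{ix}
    return complex(math.cos(x), math.sin(x)) / cmath.exp(1j * x)

# With the i restored in the exponent, both of the disputed values equal 1.
assert abs(f(2 * math.pi / 3) - 1) < 1e-12
assert abs(f(math.pi) - 1) < 1e-12
```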


 * Try this version.--gwaihir 15:25, 11 September 2006 (UTC)

Simpler differential-equation proof
The proofs given in the article are needlessly complicated. The easiest way to prove Euler's formula is to note that both sides of the equation satisfy the differential equation $$f'(x) = i f(x) \ $$ and coincide at x = 0. The statement of the proof needn't be any longer than that! (Well, a reference to the Picard–Lindelöf theorem is perhaps needed for completeness.)

A qualitative explanation is possible: If we identify complex numbers with vectors in the plane, the function $$f(t) = \cos t+i \sin t \ $$ describes motion along the unit circle. In circular motion around the origin, the velocity vector is at a 90° angle with the position vector (and of the same magnitude). Counterclockwise 90° rotation is the same thing as multiplication by i, and velocity is the derivative of position. Putting this together gives said differential equation.

Fredrik Johansson 20:06, 25 January 2007 (UTC)
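Fredrik's differential-equation argument can be illustrated numerically: stepping the ODE f'(t) = i f(t) forward from f(0) = 1 with small Euler steps traces out (approximately) the point cos t + i sin t on the unit circle. A rough sketch in Python, illustrative only; forward Euler is a first-order method, hence the loose tolerance:

```python
import math

# Integrate f'(t) = i*f(t), f(0) = 1, by forward Euler steps up to t = 1.
n = 200_000
dt = 1.0 / n
z = 1 + 0j
for _ in range(n):
    z += 1j * z * dt  # velocity = i * position: a 90-degree rotation of z

expected = complex(math.cos(1.0), math.sin(1.0))  # cos 1 + i sin 1
assert abs(z - expected) < 1e-4
```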


 * doing this for $$f'(x) = i f(x) \ $$ is not sufficient. you need $$f(x) = i f'(x) \ $$ also.  it needs to be 2nd order with two linearly independent solutions and two initial conditions.  otherwise you could multiply the i sin(t) with any constant you want and it would still satisfy the constraints you have started with here (but, of course, would not be correct). r b-j 20:15, 25 January 2007 (UTC)


 * I don't see what you mean. $$\cos x + C i\sin x$$ does not satisfy the differential equation unless C = 1. Fredrik Johansson 20:43, 25 January 2007 (UTC)


 * I just wrote a proof using this idea "heuristic argument using the circular motion" I hope its well explained. It is nice that it also justifies the derivatives for the sine and cosine. 68.111.49.104 03:09, 23 March 2007 (UTC)

One could also simply demonstrate that $$ y = e^{it} $$ satisfies $$ y'' + y = 0$$. Since this is a linear, homogeneous second-order differential equation, it must have exactly two linearly independent solutions. Since sin and cos both already satisfy this equation, $$ e^{it} $$ cannot be linearly independent of them. This is not a complete proof per se, but demonstrates the principle of the relationship between the functions. —Preceding unsigned comment added by 67.194.65.124 (talk) 02:53, 21 March 2011 (UTC)
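The claim that y(t) = e^{it} satisfies y'' + y = 0 can also be checked numerically with a central-difference second derivative (an illustrative sketch, not a proof):

```python
import cmath

def y(t):
    return cmath.exp(1j * t)

# Central-difference estimate of y''(t); it should nearly cancel against y(t).
h = 1e-4
for t in (0.0, 0.7, 2.0, -1.3):
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(y2 + y(t)) < 1e-6
```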

The "by calculus" proof
...is wrong because it ignores the constant of integration. Please fix it! --Zero 03:23, 5 Dec 2004 (UTC)


 * Hey; zero, I understand what you are saying and i am not the person who made that post, but is it possible to "fix" this proof? I don't know that it is.

Seconded Zero. I think the proof by calculus is an example of circular reasoning because you already implicitly assume that e^{ix} = cos x + i sin x. Robbyjo (talk) 02:40, 9 December 2007 (UTC)


 * It does not assume e^{ix} = cos x + i sin x at all. Read the proof, where does it assume that?  It defines f(x) as


 * $$f(x) = \frac{\cos x+i\sin x}{e^{ix}}. \ $$


 * Then it shows that no one is dividing by zero (a no-no).


 * Then it shows that the derivative of f(x) (or f'(x)) is zero, which means that f(x) is a constant. Then it shows that the constant is 1 which means the denominator of f(x) is equal to the numerator.


 * What's the problem? 207.190.198.130 (talk) 08:58, 9 December 2007 (UTC)
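The two facts the proof rests on, that f'(x) vanishes everywhere and that the constant equals f(0) = 1, can each be spot-checked numerically. An illustrative Python sketch using the standard cmath and math modules:

```python
import cmath
import math

def f(x):
    # f(x) = (cos x + i sin x) / e^{ix}
    return complex(math.cos(x), math.sin(x)) / cmath.exp(1j * x)

# A finite-difference estimate of f'(x) is ~0 at several points ...
h = 1e-6
for x in (-2.0, 0.0, 1.5, 4.0):
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(deriv) < 1e-8

# ... and the constant value is f(0) = 1.
assert abs(f(0.0) - 1) < 1e-12
```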


 * The problem is that it does not include the proof at limiting values, that is, minus and plus infinity. Unless you already assume that they're both equivalent, then you'll need to show that the limit of f(x) at both minus and plus infinity are in fact integrable. Robbyjo (talk) 20:51, 21 December 2007 (UTC)


 * Why? If it's not integrable, so what?
 * --Bob K (talk) 21:06, 21 December 2007 (UTC)


 * Then it's not differentiable at those two points (i.e. minus and plus infinity). So, in effect, the proof by calculus is essentially proving that it's only valid at all points except minus and plus infinity, which is not quite the same as claiming that they're equivalent at all points. Before anyone slams me down, the concepts of differentiability and integrability are linked. I'd say that they're two sides of the same coin, really.


 * To illustrate my point, can you show how to examine f(infinity) without assuming e^{ix} = cos x + i sin x at all? Repeat with f(-infinity). If you can, then show the proof and the rest is valid. I think it would again resort to Taylor series, really. Robbyjo (talk) 21:25, 21 December 2007 (UTC)


 * I don't think that the proof speaks to that issue nor needs to. 207.190.198.130 (talk) 01:25, 22 December 2007 (UTC)


 * To add, this statement is the one I'm having a problem with: "This is allowed since the equation: $$e^{ix}\cdot e^{-ix}=e^0=1 $$ implies that $$e^{ix} $$ is never zero.". This is circular reasoning. Robbyjo (talk) 21:32, 21 December 2007 (UTC)


 * If the properties already ascribed to the exponential function are to be retained (and that is what this is all about), we already know that $$e^{x}\cdot e^{-x}=e^0=1 $$ for any real x, so does the definition of the exponential of an imaginary argument, however it is defined, violate that property? It is not circular because if $$e^{ix}$$ and $$e^{-ix}$$ mean anything (a real or complex number or some other element of a metric space where we can define the operation called "multiply" with the same properties, such as an identity element, that we currently have for "multiplication"), then whatever they mean, when you multiply them together, you get 1.  Otherwise, it is not the already existing exponential function that you are extending to imaginary (and later to complex) arguments.  207.190.198.130 (talk) 01:25, 22 December 2007 (UTC)


 * Please forgive me for interspersing replies. You say many different things...


 * I interspersed your replies too. BTW, please sign in. Also, your language is a little too inflammatory.


 * There are a few very good reasons that I am not logging in and remaining an anonymous IP. Unfortunately, to spell out the reasons why would obviate the reasons for being an anon IP.  I'll try to control "inflammatory", but please, in return, I ask you to argue fairly and not divert the issue.  Saying one point that is false (or disputed and unproven) once is enough.  Repeating it exacerbates the discussion and frustrates others.


 * I understand you're trying to do that, but the problem is as follows. It is written: $$f(x) \ \stackrel{\mathrm{def}}{=}\ \frac{\cos x+i\sin x}{e^{ix}} = (\cos x+i\sin x)\cdot e^{-ix}\ $$.


 * It defines f(x) as $$f(x) \ \stackrel{\mathrm{def}}{=}\ \frac{\cos x+i\sin x}{e^{ix}} $$. The second equality is not in the definition.


 * That's why I didn't put $$ \ \stackrel{\mathrm{def}}{=}\ $$ at the second equality.


 * Then don't put it in.


 * I just wanted to make a point. That's why.


 * Now if you do not assume that $$e^{ix} = \cos x + i\sin x$$, then you cannot make the aforementioned connection.


 * Not true at all. We never said in the definition that f(x) = 1 (which would then imply that eix = cos(x) + isin(x)).  We are first just creating an expression,


 * $$\frac{\cos x+i\sin x}{e^{ix}} $$


 * where all of the contents have existing definition except for the real variable x. So that expression is a function of x.  Change x and the expression potentially changes value.  We don't know if it does or not, and the rest of the proof investigates whether or not it does.  Turns out, after investigation, that this expression does not change value even as the only variable inside of it, x, does change its value.


 * I perfectly understand that creating an expression is a way to do a proof. I was just saying that I fail to make a logical connection that "$$e^{ix}\cdot e^{-ix}=e^0=1 $$ implies that $$e^{ix} $$ is never zero" has anything to do with the validity of the construction of f(x).


 * So dividing by zero is okay with you?


 * No, it's not. On the other hand, that phrase doesn't show what it tried to show. (See below).


 * If you want to make an argument, a better way would be to show the limit towards the infinity (or negative infinity or both) and that limit exists.


 * Totally non-sequitur.


 * Sequitur. Why? Because if you want to do differentiation (and infer the result through indefinite integration), you'll need to show that the function is defined from negative infinity to positive infinity. And you have not shown that.


 * No, you only need to show that the function is differentiable in regions where it is claimed to be. For instance, for real x the function f(x) = +√(x) is differentiable for all x>0, but not defined at all for x<0.


 * You have to obtain it from some other way.


 * No. It's a definition. I don't have to "obtain it" at all.  I just define it.  Now, it is true that I have to obtain the fact that f(x) is constant and, additionally, that the constant is 1.  But the proof does that.


 * Otherwise, the statement $$e^{ix}\cdot e^{-ix}=e^0=1 $$ adds nothing to the argument.


 * Oh, come on! If we set up an expression that is a fraction and the denominator takes on the value of zero, we have problems.  It's useful, at the outset, to make sure we're not dividing by zero.  And we know we are not, because if e^{ix} were 0, then multiplying it by anything (particularly e^{-ix}, which is qualitatively the same) cannot result in something that is non-zero.


 * You yourself said that "If we set up an expression that is a fraction and the denominator takes on the value of zero, we have problems." Now, in real space, e^-infinity = 0. Then how'd you reconcile this statement to the imaginary space without assuming Euler's formula in the first place?


 * Baloney. I'm not evaluating it at infinity.  And I don't need to.  Neither do we need to do that in the real case when we divide by e^x.


 * Yes, you do not evaluate it at infinity, but if you want to make your result apply for all x (which includes infinity), then you need to show that your construction is valid for the infinity case, which you haven't shown. As I already stated above, the proof there is only valid when x is not at plus or minus infinity, when in fact it should be valid for +/- infinity as well. BTW, the word "baloney" is completely unnecessary.


 * On the other hand, you may assume the behavior of exponential from the real space. But, if you try to make it through the behavior of the real space, e^infinity = infinity, which is a disconnect from the imaginary space.


 * We're not asking that question and we don't need to. Nor do we need to settle what the behavior of the real function e^x for infinite x is to define it.


 * I understand that e^finitenumber = finitenumber, so that f(x) is okay for finite numbers. But for infinity cases, it must be handled differently.


 * Who gives a rat's ass? So what if e^x or e^{ix} have to be finessed for infinite x?  The properties of e^x (and its derivatives) exist, are quantifiable and expressible, without ever pushing it to the infinite limits.


 * To be honest, I've never seen any math books that prove Euler's Formula through differentiation. I only saw it through Taylor expansion. Robbyjo (talk) 03:04, 22 December 2007 (UTC)


 * That fallacy has a name: Argument from lack of imagination.


 * Not necessarily. Especially if the proof isn't shown valid yet.


 * No, you are saying that the proof isn't valid. It's as valid as the "Taylor expansion" proof in that both ascribe to operations with imaginary numbers properties that those same operations have with real arguments.  If you're going to take issue with the extension of such operations from reals to imaginary/complex in the latter two proofs, then I will make the same objections to the proof you like.


 * I'm trying to show you where my objection was. You can try to object to any proofs, as long as your objection is valid. What makes me frustrated is that you repeatedly deny that this particular f(x) is problematic when x = +/- infinity, whereas Euler's formula is supposed to be valid for +/- infinity.


 * Who (besides you) is insisting on that requirement? I don't even see that as a requirement of the definition of the real e^x.  +/- infinity are not numbers.  They are concepts often used in limits.  But they are not numbers, and real functions are mappings of one real number to another.  And there are functions that very well have portions of their domain x that have no mapping defined.  Not just at infinity, but at specific sets of real numbers.


 * I was saying that this proof is not perfect (see my comments above). You're saying that my objection for +/- infinity is not valid at all? I was saying that at +/- infinity, that particular construction of f(x) is not valid, unless you implicitly assume e^{ix} = cos x + i sin x.


 * It was a side comment anyway.


 * But it may be indicative of where the objection is coming from.


 * BTW, we could apply petty nit-picking (in the guise of rigor) to the Maclaurin series (what you call "Taylor expansion") proof, too. Who says we can take ix to some integer power n?  What does it mean to do that?  Who says we can add these terms together when they contain imaginary parts?  Who says we can apply rules like the distributive property when the terms and factors are imaginary?  We do all of that to expressions with imaginary values (and the sum of imaginary to real, which is simply a complex number) because we do it to the reals, and we are extending the definitions and rules (that we already have established for reals) to the imaginary (and complex).  All three of those proofs are doing it, and if your only concept of the validity of Euler's formula is what comes from expanding e^x, cos(x), and sin(x) in a Maclaurin series and seeing that it works out, then I would say your calc prof (or text) missed a few opportunities to teach. 207.190.198.130 (talk) 03:48, 22 December 2007 (UTC)


 * The Taylor expansion proof is valid, because a Taylor series can take any x (be it real or complex) and the expansions of e^x, sin(x) or cos(x) assume nothing about the complex numbers.


 * I wasn't picking on that. How about the concept of powers of imaginary numbers?  And what about the distributive property applied to such?  Who says you can factor out the i out of terms with i in it?  We can do that because we extend the meaning of addition and multiplication and such to imaginary numbers in such a way that the rules are the same as they were for real numbers.  With essentially two additional rules:
 * 1. Purely imaginary numbers can be added to purely real, but not simplified further (3 + 4i cannot be combined into a single term).
 * 2. i^2 = -1.
 * 3. I should explicitly add that other rule (or axiom) in the extension of existing rules of real mathematics to the complex is that otherwise i is treated just like any other constant value. Rules of commutativity, associativity, distributivity (among others) apply to i as the imaginary unit just as they would apply to some other constant that might be real.
 * So you can treat i just like any other constant. That's what allows you to do what you do with the "Taylor Expansion" proof, and likewise, that is what allows us to do the other two proofs.  That's what the word "extension" (of properties) means.  We can multiply these sums of real+imag to each other and follow the same rules we would if i were any other constant.  Same for division, powers, differentiation.  So why stop when we get to exponentiation?  Whatever e^{ix} is, if you differentiate it w.r.t. x, it has to be i e^{ix} or else we are not extending the meaning of differentiation and/or the natural-base exponential to imaginary i.  If i were some real constant, we would have no problem saying that (d/dx) e^{ix} = i e^{ix}.  If the exponential is to continue to have the same properties that it had for reals, then axiomatically, the same property has to apply for imaginary i.
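The property singled out here, that differentiating e^{ix} with respect to x must give i e^{ix} if i is treated like any other constant, is easy to check numerically (an illustrative sketch using Python's standard cmath module):

```python
import cmath

# Central-difference check that (d/dx) e^{ix} = i * e^{ix}.
h = 1e-6
for x in (0.0, 1.0, -2.5):
    deriv = (cmath.exp(1j * (x + h)) - cmath.exp(1j * (x - h))) / (2 * h)
    assert abs(deriv - 1j * cmath.exp(1j * x)) < 1e-8
```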


 * Whereas I did not say anything about that. To say that I don't know the concept of powers of imaginary number is nothing less than condescending.


 * I am not saying that. I am making a point that whatever axioms, rules, and extensions that you are using to make the Maclaurin series proof work are the very same axioms, rules, and extensions that make the "By calculus" proof work.  Remember, in the real function, f(x) = e^x has meaning and has properties long before you get around to expressing it as a Maclaurin series.  It is the very fact that (d/dx) e^x = e^x that allows you to obtain those particular coefficients for the Maclaurin series.  For real x, e^x is not defined by its Maclaurin series.  Neither need it be for complex or imaginary x.  But, if we're extending the meaning of e^x to complex or imaginary arguments, the same properties of the base-e exponential remain, namely that e^{x+y} = e^x e^y, e^{xy} = (e^x)^y and that (d/dx) e^x = e^x.  You might end up bringing those properties into the proof (from the outside), but you do not bring into the proof the prior knowledge that e^{ix} = cos(x) + i sin(x).  And I would agree with you that to do that would be circular reasoning.  However, I disagree with you that the proof that you don't like actually does that.


 * The objective here is to provide a good proof of Euler's formula, which this particular f(x) construction doesn't provide. IIRC, prior to Euler, nobody knew the behavior of imaginary numbers outside of the basic tenet like you mentioned and its consequence, i.e. exponentiation with real numbers. Euler's formula provides a link to go beyond that.


 * There are already 3 good proofs of Euler's formula that attack it from 3 different perspectives, which has pedagogical value. If something is true, it's nice to see more than one reason to believe it; it solidifies the validity of the result.  What Euler's formula does is provide an explicit (i.e. an explicit real part and explicit imaginary part) mapping of the exponential function to imaginary arguments (that can easily be extended to complex arguments).


 * So, the nitpick you talked about really doesn't apply here. IIRC, the relation e^{ix} = cos(x) + i sin(x) only exists after Euler's formula.


 * Of course, by definition "Euler's formula" is e^{ix} = cos(x) + i sin(x). But the concept of the base-e exponential exists before its Maclaurin series.  Same for sin and cos.  The reason that those functions are equal to their Maclaurin series is because of the properties of their derivatives.  Rather than using those properties to derive the Maclaurin series and then show that Euler's formula is valid, the other two proofs do it directly from those properties, skipping over the intermediate results of the Maclaurin series.


 * I don't speak for the third proof, but the current construction of f(x) for the proof by calculus isn't quite valid. I found a better proof using calculus that completely sidesteps this issue.


 * The current proof is fine, despite your objections and despite that infinities are literally non sequitur. The proof does not bring the subject of infinity into the discussion.  It is literally not a topic of discussion until you brought it in.


 * And we know the behavior of complex numbers is defined for polynomials because we define i = sqrt(-1).


 * That is imprecise. Doesn't -i also have equal claim to be √(-1)? We actually define i to be an "imaginary number" (since no real number has this property) that squares to -1.  There are two quantitatively different (yet qualitatively identical) numbers that do that, and only one of them gets to be i.  But we can pick either one, and once we do, the other one is -i.


 * Yes, I agree that the concept of the base-e exponential exists before its Maclaurin series and same for sin and cos. But the behavior of e^{ix} wasn't completely characterized prior to Euler, IIRC.


 * That's true. It wasn't.  Before Euler, human beings did not know explicitly what the real and imaginary parts of e^{ix} were.  But, whatever those expressions for real and imaginary parts would come out to be, IF it's the exponential function that is operating on a real, imaginary, or complex argument, these properties of it must remain:


 * e^{x+y} = e^x e^y,
 * e^{xy} = (e^x)^y, and
 * (d/dx) e^x = e^x
 * and, in the proof you don't like, the chain rule and quotient rule of differentiation remain. That's enough.  With those definitions of behavior, it turns out that there is essentially one complex expression for e^{ix} that satisfies these existing rules.  (Sure we could express it with integer multiples of 2π added to the cos and sin arguments, but that is a trivial extension and only serves to confuse.)  So, just like how we sometimes integrate functions, where we guess at an anti-derivative and then check our guess by differentiating it and comparing to the function we are trying to integrate, we can similarly make a judicious guess at the explicit expressions for the real and imaginary parts of e^{ix} and then check to see if our guess satisfies the above stated properties of the base-e exponential function.
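The exponent laws listed above do carry over to complex arguments in practice; here is a quick numerical illustration with Python's standard cmath module (the second law is checked only for an integer outer exponent, since fractional powers involve branch cuts):

```python
import cmath

a = 0.3 + 1.2j
b = -1.1 + 0.5j

# e^{a+b} = e^a * e^b for complex a, b
assert abs(cmath.exp(a + b) - cmath.exp(a) * cmath.exp(b)) < 1e-12

# e^{2a} = (e^a)^2 -- a safe case of e^{xy} = (e^x)^y with integer y
assert abs(cmath.exp(2 * a) - cmath.exp(a) ** 2) < 1e-12
```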


 * I was saying that if we assume that e^{ix} behaves like the real e^x (which it does not), then we'll have a problem in that particular proof. But, as you said, the objective in the proof is just whether or not e^{ix} is differentiable and that it doesn't create bad behavior (division by zero). If we assume e^{ix} to behave like e^x (its real counterpart), then you're safe for x=finitenumber, but not when x=+/- infinity.


 * So far, no one but you is making x = +/- infinity an issue. Of course we know (after we get an expression for e^{ix}) that it doesn't converge for real x as x grows without bound.  Neither do the functions cos(x) and sin(x).  Big deal.  It is still non sequitur.


 * If we don't assume that, then how can the phrase "e^{ix} x e^{-ix} = 1" help with anything (i.e. explaining the division by zero part)?


 * Check that the expansions of e^x, sin(x) or cos(x) involve only polynomials with integer exponents plus some constant.


 * No, they're infinite series. A polynomial is of finite order.  Nonetheless, you are sorta being the pot that calls the kettle "black" when you are importing all of these facts about e^x, cos(x), and sin(x) from the real domain, you import rules about what we can do with that pesky i from the real domain (but for some reason object to doing that in the proof you do not approve of).  If you can do all of this manipulation of i that you do in the Taylor expansion proof, why can't I (or the person who originally plopped that proof here) do the same extensions of mathematical fact in the latter proofs?


 * Yes, they're infinite series, yet of polynomial form, loosely speaking. That's an incomplete sentence, BTW.


 * Edit: Seems like my browser has a problem. OK, if you define the manipulation of i in e^{x} just like any real scalar constants, then you'll run into problem. (See my answer in previous paragraph).


 * BTW, it's Maclaurin series. And Maclaurin series is a special case of Taylor series. Since Taylor series expands an expression, it's often called "Taylor expansion".


 * Yeah, yoose ta be i cudn't even spel "enjunear", now i are won. Also, "Maclaurin series" is the precise term since the constant offset to x is zero for the series for e^x, cos(x), and sin(x) that are used in the first proof.


 * After scouring a bit from the web, I saw this proof: http://www.bbc.co.uk/dna/h2g2/A346295 This is valid because it doesn't presume e^{ix} = cos x + i sin x anywhere in the proof.


 * And neither does the proof you object to. Why do you repeat this red herring?


 * The proof currently shown in the main page does have a problem. You don't accept that it's a problem, yet you don't explain the connection from the definition to the phrase that you claim explains the non-zero part (i.e. "e^{ix} x e^{-ix} = 1"). To me, it does not explain anything. It hints toward circular reasoning.


 * So, yes, this is the first time I saw Euler's formula proven by differentiation only. Let me quote it real quick here Robbyjo (talk) 04:35, 22 December 2007 (UTC)

y = cos x + i sin x. Continuing to treat i like any other number, we have, by differentiation:
dy/dx = -sin x + i cos x = i(cos x + i sin x)
=> dy/dx = iy
=> i dx/dy = 1/y
=> ix = ln y + c
But when x = 0, y = 1. So c = 0.
=> ix = ln y
=> y = e^{ix}
So cos x + i sin x = e^{ix}


 * The existing proofs are just as valid and you are wrong about your objections to them. 207.190.198.130 (talk) 05:36, 22 December 2007 (UTC)


 * No it's not. My objection still stands about the +/- infinity. The proof I show above managed to sidestep the infinity case since sin(x) and cos(x) are bounded between +/-1, whereas the behavior of e^{ix} (esp. at +/- infinity) was not really known prior to Euler. So, to use a construct with e^{ix} to prove Euler's formula runs risk at those boundary cases.


 * I don't see any issue about x = +/- infinity that needs to be sidestepped. The proof you don't like doesn't need to settle issues about the value of eix for real and infinite x.  You've stated that it does need to establish some behavior for real and infinite x, but I'm not sure you stated what that required behavior is, nor why such conditions are needed.


 * I believe I've shown my case clear enough. I don't want to argue any further. Robbyjo (talk) 06:42, 22 December 2007 (UTC)


 * I understand you don't want to argue further and you don't need to, if you don't want. I will respond to a couple things, because I believe that it will outline  the net differences in POV, and also the net difference in what is salient.  Rather than intersperse comments, I'll copy:


 * ... After scouring a bit from the web, I saw this proof. This is valid because it doesn't presume e^{ix} = cos x + i sin x anywhere in the proof.


 * And neither does the proof you object to. Why do you repeat this red herring?


 * The proof currently shown in the main page does have a problem


 * But the alleged problem is not that it defines e^{ix} = cos(x) + i sin(x) before showing that e^{ix} = cos(x) + i sin(x). We agree that if it did do that, it would be circular reasoning and that the proof would be invalid.  Where we don't agree is that the disputed proof actually does that and makes such a definition or assumption.


 * ... You don't accept that it's a problem, yet you don't explain the connection from the definition to the phrase that you claim explains the non-zero part (i.e. "e^{ix} x e^{-ix} = 1"). To me, it does not explain anything. It hints toward circular reasoning.


 * Do you accept that if a and b are numbers, and if a x b = 1, then neither a nor b can be 0? 207.190.198.130 (talk) 02:35, 23 December 2007 (UTC)


 * Just one very quick answer: You're effectively saying that if a number or a function has an inverse, then it can never be zero. It's untrue. Check with e^x. When x = -infinity, e^-infinity = 0. Yet e^x has an inverse, which is e^-x. So, I can say e^x x e^-x = 1, but can I guarantee that e^x is never zero? No. That's why I said "e^{ix} x e^{-ix} = 1" does absolutely nothing in showing that it'll never be zero, unless, of course, you already implicitly presume e^{ix} = cos(x) + i sin(x). I hope you can now see my point of view. Robbyjo (talk) 09:00, 23 December 2007 (UTC)


 * Infinity is not a number. All sorts of functions have definition and work without having their behavior for unbounded arguments nailed down.  Again, sin and cos are such functions.  For any number x, there is an additive inverse -x (often called the "negative") so that x + (-x) = 0.  For such a number there is the exponential mapping e^x.  Can that number be zero?  Perhaps you cannot guarantee it, but I can guarantee that e^x cannot be zero.  This +/- infinity crap is a red herring.  A distraction.  Not in a single place have you succeeded in showing that it has to be dealt with, either to define the exponential function, or to explore the properties of such a function, or (in the final analysis) how to extend the function and meaning of such a function from reals (finite reals) to imaginary argument (and then to complex).  Robbyjo, you've failed.  Your argument does not persuade.  And, I think that you might be finding this out, it failed not because we are dummies who just can't grasp what you're saying. 207.190.198.130 (talk) 21:44, 23 December 2007 (UTC)


 * Also see the first bullet at Picard_theorem.
 * --Bob K (talk) 02:43, 27 December 2007 (UTC)


 * I dunno what the Picard theorem is, but it seems to already have a notion of e^z for complex z. Does it already know of or use the results of Euler?  If so, then because Picard says it, doesn't help prove it for Euler without being circular.  Doesn't change the issue with me, though.  If we accept the manipulations of imaginary and complex numbers that are done in the Maclaurin series proofs (you know, where we treat i as any other constant, but with the additional knowledge that i^2 = -1), and if we accept that e^x, cos(x), and sin(x) have the Maclaurin series they do (which comes from their properties of derivatives), we can jump over the intermediate results of the Maclaurin series, take the very same properties of e^x, cos(x), and sin(x), the very same extension of use of i as any old constant except with the key property that i^2 = -1, take all those together and derive Euler's formula.  I was the one who added the diff eq. proof (shhh! Bob, don't tell anyone, nasty admins will come after me), but was impressed with the simplicity of the "by calculus" proof that was supplied by someone else.  And, despite Robbyjo's objections, the proof is sufficient.  It begins with the same axioms about i that the Maclaurin series proof does and uses the same properties of e^x, cos(x), and sin(x) that are used to get the Maclaurin series of each.  It just skips over the Maclaurin series as an intermediate result.  And the behavior of either of these three functions at infinity is simply not an issue.  These functions have properties and derivatives without considering what they may do for unbounded argument.  I have no idea what Robbyjo is thinking that makes him/her feel that such an issue is important.
 * And where he/she says: "Euler's formula is supposed to be valid for +/- infinity", I have absolutely no idea what meaning or salience that has. None of the constituent functions is even well defined at +/- infinity, although for real x, the limit of e^x as x goes to -inf is, of course, zero.  But the rest of us know that e^x itself never gets to zero for any real x.  And for complex or imaginary x, we know there is the negative -x that exists, and e^x e^{-x} = e^0 = 1.  That is true because that is axiomatically what exponential functions do with their exponents, and before Euler, we know that, even for complex or imaginary x, there exists its additive inverse, -x.  That's why we know, even before we figure out that e^{ix} = cos(x) + i sin(x), that whatever e^{ix} is, it ain't zero.  And before Euler, we figgered out how to divide by complex numbers, and we know we can do it if either the real part or the imaginary part is non-zero.  It's a complete proof and just as valid as the Maclaurin series proof. Later, Bob.  (BTW, at first I thought you were wrong in saying that 0 is an imaginary number, but now I'm not so sure.  The textbooks don't help.  Are you sure, as a matter of definition, that even zero, which we know is real, can also be imaginary?) 207.190.198.130 (talk) 08:35, 27 December 2007 (UTC)
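 * The modulus claim above is easy to spot-check numerically. A minimal Python sketch (an illustration of the point, not part of the original discussion) confirming that |e^{ix}| = 1 for real x, so e^{ix} can never reach zero, and that e^{ix} e^{-ix} = 1 as the exponent rule requires:

```python
import cmath
import math

# For a sample of real x, check that e^{ix} has modulus exactly 1
# (up to floating-point error) and that e^{ix} * e^{-ix} = 1.
for x in [0.0, 1.0, math.pi / 2, math.pi, 10.0]:
    z = cmath.exp(1j * x)
    assert abs(abs(z) - 1.0) < 1e-12      # |e^{ix}| = 1, hence nonzero
    assert abs(z * cmath.exp(-1j * x) - 1.0) < 1e-12  # e^{ix} e^{-ix} = 1
```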


 * Hi. I assume you are talking about Imaginary number.  And, no, I am not sure.  That claim was already made before I came along.  (see 17-Nov-07)  I made an edit that contradicted the claim, and it was questioned here.  So I revised my edit.  All I can say is the obvious... that 0 is the only number on both the real and imaginary axes, so the claim seems quite reasonable to me, and it avoids a seemingly unnecessary discontinuity in the imaginary number line.  What's not to like?
 * Every number line passing through 0 is closed under addition, because 0 is the additive identity. For example, complex numbers of the form r+ri, where r is real, all lie on the same number line, e.g., (1+i)+(6+6i)=7+7i. For another example, complex numbers of the form r-3ri also lie on the same line passing through 0, e.g., (5-15i)+(7-21i)=12-36i. The imaginary number line passes through 0, and thus the imaginaries must be closed under addition, and 0 must be imaginary. 96.229.217.189 (talk) 17:39, 21 February 2012 (UTC) Michael Ejercito
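 * The closure argument above can be checked in a few lines of Python (an illustrative sketch using the same example numbers as the comment):

```python
# Points of the form r + r*i lie on one line through 0; their sum stays on it.
a = 1 + 1j
b = 6 + 6j
s = a + b
assert s == 7 + 7j
assert s.real == s.imag  # still of the form r + r*i

# The purely imaginary axis is likewise closed under addition: i + (-i) = 0,
# so 0 sits on the imaginary line as its additive identity.
assert (1j + (-1j)) == 0
```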
 * I don't know what the Picard theorem is either. I just happened to stumble across it while reading Complex argument (continued fraction), and I decided to link it here in case it helps.
 * --Bob K (talk) 13:41, 27 December 2007 (UTC)

All calculus proofs should be deleted
The problem with the calculus proofs IMO is, there are two questions that need to be addressed in this article: (1) What does it mean to raise e (or any number) to a complex power? (2) Why is Euler's formula true? We can't answer (2) until we've answered (1), and this article is obligated to fill in the whole gap from (1) to (2) (at least sketchily), because this is probably the first thing that anyone would learn about complex exponentiation. Both of the calculus proofs currently in the article, and the various ones on the talk page and article history, all assume that it's perfectly obvious that the complex exponential function should satisfy the same calculus identities as the real exponential function. But it's not obvious at all, given the definitions that we've supplied in (1). I think it encourages sloppy thinking to apply complex derivatives to the complex exponential function as if it is exactly the same as applying real derivatives to the real exponential function.

Therefore I suggest deleting the calculus proofs. What do other people think? :) --Steve (talk) 21:31, 24 March 2011 (UTC)


 * Diametric opposition.


 * (1) What does it mean to raise e (or any number) to a complex power?


 * $$ e^{x+iy} = e^x \ e^{iy} $$


 * So now it's a question of what it means to raise e to an imaginary power. This is fundamentally what Euler's formula is about.  We surmise (not quite the same as derive, but this is better than "at least sketchily") that when y is 0, then e^{iy} must degenerate to 1.  And we treat i as a constant, albeit an imaginary constant (and we keep in mind that i^2 = -1, just as we must for the Maclaurin series proof).  We also surmise that


 * $$ e^{i(u+v)} = e^{iu} \ e^{iv} $$


 * Now a completely algebraic proof can be constructed from these facts and from knowledge of the trigonometric sum of angle formulae:


 * $$ \cos(u+v) \ = \ \cos(u)\cos(v) \ - \ \sin(u)\sin(v) $$
 * $$ \sin(u+v) \ = \ \cos(u)\sin(v) \ + \ \sin(u)\cos(v) $$


 * but that proof is more difficult than the calculus proof. It turns out that, in order to derive the derivatives of sin and cos, we require these trig identities anyway; but if the reader is happy to accept that the derivative of sin is cos and the derivative of cos is -sin, then we can start from the fundamental meaning of the exponential, that is:


 * $$ a^{b+c} = a^b \ a^{c} $$


 * and the fundamental meaning of e (which comes from calculus):


 * $$ \frac{d}{dx} e^{x} = A \ e^{x} $$


 * where A=1 (no other exponential base can make that claim).  Given other well-known rules of freshman-level calculus (like the chain rule), we then surmise that the only meaningful derivative of the base-e exponential with an imaginary argument must satisfy:


 * $$ \frac{d}{dy} e^{iy} = i \ e^{iy} $$


 * From that we come up with the only meaningful and consistent (with the rest of the mathematical universe) identity for e^{iy}, which essentially answers your question (2).


 * Now, Steve, this article should serve the purposes of persons that haven't taken an Advanced Calculus or Real Analysis course where we get really anal about the meaning of limits and derivatives. These would be students (or graduates) of science and engineering that are not math majors (or graduates).  We should not make this article into one that would serve only the purposes and interests of math majors, math grad students, and their professors.


 * 71.169.188.170 (talk) 21:13, 25 March 2011 (UTC)
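 * The characterization above, d/dy e^{iy} = i e^{iy} with e^{i·0} = 1, can be illustrated numerically. A rough Python sketch (forward Euler integration; the step count is my own arbitrary choice) steps the ODE from f(0) = 1 and lands near cos(y) + i sin(y):

```python
import cmath
import math

# Integrate f'(y) = i*f(y), f(0) = 1, by crude forward Euler steps,
# and compare the result at y = pi/2 with cos(y) + i*sin(y) = i.
n = 100000
h = (math.pi / 2) / n
f = 1 + 0j
for _ in range(n):
    f += h * (1j * f)           # one Euler step of f' = i*f
target = complex(math.cos(math.pi / 2), math.sin(math.pi / 2))  # approx. i
assert abs(f - target) < 1e-3   # first-order method, so only a loose bound
```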


 * In cases of disagreement like this we should just fall back to Wikipedia policy and get a citation for any proofs that are included. So I'll stick citation needed on those three proofs. Let's see if anyone can provide a citation which looks reasonably similar to any of them, and anything that doesn't get a citation within a couple of weeks should just be deleted. Dmcq (talk) 23:04, 25 March 2011 (UTC)


 * My question is: Are there readers who can understand differential equations involving complex-valued functions, but cannot understand the Taylor series definitions of sin, cos, and e^x? If so, who? (What field and what stage in the education?)
 * It seems to me, you only need to know simple algebra to understand the Taylor series definitions. (It's not so easy to derive these Taylor series, and not so easy to determine whether they converge, but it is very very easy to understand what the symbols mean.) In my own grade-school education I learned algebra first, and calculus second. So I would have been able to follow the Taylor-series proof many years before I could follow the differential-equation proof. But I guess other people's education may be different. Can you help me understand who the audience is for the differential-equation proof? :-) --Steve (talk) 02:42, 26 March 2011 (UTC)
 * What you should be worried about are the readers who remember the basic rules of differentiation in calculus, but are more foggy regarding the Maclaurin series for e^x, cos x, and sin x. The power series proof requires more of a conceptual assumption (regarding those Maclaurin series) and a bigger step in dealing with powers of i higher than i^2.  All of our educations are different, but Steve, you seem to want to eliminate all of the proofs other than the most abstruse one.
 * The original diff eq proof was one with a simple second-order diff eq that was significantly different from the diff eq proof we have now. The current diff eq proof seems to say little that is different from the calculus proof.  Should we replace the present diff eq proof with that earlier one? 70.109.189.158 (talk) 22:13, 19 April 2011 (UTC)
 * I know some people start with the series definition for the exponential function, but then it sort of appears out of thin air. For many people the original definition as a limit when calculating compound interest, or the one using a differential equation to express that it grows according to its size, are the easiest. Differential equations are introduced early in the curriculum in some places, and I think it is quite right to do so. Dmcq (talk) 11:02, 26 March 2011 (UTC)

IMO, Wikipedians work unnecessarily hard at compromising between different disciplines (EE, math, physics, etc) and between different levels of education, and all we end up with is a compromise... optimum for nobody. That mindset is better suited for a space-limited, hard-cover encyclopedia. Seems to me that with virtually unlimited space and internal linkage, there ought to be a better result. --Bob K (talk) 15:30, 26 March 2011 (UTC)


 * I reworded the calculus proofs to make it clearer what's going on. For example, I said "it turns out" that (d/dz)e^z = e^z for complex z -- it's plausible and it's true, but it's not proven, at least not in this article. Then later I called it a "starting assumption". With those changes, I don't object to these (so-called) proofs anymore, but it's still worth adding citations of course. --Steve (talk) 01:55, 8 May 2011 (UTC)
 * "...but it's not proven, at least not in this article." It is proven in every manner that the Maclaurin series proof is. Both use the concept of analytic continuation of the rules of algebra (from which the rules of calculus are derived) from the reals to the complex domain.  And both depend on the i^2 = -1 axiom.  That's it.  Given that i is a constant where i^2 = -1, and that we're extending the rules of algebra (and then calculus) to the complex domain with i as that constant, either proof is proven.  To claim that the power series proof is proven from these axioms, yet the two proofs based on the properties of the natural exponential are not, is just silly.  You guys have been consistently mistaken about that.  It really shows a personal preference toward the power series proof (as the "only" proof) and is hardly NPOV. 70.109.181.192 (talk) 01:37, 9 May 2011 (UTC)
 * The step that I'm specifically concerned about is
 * "(d/dx)e^x = e^x for real x; therefore, (d/dz)e^z = e^z for complex z".
 * This is not "the rules of algebra" or "the rules of calculus"; this is an extrapolation of a specific property of a specific function from one domain to another. This kind of extrapolation does not always work. For example, "(d/dx)e^(x*) = (e^x)*" is true for real x, but "(d/dz)e^(z*) = (e^z)*" is false for complex z (* is complex conjugate). For another example, "sqrt(xy)=sqrt(x)sqrt(y)" is true for real positive x and y, but false when x and y are complex or negative. Therefore this is a nontrivial and specific property of the complex exponential function, and we shouldn't just state it without proof in a section called "proofs". :-)
 * On the other hand, I am not objecting to assertions like "we can use the chain rule for complex derivatives" or "The complex derivative of f(x)=i*x is i" or "i*i=-1". Those are fine. They are not specifically about the complex exponential function, therefore they are outside the scope of this article, sort of "general facts we can expect people to know or look up". (Moreover, they're proven more-or-less the same way for real vs complex numbers.) By contrast, we cannot assume that people know any facts about the complex exponential function, except for facts stated and justified in this article, because this is the first article that most people read about the complex exponential function.
 * Do you understand this distinction I'm trying to make? What are your thoughts? :-) --Steve (talk) 02:55, 9 May 2011 (UTC)
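 * The sqrt counterexample above is concrete enough to verify in Python (illustrative only; note that `cmath.sqrt` returns the principal branch):

```python
import cmath

# sqrt(xy) = sqrt(x)*sqrt(y) holds for positive reals but fails for negatives:
lhs = cmath.sqrt((-1) * (-1))          # sqrt(1)  =  1
rhs = cmath.sqrt(-1) * cmath.sqrt(-1)  # i * i    = -1
assert lhs == 1
assert rhs == -1
assert lhs != rhs  # the "obvious" real-number identity does not extrapolate
```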
 * That's why these things should be sourced. Math is nice, but it's easy to make logical errors, and only proofs and derivations that are vetted in a reliable source should be reported.  In this case, the key property comes from the idea of "analytic extension".  If a function is found for which the real derivative extends this way to the complex derivative, then the function is analytic, and there are things we can do from there.  The complex conjugate operation is not an analytic function, but exp can be extended as shown.  The logic of that is not at all clear in the article, which just says "as it turns out".  Dicklyon (talk) 06:16, 9 May 2011 (UTC)
 * Agree a source is always best. That step in the article is confused because the starting point is not well defined: it does not say how the exponential function is characterized, and therefore it is hard to show that the extension of the characterization to complex numbers is consistent. If you start with the exponential function being defined by the differential equation and an initial value, for instance, then the equation would hold for the complex one by definition, and one would have to show it defines a reasonable complex function. Dmcq (talk) 09:14, 9 May 2011 (UTC)
 * I went looking for books that give the calculus proof, but instead I found a proof starting from the "limit definition" (1+z/n)^n. I put that one in, I like it! Not all the details are in the source--some are left to exercises--but hopefully I got everything OK. If anyone finds a more explicit source they should add it and rewrite anything that differs. I tried to write it to assume as little as possible: in particular, I didn't use big-O notation or Taylor series. --Steve (talk) 19:51, 9 May 2011 (UTC)
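 * The limit definition can also be watched converging numerically; a short Python sketch (the sample values of n are my own arbitrary choice), taking z = iπ so the limit should be e^{iπ} = -1:

```python
import math

# (1 + i*pi/n)^n should approach e^{i*pi} = -1 as n grows.
errs = []
for n in [10, 100, 10000, 1000000]:
    w = (1 + 1j * math.pi / n) ** n
    errs.append(abs(w - (-1)))
assert errs == sorted(errs, reverse=True)  # error shrinks as n grows
assert errs[-1] < 1e-3                     # close to -1 for large n
```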

All calculus proofs should be deleted? Are you serious? If you say it doesn't explain "(1) What does it mean to raise e (or any number) to a complex power?", I would say: the Taylor series is also based on calculus. The nth term of the Taylor series is calculated using the nth derivative of e^x. And it is assumed that the complex-valued exponential function satisfies the same differential equation as the real-valued one. (Wisnuops (talk) 06:25, 16 January 2012 (UTC))


 * Articles should not normally include proofs; readers should be directed to references for those, unless the proof is particularly short or notable. And even for notable ones that are long, just the main points should be outlined. If there isn't even a citation for a proof then notability hasn't been demonstrated. I don't believe the article needs all those proofs. A case could probably be made in this instance for including a proof, but citations are definitely needed. I thought I'd stuck cn on them before but obviously not, so I'll do that and remove them if none is provided soon. Dmcq (talk) 13:27, 16 January 2012 (UTC)
 * Removed the last one which had no citation. Are all three remaining ones interesting? Dmcq (talk) 13:32, 16 January 2012 (UTC)
 * I agree with your removal of the last calculus-related proof; it was redundant (rather similar to the previous one) and comparatively obscure (requiring deeper results such as uniqueness of solutions, for which it referred one to another more involved proof). As to the remaining three, my preference would be to keep at least the first (Using power series) and the third (Using calculus). The latter was new to me, and struck me as pretty neat.  They all require assuming "familiar" results.  Having such readily understandable proofs allows one to quickly form a sense of solidity.  While a proof based on the limit definition would be good because this definition is so universal, I find that one (Using the limit definition) unduly clumsy and hence of little value.  Disclaimer: I'm commenting as a reader, not as someone seeking to apply the guidelines. — Quondum☏✎ 14:21, 16 January 2012 (UTC)


 * Wisnuops, take a look at the Taylor series proof in the article. It goes: (A) The Taylor series of real e^x is 1+x+x^2/2+.... (B) Let us define e^z for complex z as e^z=1+z+z^2/2+.... (C) The Taylor series of real sin x and real cos x is.... (D) Therefore, e^it=cos t + i sin t. Parts (A) and (C) involve calculus (of real variables) to prove, but you can understand them without any calculus, and moreover the proofs of (A) and (C) are off-topic for this article. So as far as this article is concerned, there is no calculus involved whatsoever. Certainly, there is no requirement to derive, using complex-variable calculus, the Taylor series of complex e^z, because we are starting with the Taylor series and defining it as e^z. --Steve (talk) 14:46, 16 January 2012 (UTC)
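 * Steps (B) and (D) above can be spot-checked numerically: a small Python sketch (the truncation at 39 terms is an arbitrary choice) summing the series 1 + z + z^2/2! + ... at z = it and comparing with cos t + i sin t:

```python
import math

# Partial sum of e^z = sum z^k / k! at z = i*t, built incrementally so no
# factorials overflow: each new term is the previous one times z/k.
t = 1.234
z = 1j * t
term = 1 + 0j   # the k = 0 term
total = 0 + 0j
for k in range(1, 40):
    total += term
    term *= z / k
# Compare with cos t + i sin t, per Euler's formula.
assert abs(total - complex(math.cos(t), math.sin(t))) < 1e-12
```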
 * Quondum, I'm happy to replace the limit proof by a short summary...curious souls can figure out from the animation how and why it works, or if not they can read the reference. --Steve (talk) 14:54, 16 January 2012 (UTC) UPDATE: I just tried shortening this proof. --Steve (talk) 18:29, 16 January 2012 (UTC)
 * The shortening is in my view a definite improvement from a readability perspective: one can look at it and pretty rapidly get a sense of why it works. I'm not sure whether $x = π$ is the ideal choice for the illustration, though I'm aware that it is simply what was available as a GIF. — Quondum☏✎ 19:08, 16 January 2012 (UTC)