Talk:Derivative of the exponential map

Put in a summary BCH formula collapsible box
At the risk of another chef spoiling the broth, I furtively stuck in an outline proof of the popular formal expression for BCH in a box. You'd probably want to move it, or turn it into a subsection, or... In any case, such would belong here, and not burden the BCH article. If it stays, it could be wikilinked from BCH. Cuzkatzimhut (talk) 11:09, 19 October 2014 (UTC)
 * Excellent idea. I was thinking about putting the proof over at BCH, but you are right, it fits much better here. I threw in the actual integral formula as well. One of the "easily seen" (the latter) might not be that transparent to the reader. I'll put in a hint. YohanN7 (talk) 14:26, 19 October 2014 (UTC)

Would it be appropriate to include an outline of the proof of Dynkin's formula too? Start with
 * $$e^{Z(t)}=e^{tX} e^{tY},$$

obtain an expression with two terms for $dZ(t)/dt$ (similar to the one in the integral formula proof). Then use a trick,
 * $$-\mathrm{ad}_Z = \log(\exp(-\mathrm{ad}_Z)) = \log(1 + (\exp(-\mathrm{ad}_Z) - 1)),$$

and the power series for $\log$ to get rid of the fractions. Finally expand $\exp$ and integrate term by term. I think five formulas (ansatz, expression for $dZ(t)/dt$, trick, formula before expansion of $\exp$ but including expansion of $\log$, final result (the monster)) should suffice. YohanN7 (talk) 22:33, 19 October 2014 (UTC)
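As an aside for checking the ansatz: for strictly upper-triangular 3×3 matrices any product of three factors vanishes, so $\exp$ is the exact polynomial $1 + A + A^2/2$ and the BCH series terminates after the first bracket, $Z = X + Y + \tfrac{1}{2}[X,Y]$ exactly. A minimal numpy sketch (all names mine, not from the article):

```python
import numpy as np

# Strictly upper-triangular 3x3 matrices: any product of three of them
# vanishes, so exp truncates to I + A + A^2/2 and the BCH series stops
# at the first bracket: Z = X + Y + (1/2)[X, Y] exactly.
X = np.array([[0., 1., 2.], [0., 0., 3.], [0., 0., 0.]])
Y = np.array([[0., 4., 5.], [0., 0., 6.], [0., 0., 0.]])

def expm_nil(A):
    """exp(A) for A with A^3 = 0: the exponential series truncates."""
    return np.eye(3) + A + A @ A / 2.0

Z = X + Y + 0.5 * (X @ Y - Y @ X)   # BCH; all higher brackets vanish
lhs = expm_nil(Z)                    # e^Z
rhs = expm_nil(X) @ expm_nil(Y)      # e^X e^Y
print(np.allclose(lhs, rhs))         # → True
```

In this nilpotent case the identity $e^Z = e^X e^Y$ holds to machine precision, with no truncation error to argue about.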


 * Just came back from travel. Yes, if you want to do that, concisely, in a separate subsection? (e.g. 3.2?) I suspect the above might merit a short subsection, (3.1) too. Cuzkatzimhut (talk) 00:17, 20 October 2014 (UTC)


 * Sounds fine. I'll get to it in a day or two. YohanN7 (talk) 00:30, 20 October 2014 (UTC)
 * Perhaps there are other applications that are better suited for inclusion. The Dynkin proof, while it wouldn't hurt, would add little new to the article. YohanN7 (talk) 01:05, 20 October 2014 (UTC)
 * Do you want a couple of condensed paragraphs by me? I fear they would be too elliptical. On a completely (perhaps not?) different front, I noticed the AIP steers you to the PU  [Mudd library] for rights to VB's pic. Cuzkatzimhut (talk) 13:51, 27 October 2014 (UTC)
 * Paragraphs (whether condensed or not), of course, are welcome. Please also have a look in my sandbox, where you can see what I'm up to. I haven't been very productive the last few days, but hope to get the proposed section done in reasonable time. YohanN7 (talk) 14:32, 27 October 2014 (UTC)
 * Mudd contacted. YohanN7 (talk) 14:45, 27 October 2014 (UTC)
 * One problem with the section I'm working on is that it isn't entirely general. Sternberg uses matrix Lie groups as well, probably to get simpler formulas for pushforwards and pullbacks. YohanN7 (talk) 14:57, 27 October 2014 (UTC)


 * I put something in, along your suggestion, and the "words" ending in Y are straightforward, while the words ending in X need to be massaged a little. However, I only just became aware of Bose's 1989 JMP article, where he indicates Dynkin's method is far more abstract.... echt math! He just writes the "words" of the power series expansion of log(exp X exp Y) and then maps collectively all these words of the free associative algebra to the corresponding words of the free Lie algebra in them, through his (& Specht's & Wever's) inspired map... So a combinatoric proof in this Draft article here is neither simpler nor barking up the wrong tree... I was going to delete these paragraphs (Dynkin subsection), but am leaving them there for you to delete, in case there is a useful chunk for something else in there... I feel really silly, but I was just uninformed... Wisdom begins in the end. In any case, there is mention of translations in Bose's references, E. B. Dynkin, Am. Math. Soc. Transl. 9, 470 (1950); E. B. Dynkin, Math. Rev. 11, 80 (1949); E. B. Dynkin, Math. Rev. 9, 132 (1947), none of which I've been able to access, or even find online... Cuzkatzimhut (talk) 15:34, 28 October 2014 (UTC) However, one can read the whole paper in Google Books. Cuzkatzimhut (talk) 17:40, 28 October 2014 (UTC)


 * (EC) Dynkin's original proof may have been more abstract, but this proof is, as far as I can see, waterproof (provided one stays inside the radii of convergence). Rossmann manipulates
 * $$Z'=  \frac{\mathrm{ad}_{Z}}   {1 - e^{-\mathrm{ad}_{Z} }   }  ~ ( e^{-t \mathrm{ad}_{Y}}X + Y) \text{ (is there a bug here?)}$$
 * to become
 * $$Z'=  \frac{-\mathrm{ad}_{Z}}   {1 - e^{\mathrm{ad}_{Z} }   }  ~ (X + e^{t \mathrm{ad}_{Z}}Y) \text{ (this one I have verified before)},$$
 * uses the trick, and then arrives at a formula similar to your next-to-last, expands it in a series of brackets (of brackets), which he finally integrates to get the final result. (Dynkin's original proof is also outlined as a problem in Rossmann's book.)
 * I can modify the proof so that it agrees with Rossmann's if that sounds good to you. YohanN7 (talk) 17:55, 28 October 2014 (UTC)


 * Sure, I don't mind, if you wish. It is just a huffer and a puffer, when Dynkin's proof is true almost by inspection, given his map; whereas here you have to work and rearrange at the bottom step, in this form for Y. I can tell an expert mathematician would roll his eyes at the overkill... But it's good finger exercises....
 * There is no bug anywhere; the derivative of $e^Z$ is $$\frac{d}{dt}e^{Z} = e^{tX}e^{tY}Y + e^{tX}Xe^{tY} = e^{Z}Y + e^{Z}e^{-tY}Xe^{tY} = e^{Z}(Y + e^{-t\,\mathrm{ad}_Y}X),$$ so
 * $$Z'=  \frac{\mathrm{ad}_{Z}}   {1 - e^{-\mathrm{ad}_{Z} }   }  ~ ( e^{-t \mathrm{ad}_{Y}}X + Y) =   \frac{\mathrm{ad}_{Z}}   {1 - e^{-\mathrm{ad}_{Z} }   }  ~ e^{- \mathrm{ad}_{Z}}(X +   e^{ \mathrm{ad}_{Z}}  Y)  $$

Cuzkatzimhut (talk) 18:17, 28 October 2014 (UTC)
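The no-bug claim is easy to spot-check numerically without ever computing $Z$ itself, since $e^{Z} = e^{tX}e^{tY}$ and $e^{-Z} = e^{-tY}e^{-tX}$, and $e^{\mathrm{ad}_Z}$ acts on matrices by conjugation. A numpy sketch (the helper `expm_series` and all names are mine):

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential by its Taylor series (adequate for small norms)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(1)
X = 0.1 * rng.standard_normal((3, 3))
Y = 0.1 * rng.standard_normal((3, 3))
t = 0.7

eZ  = expm_series(t * X) @ expm_series(t * Y)    # e^{Z} = e^{tX} e^{tY}
eZi = expm_series(-t * Y) @ expm_series(-t * X)  # its inverse, e^{-Z}

# e^{-ad_Z} X = e^{-Z} X e^{Z} should equal e^{-t ad_Y} X, because
# conjugation by e^{-tX} leaves X fixed (ad_X X = 0).
lhs = eZi @ X @ eZ
rhs = expm_series(-t * Y) @ X @ expm_series(t * Y)
print(np.allclose(lhs, rhs))   # → True
```

This is exactly the step $e^{-\mathrm{ad}_Z}X = e^{-t\,\mathrm{ad}_Y}X$ used in the displayed identity above.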
 * Ah, ok. But simpler is to write $\frac{d}{dt}e^Z = e^Z Y + X e^Z$ and arrive at the same thing. Rossmann's version pops out after the RHS is multiplied and divided by $e^{\mathrm{ad}_Z}$. YohanN7 (talk) 20:28, 28 October 2014 (UTC)


 * Sure, they are all the same... given $e^{\mathrm{ad}_Z} = e^{t\,\mathrm{ad}_X} e^{t\,\mathrm{ad}_Y}$ and its inverse, $e^{-\mathrm{ad}_Z} = e^{-t\,\mathrm{ad}_Y} e^{-t\,\mathrm{ad}_X}$, all these expressions are trivially connectible to each other. But my point is that these types of proof are "just so" manipulative proofs, whereas Dynkin's proof does a college-level transformation to get a series in the free associative (universal enveloping) algebra, and then "magically" recasts it into the Lie algebra via his map, once it is known (which it is) that it is really in the Lie algebra. Dynkin's formula is totally dysfunctional computationally, cf. the numerous papers trying to use it to compute explicit terms, but, mathematically, it is sparse and elegant, which, I suspect, is the reason mathematicians love it, not because it is practical. By contrast, what we discuss here is neat computationally and geometrically, but a math snob would wonder why we'd strain so much to derive something so obvious. Personally, I learn something from seeing this, as it is yet another trick in my bag, but.... Ah, never mind me... Cuzkatzimhut (talk) 23:11, 28 October 2014 (UTC)
 * One minor point (but still a point) about the Rossmann proof (he attributes it to Duistermaat & Kolk, Lie Groups (1988)) of Dynkin's formula is that it dispels the air of mystery surrounding the formula. It is relatively short and can be found by elementary means. This should be seen in light of all the able mathematicians who sought it. Also, the explicit formula (useless or not) didn't arrive until 1947 (with an elegant and non-trivial proof). But $dexp$ and a little manipulation of known series expansions is all that is needed. I think this is remarkable. YohanN7 (talk) 01:21, 29 October 2014 (UTC)
 * Agreed, if by "elementary means" you mean without the benefit of the heavy machinery (Friedrichs' theorem) ensuring Z is in the Lie algebra. Indeed, Dynkin's application of the mind-boggling Dynkin-Specht-Wever map requires that one knows that already, otherwise the rewriting of the universal enveloping algebra elements won't work. Here, by contrast, the fact that Z is in the Lie algebra is manifest. So, yes, the section is of some use. (On a tangent, however, I could not resist defacing the Dynkin bio wiki by inserting a presently red-link entry on the Dynkin-Specht-Wever lemma, which clearly ought to be written, but not by me... No time, as usual...) Cuzkatzimhut (talk) 14:53, 29 October 2014 (UTC)
 * To let you know, I am working on a collapse box, "The gory details", intended for the proof of Dynkin's formula, User:YohanN7/sandbox. Actually, not that gory, and hopefully it will be reasonably short. The pace of development is slow, though. One equation per day is what I have time for now. YohanN7 (talk) 14:50, 31 October 2014 (UTC)
 * Thanks, a worthwhile task; the first term on the r.h. side in your section is missing the rightmost argument X operated upon by the adjoint ops. Cuzkatzimhut (talk) 15:06, 31 October 2014 (UTC) Nice Hidebox stuff. I would conclude with, instead of "note the striking similarity...", "The striking similarity with (99) is not accidental: It reflects the Dynkin−Specht−Wever map, underpinning the original, different, derivation of the formula." Cuzkatzimhut (talk) 14:42, 1 November 2014 (UTC)

Gory details now in place. I went basically all the way. Even though it is simple, it is easy to go astray due to the monstrosity of the thing. The passage from the integrated form with two terms to the standard form of the Dynkin formula isn't entirely obvious (wasn't for me). So, I filled in everything in painful detail and leave it to you to rinse out what you think shouldn't be there. YohanN7 (talk) 15:05, 1 November 2014 (UTC)

Another thing. While this proof to some extent (internally in the article) seemingly relies on the matrix Lie group proof of $dexp$, an easy argument in a passage of Rossmann's book that I had managed to miss until now shows that Dynkin's formula holds in the general case. It is based on the fact that if $X$ is an element of the Lie algebra, then
 * $$\varphi(\mathrm{exp}(\tau X)p) = \sum_{k = 0}^{\infty}\frac{\tau^k}{k!}X^k\varphi(p),$$

where the left $exp(τX)$ is, of course, an element of the group and, on the right, $X$ acts as a differential operator on analytic functions. Since the usual series expansion of $exp$ occurs in this way, it follows (by appeal to $exp$ being bi-analytic in a neighborhood of 0) that Dynkin's formula holds. Roughly, the two expressions in equation boxes in the new hidden section are equal viewed as power series in non-commutative entities, regardless of what they are (they are of course vector fields, and the commutator is the Lie bracket). I think this is pretty marvelous: the explicit BCH formula can be proved entirely by elementary means, the proof being valid for general Lie groups. YohanN7 (talk) 15:05, 1 November 2014 (UTC)


 * I reduplicated D's formula, to the always-visible subsection--it is too much to ask the reader to go to a hidebox to see the punchline. By the way, I strongly disagree with the Bourbaki assessment... By the time of Dynkin, most of the other algorithms had been around for a long time, and used relentlessly. Admittedly, Dynkin's formula appears "closed" and immediate, but it really requires partitioning the indices, etc., to get explicit results, hence the cottage industry of interfacers telling one "how" to evaluate it. The more implicit-looking formula in the previous section is not just an "existence" expression, it is a practical algorithm (still not preferred by physicists, at least---they go for the Hausdorff stuff.) Not that it matters, but still, the Bourbakis are often brilliant at stretching if not missing the point... Dynkin and Friedrichs ushered in the modern age, and that should be enough of a stellar achievement! Cuzkatzimhut (talk) 18:01, 1 November 2014 (UTC)
 * I interpret the Bourbaki statement as saying that Dynkin wasn't entirely convinced, not that the Bourbakis were unhappy with the existing algorithms. My French isn't what it ought to be, and I don't have the book. It is quoted in Rossmann. Suggestions? YohanN7 (talk) 18:17, 1 November 2014 (UTC)
 * No, no suggestions... I fear it says more about the Bs than Dynkin... "chacun considère que les démonstrations de ses prédécesseurs ne sont pas convaincantes" means "everyone considers that the proofs of his predecessors are not convincing"... or something to that effect... Cuzkatzimhut (talk) 19:15, 1 November 2014 (UTC)
 * Oops! I read it as "not everyone is convinced...". It will have to go out. YohanN7 (talk) 19:24, 1 November 2014 (UTC)

Duhamel name
I suspect I got to the bottom of the silly parochial habit of some physicists’ misnaming the formula “Duhamel’s formula” or "Duhamel's identity". It is the primary place they encounter it, in scattering theory. Courant & Hilbert call it "Duhamel's integral" routinely. Unfortunately, the relevant WP article, Duhamel's principle, is all but useless. The formula they have in mind is the standard scattering integral equation which comes out of
 * $$\frac{d}{dt}\left(e^{it(H_0+V)}\,e^{-itH_0}\right) = i\,e^{it(H_0+V)}\,V\,e^{-itH_0},$$

so, then, integrated,
 * $$e^{it(H_0+V)}\,e^{-itH_0} = 1 + i\int_0^t ds\; e^{is(H_0+V)}\,V\,e^{-isH_0}, \text{ hence}$$
 * $$e^{it(H_0+V)} = e^{itH_0} + i\int_0^t ds\; e^{is(H_0+V)}\,V\,e^{i(t-s)H_0},$$

indefinite iteration of which yields the Dyson series. But I don’t think it is worthwhile to link up to pages or stubs making this evident affinity more explicit… It suffices to just drop the name where you did, clearly misapplied to describe the general in terms of a particular application (metonymy). Cuzkatzimhut (talk) 17:49, 7 November 2014 (UTC)
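The integrated identity above can be spot-checked numerically, with small Hermitian matrices standing in for H₀ and V and the trapezoid rule for the integral; a numpy sketch (helper `expm_series` and all names are mine):

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential by its Taylor series (adequate for small norms)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(2)
def herm(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

H0, V = 0.3 * herm(2), 0.3 * herm(2)
t = 1.0

lhs = expm_series(1j * t * (H0 + V))

# i * integral_0^t ds e^{is(H0+V)} V e^{i(t-s)H0}, by the trapezoid rule
N = 400
s = np.linspace(0.0, t, N + 1)
vals = np.array([expm_series(1j * si * (H0 + V)) @ V @ expm_series(1j * (t - si) * H0)
                 for si in s])
integral = ((vals[0] + vals[-1]) / 2.0 + vals[1:-1].sum(axis=0)) * (t / N)
rhs = expm_series(1j * t * H0) + 1j * integral
print(np.allclose(lhs, rhs, atol=1e-4))
```

The residual is dominated by the O(1/N²) quadrature error, so the tolerance is loose; iterating the same identity inside the integrand generates the Dyson series term by term.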

What links here?
See also
 * Exponential map (two spots)
 * Baker–Campbell–Hausdorff formula
 * List of exponential topics
 * Magnus expansion
 * Matrix exponential

Notation for tangent spaces
The notation in the article for tangent spaces at a point of the manifold seems highly unusual. I am not aware that this notation is common, but then again I am not that well read in the mathematics literature. If someone more expert than me agrees, please edit accordingly.

Clarification of formula for non-matrix group
The main formula (1) on the page currently reads $$\frac{d}{dt}e^{X(t)} = e^{X(t)}\frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X}\frac{dX(t)}{dt}.$$ But this doesn't make sense in general. $$e^{X(t)}$$ is an element of G, the left-hand side is in a tangent space to G at $$e^{X(t)}$$, and $$\frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X}\frac{dX(t)}{dt}$$ lives in $$\mathfrak g$$, the tangent space to G at the identity. I think it doesn't make sense to multiply an element of G by an element of $$\mathfrak g$$.

It is clear from the subsequent explanation that the formula (1) is not meant to apply only to matrix Lie groups. Could it be that the formula is meant to read $$\frac{d}{dt}e^{X(t)} = dL_{e^{X(t)}}\Big(\frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X}\frac{dX(t)}{dt}\Big)$$, where $L$ is the left-multiplication transformation on G? I think in the case of matrix groups this is the same thing? Or perhaps I am wrong and a much bigger change to the page is required? --Jfr26 (talk) 12:26, 15 May 2020 (UTC)


 * You might go to the refs cited. Indeed, the inverse of the $$e^{X(t)}$$ on the r.h.s. could be moved to the l.h.s. to satisfy formal fussbudgeting. It is of paramount importance to avoid excess formalism and defocusing caveats, which, if compelled, you might include in a clarifying, cogent footnote aside, strictly; propose yours here, not in the article. This is a stub for algorithmic practitioners and application users, not Lie-algebraists, obviously. I understand the OP splintered it off the BCH article, soundly in my opinion. You probably wish to review Universal enveloping algebra or Lie group–Lie algebra correspondence. First, do no damage. Cuzkatzimhut (talk) 13:13, 15 May 2020 (UTC)