Talk:Fresnel integral

Exponent?
Current text (definitions) says:

Some authors, including Abramowitz and Stegun, (eqs 7.3.1 – 7.3.2) use $$\frac{\pi}{2}t^2$$ for the exponent of the integrals defining S(x) and C(x)

The usage of the word "exponent" probably comes from the complex representation
 * $$C(x)=\Re \int_0^x \exp(\mathrm{i}t^2)\,dt$$

(actually one would have to say $$\mathrm{i}\frac{\pi}{2}t^2$$ above in this case...). However, it is confusing when you are using trigonometric functions.

Possible rephrasing:

Some authors, including Abramowitz and Stegun, (eqs 7.3.1 – 7.3.2) use $$\frac{\pi}{2}t^2$$ for the argument of the trig functions in the integrals defining S(x) and C(x) —Preceding unsigned comment added by AmitAronovitch (talk • contribs) 09:07, 2 July 2008 (UTC)

Question regarding acceleration
At the Cornu spiral: a vehicle with constant speed will not have a constant rotational acceleration, but a constant _change_ of r.a., won't it?
 * In this image of the [[Image:Cornu_Spiral.svg|spiral]], the curvature is zero at the origin and grows in proportion to arclength as the curve winds in towards the two attractors ("holes"). So a vehicle travelling along this yellow-brick road at constant speed would experience zero lateral acceleration as it passes the origin, then steadily increasing acceleration as it spirals in towards either attractor, each loop approximating a circular orbit (within diminishing error). That isn't exactly what you said, but does it answer the question ok? Pete St.John (talk) 16:03, 10 March 2008 (UTC)
 * short answer, someone corrected that in the text already; now it says "constant change" :-) So good catch Pete St.John (talk) 16:10, 10 March 2008 (UTC)

Cornu spiral images
The article says that clothoid is another name for a Cornu spiral. But the clothoid loop image is quite different to that of the spiral, and I think it must confuse readers of the article. Also, why does the caption say "$$f(r)=r-\pi$$"? Is the image actually a clothoid loop or is it an Archimedean spiral? JonH (talk) 12:36, 10 March 2008 (UTC)


 * One image is the graph in polar coordinates, the other in cartesian coordinates. Pete St.John (talk) 16:12, 10 March 2008 (UTC)

Plotting S(t) and C(t) as cartesian or polar coordinates would give different shapes, but the article talks about a clothoid as being the same as a Cornu spiral, and there is no mention of polar coordinates. Also, I think that $$r=\theta-\pi$$ is a formula for the polar coordinates of an Archimedean spiral, not for a Cornu spiral. JonH (talk) 18:27, 10 March 2008 (UTC)


 * The diagram caption at that section labels f as a function of r, shows a background of polar coordinates, and calls it a parameterization, so it would seem that representing the function from Cornu Spiral in polar coordinates is supposed to give the shape "Clothoid". From Archimedean spiral it would seem the Clothoid is a special case of Archimedean (the constant equals pi). I admit the section is not very clear, we might check a reference. Pete St.John (talk) 19:29, 10 March 2008 (UTC)
 * I think I've been blithe, I have to think about this some. What happens if we express r and theta in the archimedean spiral parameterized by arclength? Pete St.John (talk) 20:39, 10 March 2008 (UTC)


 * "Cornu spiral" and "clothoid" are synonyms, and refer to the curve with an S-like shape with two spiraling ends. Apparently there is such a thing as a "clothoid loop", but the article does not give a definition, nor does it explain how it is related to the clothoid curve. From some web pages I get the impression that the loop consists of two segments of the clothoid glued together (e.g. here), but I don't know of a reliable source for this definition. The "Parametric graph" sure looks like an Archimedean spiral if you let the angle run across 0. I don't get the meaning of the equation $$f(r)=r-\pi$$, which looks very wrong. Possibly $$r = \theta-\pi$$ is meant. --Lambiam 17:35, 11 March 2008 (UTC)
 * In the Archimedean spiral case where the constant is -pi, the equation would be theta = r - pi (so f(r) is theta). I think that's what was meant. I haven't had time yet to answer my own question, expressing r and theta as functions of arclength, but I want to. And sorry for the primitivist typography, I never typed much math myself, it would have been nroff in my day :-( Pete St.John (talk) 02:58, 12 March 2008 (UTC)
 * You mean Eqn? What will you do with your expressions of r and theta as functions of arclength, once you've got them? I think arclength can easily be expressed as a function of r or theta, but you can't invert the transcendental equation into a closed form for r and theta as functions of arclength. It is easy, though, to study the asymptotics for small or large arclengths. --Lambiam 08:31, 12 March 2008 (UTC)


 * I meant from the Archimedean spiral $$\vartheta$$ = r - $$\pi$$, write r and theta as functions of arclength. I'm wondering how to interpret C and S for this "parameterized" graph. But I'm rusty (at best) and would need some uninterrupted time with pencil and paper. (And yeah, Eqn would have been available by then but I wasn't aware of it. I paid a professional who had the greek and math balls for her Selectric :-) Pete St.John (talk) 12:46, 12 March 2008 (UTC)

I've removed the "clothoid loop" image: until someone can describe exactly what this is, and how it relates to the ordinary clothoid, it does not belong in the article. Not only was it of questionable relevance, it also drew attention away from the correct image. -- The Anome (talk) 10:51, 13 March 2008 (UTC)


 * Update: I've now found a cite for the use of the term "clothoid loop" as used in rollercoaster design: I've put it in the vertical loop article and made the term redirect there, and adjusted this article accordingly. The image stays out: I'm not sure what it illustrates, but since it seems to be neither a clothoid curve nor a rollercoaster "clothoid loop", and does not have any identifiable mathematical significance, it does not seem relevant to either article. -- The Anome (talk) 12:02, 13 March 2008 (UTC)

Evaluation
In the picture the parameterization of gamma 3 should be z=t*exp(i*pi/4) —Preceding unsigned comment added by 74.69.117.252 (talk) 17:44, 6 August 2008 (UTC)

Clothoid/Cornu Spiral section should be a separate page
See for example the French Clothoïde versus French Intégrale_de_Fresnel

The German page Klothoide would be particularly worth mining for its civil engineering applications of clothoidal spirals, which are useful but would be out of place in an article on Fresnel integrals and functions.

Note also that a number of the language links to this page are inappropriately clothoidal (eg http://fr.wikipedia.org/wiki/Clothoïde ) rather than Fresnel-Integral (eg http://fr.wikipedia.org/wiki/Intégrale_de_Fresnel ); this ought to be cleaned up once an English-language Clothoid page is created. —Preceding unsigned comment added by 76.191.212.109 (talk) 02:36, 20 September 2008 (UTC)

Notice a similarity?
I glanced through this article, and when I came to the graph, I immediately thought "black hole"; however, I would like this to be run by a physicist or a mathematician before it is added. -- Wyatt  915  ✍  15:10, 13 October 2008 (UTC)

t or x?
There seems to be an inconsistency in the variable that is used. For example

S(x) = integral of sin(t^2) dt.

The variable of interest should either be t or x. I think that x would be best as it is more general (t implies 'time' to a certain extent). —Preceding unsigned comment added by Jezwells (talk • contribs) 10:48, 15 March 2010 (UTC)


 * No, it is right: S(x) = integral from 0 to x sin(t^2)dt, so that's how S is a function of x. t is commonly used for the parameter in a parametric plot (probably because it often represents time, admittedly). ChrisHodgesUK (talk) 15:18, 15 March 2010 (UTC)

Yes, of course. Thanks for the response--Jezwells (talk) 19:58, 22 March 2010 (UTC)

switch to π/2 convention
The convention with a π/2 factor in the integrand argument seems to be far more common; e.g. A&S, Mathematica, Mathworld, and Maple all use it. Arguably the Wikipedia article should use this convention as well.

— Steven G. Johnson (talk) 20:29, 24 December 2012 (UTC)

Even worse, the two figures (graph of C(t) and S(t) vs t and parametric plot of (C(t),S(t))) are made with different conventions. The former follows the A&S normalization, easily recognized by the limit of 0.5 for t tending to positive infinity, while the latter has limits (visible as the coordinates of the eye of the spiral) around 0.6, presumably matching the article's convention. — Preceding unsigned comment added by 82.185.105.200 (talk) 15:52, 9 October 2013 (UTC)
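For concreteness, the two conventions give different limiting values, which is exactly the discrepancy visible between the figures; a minimal Python sketch of the two "eye" values:

```python
import math

# pi/2 convention (A&S): S(x), C(x) -> 1/2 as x -> infinity
as_limit = 0.5

# un-normalized convention sin(t^2), cos(t^2): both limits are sqrt(pi/8),
# matching the ~0.6 eye coordinates visible in the parametric plot
plain_limit = math.sqrt(math.pi / 8)

print(as_limit, plain_limit)
```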

Moreover, the formula relating C(t) and S(t) to the error function is wrong because it uses the π/2 convention for C and S, in conflict with the definition at the top of the page. This has probably also led to the confusion with the wrong results from the numerical calculation discussed elsewhere on this page. — Preceding unsigned comment added by 138.251.106.5 (talk) 10:13, 8 April 2015 (UTC)

I plan to go ahead and change the article to use the π/2 convention in the next few days, unless someone responds here to explain why I should not. As User:Stevenj points out, pretty much every standard reference source uses it. Ichbin4 (talk) 05:13, 24 June 2018 (UTC)

Definitions given here via erf agree with the built-in Fresnel integrals in Python after rescaling. The following code demonstrates that for z=2.686.
 * Small correction in the code. janboc (talk) 10:51, 04 September 2018 (UTC)
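The code referred to above isn't reproduced on this page; a hypothetical sketch of such a consistency check (assuming SciPy, whose `fresnel` uses the π/2 convention and whose `erf` accepts complex arguments) might look like:

```python
import numpy as np
from scipy.special import erf, fresnel

z = 2.686

# scipy's fresnel uses the pi/2 convention: S~(x) = int_0^x sin(pi t^2 / 2) dt;
# substituting t = sqrt(pi/2) u rescales to S(z) = int_0^z sin(t^2) dt.
S_t, C_t = fresnel(z * np.sqrt(2 / np.pi))
S = np.sqrt(np.pi / 2) * S_t
C = np.sqrt(np.pi / 2) * C_t

# erf-based formula from the article:
# S(z) + i C(z) = sqrt(pi/2) * (1+i)/2 * erf((1+i)/sqrt(2) * z)
rhs = np.sqrt(np.pi / 2) * (1 + 1j) / 2 * erf((1 + 1j) / np.sqrt(2) * z)

print(S - rhs.real, C - rhs.imag)  # both differences should be ~0
```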

Computation of limits without complex analysis
I feel like it would be simpler to use a method analogous to the integral of the Gaussian curve over the entire real line (using a double integral) to explain the limits of these functions as the argument approaches infinity. The only possible issue is that $$\lim_{x \to \infty}\sin{x^{2}}$$ and $$\lim_{x \to \infty}\cos{x^{2}}$$ do not exist; the correct answer $$\sqrt{\frac{\pi}{8}}$$ is obtained when the aforementioned limits are assumed to equal zero. Nevertheless, I'd find this a simpler explanation than complex analysis.

The method I'm thinking of is to use the following, derived from the angle sum formulae for the integrands and Fubini's theorem (note that $$\mathbf{I}$$ here refers to the first quadrant):

$$\iint\limits_{\mathbf{I}} \sin{(x^2+y^2)} dx dy= \int\limits_{0}^{\infty} \cos{x^2} dx \int\limits_{0}^{\infty} \sin{y^2} dy + \int\limits_{0}^{\infty} \sin{x^2} dx \int\limits_{0}^{\infty} \cos{y^2} dy$$ and

$$\iint\limits_{\mathbf{I}} \cos{(x^2+y^2)} dx dy= \int\limits_{0}^{\infty} \cos{x^2} dx \int\limits_{0}^{\infty} \cos{y^2} dy - \int\limits_{0}^{\infty} \sin{x^2} dx \int\limits_{0}^{\infty} \sin{y^2} dy$$

The double integrals can be easily computed via a change to polar coordinates, and the values of the limits can be obtained by solving the system of two equations for the two unknowns. The first double integral appears to come out to be $$\frac{\pi}{4}$$, and the second 0. If this can be assumed, then the correct values of the limits of the Fresnel integrals are obtained.--Jasper Deng (talk) 01:59, 31 August 2013 (UTC)
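For what it's worth, the claimed limiting value $$\sqrt{\frac{\pi}{8}} \approx 0.6267$$ is easy to corroborate numerically (a sketch assuming SciPy, whose `fresnel` uses the π/2 normalization under which both limits are 1/2):

```python
import numpy as np
from scipy.special import fresnel

# In scipy's pi/2 convention both S~(x) and C~(x) tend to 1/2 as x -> inf,
# with residual oscillation of order 1/(pi x).
S_t, C_t = fresnel(1e4)

# Rescaling to the sin(t^2)/cos(t^2) integrands multiplies by sqrt(pi/2),
# so both un-normalized limits become sqrt(pi/2) * 1/2 = sqrt(pi/8).
target = np.sqrt(np.pi / 8)
print(np.sqrt(np.pi / 2) * S_t - target, np.sqrt(np.pi / 2) * C_t - target)
```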


 * If it gives you the right numbers, then there's often some way to make it work out. Rather than integrating over the region I, you should define finite regions $$I_t$$. Apply the same algebraic identities on those regions; you'll get the same result but with an error term, and you now need to show that the error term vanishes as t tends to ∞.


 * My hunch is that this is not a new way of evaluating these integrals. If it is, though, then it's probably worth a short note in an appropriate journal.  Ozob (talk) 16:27, 31 August 2013 (UTC)
 * I really doubt I discovered a new method (i.e. it probably has been mentioned in some source before, that we could possibly cite), given that it's nearly as straightforward as the Gaussian integral, albeit with this strange thing about the limits of sine and cosine as their arguments tend to infinity.--Jasper Deng (talk) 19:49, 31 August 2013 (UTC)


 * Let's do this. Define $$I_t$$ to be the square region 0 ≤ x ≤ t, 0 ≤ y ≤ t. Then:
 * $$\iint_{I_t} \sin (x^2 + y^2)\,dx\,dy = 2\int_0^t \cos s^2\,ds\int_0^t \sin s^2\,ds = 2S(t)C(t)$$
 * $$\iint_{I_t} \cos (x^2 + y^2)\,dx\,dy = \left(\int_0^t \cos s^2\,ds\right)^2 - \left(\int_0^t \sin s^2\,ds\right)^2 = C(t)^2 - S(t)^2.$$
 * To evaluate the double integrals we split them up into two regions. One is the positive quadrant of the disc of radius t centered at the origin, which we call $$D_t$$; the other is whatever's left. We evaluate the double integral on $$D_t$$ using polar coordinates:
 * $$\int_0^{\pi/2}\int_0^t \sin r^2\,r\,dr\,d\theta = (\pi/4)\int_0^{t^2} \sin u\,du = (\pi/4)(1 - \cos t^2),$$
 * $$\int_0^{\pi/2}\int_0^t \cos r^2\,r\,dr\,d\theta = (\pi/4)\int_0^{t^2} \cos u\,du = (\pi/4)(\sin t^2).$$
 * Let's sum these two; then difference them:
 * $$C(t)^2 - S(t)^2 + 2S(t)C(t) \approx (\pi/4)(1 + \sin t^2 - \cos t^2),$$
 * $$C(t)^2 - S(t)^2 - 2S(t)C(t) \approx (\pi/4)(-1 + \sin t^2 + \cos t^2),$$
 * where by $$\approx$$, I mean "up to some integrals on $$I_t - D_t$$".
 * So this has me worried. I'm not sure what to do with those $$\sin t^2$$ and $$\cos t^2$$ terms; as t → ∞, they oscillate wildly instead of converging. So the part that I was thinking of as an error term doesn't tend to zero, it cancels things.
 * Let's try again. If I try to evaluate the double integral on all of $$I_t$$, then I get:
 * $$\int_0^{\pi/4}\int_0^{t/\cos\theta} \sin r^2\,r\,dr\,d\theta + \int_{\pi/4}^{\pi/2}\int_0^{t/\sin\theta} \sin r^2\,r\,dr\,d\theta$$
 * $$= \tfrac{1}{2}\int_0^{\pi/4} (1 - \cos (t/\cos\theta)^2)\,d\theta + \tfrac{1}{2}\int_{\pi/4}^{\pi/2} (1 - \cos (t/\sin\theta)^2)\,d\theta,$$
 * and I don't think that's going anywhere.
 * Let's try a third time. This time we're going to evaluate the double integrals in rectangular coordinates.
 * $$\int_0^t \int_0^t \cos(x^2 + y^2)\,dx\,dy = \int_0^t \left(\cos(y^2)\,C(t) - \sin(y^2)\,S(t)\right)dy.$$
 * I don't know what to do with that, either. Any ideas?  Ozob (talk) 23:18, 31 August 2013 (UTC)
 * Let's try Cartesian coordinates for the double integral using the substitution $$y=xs, dy=x ds$$ as mentioned in the Gaussian integral article. Then our double integrals become:
 * $$\iint\limits_{\mathbf{I}}\sin{(x^2+y^2)} dx dy = \int\limits_{0}^{\infty}\int\limits_{0}^{\infty}x \sin{(x^2(1+s^2))} dx ds = \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1-\lim_{(s,t)\to(s,\infty)}\cos(t^2(1+s^2))\right)$$
 * $$\iint\limits_{\mathbf{I}}\cos{(x^2+y^2)} dx dy= \int\limits_{0}^{\infty}\int\limits_{0}^{\infty}x \cos{(x^2(1+s^2))} dx ds = \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\lim_{(s,t)\to(s,\infty)}\sin(t^2(1+s^2))$$
 * I'd think, somehow, that we could then express these limits using the angle sum formulae for sine and cosine, which would, in fact, give us existent limits, because of the presence of s. This is just some speculation, and even these limits might not exist, but I think this is the right way to head.--Jasper Deng (talk) 05:34, 2 September 2013 (UTC)
 * Now over a hundred days later, I think something could be achieved by adding these two and using a trigonometric identity to eliminate the "error" term:
 * $$ \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1-\lim_{(s,t)\to(s,\infty)}\left(\cos(t^2(1+s^2))-\sin(t^2(1+s^2))\right)\right) = \frac{\pi}{4}-\int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\lim_{(s,t)\to(s,\infty)}\left(\cos(t^2(1+s^2))-\sin(t^2(1+s^2))\right)$$
 * If we use the identity $$\cos{(x)} = \sin{(x+\frac{\pi}{2})}$$, I might be able to make the argument that those two "error" terms cancel in the limit as t→infinity. Meanwhile, on the left side of the equations in my previous comment we have $$\int\limits_{0}^{\infty} \cos{x^2} dx \int\limits_{0}^{\infty} \sin{y^2} dy + \int\limits_{0}^{\infty} \sin{x^2} dx \int\limits_{0}^{\infty} \cos{y^2} dy+\int\limits_{0}^{\infty} \cos{x^2} dx \int\limits_{0}^{\infty} \cos{y^2} dy - \int\limits_{0}^{\infty} \sin{x^2} dx \int\limits_{0}^{\infty} \sin{y^2} dy=\frac{\pi}{4}$$ which looks a little encouraging, but far from conclusive. Therefore I'd assume we must apply the same procedure by subtracting the two to obtain a second equation. We first change the sign of the first equation using the identity $$\cos(-x)=\cos(x)$$ and subtract the first from the second to obtain
 * $$ \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1-\lim_{(s,t)\to(s,\infty)}\left(\sin(t^2(1+s^2))-\cos(-t^2(1+s^2))\right)\right) = \frac{\pi}{4}- \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\lim_{(s,t)\to(s,\infty)}\left(\sin(t^2(1+s^2))-\cos(t^2(1+s^2))\right) $$
 * $$= \int\limits_{0}^{\infty} \cos{x^2} dx \int\limits_{0}^{\infty} \sin{y^2} dy + \int\limits_{0}^{\infty} \sin{x^2} dx \int\limits_{0}^{\infty} \cos{y^2} dy-\left(\int\limits_{0}^{\infty} \cos{x^2} dx \int\limits_{0}^{\infty} \cos{y^2} dy - \int\limits_{0}^{\infty} \sin{x^2} dx \int\limits_{0}^{\infty} \sin{y^2} dy\right)$$ which could be dealt with in a similar way using the identity $$\cos(x)=\sin(x+\frac{\pi}{2})$$. Adding the two equations yields the first double integral and subtracting them yields the second (except in both cases the double integrals are further obtained by dividing both sides by two and combining equations). But I'm not so sure that this could work because it's always dangerous to play around with limits that do not formally exist.--Jasper Deng (talk) 09:05, 20 December 2013 (UTC)
 * Actually, a slight improvement exists: if in the second equation we utilize $$\sin(-x)=-\sin(x)$$ and $$\cos(x+\pi)=-\cos(x)$$, we can reverse the signs of both terms. In that case, subtracting the two equations makes the error terms vanish completely. If we don't use these identities, then adding the two equations also eliminates the error terms completely.--Jasper Deng (talk) 09:21, 20 December 2013 (UTC)

So to summarize the discussion above, $$\iint\limits_{\mathbf{I}}\sin{(x^2+y^2)} dx dy = \int\limits_{0}^{\infty}\int\limits_{0}^{\infty}x \sin{(x^2(1+s^2))} dx ds = \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1-\lim_{(s,t)\to(s,\infty)}\cos(t^2(1+s^2))\right)$$ and $$\iint\limits_{\mathbf{I}}\cos{(x^2+y^2)} dx dy = \int\limits_{0}^{\infty}\int\limits_{0}^{\infty}x \cos{(x^2(1+s^2))} dx ds = \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\lim_{(s,t)\to(s,\infty)}\sin(t^2(1+s^2))$$ can have the two error terms on each side removed by first adding the equations: $$\iint\limits_{\mathbf{I}}\left(\cos{(x^2+y^2)}+\sin{(x^2+y^2)}\right) dx dy = \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1-\lim_{(s,t)\to(s,\infty)}\left(\cos(t^2(1+s^2))-\sin(t^2(1+s^2))\right)\right)$$ and then changing the signs in both initial equations (by the fact that cosine is an even function and sine is an odd function) to change them to $$\int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1-\lim_{(s,t)\to(s,\infty)}\cos(-t^2(1+s^2))\right)$$ and $$\int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\lim_{(s,t)\to(s,\infty)}\left(-\sin(-t^2(1+s^2))\right)$$ so that subtracting the equations now gives $$\iint\limits_{\mathbf{I}}\left(\cos{(x^2+y^2)}-\sin{(x^2+y^2)}\right) dx dy = \int\limits_{0}^{\infty}\frac{ds}{2(1+s^2)}\left(1+\lim_{(s,t)\to(s,\infty)}\left(\cos(-t^2(1+s^2))-\sin(-t^2(1+s^2))\right)\right).$$ We can now add this equation to the other one to obtain $$\iint\limits_{\mathbf{I}}\cos{(x^2+y^2)} dx dy = \frac{\pi}{4}$$, because we can revert the sign change by adding pi to the arguments of the sine and cosine, which should not be consequential because we are taking the limit as the argument approaches infinity, so the terms disappear (I think an epsilon-delta argument would be needed to make this rigorous).
Now for the other double integral, we reverse the sign change I did above (the one that produced the second, subtracted equation), so that subtracting this equation from the sum of the two initial equations (the third equation in this comment) shows that the other double integral equals 0, which is the result I want to establish.--Jasper Deng (talk) 20:21, 20 December 2013 (UTC)
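Whatever the status of the limit argument itself, the angle-sum identities it rests on hold exactly for any finite square: $$\iint \sin(x^2+y^2) = 2S(t)C(t)$$ and $$\iint \cos(x^2+y^2) = C(t)^2 - S(t)^2$$ over $$[0,t]^2$$. A quick numeric check (a sketch assuming SciPy):

```python
import numpy as np
from scipy.integrate import dblquad, quad

t = 1.5
S, _ = quad(lambda u: np.sin(u**2), 0, t)  # S(t) = int_0^t sin(u^2) du
C, _ = quad(lambda u: np.cos(u**2), 0, t)  # C(t) = int_0^t cos(u^2) du

# double integrals over the square [0, t] x [0, t]
sin_sq, _ = dblquad(lambda y, x: np.sin(x**2 + y**2), 0, t, 0, t)
cos_sq, _ = dblquad(lambda y, x: np.cos(x**2 + y**2), 0, t, 0, t)

print(sin_sq - 2 * S * C)      # ~0 by the angle-sum identity for sine
print(cos_sq - (C**2 - S**2))  # ~0 likewise for cosine
```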
 * This math stackexchange page has some fun derivations of the limit that avoid using complex analysis. But none strike me as simpler than the complex analysis approach. --Mark viking (talk) 05:10, 6 September 2013 (UTC)
 * In particular, I think the second answer on that page—the one that integrates $$(\sin x^2)e^{-\lambda x^2}\,dx$$—is the closest to the Gaussian integral approach. It avoids questions of convergence of an improper integral by introducing an exponential decay factor.  However, recovering the Fresnel integral as the limit $$\lambda \to 0$$ is not immediate, because one has to pass the limit inside the integral.  The monotone and dominated convergence theorems don't seem to help here, but in a comment on the second answer, the OP explains how to prove convergence by hand—one looks at the difference $$(\sin x^2)(1 - e^{-\lambda x^2})$$ and manipulates it until it can be seen to converge to zero.  Ozob (talk) 18:08, 21 December 2013 (UTC)
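The regularized integral mentioned above has a closed form, since $$\int_0^\infty e^{-(\lambda - i)x^2}\,dx = \tfrac12\sqrt{\pi/(\lambda - i)}$$, whose imaginary part is the regularized sine integral; its λ→0⁺ limit is $$\sqrt{\pi/8}$$. A numeric sketch (assuming SciPy) at a moderate λ:

```python
import numpy as np
from scipy.integrate import quad

lam = 1.0
num, _ = quad(lambda x: np.sin(x**2) * np.exp(-lam * x**2), 0, np.inf)

# closed form: Im( (1/2) sqrt(pi/(lam - i)) )
closed = (0.5 * np.sqrt(np.pi / (lam - 1j))).imag
print(num - closed)  # ~0

# taking lam -> 0+ in the closed form recovers sqrt(pi/8)
limit_val = (0.5 * np.sqrt(np.pi / (1e-9 - 1j))).imag
print(limit_val, np.sqrt(np.pi / 8))
```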

External links modified
Hello fellow Wikipedians,

I have just added archive links to one external link on Fresnel integral. Please take a moment to review my edit. If necessary, add after the link to keep me from modifying it. Alternatively, you can add to keep me off the page altogether. I made the following changes:
 * Added archive http://web.archive.org/web/20080923190052/http://fy.chalmers.se:80/LISEBERG/eng/loop_pe.html to http://fy.chalmers.se/LISEBERG/eng/loop_pe.html

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.—cyberbot II (Talk to my owner: Online) 23:11, 18 March 2016 (UTC)

C(z) + i S(z)
Corresponding to the formula,

$$S(z) + i C(z) = \sqrt{\frac{\pi}{2}}\frac{1+i}{2} \operatorname{erf}\left(\frac{1+i}{\sqrt{2}}z\right)$$

I believe it is also true that,

$$C(z) + i S(z) = \sqrt{\frac{\pi}{2}}\frac{i+1}{2} \operatorname{erf}\left(\frac{1-i}{\sqrt{2}}z\right)$$

This is probably of more interest than the original formula as it parallels Euler's formula. Perhaps it should be included?

Skinnerd (talk) 21:42, 30 April 2016 (UTC)
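The proposed identity is easy to spot-check numerically before including it; a sketch assuming SciPy (its `erf` accepts complex arguments, and the rescaling from its π/2-convention `fresnel` is an assumption worth verifying against the docs):

```python
import numpy as np
from scipy.special import erf, fresnel

def fresnel_unnormalized(z):
    """Return C(z) = int_0^z cos(t^2) dt and S(z) = int_0^z sin(t^2) dt
    by rescaling scipy's pi/2-convention fresnel."""
    S_t, C_t = fresnel(z * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * C_t, np.sqrt(np.pi / 2) * S_t

errs = []
for z in (0.7, 1.9, 3.3):
    C, S = fresnel_unnormalized(z)
    # proposed: C(z) + i S(z) = sqrt(pi/2) * (1+i)/2 * erf((1-i)/sqrt(2) * z)
    rhs = np.sqrt(np.pi / 2) * (1 + 1j) / 2 * erf((1 - 1j) / np.sqrt(2) * z)
    errs.append(abs(C + 1j * S - rhs))

print(max(errs))  # tiny if the identity holds
```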


 * I've added it to the article. Ozob (talk) 01:42, 1 May 2016 (UTC)

No elementary antiderivative
I think that there should be a source for the claim that the integrals "cannot be evaluated in the closed form in terms of elementary functions" found early in the "Limits as x approaches infinity" section. I spent a short while Googling and wasn't able to find a good one, though. The best I could come up with is the following answer on Math Stack Exchange: https://math.stackexchange.com/a/265884. I'm not sure if such a source would be appropriate for this site, and the answer only deals with S(x) besides, so I figured it would be best to post here. WallAdhesion (talk) 23:54, 16 April 2019 (UTC)
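Not a citable source either, but the claim is easy to illustrate with a computer algebra system: SymPy (assumed available) can express this antiderivative only through its own Fresnel special functions, not elementary ones:

```python
import sympy as sp

x = sp.symbols('x')
antideriv = sp.integrate(sp.sin(x**2), x)
print(antideriv)  # expressed via fresnels(...), not elementary functions
```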

Can we come up with a more concise way of writing (1 + i)/√2, etc.?
If we had a shorter name for $$\frac{1+i}{\sqrt2} = \exp \tfrac14 \pi i = \sqrt[4]{-1}$$, we could make a bunch of the formulas here much more legible. I proposed $$\sqrt{i}$$ to indicate the principal square root of i, because this is compact while still being clear and not needing too much separate definition, but User:Jasper Deng didn’t like it. Maybe we can just give it some arbitrary symbol, maybe a Greek letter along the lines of: let $$\eta = \exp \tfrac14 \pi i,\ \bar\eta = \exp\bigl({-\tfrac14 \pi i}\bigr)$$? –jacobolus (t) 01:57, 16 July 2022 (UTC)