Talk:Fourier transform

Maybe a mistake in time shift property?
Hi, I think there is a small mistake in section 15 "Tables of important Fourier transforms" -> "Functional relationships, one-dimensional", property 102, time shifting of the Fourier transform. There should be a minus in the power of e: e^(-2*pi*i*...). That minus is missing in the entire row. I think I verified it on paper, but also with other sources, including the Wikipedia Fourier transform article itself (section 5.1.2 Translation / time shifting). I have no idea how to fix this. This is my first post on Wikipedia ever. I hope I'm correct though and not wasting anyone's time.

--83.130.77.27 (talk) 12:23, 13 January 2021 (UTC)


 * As far as I can see, all the minus signs are there. Don't you see a minus sign in the following?
 * $$ e^{-2\pi i a \xi} \hat{f}(\xi)\,$$
 * That is what's in the 102 item, and here's the entire line:
 * {| class="wikitable"
 * ! !! Function !! Fourier transform unitary, ordinary frequency !! Fourier transform unitary, angular frequency !! Fourier transform non-unitary, angular frequency !! Remarks
 * |-
 * | || $$ f(x)\,$$ || $$\hat{f}(\xi) = \int_{-\infty}^\infty f(x) e^{-2\pi i x\xi}\, dx$$ || $$\hat{f}(\omega) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(x) e^{-i \omega x}\, dx$$ || $$\hat{f}(\nu) = \int_{-\infty}^\infty f(x) e^{-i \nu x}\, dx$$ || Definition
 * |-
 * | 102 || $$ f(x - a)\,$$ || $$ e^{-2\pi i a \xi} \hat{f}(\xi)\,$$ || $$ e^{- i a \omega} \hat{f}(\omega)\,$$ || $$ e^{- i a \nu} \hat{f}(\nu)\,$$ || Shift in time domain
 * |}
 * Which browser are you using? Note that when you create an account and log in, you have some options to affect the appearance of text and math. - DVdm (talk) 12:55, 13 January 2021 (UTC)
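For what it's worth, the sign in row 102 can also be checked numerically via the discrete analogue of the shift property (a sketch using numpy's FFT; a circular shift by m samples multiplies the DFT by $$e^{-2\pi i k m/N}$$, with the same minus sign):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 64, 5
x = rng.standard_normal(N)

X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, m))        # y[n] = x[n - m] (circular shift)

k = np.arange(N)
phase = np.exp(-2j * np.pi * k * m / N)      # note the minus sign, as in row 102

assert np.allclose(X_shifted, phase * X)
```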


 * Correct! In LaTeX there is a minus sign. Using Chrome; just checked with another PC and it's working fine. And if I look here closely I think I can see the missing sign very faintly. So it must be something with the local browser and not the display settings of Wikipedia. Sorry for the trouble. — Preceding unsigned comment added by OmerLauer (talk • contribs) 13:14, 13 January 2021 (UTC)


 * Please sign all your talk page messages with four tildes ( ~~~~ ) — See Help:Using talk pages. Thanks.
 * No problem. What happens if you go to Preferences, Appearance, Math and select MathML with SVG or PNG fallback? - DVdm (talk) 13:22, 13 January 2021 (UTC)


 * When switching to PNG I can see the sign as it should be. The problem is visible when using "MathML with SVG or PNG fallback", which of course is the default. Just checked another thing: while zooming in and out, I saw the equations in correct form for all zooms above 125%, and also specifically for 90% (but not for 100%, 110% or below 80%). It seems to be some sort of a rendering issue (??) with my own browser. OmerLauer (talk) 13:34, 13 January 2021 (UTC)


 * OK, happy experimenting! - DVdm (talk) 13:45, 13 January 2021 (UTC)

Hello. There is a problem with Chrome and browsers based on Chrome, such as Edge. The wizards are aware of it. As a temporary fix you can increase the zoom factor. Also, you can get the Math Anywhere extension for both Chrome and Edge, which seems to take care of the problem. Or you can wait until Chrome fixes the problem. Constant314 (talk) 16:12, 13 January 2021 (UTC)


 * Indeed, with the current versions of Chrome and Edge, 4 out of 6 minus signs are missing in the table above. Firefox is doing just fine. - DVdm (talk) 16:22, 13 January 2021 (UTC)


 * I should have added: please do not try to fix it by modifying the LaTeX markup. Constant314 (talk) 16:49, 13 January 2021 (UTC)

Definition section: the reason for the negative sign
I have to agree with the IP editor that this is poorly written and after checking the source, I see that the source does not say what is written here. In fact, the source doesn’t give a reason, it just says that engineers prefer a certain sign convention. The source isn't even about the Fourier transform.

The reason given is nonsense. In fact, both negative and positive signs are used by different communities and there is no problem with convergence of the integral.

Since the source doesn’t give a reason, I will remove the reason. But the reason is this: it is arbitrary. It is just a choice of where you want your negative signs to appear. As an engineer, I have my preference mainly because that is the way I was taught and that is the way it appears in most of my textbooks. You may find justifications for one choice or the other, but you will not find a definitive reason the sign must be negative or must be positive. Constant314 (talk) 12:26, 8 October 2021 (UTC)


 * $$e^{+i 2\pi \xi x} = \cos(2\pi \xi x) + i \sin(2\pi \xi x)$$ corresponds to a vector $$[\cos(2\pi \xi x),\sin(2\pi \xi x)]$$ that rotates in the positive direction (increasing vector angle = CCW) for positive frequency $$\xi$$. Anyone who disagrees with that convention is in a lonely minority, and will have endless difficulty with the preponderance of relevant literature and Wikipedia articles.  For all the others, the conclusion is obvious.  The quantity $$\int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi \xi x}\,dx$$ is a measure of the relative amount of component $$e^{+i 2\pi \xi x}$$ in function $$f(x)$$ (see Fourier_series).  Therefore $$\hat{f}(\xi)$$ is the appropriate nomenclature.  Similarly, the quantity $$\int_{-\infty}^{\infty} f(x)\ e^{+i 2\pi \xi x} dx = \int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi (-\xi) x}\,dx$$ is a measure of the relative amount of component $$e^{i 2\pi (-\xi) x}$$ in function $$f(x)$$.  And therefore $$\hat{f}(-\xi)$$ is the appropriate nomenclature.
 * --Bob K (talk) 16:41, 13 November 2021 (UTC)
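A small numerical illustration of that point (a sketch with the DFT as the discrete stand-in: the $$e^{-i 2\pi \xi x}$$ kernel picks out the $$e^{+i 2\pi \xi x}$$ component and reports it at $$+\xi$$):

```python
import numpy as np

N = 256
x = np.arange(N) / N
f = np.exp(2j * np.pi * 3 * x)           # the component e^{+i2πξx} with ξ = 3

F = np.fft.fft(f) / N                    # DFT uses the e^{-i2πξx} kernel
assert np.argmax(np.abs(F)) == 3         # ... and reports the component at ξ = +3
```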

I use the same convention that you are advocating. I have no problem with using that convention. If you want to state a reason for that convention in the article you need a reliable secondary source that states that reason. No amount of WP:OR will change that. However, I do not mind dabbling in OR here on the talk page.

Let
 * $$\hat{f}_{-}(\omega) = \int_{-\infty}^\infty f(t)\, e^{-i \omega t}\, dt$$ This is the conventional forward transform.
 * $$\hat{f}_{+}(\omega) = \int_{-\infty}^\infty f(t)\, e^{+i \omega t}\, dt$$ This is the other convention. It is mathematically equal to the conventional inverse transform (up to a constant factor).

I hope it is obvious that
 * $$\hat{f}_{-} = \hat{f}_{+}^*$$ Thus, for real-valued $$f$$, the results of these two conventions are simply conjugates of each other.

This has no physical effect because physical effects are caused by energy or power. The power of a Fourier transform is computed by multiplying the transform by its conjugate.

Again, I hope it is obvious that
 * $$\hat{f}_{-} \hat{f}_{-}^* = \hat{f}_{+} \hat{f}_{+}^*$$
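Both claims (conjugate transforms, equal power) can be checked numerically with the DFT (a sketch; numpy's `ifft` uses the $$e^{+i\omega t}$$ kernel, up to the 1/N factor):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(128)             # a real-valued signal

F_minus = np.fft.fft(f)                  # e^{-iωt} kernel (usual convention)
F_plus = 128 * np.fft.ifft(f)            # e^{+iωt} kernel (other convention)

assert np.allclose(F_minus, np.conj(F_plus))               # conjugates of each other
assert np.allclose(np.abs(F_minus)**2, np.abs(F_plus)**2)  # identical power spectra
```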

So, let's look at a couple of examples. I will suppress multiplicative constants that would clutter up the results.

First, consider the Fourier transform of $$\cos(a t)$$.
 * The Fourier transform under the usual convention is $$\delta(\omega-a)+\delta(\omega+a)$$. It has Fourier components at both $$+a$$ and $$-a$$.
 * The Fourier transform under the other convention is $$\delta(\omega+a)+\delta(\omega-a)$$. The result is exactly the same.

Next, consider the Fourier transform of $$\sin(a t)$$.
 * The Fourier transform under the usual convention is $$-i\delta(\omega-a)+i\delta(\omega+a)$$. It has Fourier components at both $$+a$$ and $$-a$$.
 * The Fourier transform under the other convention is $$-i\delta(\omega+a)+i\delta(\omega-a)$$. It has Fourier components at both $$+a$$ and $$-a$$. The result is the conjugate of the result using the usual convention.

Now let me go way off into OR la-la land to speculate why engineers prefer the usual convention. Consider the Fourier transform of $$\cos(at) + \sin(at)$$. It is $$\delta(\omega-a)+\delta(\omega+a) -i\delta(\omega-a)+i\delta(\omega+a)$$. The component at the positive frequency $$+a$$ is $$\delta(\omega-a) -i\delta(\omega-a)$$. Notice in particular that the sign of the imaginary part is negative. Engineers prefer this because $$\cos(at)+\sin(at)$$ lags $$\cos(at)$$ by 45°. When an engineer plots this in Cartesian space, it is [1,-1]. The principal argument is negative. Engineers prefer that because the phase of $$\cos(at) + \sin(at)$$ relative to $$\cos(at)$$ is negative. Mathematicians consider $$\cos(at)$$ and $$\sin(at)$$ as basis vectors and plot $$\cos(at) + \sin(at)$$ as [1,1]. That is all there is to it. Engineers prefer that the Fourier component of $$\sin(at)$$ be negative at positive frequency. Constant314 (talk) 22:03, 13 November 2021 (UTC)
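That 45° lag is easy to verify numerically (a sketch with the DFT under the usual $$e^{-i\omega t}$$ convention; the bin index 4 is an arbitrary choice):

```python
import numpy as np

N = 256
t = np.arange(N) / N
f = np.cos(2 * np.pi * 4 * t) + np.sin(2 * np.pi * 4 * t)

F = np.fft.fft(f) / N                       # usual e^{-iωt} convention
angle_deg = np.degrees(np.angle(F[4]))      # phase at the positive frequency

assert abs(angle_deg + 45.0) < 1e-6         # -45°, i.e. the phasor [1, -1]
```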

For real-valued $$f(t)$$ the convention hardly matters, because every frequency has a positive equivalent (e.g. see Aliasing). The concept of negative frequency is unnecessary... two-sided Fourier transforms are redundant.

Cutting to the chase, the convention determines whether $$e^{i \omega t}$$ is considered a positive or a negative frequency. The customary definition of instantaneous frequency is the derivative of instantaneous phase, which is $$\omega t,$$ whose derivative is $$\omega.$$ Therefore $$f(t) = e^{i \omega t}, \omega > 0$$ is a positive frequency. And its measurement is $$\hat{f}(\omega)=\int_{-\infty}^{\infty} f(t)\ e^{-i \omega t}\,dt$$ (which means $$\int_{-\infty}^{\infty} f(t)\ e^{i \omega t}\,dt$$  is  $$\hat{f}(-\omega)$$). I conclude that those who claim otherwise have a different definition of instantaneous phase or instantaneous frequency, which puts them at odds with Wikipedia's sourced articles. The burden is on them to provide contradictory sources. --Bob K (talk) 04:38, 15 November 2021 (UTC)
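The "derivative of instantaneous phase" definition is easy to illustrate numerically (a sketch; the grid size and the choice of 3 Hz are arbitrary):

```python
import numpy as np

w = 2 * np.pi * 3                      # ω of a 3 Hz complex exponential
t = np.linspace(0, 1, 1000, endpoint=False)
z = np.exp(1j * w * t)                 # f(t) = e^{iωt}, ω > 0

phase = np.unwrap(np.angle(z))         # instantaneous phase ωt
inst_freq = np.gradient(phase, t)      # its derivative: the instantaneous frequency

assert np.allclose(inst_freq, w)       # positive, as claimed
```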


 * But no. The burden is on those who wish to add a fact to a Wikipedia article. The burden is on them to provide a reliable source. Constant314 (talk) 04:21, 15 November 2021 (UTC)

I get your point. But I disagree with your statement "But the reason is this: it is arbitrary. It is just a choice of where you want your negative signs to appear." No. It comes down to your definition of the instantaneous phase and frequency of function $$e^{i \omega t}.$$ When you "arbitrarily" choose  $$\hat f(\omega) = \int_{-\infty}^{\infty} f(t)\ e^{+i \omega t}\,dt,$$  you are also arbitrarily rejecting the customary definitions of instantaneous phase and frequency. Therefore you need to provide sourced reasons for that whim. --Bob K (talk) 05:10, 15 November 2021 (UTC)

I went through several of my text books. Here is what I found.
 * Using the engineering convention
 * Oppenheim, Alan V.; Willsky, Alan S.; Young, Ian T. (1983), Signals and Systems (1st ed.), Prentice-Hall, ISBN 0138097313
 * Gregg, W. David (1977), Analog & Digital Communication, John Wiley, ISBN 0471326615
 * Stein, Seymour; Jones, J. Jay (1967), Modern Communication Principles, McGraw-Hill, page 4, equation 1-5
 * Hayt, William; Kemmerly, Jack E. (1971), Engineering Circuit Analysis (2nd ed.), McGraw-Hill, ISBN 0070273820, page 535, equation 8b.


 * Using the other convention
 * Press, William H.; Teukolsky, Saul A.; Vetterling, William T. (2007), Numerical Recipes (3rd ed.), Cambridge University Press, ISBN 9780521880688, page 692.
 * Jackson, John David (1999), Classical Electrodynamics (3rd ed.), John Wiley, ISBN 047130932X, page 372, equation 8.89
 * Stratton, Julius Adams (1941), Electromagnetic Theory, McGraw-Hill page 294, equation 47
 * Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (1993), Foundations of Electromagnetic Theory, Addison-Wesley, ISBN 0201526247, page 607, equation VI-2

Constant314 (talk) 17:28, 17 November 2021 (UTC)

Thank you. I can expand the upper list, if needed, but it seems to be coming down to signals and communication vs electromagnetics. Amazingly, I still have my undergrad copy of one of those EM texts. It is a dense 554-page book, with not a single Fourier transform formula or even Euler's formula. My take-away is that the EM applications of Fourier transform theory don't go deep enough to matter which convention they use. To quote myself (above): "For real-valued $$f(t)$$ the convention hardly matters". The concept of negative frequency is not useful. So your statement, "But the reason is this: it is arbitrary. It is just a choice of where you want your negative signs to appear.", might be a misleading generalization based on certain limited applications of transform theory. Anyhow, we're getting a little off track. The point is that the convention chosen for the article (which we both agree with) was not an arbitrary coin toss. It might not have any consequences for EM theory, but it does have consequences for signal theory. So I added a footnote that does not need an external citation. All it relies on is a Wikilink to our instantaneous frequency article. --Bob K (talk) 12:30, 18 November 2021 (UTC)


 * Greetings. I had not responded because it looked like we were at an impasse, but I think the conversation should continue.  I looked at instantaneous frequency and did not see anything there that would favor one convention for the FT over the other.  There is no physical requirement for one or the other.  However, I will speculate.  When I look at engineers describing a simple wave traveling in the x direction, they tend to use $$ \cos (\omega t - kx) $$ whereas the EM guys tend to write $$ \cos (kx - \omega t ) $$.  My interpretation is that the signals guys are mostly interested in what is happening with respect to time at a fixed place, while the EM guys are more interested in what happens with respect to space at a fixed time. Constant314 (talk) 05:04, 16 March 2022 (UTC)

That sounds about right, based on my distant memories (circa 1968) of one EM theory course. --Bob K (talk) 11:47, 16 March 2022 (UTC)

Language for Beginners
I think this article should be improved in some manner for the general public that is scientifically minded but not taking a full calculus class in college. I think the possibility of adding it to the Simple English Wikipedia with easier to understand language is a good idea, in addition to the process of adding explanations and writing that is not mathematically centered. Maybe including something of this sort, "The Fourier Transform helps to transform functions such as cosine and sine into different output functions that behave differently than normal trigonometric functions." ScientistBuilder (talk) 01:38, 14 October 2021 (UTC)


 * Is this better: Fourier_analysis ?
 * --Bob K (talk) 12:29, 16 October 2021 (UTC)

Use of complex sinusoids to represent real sinusoids
A quick impression is that this section could be simplified, perhaps making use of the Analytic signal concept instead of Fourier series. I'll try to give that some thought.

Furthermore, the statement "every real sinusoid consists of an equal contribution of positive and negative frequency components, which is true of all real signals" is misleading. It is a cancellation, not a contribution, analogous to something like "10 apples consists of 5 apples + 5 bananas and 5 apples - 5 bananas". (See Negative_frequency)

--Bob K (talk) 12:34, 16 March 2022 (UTC)
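The cancellation (as opposed to "contribution") is easy to see numerically: writing a real cosine as the sum of its positive- and negative-frequency halves (a sketch; the 5-cycle frequency is arbitrary):

```python
import numpy as np

x = np.linspace(0, 1, 100)
pos = 0.5 * np.exp(+2j * np.pi * 5 * x)   # positive-frequency half
neg = 0.5 * np.exp(-2j * np.pi * 5 * x)   # negative-frequency half

total = pos + neg
assert np.allclose(total.imag, 0)                         # imaginary parts cancel
assert np.allclose(total.real, np.cos(2 * np.pi * 5 * x)) # real parts sum to the cosine
```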


 * I'm sorry. Thank you for pointing that out. For now I've rephrased as:
 * "Hence, every real sinusoid (and real signal) can be considered to consist of a positive and negative frequency, whose imaginary components cancel but whose real components sum to form the real signal."
 * And I removed that reference to that ccrma.stanford.edu page cause what I wrote is now slightly different.
 * I understand you don't want a complicated discussion on complex sinusoids. I'm wondering maybe what if I move that discussion out from this article and put it in the article for either negative frequency or Sine wave.  Or maybe the Sinusoid redirect page could become its own page that discusses both real and complex sinusoids. Em3rgent0rdr (talk) 02:15, 17 March 2022 (UTC)

I think that's on the right track. I was definitely struggling with that section... I kept coming back to the question "Does it even need to be here?" IMO, the ccrma.stanford.edu viewpoint is the easy explanation, more of an engineering convenience than a true insight. I'm all in favor of conveniences, but I'm also in favor of distinguishing them from the underlying realities. --Bob K (talk) 11:45, 17 March 2022 (UTC)

Pronunciation
I was hoping to see an IPA pronunciation in the first sentence of the article (but there is not one). Wiktionary has an English pronunciation for Fourier (as a surname), which might apply to Fourier transform. - excarnateSojourner (talk|contrib) 21:30, 11 July 2022 (UTC)


 * In English the name is often pronounced foo-ree-ei or foor-ee-ei (the last syllable rhyming with "day", and all three syllables given approximately equal stress). 2601:200:C000:1A0:BC00:5039:DB55:E9EC (talk) 22:23, 29 July 2022 (UTC)

Nonsense statement
The section idiotically titled Introduction contains this passage:

"Although Fourier series can represent periodic waveforms as the sum of harmonically-related sinusoids, Fourier series can't represent non-periodic waveforms. However, the Fourier transform is able to represent non-periodic waveforms as well. It achieves this by applying a limiting process to lengthen the period of any waveform to infinity and then treating that as a periodic waveform. "

The last sentence is complete mathematical nonsense, and is not taken from the cited reference. 2601:200:C000:1A0:BC00:5039:DB55:E9EC (talk) 19:13, 29 July 2022 (UTC)

Possible problem may need fixing
As I tried to write a few clarifying sentences in the section Fourier transform for functions that are zero outside an interval, in order to specify the periodic function whose Fourier series was referred to ... I realized that the article refers to the same function when discussing the Fourier transform. But: It's not the same function. One is periodic and the other is zero outside the interval [-T/2, T/2].

I hope someone knowledgeable about this subject can fix this apparent problem. 2601:200:C000:1A0:BC00:5039:DB55:E9EC (talk) 20:08, 29 July 2022 (UTC)

Disappointing introduction
I find the introduction difficult to understand. It would be far better to avoid use of the word "transform" at first in the explanation, and instead say that the Fourier transform is a mapping taking an integrable function on $$\mathbb{R}^n$$ to an integrable function.

And that according to a version of the Fourier transform $$F$$ favored by mathematicians, $$F^4 = I$$.

Surely this is worthy of prominent mention. 2601:200:C000:1A0:FDA6:8A26:FCCA:2C1B (talk) 04:39, 22 September 2022 (UTC)
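The discrete analogue of that identity can be checked with the unitary DFT (a sketch using numpy's `norm="ortho"` option, which divides each transform by $$\sqrt{N}$$):

```python
import numpy as np

x = np.random.default_rng(3).standard_normal(16)
y = x.astype(complex)
for _ in range(4):
    y = np.fft.fft(y, norm="ortho")   # unitary ("mathematician's") normalization

assert np.allclose(y, x)              # four applications return the original
```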


 * I have no idea what your 3rd sentence means. I understand the 2nd sentence, but I disagree with "far better".  Wikipedia's standard is to reflect a consensus of common usage, not to create a better math textbook.
 * --Bob K (talk) 13:32, 22 September 2022 (UTC)
 * Transform, as a verb, is more meaningful to the typical reader than mapping. Constant314 (talk) 19:03, 22 September 2022 (UTC)


 * I suggest initially avoiding the verb "transform" to explain the Fourier transform, and instead use the noun "mapping" to say that the f.t. is a mapping from a function space to a function space.


 * I certainly agree: The last thing a Wikipedia article on a math topic should resemble in tone or wording is a math textbook.


 * The reason in my opinion is that the audiences for the two things have very different characteristics, statistically.


 * The word "transform" (whether verb or noun) as a technical term in math is distinctly more advanced than "mapping", which all people exposed to even just a smidgen of math know means a function.


 * The thing is: A consensus already exists in a strong and wide area of mathematics called "Fourier analysis".


 * By any reliable judgment, it is significant — and so worth inclusion in the article — that applying the Fourier transform four times in a row will result in the same function you started with. 2601:200:C000:1A0:FDA6:8A26:FCCA:2C1B (talk) 00:33, 23 September 2022 (UTC)
 * Mapping is discussed in the body of the article. Constant314 (talk) 17:10, 23 September 2022 (UTC)


 * IMO this can be handled by just inserting a footnote to the effect that a synonym (in this case at least) for transform is mapping (and vice versa). Here we prefer transform, because it is the most commonly found and widely accepted terminology.
 * --Bob K (talk) 14:38, 23 September 2022 (UTC)

Section: Use of complex sinusoids to represent real sinusoids
Well done, but it repeats a lot of information available thru WikiLinks, and now also added to Fourier transform. IMO this section can be downsized or eliminated. Bob K (talk) 16:47, 12 December 2022 (UTC)

ξ vs ν ? revisited
It has been a decade since the ξ_vs_ν_? discussion between just 2 editors. And as noted then, ν was the original preference, replaced without a consensus. Now I am asking if there is any consensus for reverting back to ν, because ξ is unnecessarily intimidating-looking (IMO). --Bob K (talk) 00:59, 19 December 2022 (UTC)
 * I am not in favor of this. This is a bit like avoiding $$\theta$$ for an angle. Thenub314 (talk) 21:30, 15 February 2023 (UTC)

Section: Fourier transform for periodic functions
The reliance on:


 * $$ \sum_{n=-\infty}^{\infty} e^{-i \omega n T} = \frac{2 \pi}{T} \sum_{k=-\infty}^{\infty} \delta(\omega-2\pi k/T)$$

is no better than relying on the transform pair:


 * $$e^{ i 2\pi \xi_0 x}\ \stackrel{\mathcal{F}}{\longleftrightarrow}\ \delta \left(\xi - \xi_0\right).$$

Consequently, you've made a mountain out of a molehill. --Bob K (talk) 01:59, 25 December 2022 (UTC)
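For what it's worth, the termwise computation is one line: assuming a periodic $$f$$ with Fourier series coefficients $$c_n$$, linearity plus the simpler pair give the result directly:

```latex
% Applying  e^{i 2\pi \xi_0 x} \;\leftrightarrow\; \delta(\xi - \xi_0)  termwise:
f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi (n/T) x}
\quad\Longrightarrow\quad
\hat{f}(\xi) = \sum_{n=-\infty}^{\infty} c_n\, \delta\!\left(\xi - \tfrac{n}{T}\right)
```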

Section: Fourier transform for functions that are zero outside an interval
The result of this section seems to be the statement:

Under appropriate conditions, the Fourier series of $$f$$ will equal the function $$f$$. In other words, $$f$$ can be written:


 * $$f(x)=\sum_{n=-\infty}^\infty c_n\, e^{2\pi i\left(\frac{n}{T}\right) x} =\sum_{n=-\infty}^\infty \hat{f}(\xi_n)\ e^{2\pi i\xi_n x}\Delta\xi,$$

At the very least it needs to be clarified that the first equality only holds on the interval of length $$T$$.

But also we already know that $$f(x)$$ can be recovered from $$\hat f(\xi).$$ So what you are showing is that when its domain is bounded, it can also be recovered from discrete samples of $$\hat f(\xi),$$ which, by the way, is the dual of the time domain sampling theorem. This is mildly interesting, but it strikes me as a proof looking for a home, and the whole article is already pretty cluttered with stuff. --Bob K (talk) 09:00, 27 December 2022 (UTC)
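To illustrate the recovery-from-samples point numerically (a sketch; the Gaussian test function, which is effectively zero outside [-1, 1], and the plain Riemann sums are my own choices):

```python
import numpy as np

T = 2.0
dx = T / 8000
xg = np.arange(-T/2, T/2, dx)
f = lambda x: np.exp(-16 * x**2)           # effectively zero outside [-1, 1]

def f_hat(xi):                             # Riemann-sum approximation of the FT
    return np.sum(f(xg) * np.exp(-2j * np.pi * xi * xg)) * dx

# reconstruct f(x0) from the discrete samples f_hat(n/T), with Δξ = 1/T
x0 = 0.3
recon = sum(f_hat(n / T) * np.exp(2j * np.pi * (n / T) * x0) / T
            for n in range(-40, 41))
assert abs(recon - f(x0)) < 1e-4
```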

Erroneous square integrable function
The function $$e^{i \alpha x^2}$$ is listed as a square integrable function (Line 207). I think this is only true if $$\mathrm{Im}(\alpha) > 0$$. However, this condition is not mentioned, and if this condition is included, (Line 207) becomes just a restatement of (Line 206) with the replacement $$\alpha \to i \alpha$$. 18.29.20.123 (talk) 16:40, 10 February 2023 (UTC)


 * I believe α is assumed to be purely real. Im(α) = 0. Constant314 (talk) 17:57, 10 February 2023 (UTC)


 * The IP editor is correct (if you read this, I hope you make an account and stick around). If $$\mathrm{Im}(\alpha)=0$$ then $$|e^{i \alpha x^2}| = 1$$ which is definitely not square integrable. I don't think the citation says what the author thought it said.  There are a couple of other places like this too.  If I can at some point I will try to run through the citations and verify and fix what I see.
 * I agree. Still, I think α is assumed to be purely real; if so, it should not be in the table at that point. Constant314 (talk) 17:25, 13 February 2023 (UTC)
 * Yes, I think you're right. I will do a bit of research, once I find a source for this particular formula, I will cite it, move it, and include conditions. Thenub314 (talk) 20:17, 13 February 2023 (UTC)
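A quick numerical illustration of why $$e^{i \alpha x^2}$$ with real α is not square integrable (a sketch; since $$|f| = 1$$ everywhere, the "energy" over [-L, L] is just 2L and grows without bound):

```python
import numpy as np

alpha = 2.0                            # purely real α, so |e^{iαx²}| = 1 everywhere
for L in (10, 100, 1000):
    x = np.linspace(-L, L, 200_001)
    f = np.exp(1j * alpha * x**2)
    energy = np.sum(np.abs(f)**2) * (x[1] - x[0])
    assert np.isclose(energy, 2 * L, rtol=1e-3)   # partial integral ≈ 2L: divergent
```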

Recent revert
I would like to discuss the recent revert of my edit. The edit, in part, addressed the IP editor's comment that our tables were incorrect. While looking at the article I noticed it was inconsistent in the placement of $$2\pi i$$ vs $$i 2\pi$$, so I fixed it. It seems the objections are mostly about the latter. For which I would like to point out that the prevailing culture is that if a $$2\pi$$ is used, it comes before the i. To see this, I would like to point to:
 * MATLAB Documentation
 * Mathematica Documentation (see Details and Options)
 * The makers of FFTW, the open source FFT engine
 * The first few articles I looked at from the IEEE
 * This article, for a minimum of a decade until last May when it was changed without discussion.

Aside from the last one, these are large professional organizations trying to make something for the masses. I include the last one because I would like to return to the status quo for the article. I also feel it would benefit readers because they are (in my opinion) more likely to encounter $$2\pi i$$ in books/papers/references. Thenub314 (talk) 15:47, 15 February 2023 (UTC)


 * I'm not aware of our tables being incorrect nor of the inconsistencies. I looked and did not find them.  So there must have been very few, and so it can more easily be addressed by changing a small number of things instead of everything else.  Sorry to be disagreeable, but I also don't think 2πi is more common than i2π.  Regardless of that, they are both common, so I think we should look at the rationale for each.  Besides the (very good) reason I cited in the undo, leading with the i makes it more obvious that it's a complex exponential, which is its most important attribute.
 * --Bob K (talk) 18:47, 15 February 2023 (UTC)
 * The issue with the table being incorrect started on this talk page! A kindly IP editor spotted it, and left a comment here. I said I would research and fix it and I did.
 * You're right, I did reply to the edit summary stating your preferred format is more logical. Clearly, I disagree.  There are plenty of these arguments around, and they all boil down to a matter of taste.  This is a bit like $$\int\,dx f(x)$$ vs $$\int f(x)\,dx$$.  This is why I am pointing to things that are external to me.  To put some data behind my argument, I ran through each of the books cited by this article.
 * Excluding the few that I couldn't get access to, in the ones that use $$2\pi$$ just over 80% place the i after.
 * And as a proportion of total references those that put an i before the $$2\pi$$ are less frequent than those that avoid complex number all together by using sine and cosine.
 * IMO, the most important attribute is to be a service to the readers, particularly the non-experts who may be thrown off by the change. They will most likely be looking at a reference where the i follows the $$2\pi$$. Thenub314 (talk) 21:27, 15 February 2023 (UTC)


 * And I think it serves the non-experts more to put the $$i$$ in the most prominent location, and to keep 2πξ (= ω) together, especially when side-by-side in the tables with $$e^{i\omega t}.$$
 * --Bob K (talk) 23:19, 15 February 2023 (UTC)


 * My experience as an educator leads me to disagree. IMO students are thrown off more by disagreements in notation than by some philosophical point about i being important, so it should come first. But we may be at an impasse; hopefully someone else will chime in and help us out. Thenub314 (talk) 00:25, 16 February 2023 (UTC)
 * I agree with Bob K. I prefer to see the i in front, so I can tell immediately if it is there without searching.  The fact that the figures and text disagree is not a problem.  Anyone who does not realize that i2π is the same as 2πi won't understand anything anyway.
 * On the other hand, there was a mistake in table item 207, as the function was not square integrable. Please go ahead and fix that while the discussion about i2π continues. Constant314 (talk) 01:38, 16 February 2023 (UTC)


 * Sure, but I have no better way than to copy and paste from the diff, which I cannot do from my phone, but I'll try when I am able. Thenub314 (talk) 02:57, 16 February 2023 (UTC)


 * I only use a laptop, so I'll take a look at that. Also, I'll take a look for those remaining "inconsistencies".  I don't like them either.
 * --Bob K (talk) 13:22, 16 February 2023 (UTC)

Symmetry section
I removed a reference from here because the cited reference wasn't discussing the Fourier transform in the section indicated. I will try to find something more specific to this topic. Thenub314 (talk) 20:47, 16 February 2023 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion
The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:
 * Complex-valued function modulating a complex sinusoid.svg
Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 14:53, 17 February 2023 (UTC)

Error in inversion section?
In the inversion section the integrals are over the variable σ but the variable ξ appears in the integrands. Where does the ξ come from? — Preceding unsigned comment added by 2.27.171.228 (talk) 21:25, 21 August 2023 (UTC)

Code sample
I reverted this edit, because Wikipedia articles do not usually include code samples (see MOS:CODE), unless those code samples illustrate some fundamental aspect of an algorithm. In this case, the algorithm (the fast Fourier transform, for which there is already a separate article) is not actually shown. Instead, it uses a builtin function of the numpy library. So this code is very python-specific, and is not a good illustration of the Fourier transform. Tito Omburo (talk) 10:42, 23 May 2024 (UTC)


 * Since the code apparently produces a graph, perhaps Wikimedia is an appropriate host for this creation. And it welcomes the inclusion of source code.  Here is a link to an example, where the code is in a portion of the Summary section.  But it can also have its own separate section.
 * --Bob K (talk) 12:13, 23 May 2024 (UTC)
 * I agree with . That code sample just shows calls to library functions and serves no useful purpose in the article. Constant314 (talk) 13:09, 23 May 2024 (UTC)
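For comparison, here is what showing the transform itself (rather than a call to a built-in routine) might look like: a direct implementation of the DFT from its defining sum (a sketch for discussion, not a proposal for the article):

```python
import numpy as np

def dft(x):
    """Discrete Fourier transform computed straight from its defining sum."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)                        # one row per output frequency
    return np.sum(x * np.exp(-2j * np.pi * k * n / N), axis=1)

x = np.random.default_rng(0).standard_normal(32)
assert np.allclose(dft(x), np.fft.fft(x))       # agrees with the library routine
```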

Error in Symmetry Section?
I think there may be an error in the symmetry section. For example, it says "even-symmetric function $$(f_{_{RE}}+i\ f_{_{IO}})$$..."

But $$(f_{_{RE}}+i\ f_{_{IO}})$$ isn't even symmetric, right? Shouldn't the even symmetric function be $$(f_{_{RE}}+i\ f_{_{IE}})$$?

Is there a reference for this section? Jackmjackm (talk) 22:01, 17 June 2024 (UTC)


 * "Even symmetric" apparently means $$f(-x)=\overline{f(x)}$$ here. I assume this is a standard use of the term in signal processing, but agree that a reference seems desirable. Tito Omburo (talk) 22:56, 17 June 2024 (UTC)
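That usage can be checked with the DFT, where the condition reads $$f[(-n) \bmod N] = \overline{f[n]}$$ (a sketch; the construction of $$f_{RE}$$ and $$f_{IO}$$ from random data is mine):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
g, h = rng.standard_normal(N), rng.standard_normal(N)

rev = (-np.arange(N)) % N                 # index map n -> -n (mod N)
f_RE = g + g[rev]                         # real part, even
f_IO = h - h[rev]                         # "imaginary part", odd
f = f_RE + 1j * f_IO

assert np.allclose(f[rev], np.conj(f))    # f(-n) = conj(f(n)): "even symmetric"
assert np.allclose(np.fft.fft(f).imag, 0, atol=1e-9)  # its transform is purely real
```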


 * The article links to Even_and_odd_functions. That's where the reference should be.
 * --Bob K (talk) 11:42, 19 June 2024 (UTC)


 * But at 15:45 on 16 February 2023, an editor removed what appears to be a directly relevant reference:
 * I don't think I have access to the reference.
 * --Bob K (talk) 10:32, 20 June 2024 (UTC)
 * This usage is supported by Oppenheim and Schafer, fwiw. Tito Omburo (talk) 11:03, 20 June 2024 (UTC)


 * Thank you. I added the citation to Even_and_odd_functions, which also contains the Proakis reference, except the page number is 411, instead of the one that was deleted here (page 291).  Maybe that was the problem all along.  (I don't have the Proakis book to verify.)
 * --Bob K (talk) 12:58, 20 June 2024 (UTC)