
The Fourier transform, named after Joseph Fourier, is a mathematical transform with many applications in physics and engineering. Very commonly it transforms a mathematical function of time, f(t), into a new function, sometimes denoted by $$\hat f$$ or F, whose argument is frequency with units of cycles per second (hertz) or radians per second. The new function is then known as the Fourier transform or the frequency spectrum of the function f. The Fourier transform is also a reversible operation: given the function $$\hat f,$$ one can determine the original function f. (See Fourier inversion theorem.) f and $$\hat f$$ are respectively known as the time-domain and frequency-domain representations of the same "event". Most often, f is a real-valued function and $$\hat f$$ is complex-valued, where each complex value describes both the amplitude and phase of a corresponding frequency component. In general, f may itself be complex, as in the analytic representation of a real-valued function. The term "Fourier transform" refers both to the transform operation and to the complex-valued function it produces.

In the case of a periodic function (for example, a continuous but not necessarily sinusoidal musical sound), the Fourier transform can be simplified to the calculation of a discrete set of complex amplitudes, called Fourier series coefficients. Also, when a time-domain function is sampled to facilitate storage or computer-processing, it is still possible to recreate a version of the original Fourier transform according to the Poisson summation formula, also known as discrete-time Fourier transform. These topics are addressed in separate articles. For an overview of those and other related operations, refer to Fourier analysis or List of Fourier-related transforms.

Definition
There are several common conventions for defining the Fourier transform ƒ̂ of an integrable function f : R → C. This article will use the definition:


 * $$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\ e^{- 2\pi i x \xi}\,dx$$,  for every real number ξ.

When the independent variable x represents time (with SI unit of seconds), the transform variable ξ represents frequency (in hertz). Under suitable conditions, f is determined by ƒ̂ via the inverse transform:


 * $$f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\ e^{2 \pi i \xi x}\,d\xi, $$  for every real number x.

The statement that f can be reconstructed from ƒ̂ is known as the Fourier inversion theorem, and was first introduced in Fourier's Analytical Theory of Heat, although what would be considered a proof by modern standards was not given until much later. The functions f and ƒ̂ often are referred to as a Fourier integral pair or Fourier transform pair.

For other common conventions and notations, including using the angular frequency ω instead of the frequency ξ, see Other conventions and Other notations below. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and ξ momentum.

Introduction
The motivation for the Fourier transform comes from the study of Fourier series. In the study of Fourier series, complicated but periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. The Fourier transform is an extension of the Fourier series that results when the period of the represented function is lengthened and allowed to approach infinity.

Due to the properties of sine and cosine, it is possible to recover the amplitude of each wave in a Fourier series using an integral. In many cases it is desirable to use Euler's formula, which states that $$e^{2\pi i\theta} = \cos(2\pi\theta) + i\sin(2\pi\theta)$$, to write Fourier series in terms of the basic waves $$e^{2\pi i\theta}$$. This has the advantage of simplifying many of the formulas involved, and provides a formulation for Fourier series that more closely resembles the definition followed in this article. Re-writing sines and cosines as complex exponentials makes it necessary for the Fourier coefficients to be complex valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or the initial angle) of the wave. These complex exponentials sometimes contain negative "frequencies". If θ is measured in seconds, then the waves $$e^{2\pi i\theta}$$ and $$e^{-2\pi i\theta}$$ both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is still closely related.
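Euler's formula can be checked directly with complex arithmetic. The following is a small illustrative sketch (our own Python code, not part of the article):

```python
# Verify Euler's formula e^{2πiθ} = cos(2πθ) + i·sin(2πθ) at an arbitrary θ.
import cmath
import math

theta = 0.37  # an arbitrary test value
lhs = cmath.exp(2j * math.pi * theta)
rhs = complex(math.cos(2 * math.pi * theta), math.sin(2 * math.pi * theta))
assert abs(lhs - rhs) < 1e-12
```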

There is a close connection between the definition of Fourier series and the Fourier transform for functions f which are zero outside of an interval. For such a function, we can calculate its Fourier series on any interval that includes the points where f is not identically zero. The Fourier transform is also defined for such a function. As we increase the length of the interval on which we calculate the Fourier series, the Fourier series coefficients begin to look like the Fourier transform and the sum of the Fourier series of f begins to look like the inverse Fourier transform. To explain this more precisely, suppose that T is large enough so that the interval [−T/2,T/2] contains the interval on which f is not identically zero. Then the n-th series coefficient cn is given by:


 * $$c_n = \int_{-T/2}^{T/2} f(x)\ e^{-2\pi i(n/T) x} dx.\,$$

Comparing this to the definition of the Fourier transform, it follows that cn = ƒ̂(n/T) since f(x) is zero outside [−T/2,T/2]. Thus the Fourier coefficients are just the values of the Fourier transform sampled on a grid of width 1/T. As T increases the Fourier coefficients more closely represent the Fourier transform of the function.
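This sampling relationship can be illustrated numerically. The sketch below (our own NumPy code, not part of the article) uses the triangle function, which is zero outside [−1, 1] and whose Fourier transform under this convention is known in closed form to be sinc²(ξ); the series coefficients on [−T/2, T/2], computed by direct quadrature, match the sampled transform:

```python
import numpy as np

T = 4.0
dx = 1e-4
x = np.arange(-T / 2, T / 2 + dx / 2, dx)
f = np.clip(1.0 - np.abs(x), 0.0, None)   # triangle function, zero outside [-1, 1]

for n in range(-3, 4):
    # c_n computed by direct quadrature of the Fourier-series integral
    c_n = np.sum(f * np.exp(-2j * np.pi * (n / T) * x)) * dx
    sampled = np.sinc(n / T) ** 2          # closed-form f̂(n/T); np.sinc(t) = sin(πt)/(πt)
    assert abs(c_n - sampled) < 1e-6
```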

Under appropriate conditions, the sum of the Fourier series of f will equal the function f. In other words, f can be written:


 * $$f(x)=\frac{1}{T}\sum_{n=-\infty}^\infty \hat{f}(n/T)\ e^{2\pi i(n/T) x} =\sum_{n=-\infty}^\infty \hat{f}(\xi_n)\ e^{2\pi i\xi_n x}\Delta\xi,$$

where the last sum is simply the first sum rewritten using the definitions ξn = n/T, and Δξ = (n + 1)/T − n/T = 1/T.

This second sum is a Riemann sum, and so by letting T → ∞ it will converge to the integral for the inverse Fourier transform given in the definition section. Under suitable conditions this argument may be made precise.
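The Riemann-sum reconstruction can also be checked numerically. The sketch below (our own NumPy code) uses the Gaussian f(x) = e^{−πx²}, whose transform under this convention is e^{−πξ²}; summing the sampled transform with a large T recovers f at a test point:

```python
import numpy as np

T = 20.0                     # large period → fine frequency grid Δξ = 1/T
n = np.arange(-400, 401)
xi = n / T
x0 = 0.3
# (1/T) Σ f̂(n/T) e^{2πi(n/T)x} ≈ ∫ f̂(ξ) e^{2πiξx} dξ = f(x)
recon = np.sum(np.exp(-np.pi * xi**2) * np.exp(2j * np.pi * xi * x0)) / T
assert abs(recon - np.exp(-np.pi * x0**2)) < 1e-6
```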

In the study of Fourier series the numbers cn could be thought of as the "amount" of the wave present in the Fourier series of f. Similarly, as seen above, the Fourier transform can be thought of as a function that measures how much of each individual frequency is present in our function f, and we can recombine these waves by using an integral (or "continuous sum") to reproduce the original function.

Example
The following images provide a visual illustration of how the Fourier transform measures whether a frequency is present in a particular function. The depicted function $$f(t) = \cos(6\pi t)\, e^{-\pi t^2}$$ oscillates at 3 hertz (if t measures seconds) and tends quickly to 0. (The second factor in this equation is an envelope function that shapes the continuous sinusoid into a short pulse. Its general form is a Gaussian function.) This function was specially chosen to have a real Fourier transform that can easily be plotted. The first image contains its graph. In order to calculate $$\hat{f}(3)$$ we must integrate $$e^{-2\pi i(3t)}f(t)$$. The second image shows the plot of the real and imaginary parts of this function. Because the two factors oscillate at the same rate, when f(t) is positive, the real part of $$e^{-2\pi i(3t)}$$ is positive as well, and when f(t) is negative, so is the real part of $$e^{-2\pi i(3t)}$$. The real part of the integrand is therefore almost always positive, so when you integrate it you get a relatively large number (in this case 0.5). On the other hand, when you try to measure a frequency that is not present, as in the case when we look at $$\hat{f}(5)$$, the integrand oscillates enough so that the integral is very small. The general situation may be a bit more complicated than this, but this in spirit is how the Fourier transform measures how much of an individual frequency is present in a function f(t).
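The computation described above can be reproduced numerically. The following sketch (our own NumPy code; the helper name `ft` is ours) integrates f(t)e^{−2πiξt} by quadrature: at ξ = 3 the integral is approximately 0.5, while at ξ = 5 it is very close to zero:

```python
import numpy as np

dt = 1e-4
t = np.arange(-10, 10 + dt / 2, dt)
f = np.cos(6 * np.pi * t) * np.exp(-np.pi * t**2)

def ft(xi):
    # quadrature of ∫ f(t) e^{-2πiξt} dt on a uniform grid (endpoint terms are negligible)
    return np.sum(f * np.exp(-2j * np.pi * xi * t)) * dt

assert abs(ft(3.0) - 0.5) < 1e-4   # the 3 Hz component is present
assert abs(ft(5.0)) < 1e-4         # no 5 Hz component
```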

Properties of the Fourier transform
Here we assume f(x), g(x) and h(x) are integrable functions: Lebesgue-measurable on the real line and satisfying:


 * $$\int_{-\infty}^\infty |f(x)| \, dx < \infty.$$

We denote the Fourier transforms of these functions by $$\hat{f}(\xi)$$, $$\hat{g}(\xi)$$ and $$\hat{h}(\xi)$$ respectively.

Basic properties
The Fourier transform has the following basic properties:


 * Linearity


 * For any complex numbers a and b, if h(x) = af(x) + bg(x), then $$\hat{h}(\xi)=a\cdot \hat{f}(\xi) + b\cdot\hat{g}(\xi).$$


 * Translation


 * For any real number x0, if $$h(x)=f(x-x_0),$$ then $$\hat{h}(\xi)= e^{-i\,2\pi \,x_0\,\xi }\hat{f}(\xi).$$


 * Modulation


 * For any real number ξ0, if $$h(x)=e^{i \, 2\pi \, x \,\xi_0}f(x),$$ then $$\hat{h}(\xi) = \hat{f}(\xi-\xi_{0}).$$


 * Scaling


 * For a non-zero real number a, if h(x) = f(ax), then $$\hat{h}(\xi)=\frac{1}{|a|}\hat{f}\left(\frac{\xi}{a}\right).$$ The case a = −1 leads to the time-reversal property, which states: if h(x) = f(−x), then $$\hat{h}(\xi)=\hat{f}(-\xi).$$


 * Conjugation


 * If $$h(x)=\overline{f(x)},$$ then $$\hat{h}(\xi) = \overline{\hat{f}(-\xi)}.$$


 * In particular, if f is real, then one has the reality condition $$\hat{f}(-\xi)=\overline{\hat{f}(\xi)},$$ that is, $$\hat{f}$$ is a Hermitian function.


 * And if f is purely imaginary, then $$\hat{f}(-\xi)=-\overline{\hat{f}(\xi)}.$$


 * Integration


 * Substituting $$\xi=0 $$ in the definition, we obtain


 * $$\hat{f}(0) = \int_{-\infty}^{\infty} f(x)\,dx$$

That is, the evaluation of the Fourier transform at the origin ($$\xi=0$$) equals the integral of f over its whole domain.
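The translation, scaling and integration properties can be verified numerically for a concrete function. The sketch below (our own NumPy code; helper names are ours) uses the Gaussian f(x) = e^{−πx²}, whose transform under this convention is e^{−πξ²}:

```python
import numpy as np

dx = 1e-4
x = np.arange(-12, 12 + dx / 2, dx)

def ft(values, xi):
    # quadrature of the defining integral on a uniform grid
    return np.sum(values * np.exp(-2j * np.pi * xi * x)) * dx

f = np.exp(-np.pi * x**2)

def fhat(xi):
    # closed-form transform of the Gaussian under this convention
    return np.exp(-np.pi * xi**2)

xi = 0.7
# Translation: h(x) = f(x - 2)  =>  ĥ(ξ) = e^{-2πi·2ξ} f̂(ξ)
h = np.exp(-np.pi * (x - 2) ** 2)
assert abs(ft(h, xi) - np.exp(-2j * np.pi * 2 * xi) * fhat(xi)) < 1e-8
# Scaling: h(x) = f(3x)  =>  ĥ(ξ) = (1/3) f̂(ξ/3)
h = np.exp(-np.pi * (3 * x) ** 2)
assert abs(ft(h, xi) - fhat(xi / 3) / 3) < 1e-8
# Integration: f̂(0) = ∫ f(x) dx (= 1 for this normalized Gaussian)
assert abs(ft(f, 0.0) - 1.0) < 1e-8
```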

Convolution theorem
The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms $$\hat{f}(\xi)$$ and $$\hat{g}(\xi)$$ respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms $$\hat{f}(\xi)$$ and $$\hat{g}(\xi)$$ (under other conventions for the definition of the Fourier transform a constant factor may appear).

This means that if:


 * $$h(x) = (f*g)(x) = \int_{-\infty}^\infty f(y)g(x - y)\,dy,$$

where ∗ denotes the convolution operation, then:


 * $$\hat{h}(\xi) = \hat{f}(\xi)\cdot \hat{g}(\xi).$$

In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, $$\hat{g}(\xi)$$ represents the frequency response of the system.

Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms $$\hat{p}(\xi)$$ and $$\hat{q}(\xi)$$.
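The convolution theorem can be checked numerically. In the sketch below (our own NumPy code, not part of the article), two unit Gaussians are convolved on a grid, and the transform of the result is compared against the product of the individual transforms:

```python
import numpy as np

dx = 0.01
x = np.arange(-15, 15 + dx / 2, dx)
f = np.exp(-np.pi * x**2)

# numerical convolution h = f*f on the same grid
h = np.convolve(f, f, mode="same") * dx

def ft(vals, xi):
    # quadrature of ∫ vals(x) e^{-2πiξx} dx on a uniform grid
    return np.sum(vals * np.exp(-2j * np.pi * xi * x)) * dx

xi = 0.4
lhs = ft(h, xi)        # transform of the convolution
rhs = ft(f, xi) ** 2   # product of the two (equal) transforms
assert abs(lhs - rhs) < 1e-6
```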

Cross-correlation theorem
In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x):


 * $$h(x)=(f\star g)(x) = \int_{-\infty}^\infty \overline{f(y)}\,g(x+y)\,dy$$

then the Fourier transform of h(x) is:


 * $$\hat{h}(\xi) = \overline{\hat{f}(\xi)} \,\cdot\, \hat{g}(\xi).$$

As a special case, the autocorrelation of function f(x) is:


 * $$h(x)=(f\star f)(x)=\int_{-\infty}^\infty \overline{f(y)}f(x+y)\,dy$$

for which


 * $$\hat{h}(\xi) = \overline{\hat{f}(\xi)}\,\hat{f}(\xi) = |\hat{f}(\xi)|^2.$$
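The autocorrelation identity can likewise be checked numerically. The sketch below (our own NumPy code) uses a complex Gaussian pulse so that $$\hat{f}$$ is not purely real; `np.correlate`, which conjugates its second argument, implements the cross-correlation sum used above:

```python
import numpy as np

dx = 0.01
x = np.arange(-15, 15 + dx / 2, dx)
# a complex Gaussian pulse, chosen so that f̂ is not purely real
f = np.exp(-np.pi * x**2) * np.exp(2j * np.pi * x)

# autocorrelation h(x) = ∫ conj(f(y)) f(x+y) dy on the same grid
h = np.correlate(f, f, mode="same") * dx

def ft(vals, xi):
    # quadrature of ∫ vals(x) e^{-2πiξx} dx on a uniform grid
    return np.sum(vals * np.exp(-2j * np.pi * xi * x)) * dx

xi = 0.5
# transform of the autocorrelation equals the squared magnitude of f̂
assert abs(ft(h, xi) - abs(ft(f, xi)) ** 2) < 1e-6
```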

Other notations
Other common notations for ƒ̂(ξ) include:


 * $$\tilde{f}(\xi),\ \tilde{f}(\omega),\  F(\xi),\  \mathcal{F}\left(f\right)(\xi),\  \left(\mathcal{F}f\right)(\xi),\  \mathcal{F}(f),\  \mathcal F(\omega),\ F(\omega),\  \mathcal F(j\omega),\  \mathcal{F}\{f\},\  \mathcal{F} \left(f(t)\right),\ \mathcal{F} \{f(t)\}.$$

Denoting the Fourier transform by a capital letter corresponding to the letter of the function being transformed (such as f(x) and F(ξ)) is especially common in the sciences and engineering. In electronics, omega (ω) is often used instead of ξ because of its interpretation as angular frequency; sometimes the transform is written as F(jω), where j is the imaginary unit, to indicate its relationship with the Laplace transform, and sometimes it is written informally as F(2πf) in order to use ordinary frequency.

The interpretation of the complex function ƒ̂(ξ) may be aided by expressing it in polar coordinate form


 * $$\hat{f}(\xi)=A(\xi)e^{i\varphi(\xi)}$$

in terms of the two real functions A(ξ) and φ(ξ) where:


 * $$A(\xi) = |\hat{f}(\xi)|, \, $$

is the amplitude and


 * $$\varphi (\xi) = \arg \big( \hat{f}(\xi) \big), $$

is the phase (see arg function).

Then the inverse transform can be written:


 * $$f(x) = \int _{-\infty}^{\infty} A(\xi)\ e^{ i(2\pi \xi x +\varphi (\xi))}\,d\xi,$$

which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form $$e^{2\pi i x\xi}$$ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).

The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted $$\mathcal{F}$$, and $$\mathcal{F}(f)$$ is used to denote the Fourier transform of the function f. This mapping is linear, which means that $$\mathcal{F}$$ can also be seen as a linear transformation on the function space, and implies that the standard notation of linear algebra for applying a linear transformation to a vector (here the function f) can be used to write $$\mathcal{F} f$$ instead of $$\mathcal{F}(f)$$. Since the result of applying the Fourier transform is again a function, one may be interested in the value of this function at a particular value ξ of its variable, denoted either as $$\mathcal{F}(f)(\xi)$$ or as $$(\mathcal{F} f)(\xi)$$. Notice that in the former case, it is implicitly understood that $$\mathcal{F}$$ is applied first to f and the resulting function is then evaluated at ξ, not the other way around.

In mathematics and various applied sciences it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like $$\mathcal{F}(f(x))$$ formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed.

For example, $$\mathcal{F}( \mathrm{rect}(x) ) = \mathrm{sinc}(\xi)$$ is sometimes used to express that the Fourier transform of a rectangular function is a sinc function,

or $$\mathcal{F}(f(x+x_{0})) = \mathcal{F}(f(x)) e^{2\pi i \xi x_{0}}$$ is used to express the shift property of the Fourier transform.

Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0.

Other conventions
The Fourier transform can also be written in terms of the angular frequency ω = 2πξ, whose units are radians per second.

The substitution ξ = ω/(2π) into the formulas above produces this convention:


 * $$\hat{f}(\omega) = \int_{\mathbf{R}^n} f(x) e^{- i\omega\cdot x}\,dx.$$

Under this convention, the inverse transform becomes:


 * $$f(x) = \frac{1}{(2\pi)^n} \int_{\mathbf{R}^n} \hat{f}(\omega)e^{ i\omega \cdot x}\,d\omega. $$

Unlike the convention followed in this article, when the Fourier transform is defined this way, it is no longer a unitary transformation on L2(Rn). There is also less symmetry between the formulas for the Fourier transform and its inverse.

Another convention is to split the factor of (2π)n evenly between the Fourier transform and its inverse, which leads to definitions:


 * $$ \hat{f}(\omega) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbf{R}^n} f(x) e^{- i\omega\cdot x}\,dx $$
 * $$f(x) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbf{R}^n} \hat{f}(\omega) e^{ i\omega \cdot x}\,d\omega. $$

Under this convention, the Fourier transform is again a unitary transformation on L2(Rn). It also restores the symmetry between the Fourier transform and its inverse.
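The unitarity of this symmetric convention can be illustrated numerically in one dimension: the energy ∫|f(x)|² dx equals ∫|f̂(ω)|² dω. The sketch below (our own NumPy code, not part of the article) checks this for a Gaussian:

```python
import numpy as np

dx = 0.008
x = np.arange(-8, 8 + dx / 2, dx)
f = np.exp(-np.pi * x**2)

omega = np.linspace(-25, 25, 2001)
# unitary convention: f̂(ω) = (2π)^{-1/2} ∫ f(x) e^{-iωx} dx
fhat = np.array([np.sum(f * np.exp(-1j * w * x)) * dx for w in omega]) / np.sqrt(2 * np.pi)

domega = omega[1] - omega[0]
energy_x = np.sum(np.abs(f) ** 2) * dx        # ∫ |f(x)|² dx
energy_w = np.sum(np.abs(fhat) ** 2) * domega # ∫ |f̂(ω)|² dω
assert abs(energy_x - energy_w) < 1e-8
```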

Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Other than that, the choice is (again) a matter of convention.

As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined as $$E(e^{it\cdot X})=\int e^{it\cdot x}\,d\mu_X(x)$$.

As in the case of the "non-unitary angular frequency" convention above, there is no factor of 2π appearing in either the integral or the exponential. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponential.

Tables of important Fourier transforms
The following tables record some closed form Fourier transforms. For functions f(x), g(x) and h(x) denote their Fourier transforms by ƒ̂, $$\hat{g}$$, and $$\hat{h}$$ respectively. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.

Functional relationships
The Fourier transforms in this table may be found in or.

Square-integrable functions
The Fourier transforms in this table may be found in, , or the appendix of.

Distributions
The Fourier transforms in this table may be found in or the appendix of.

Two-dimensional functions

 * Remarks

To 400: The variables ξx, ξy, ωx, ωy, νx and νy are real numbers. The integrals are taken over the entire plane.

To 401: Both functions are Gaussians, which may not have unit volume.

To 402: The function is defined by circ(r) = 1 for 0 ≤ r ≤ 1, and is 0 otherwise. This is the Airy distribution, and is expressed using J1 (the order-1 Bessel function of the first kind).

Formulas for general n-dimensional functions

 * Remarks

To 501: The function χ[0,1] is the indicator function of the interval [0, 1]. The function Γ(x) is the gamma function. The function Jn/2 + δ is a Bessel function of the first kind, with order n/2 + δ. Taking n = 2 and δ = 0 produces 402.

To 502: See Riesz potential. The formula also holds for all α ≠ −n, −n − 1, ... by analytic continuation, but then the function and its Fourier transforms need to be understood as suitably regularized tempered distributions. See homogeneous distribution.

To 503: This is the formula for a multivariate normal distribution normalized to 1 with a mean of 0. Bold variables are vectors or matrices. Following the notation of the aforementioned page, $$\boldsymbol{\Sigma}=\boldsymbol{\sigma}\boldsymbol{\sigma}^{T}$$ and $$\boldsymbol{\Sigma}^{-1}=\boldsymbol{\sigma}^{-T}\boldsymbol{\sigma}^{-1}$$.