The Fourier transform, named after Joseph Fourier, is a mathematical transformation used to convert signals between the time (or spatial) domain and the frequency domain; it has many applications in physics and engineering. It is reversible, being able to transform from either domain to the other. The term refers both to the transform operation and to the function it produces.

In the case of a periodic function over time (for example, a continuous but not necessarily sinusoidal musical sound), the Fourier transform can be simplified to the calculation of a discrete set of complex amplitudes, called Fourier series coefficients. They represent the frequency spectrum of the original time-domain signal. Also, when a time-domain function is sampled to facilitate storage or computer processing, it is still possible to recreate a version of the original Fourier transform according to the Poisson summation formula, also known as the discrete-time Fourier transform. See also Fourier analysis and List of Fourier-related transforms.

Definition
There are several common conventions for defining the Fourier transform $$\hat{f}$$ of an integrable function $$f : \mathbb R \rightarrow \mathbb C$$. This article will use the following definition:


 * $$\hat{f}(\xi) = \int_{-\infty}^\infty f(x)\ e^{- 2\pi i x \xi}\,dx$$,  for any real number ξ.

When the independent variable x represents time (with SI unit of seconds), the transform variable ξ represents frequency (in hertz). Under suitable conditions, $$f$$ is determined by $$\hat f$$ via the inverse transform:


 * $$f(x) = \int_{-\infty}^\infty \hat f(\xi)\ e^{2 \pi i \xi x}\,d\xi$$,   for any real number x.

The statement that $$f$$ can be reconstructed from $$\hat f$$ is known as the Fourier inversion theorem, and was first introduced in Fourier's Analytical Theory of Heat, although what would be considered a proof by modern standards was not given until much later. The functions $$f$$ and $$\hat{f}$$ are often referred to as a Fourier integral pair or Fourier transform pair.

For other common conventions and notations, including using the angular frequency ω instead of the frequency ξ, see Other conventions and Other notations below. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and ξ momentum.
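As a numerical sanity check of the convention above, the following sketch approximates $$\hat f(\xi)$$ by a Riemann sum for the Gaussian $$e^{-\pi x^2}$$, which is its own Fourier transform. The integration window and grid size are illustrative choices, not part of the definition.

```python
import numpy as np

# Approximate f_hat(xi) = ∫ f(x) e^{-2πi x ξ} dx by a Riemann sum.
# The Gaussian e^{-πx²} is its own transform, which gives an easy check.
x = np.linspace(-8.0, 8.0, 16001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

def f_hat(xi):
    return np.sum(f * np.exp(-2j * np.pi * x * xi)) * dx

for xi in (0.0, 0.5, 1.0):
    print(xi, f_hat(xi).real, np.exp(-np.pi * xi**2))  # columns agree
```

Because the integrand is analytic and decays rapidly, even this crude quadrature matches the closed form to many digits.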

Introduction
The motivation for the Fourier transform comes from the study of Fourier series. In the study of Fourier series, complicated but periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. The Fourier transform is an extension of the Fourier series that results when the period of the represented function is lengthened and allowed to approach infinity.

Due to the properties of sine and cosine, it is possible to recover the amplitude of each wave in a Fourier series using an integral. In many cases it is desirable to use Euler's formula, which states that $$e^{2\pi i\theta} = \cos(2\pi\theta) + i\sin(2\pi\theta)$$, to write Fourier series in terms of the basic waves $$e^{2\pi i\theta}$$. This has the advantage of simplifying many of the formulas involved, and provides a formulation for Fourier series that more closely resembles the definition followed in this article. Re-writing sines and cosines as complex exponentials makes it necessary for the Fourier coefficients to be complex valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or initial angle) of the wave. These complex exponentials sometimes contain negative "frequencies". If θ is measured in seconds, then the waves $$e^{2\pi i\theta}$$ and $$e^{-2\pi i\theta}$$ both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is still closely related.

There is a close connection between the definition of Fourier series and the Fourier transform for functions f which are zero outside of an interval. For such a function, we can calculate its Fourier series on any interval that includes the points where f is not identically zero. The Fourier transform is also defined for such a function. As we increase the length of the interval on which we calculate the Fourier series, then the Fourier series coefficients begin to look like the Fourier transform and the sum of the Fourier series of f begins to look like the inverse Fourier transform. To explain this more precisely, suppose that T is large enough so that the interval [−T/2, T/2] contains the interval on which f is not identically zero. Then the n-th series coefficient cn is given by:


 * $$c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(x)\ e^{-2\pi i(n/T) x} dx.$$

Comparing this to the definition of the Fourier transform, it follows that $$c_n = (1/T)\hat f(n/T)$$ since f(x) is zero outside [−T/2,T/2]. Thus the Fourier coefficients are just the values of the Fourier transform sampled on a grid of width 1/T, multiplied by the grid width 1/T.
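The relation $$c_n = (1/T)\hat f(n/T)$$ can be checked numerically. The sketch below uses the triangle function (an arbitrary illustrative choice of compactly supported f, whose transform is sinc², with `np.sinc(t)` denoting sin(πt)/(πt)) and compares its Fourier coefficients on [−2, 2] with samples of the transform.

```python
import numpy as np

# Check c_n = (1/T) f_hat(n/T) for a function supported in [-1, 1]:
# the triangle f(x) = max(1 - |x|, 0), whose Fourier transform is sinc².
T = 4.0
x = np.linspace(-T/2, T/2, 8001)
dx = x[1] - x[0]
f = np.maximum(1 - np.abs(x), 0.0)

def coeff(n):                  # Fourier series coefficient c_n on [-T/2, T/2]
    return np.sum(f * np.exp(-2j * np.pi * (n/T) * x)) * dx / T

def f_hat(xi):                 # known transform of the triangle
    return np.sinc(xi)**2      # np.sinc(t) = sin(pi t)/(pi t)

for n in range(-3, 4):
    print(n, coeff(n).real, f_hat(n/T) / T)   # columns agree
```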

Under appropriate conditions, the Fourier series of f will equal the function f. In other words, f can be written:


 * $$f(x)=\sum_{n=-\infty}^\infty c_n\ e^{2\pi i(n/T) x} =\sum_{n=-\infty}^\infty \hat{f}(\xi_n)\ e^{2\pi i\xi_n x}\Delta\xi,$$

where the last sum is simply the first sum rewritten using the definitions ξn = n/T, and Δξ = (n + 1)/T − n/T = 1/T.

This second sum is a Riemann sum, and so by letting T → ∞ it will converge to the integral for the inverse Fourier transform given in the definition section. Under suitable conditions this argument may be made precise.

In the study of Fourier series the numbers cn could be thought of as the "amount" of the wave present in the Fourier series of f. Similarly, as seen above, the Fourier transform can be thought of as a function that measures how much of each individual frequency is present in our function f, and we can recombine these waves by using an integral (or "continuous sum") to reproduce the original function.

Example
The following images provide a visual illustration of how the Fourier transform measures whether a frequency is present in a particular function. The depicted function $$f(t) = \cos(6\pi t)\, e^{-\pi t^2}$$ oscillates at 3 hertz (if t measures seconds) and tends quickly to 0. (The second factor in this equation is an envelope function that shapes the continuous sinusoid into a short pulse. Its general form is a Gaussian function.) This function was specially chosen to have a real Fourier transform that can easily be plotted. The first image contains its graph. In order to calculate $$\hat f(3)$$ we must integrate $$e^{-2\pi i(3t)}f(t)$$. The second image shows the plot of the real and imaginary parts of this function. The real part of the integrand is almost always positive: because the two factors oscillate at the same rate, when f(t) is positive, so is the real part of $$e^{-2\pi i(3t)}$$, and when f(t) is negative, the real part of $$e^{-2\pi i(3t)}$$ is negative as well. The result is that integrating the real part of the integrand yields a relatively large number (in this case 0.5). On the other hand, when one tries to measure a frequency that is not present, as in the case of $$\hat f(5)$$, the integrand oscillates enough that the integral is very small. The general situation may be a bit more complicated than this, but in spirit this is how the Fourier transform measures how much of an individual frequency is present in a function f(t).
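The two measurements described above can be reproduced by direct quadrature (the grid below is an illustrative choice): the 3 Hz component gives a value near 0.5, while probing the absent 5 Hz component gives a value near zero.

```python
import numpy as np

# f(t) = cos(6πt) e^{-πt²} oscillates at 3 Hz, so f_hat(3) is relatively
# large (about 0.5) while f_hat(5) is tiny.
t = np.linspace(-6.0, 6.0, 24001)
dt = t[1] - t[0]
f = np.cos(6 * np.pi * t) * np.exp(-np.pi * t**2)

def f_hat(xi):
    return np.sum(f * np.exp(-2j * np.pi * t * xi)) * dt

print(abs(f_hat(3)))   # ≈ 0.5  : the 3 Hz component is present
print(abs(f_hat(5)))   # ≈ 2e-6 : essentially no 5 Hz component
```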

Properties of the Fourier transform
Here we assume f(x), g(x) and h(x) are integrable functions: Lebesgue-measurable on the real line and satisfying:


 * $$\int_{-\infty}^\infty |f(x)| \, dx < \infty.$$

We denote the Fourier transforms of these functions by $$\hat{f}(\xi)$$, $$\hat{g}(\xi)$$ and $$\hat{h}(\xi)$$ respectively.

Basic properties
The Fourier transform has the following basic properties:


 * Linearity


 * For any complex numbers a and b, if h(x) = af(x) + bg(x), then $$\hat{h}(\xi)=a\cdot \hat{f}(\xi) + b\cdot\hat{g}(\xi).$$


 * Translation


 * For any real number x0, if $$h(x)=f(x-x_0),$$ then $$\hat{h}(\xi)= e^{-i\,2\pi \,x_0\,\xi }\hat{f}(\xi).$$


 * Modulation


 * For any real number ξ0, if $$h(x)=e^{i \, 2\pi \, x \,\xi_0}f(x),$$ then $$\hat{h}(\xi) = \hat{f}(\xi-\xi_{0}).$$


 * Scaling


 * For a non-zero real number a, if h(x) = f(ax), then $$\hat{h}(\xi)=\frac{1}{|a|}\hat{f}\left(\frac{\xi}{a}\right).$$ The case a = −1 leads to the time-reversal property, which states: if h(x) = f(−x), then $$\hat{h}(\xi)=\hat{f}(-\xi).$$


 * Conjugation


 * If $$h(x)=\overline{f(x)},$$ then $$\hat{h}(\xi) = \overline{\hat{f}(-\xi)}.$$


 * In particular, if f is real, then one has the reality condition $$\hat{f}(-\xi)=\overline{\hat{f}(\xi)},$$ that is, $$\hat{f}$$ is a Hermitian function.


 * And if f is purely imaginary, then $$\hat{f}(-\xi)=-\overline{\hat{f}(\xi)}.$$


 * Integration


 * Substituting $$\xi=0 $$ in the definition, we obtain


 * $$\hat{f}(0) = \int_{-\infty}^{\infty} f(x)\,dx$$

That is, the evaluation of the Fourier transform at the origin ($$\xi=0$$) equals the integral of f over its entire domain.
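The translation and scaling properties listed above can be spot-checked by quadrature. The sketch below does so for a Gaussian (an illustrative test function; the shift x0, scale a, and frequency ξ are arbitrary choices).

```python
import numpy as np

# Spot-check translation and scaling for f(x) = e^{-πx²} by quadrature.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def ft(fvals, xi):
    return np.sum(fvals * np.exp(-2j * np.pi * x * xi)) * dx

f = np.exp(-np.pi * x**2)
x0, a, xi = 1.5, 2.0, 0.7

# Translation: h(x) = f(x - x0)  ⇒  h_hat(ξ) = e^{-2πi x0 ξ} f_hat(ξ)
h1 = np.exp(-np.pi * (x - x0)**2)
print(abs(ft(h1, xi) - np.exp(-2j * np.pi * x0 * xi) * ft(f, xi)))  # ≈ 0

# Scaling: h(x) = f(a x)  ⇒  h_hat(ξ) = (1/|a|) f_hat(ξ/a)
h2 = np.exp(-np.pi * (a * x)**2)
print(abs(ft(h2, xi) - ft(f, xi / a) / abs(a)))                     # ≈ 0
```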

Invertibility and periodicity
Under suitable conditions on the function f, it can be recovered from its Fourier transform $$\hat{f}.$$ Indeed, denoting the Fourier transform operator by $$\mathcal{F},$$ so $$\mathcal{F}(f) := \hat{f},$$ then for suitable functions, applying the Fourier transform twice simply flips the function: $$\mathcal{F}^2(f)(x) = f(-x),$$ which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields $$\mathcal{F}^4(f) = f,$$ so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: $$\mathcal{F}^3(\hat{f}) = f.$$ In particular the Fourier transform is invertible (under suitable conditions).

More precisely, defining the parity operator $$\mathcal{P}$$ that inverts time, $$\mathcal{P}[f]\colon t \mapsto f(-t)$$:
 * $$\mathcal{F}^0 = \mathrm{Id}, \qquad \mathcal{F}^1 = \mathcal{F}, \qquad \mathcal{F}^2 = \mathcal{P}, \qquad \mathcal{F}^4 = \mathrm{Id}$$
 * $$\mathcal{F}^3 = \mathcal{F}^{-1} = \mathcal{P} \circ \mathcal{F} = \mathcal{F} \circ \mathcal{P}$$

These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem.

This four-fold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis.
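A discrete analogue of this four-fold periodicity can be observed with the unitary DFT matrix, a finite-dimensional stand-in for the continuous operator (not the operator itself): applying it twice reverses the signal modulo N, and applying it four times gives the identity.

```python
import numpy as np

# Unitary DFT matrix: F[j, k] = e^{-2πi jk/N} / sqrt(N).
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)

x = np.random.default_rng(0).standard_normal(N)
flip = x[(-np.arange(N)) % N]   # "time reversal": x[0], x[N-1], ..., x[1]

print(np.allclose(F @ F @ x, flip))                           # True: F² = parity
print(np.allclose(np.linalg.matrix_power(F, 4), np.eye(N)))   # True: F⁴ = Id
```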

Uniform continuity and the Riemann–Lebesgue lemma
The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties.

The Fourier transform, $$\hat f$$, of any integrable function f is uniformly continuous and $$\|\hat{f}\|_{\infty}\leq \|f\|_1$$. By the Riemann–Lebesgue lemma,


 * $$\hat{f}(\xi)\to 0\text{ as }|\xi|\to \infty.$$

However, $$\hat f$$ need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent.
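Both claims can be illustrated numerically (grid sizes below are arbitrary choices): the transform of the rectangular function matches sinc, and the partial integrals of |sinc| keep growing, roughly logarithmically, so sinc decays without being absolutely integrable.

```python
import numpy as np

# Transform of rect (equal to 1 on [-1/2, 1/2]) via a Riemann sum,
# compared with sinc: np.sinc(t) = sin(πt)/(πt).
x = np.linspace(-0.5, 0.5, 4001)
dx = x[1] - x[0]

def rect_hat(xi):
    return np.sum(np.exp(-2j * np.pi * x * xi)) * dx

print(rect_hat(0.25).real, np.sinc(0.25))   # ≈ 0.9003 in both columns
print(rect_hat(1.5).real, np.sinc(1.5))     # ≈ -0.2122 in both columns

# Partial integrals of |sinc| grow without bound (roughly like log R).
vals = []
for R in (10, 100, 1000):
    s = np.linspace(-R, R, 200 * R + 1)
    vals.append(np.sum(np.abs(np.sinc(s))) * (s[1] - s[0]))
    print(R, vals[-1])
```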

It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f and $$\hat f$$ are integrable, the inverse equality


 * $$f(x) = \int_{-\infty}^\infty \hat f(\xi) e^{2 i \pi x \xi} \, d\xi$$

holds almost everywhere. That is, the Fourier transform is injective on L1(R). (But if f is continuous, then equality holds for every x.)

Plancherel theorem and Parseval's theorem
Let f(x) and g(x) be integrable, and let $$\hat{f}(\xi)$$ and $$\hat{g}(\xi)$$ be their Fourier transforms. If f(x) and g(x) are also square-integrable, then we have Parseval's theorem:


 * $$\int_{-\infty}^{\infty} f(x) \overline{g(x)} \,{\rm d}x = \int_{-\infty}^\infty \hat{f}(\xi) \overline{\hat{g}(\xi)} \,d\xi,$$

where the bar denotes complex conjugation.

The Plancherel theorem, which is equivalent to Parseval's theorem, states:


 * $$\int_{-\infty}^\infty \left| f(x) \right|^2\,dx = \int_{-\infty}^\infty \left| \hat{f}(\xi) \right|^2\,d\xi. $$

The Plancherel theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R)∩L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). The Plancherel theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. Depending on the author, either of these theorems might be referred to as the Plancherel theorem or as Parseval's theorem.
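The discrete counterpart of this energy-preservation statement can be verified directly with the DFT (the signal below is a random illustrative example; note that `np.fft.fft` is unnormalized, hence the 1/N factor).

```python
import numpy as np

# Discrete analogue of the Plancherel theorem: for X = np.fft.fft(x),
# sum |x[n]|² = (1/N) sum |X[k]|².
rng = np.random.default_rng(1)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x)**2)
energy_freq = np.sum(np.abs(X)**2) / len(x)
print(energy_time, energy_freq)   # same value
```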

See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.

Poisson summation formula
The Poisson summation formula (PSF) is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. It has a variety of useful forms that are derived from the basic one by application of the Fourier transform's scaling and time-shifting properties. The frequency-domain dual of the standard PSF is also called the discrete-time Fourier transform, which leads directly to:


 * a popular, graphical, frequency-domain representation of the phenomenon of aliasing, and
 * a proof of the Nyquist-Shannon sampling theorem.
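In its basic form the PSF states that $$\sum_n f(n) = \sum_k \hat f(k)$$ for sufficiently nice f. The sketch below checks this for a scaled Gaussian $$f(x) = e^{-\pi a x^2}$$, whose transform is $$a^{-1/2} e^{-\pi \xi^2 / a}$$ by the scaling property (the value a = 0.5 and the truncation range are illustrative choices).

```python
import numpy as np

# Poisson summation for f(x) = e^{-π a x²}, f_hat(ξ) = a^{-1/2} e^{-π ξ²/a}:
# the two lattice sums Σ_n f(n) and Σ_k f_hat(k) agree.
a = 0.5
n = np.arange(-50, 51)
lhs = np.sum(np.exp(-np.pi * a * n**2))
rhs = np.sum(np.exp(-np.pi * n**2 / a)) / np.sqrt(a)
print(lhs, rhs)   # both sums agree (≈ 1.4195 for a = 0.5)
```

Truncating the sums at |n| = 50 is harmless here because both Gaussians decay far below machine precision long before that.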

Convolution theorem
The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms $$\hat{f}(\xi)$$ and $$\hat{g}(\xi)$$ respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms $$\hat{f}(\xi)$$ and $$\hat{g}(\xi)$$ (under other conventions for the definition of the Fourier transform a constant factor may appear).

This means that if:


 * $$h(x) = (f*g)(x) = \int_{-\infty}^\infty f(y)g(x - y)\,dy,$$

where ∗ denotes the convolution operation, then:


 * $$\hat{h}(\xi) = \hat{f}(\xi)\cdot \hat{g}(\xi).$$

In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, $$\hat{g}(\xi)$$ represents the frequency response of the system.

Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms $$\hat{p}(\xi)$$ and $$\hat{q}(\xi)$$.
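The discrete counterpart of the convolution theorem is exact for the DFT and circular convolution, and is easy to verify (the random signals below are illustrative):

```python
import numpy as np

# Discrete analogue of the convolution theorem: the DFT of a circular
# convolution equals the pointwise product of the DFTs.
rng = np.random.default_rng(2)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution h[n] = Σ_m f[m] g[(n - m) mod N]
h = np.array([np.sum(f * g[(n - np.arange(N)) % N]) for n in range(N)])

print(np.allclose(np.fft.fft(h), np.fft.fft(f) * np.fft.fft(g)))  # True
```

This identity is the basis of FFT-based fast convolution: multiplying in the frequency domain costs O(N log N) instead of O(N²).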

Cross-correlation theorem
In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x):


 * $$h(x)=(f\star g)(x) = \int_{-\infty}^\infty \overline{f(y)}\,g(x+y)\,dy$$

then the Fourier transform of h(x) is:


 * $$\hat{h}(\xi) = \overline{\hat{f}(\xi)} \,\cdot\, \hat{g}(\xi).$$

As a special case, the autocorrelation of function f(x) is:


 * $$h(x)=(f\star f)(x)=\int_{-\infty}^\infty \overline{f(y)}f(x+y)\,dy$$

for which


 * $$\hat{h}(\xi) = \overline{\hat{f}(\xi)}\,\hat{f}(\xi) = |\hat{f}(\xi)|^2.$$
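The discrete counterpart of this autocorrelation identity (often associated with the Wiener–Khinchin theorem) can be verified with the DFT and circular correlation (random illustrative data below):

```python
import numpy as np

# Discrete analogue: the DFT of the circular autocorrelation
# h[n] = Σ_m conj(f[m]) f[(m+n) mod N]  is |F|².
rng = np.random.default_rng(3)
N = 32
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

h = np.array([np.sum(np.conj(f) * f[(np.arange(N) + n) % N]) for n in range(N)])
F = np.fft.fft(f)

print(np.allclose(np.fft.fft(h), np.abs(F)**2))   # True
```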

Eigenfunctions
One important choice of an orthonormal basis for L2(R) is given by the Hermite functions


 * $${\psi}_n(x) = \frac{2^{1/4}}{\sqrt{n!}} \, e^{-\pi x^2}\mathrm{He}_n(2x\sqrt{\pi}),$$

where Hen(x) are the "probabilist's" Hermite polynomials, defined by


 * $$\mathrm{He}_n(x) = (-1)^n e^{\frac{x^2}{2}}\left(\frac{d}{dx}\right)^n e^{-\frac{x^2}{2}}$$

Under this convention for the Fourier transform, we have that


 * $$ \hat\psi_n(\xi) = (-i)^n {\psi}_n(\xi) $$.

In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R). However, this choice of eigenfunctions is not unique. There are only four different eigenvalues of the Fourier transform (±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3, where the Fourier transform acts on Hk simply by multiplication by ik.

Since the complete set of Hermite functions provides a resolution of the identity, the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed. This approach to defining the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon.
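The eigenfunction relation $$\hat\psi_n = (-i)^n \psi_n$$ can be checked by quadrature for the first two Hermite functions, using He0(x) = 1 and He1(x) = x in the definition above (grid and sample frequencies below are illustrative choices):

```python
import numpy as np

# Check ψ̂_n = (-i)^n ψ_n for n = 0, 1 under this article's convention.
x = np.linspace(-8.0, 8.0, 32001)
dx = x[1] - x[0]

def psi(n, t):
    t = np.asarray(t, dtype=float)
    he = 1.0 if n == 0 else 2 * np.sqrt(np.pi) * t   # He_n(2√π t), n = 0, 1
    return 2**0.25 * np.exp(-np.pi * t**2) * he

for n in (0, 1):
    for xi in (0.3, 1.1):
        lhs = np.sum(psi(n, x) * np.exp(-2j * np.pi * x * xi)) * dx
        rhs = (-1j)**n * psi(n, xi)
        print(n, xi, abs(lhs - rhs))   # ≈ 0 in every row
```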

Fourier transform on Euclidean space
The Fourier transform can be defined in any number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition:


 * $$\hat{f}(\boldsymbol{\xi}) = \mathcal{F}(f)(\boldsymbol{\xi}) = \int_{\R^n} f(\mathbf{x}) e^{-2\pi i \mathbf{x}\cdot\boldsymbol{\xi}} \, d\mathbf{x}$$

where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. The dot product is sometimes written as $$\left\langle \mathbf x, \boldsymbol \xi \right\rangle$$.

All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds.

Uncertainty principle
Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform $$\hat f(\xi)$$ must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we "squeeze" a function in x, its Fourier transform "stretches out" in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform.

The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form.

Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized:


 * $$\int_{-\infty}^\infty |f(x)|^2 \,dx=1.$$

It follows from the Plancherel theorem that $$\hat f(\xi)$$ is also normalized.

The spread around x = 0 may be measured by the dispersion about zero defined by


 * $$D_0(f)=\int_{-\infty}^\infty x^2|f(x)|^2\,dx.$$

In probability terms, this is the second moment of |f(x)|2 about zero.

The Uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then


 * $$D_0(f)D_0(\hat{f}) \geq \frac{1}{16\pi^2}$$.

The equality is attained only in the case $$f(x)=C_1 \, e^{{-\pi x^2}/{\sigma^2}}$$ (hence $$\hat{f}(\xi)= \sigma C_1 \, e^{-\pi\sigma^2\xi^2}$$), where σ > 0 is arbitrary and $$C_1 = \sqrt[4]{2} / \sqrt{\sigma}$$ so that f is L2-normalized. In other words, f is a (normalized) Gaussian function with variance σ2, centered at zero, and its Fourier transform is a Gaussian function with variance σ−2.
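Both the bound and the equality case can be checked numerically. The sketch below computes the dispersion product for the Gaussian with σ = 1 (where equality holds) and for a normalized triangle function (an illustrative non-Gaussian, whose transform is a multiple of sinc², giving a strictly larger product). Grid and window sizes are arbitrary choices.

```python
import numpy as np

# Check D0(f) D0(f_hat) ≥ 1/(16π²), with equality for the Gaussian.
x = np.linspace(-30.0, 30.0, 120001)
dx = x[1] - x[0]

def dispersion(vals):
    return np.sum(x**2 * np.abs(vals)**2) * dx

bound = 1 / (16 * np.pi**2)

g = 2**0.25 * np.exp(-np.pi * x**2)                 # L2-normalized, g_hat = g
print(dispersion(g)**2, bound)                      # equal: ≈ 0.006333

tri = np.sqrt(1.5) * np.maximum(1 - np.abs(x), 0)   # L2-normalized triangle
tri_hat = np.sqrt(1.5) * np.sinc(x)**2              # its transform
print(dispersion(tri) * dispersion(tri_hat))        # ≈ 3/(40π²) > bound
```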

In fact, this inequality implies that:


 * $$\left(\int_{-\infty}^\infty (x-x_0)^2|f(x)|^2\,dx\right)\left(\int_{-\infty}^\infty(\xi-\xi_0)^2|\hat{f}(\xi)|^2\,d\xi\right)\geq \frac{1}{16\pi^2}$$

for any x0, ξ0 ∈ R.

In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, to within a factor of Planck's constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle.

A stronger uncertainty principle is the Hirschman uncertainty principle which is expressed as:


 * $$H(|f|^2)+H(|\hat{f}|^2)\ge \log(e/2)$$

where H(p) is the differential entropy of the probability density function p(x):


 * $$H(p) = -\int_{-\infty}^\infty p(x)\log(p(x))dx$$

where the logarithms may be in any base which is consistent. The equality is attained for a Gaussian, as in the previous case.

Spherical harmonics
Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if $$f(\mathbf x) = e^{-\pi|\mathbf x|^2} P(\mathbf x)$$ for some P(x) in Ak, then $$\hat{f}(\xi)=i^{-k}f(\xi)$$. Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk; the Fourier transform maps each space Hk to itself, and it is possible to characterize the action of the Fourier transform on each space Hk. Let f(x) = f0(|x|)P(x) (with P(x) in Ak); then $$\hat{f}(\xi)=F_0(|\xi|)P(\xi)$$ where


 * $$F_0(r)=2\pi i^{-k}r^{-(n+2k-2)/2}\int_0^\infty f_0(s)J_{(n+2k-2)/2}(2\pi rs)s^{(n+2k)/2}\,ds.$$

Here J(n + 2k − 2)/2 denotes the Bessel function of the first kind with order (n + 2k − 2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function. Note that this is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n, allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one.

Restriction problems
In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. Surprisingly, it is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ (2n + 2)/(n + 3).

One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞): such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by:


 * $$f_R(x) = \int_{E_R}\hat{f}(\xi) e^{2\pi ix\cdot\xi}\, d\xi, \quad x \in \mathbf{R}^n.$$

Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that ER is taken to be a cube with side length R, then convergence still holds. Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2. In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp.

On Lp spaces

 * On L1

The definition of the Fourier transform by the integral formula


 * $$\hat{f}(\xi) = \int_{\mathbf{R}^n} f(x)e^{-2\pi i \xi\cdot x}\,dx$$

is valid for Lebesgue integrable functions f; that is, f ∈ L1(Rn).

The Fourier transform $$\mathcal{F}$$: L1(Rn) → L∞(Rn) is a bounded operator. This follows from the observation that


 * $$\vert\hat{f}(\xi)\vert \leq \int_{\mathbf{R}^n} \vert f(x)\vert \,dx,$$

which shows that its operator norm is bounded by 1. Indeed it equals 1, which can be seen, for example, from the transform of the rect function. The image of L1 is a subset of the space C0(Rn) of continuous functions that tend to zero at infinity (the Riemann–Lebesgue lemma), although it is not the entire space. Indeed, there is no simple characterization of the image.


 * On L2

Since compactly supported smooth functions are integrable and dense in L2(Rn), the Plancherel theorem allows us to extend the definition of the Fourier transform to general functions in L2(Rn) by continuity arguments. The Fourier transform in L2(Rn) is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, here meaning that for an L2 function f,


 * $$\hat{f}(\xi) = \lim_{R\to\infty}\int_{|x|\le R} f(x) e^{-2\pi i x\cdot\xi}\,dx$$

where the limit is taken in the L2 sense. Many of the properties of the Fourier transform in L1 carry over to L2, by a suitable limiting argument.

Furthermore, $$\mathcal{F}$$: L2(Rn) → L2(Rn) is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product; in this case these properties follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have


 * $$\int_{\mathbf{R}^n} f(x)\mathcal{F}g(x)\,dx = \int_{\mathbf{R}^n} \mathcal{F}f(x)g(x)\,dx. $$

In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself.


 * On other Lp

The definition of the Fourier transform can be extended to functions in Lp(Rn) for 1 ≤ p ≤ 2 by decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where $$q=p/(p-1)$$ is the Hölder conjugate of p, by the Hausdorff–Young inequality. However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions. In fact, it can be shown that there are functions in Lp with p > 2 whose Fourier transform is not defined as a function.

Tempered distributions
One might consider enlarging the domain of the Fourier transform from L1 + L2 by considering generalized functions, or distributions. A distribution on Rn is a continuous linear functional on the space Cc(Rn) of compactly supported smooth functions, equipped with a suitable topology. The strategy is then to consider the action of the Fourier transform on Cc(Rn) and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map Cc(Rn) to Cc(Rn). In fact the Fourier transform of an element in Cc(Rn) cannot vanish on an open set; see the above discussion on the uncertainty principle. The right space here is the slightly larger space of Schwartz functions. The Fourier transform is an automorphism on the Schwartz space, as a topological vector space, and thus induces an automorphism on its dual, the space of tempered distributions. The tempered distributions include all the integrable functions mentioned above, as well as well-behaved functions of polynomial growth and distributions of compact support.

For the definition of the Fourier transform of a tempered distribution, let f and g be integrable functions, and let $$\hat{f}$$ and $$\hat{g}$$ be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula:


 * $$\int_{\mathbf{R}^n}\hat{f}(x)g(x)\,dx=\int_{\mathbf{R}^n}f(x)\hat{g}(x)\,dx.$$

Every integrable function f defines (induces) a distribution Tf by the relation


 * $$T_f(\varphi)=\int_{\mathbf{R}^n}f(x)\varphi(x)\,dx$$  for all Schwartz functions φ.

So it makes sense to define the Fourier transform $$\hat{T}_f$$ of Tf by


 * $$\hat{T}_f (\varphi)= T_f(\hat{\varphi})$$

for all Schwartz functions φ. Extending this to all tempered distributions T gives the general definition of the Fourier transform.

Distributions can be differentiated and the above mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.
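The multiplication formula above, which underpins this duality definition, can be checked numerically for concrete functions. The sketch below uses two Gaussians whose transforms are known in closed form via the scaling property (the scale factor 2 and the grid are illustrative choices).

```python
import numpy as np

# Check the multiplication formula ∫ f_hat·g = ∫ f·g_hat by quadrature.
x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]

f = np.exp(-np.pi * x**2)
f_hat = np.exp(-np.pi * x**2)             # Gaussian is its own transform

g = np.exp(-np.pi * (x / 2)**2)
g_hat = 2 * np.exp(-np.pi * (2 * x)**2)   # FT of e^{-π(x/a)²} is a e^{-π(aξ)²}, a = 2

lhs = np.sum(f_hat * g) * dx
rhs = np.sum(f * g_hat) * dx
print(lhs, rhs)   # both equal (≈ 2/√5)
```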

Fourier–Stieltjes transform
The Fourier transform of a finite Borel measure μ on Rn is given by:


 * $$\hat\mu(\xi)=\int_{\mathbf{R}^n} \mathrm{e}^{-2\pi i x \cdot \xi}\,d\mu.$$

This transform continues to enjoy many of the properties of the Fourier transform of integrable functions. One notable difference is that the Riemann–Lebesgue lemma fails for measures. In the case that dμ = f(x)dx, then the formula above reduces to the usual definition for the Fourier transform of f. In the case that μ is the probability distribution associated to a random variable X, the Fourier-Stieltjes transform is closely related to the characteristic function, but the typical conventions in probability theory take eix·ξ instead of e−2πix·ξ. In the case when the distribution has a probability density function this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants.

The Fourier transform may be used to give a characterization of measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle.

Furthermore, the Dirac delta function is not a function but it is a finite Borel measure. Its Fourier transform is a constant function (whose specific value depends upon the form of the Fourier transform used).
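The failure of the Riemann–Lebesgue lemma for measures is easy to see numerically with a discrete measure (the two-point measure below is an illustrative choice): its Fourier–Stieltjes transform is cos(2πξ), which keeps returning to 1 instead of vanishing at infinity.

```python
import numpy as np

# Fourier–Stieltjes transform of μ = ½(δ_{-1} + δ_1):
# μ_hat(ξ) = ½(e^{2πiξ} + e^{-2πiξ}) = cos(2πξ), which does not decay.
points = np.array([-1.0, 1.0])
weights = np.array([0.5, 0.5])

def mu_hat(xi):
    return np.sum(weights * np.exp(-2j * np.pi * points * xi))

for xi in (0.0, 0.25, 100.0):
    print(xi, mu_hat(xi).real, np.cos(2 * np.pi * xi))   # columns agree
```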

Locally compact abelian groups
The Fourier transform may be generalized to any locally compact abelian group. A locally compact abelian group is an abelian group which is at the same time a locally compact Hausdorff topological space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation invariant measure μ, called Haar measure. For a locally compact abelian group G, the set of irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of pointwise convergence, the set of characters $$\hat G$$ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by:


 * $$\hat{f}(\xi)=\int_G \xi(x)f(x)\,d\mu\qquad\text{for any }\xi\in\hat G.$$

The Riemann–Lebesgue lemma holds in this case: $$\hat{f}(\xi)$$ is a function vanishing at infinity on $$\hat G$$.
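A concrete instance of this definition, taking as an assumption the finite cyclic group G = Z/NZ with counting measure as Haar measure: its characters are the maps $$x \mapsto e^{-2\pi ikx/N}$$ (the dual group is itself isomorphic to Z/NZ), and the abstract definition above reduces to the ordinary discrete Fourier transform. A NumPy sketch:

```python
import numpy as np

# G = Z/NZ with counting measure as Haar measure.  The characters are
# chi_k(x) = e^{-2 pi i k x / N}, and summing chi_k(x) f(x) over the group
# reproduces the ordinary DFT of f.
N = 8
rng = np.random.default_rng(1)
f = rng.standard_normal(N)

x = np.arange(N)
chars = np.exp(-2j * np.pi * np.outer(x, x) / N)   # chars[k] is chi_k on G
f_hat = chars @ f                                   # "integral" over the group

assert np.allclose(f_hat, np.fft.fft(f))
```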

Gelfand transform
The Fourier transform is also a special case of the Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above.

Given an abelian locally compact Hausdorff topological group G, as before we consider space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra. It also has an involution * given by


 * $$f^*(g) = \overline{f(g^{-1})}.$$

Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.)

Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the set of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology. The map is simply given by


 * $$a \mapsto ( \varphi \mapsto \varphi(a) )$$

It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and that the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform.

Non-abelian groups
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Once the assumption that the underlying group is abelian is removed, irreducible unitary representations need not always be one-dimensional, and the Fourier transform on a non-abelian group accordingly takes values in the space of Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis.

Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by


 * $$\langle \hat{\mu}\xi,\eta\rangle_{H_\sigma} = \int_G \langle \overline{U}^{(\sigma)}_g\xi,\eta\rangle\,d\mu(g)$$

where $$\overline{U}^{(\sigma)}$$ is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as


 * $$d\mu = fd\lambda$$

for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ.

The mapping $$\mu\mapsto\hat{\mu}$$ defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ: Hσ → Hσ for which the norm


 * $$\|E\| = \sup_{\sigma\in\Sigma}\|E_\sigma\|$$

is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures, the involution * is defined by


 * $$f^*(g) = \overline{f(g^{-1})},$$

and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators.

The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then


 * $$f(g) = \sum_{\sigma\in\Sigma} d_\sigma \operatorname{tr}(\hat{f}(\sigma)U^{(\sigma)}_g)$$

where the summation is understood as convergent in the L2 sense.

The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.

Alternatives
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution but no frequency information, while the Fourier transform has perfect frequency resolution but no time information: the magnitude of the Fourier transform at a point indicates how much of that frequency is present, but its location in time is only given by the phase (the argument of the Fourier transform at that point). Moreover, standing waves are not localized in time: a sine wave continues out to infinity without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent.

As alternatives to the Fourier transform, in time-frequency analysis, one uses time-frequency transforms or time-frequency distributions to represent signals in a form that has some time information and some frequency information; by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform or fractional Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
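As a rough illustration of the time-frequency trade-off, the sketch below implements a minimal short-time Fourier transform in NumPy; the window length, hop size, and test signal are illustrative choices, not canonical ones:

```python
import numpy as np

# Minimal short-time Fourier transform: slide a window over the signal and
# take an FFT of each frame, gaining time localization at the cost of
# frequency resolution.
def stft(signal, win_len=256, hop=128):
    window = np.hanning(win_len)
    frames = [signal[i:i + win_len] * window
              for i in range(0, len(signal) - win_len + 1, hop)]
    return np.fft.rfft(frames, axis=1)   # rows: time frames, cols: frequency bins

fs = 8000
t = np.arange(fs) / fs                   # one second of samples
# a transient: a 440 Hz tone present only in the second half of the signal
sig = np.where(t >= 0.5, np.sin(2 * np.pi * 440 * t), 0.0)

S = np.abs(stft(sig))
bin_440 = round(440 * 256 / fs)          # frequency bin nearest 440 Hz
# early frames carry no energy at 440 Hz; late frames do
assert S[0, bin_440] < 1e-6 < S[-1, bin_440]
```

Unlike the plain Fourier transform of `sig`, the STFT output shows *when* the 440 Hz component is present.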

Analysis of differential equations
Fourier transforms and the closely related Laplace transforms are widely used in solving differential equations. The Fourier transform is compatible with differentiation in the following sense: if f(x) is a differentiable function with Fourier transform $$\hat f(\xi)$$, then the Fourier transform of its derivative is given by $$2 \pi i \xi \hat f(\xi)$$. This can be used to transform differential equations into algebraic equations. This technique only applies to problems whose domain is the whole set of real numbers. By extending the Fourier transform to functions of several variables, partial differential equations with domain Rn can also be translated into algebraic equations.
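The differentiation property can be checked numerically. The sketch below approximates the continuous transform with an FFT on a Gaussian, $$f(x) = e^{-\pi x^2}$$, whose derivative is known in closed form; the grid size and domain width are illustrative choices:

```python
import numpy as np

# Check that the Fourier transform of f' equals 2*pi*i*xi*F(xi), using
# f(x) = exp(-pi x^2), which decays fast enough for an accurate quadrature.

def fourier(f_vals, x0, dx):
    """Approximate F(xi) = integral f(x) e^{-2 pi i x xi} dx on an FFT grid."""
    n = len(f_vals)
    xi = np.fft.fftfreq(n, d=dx)
    # the phase factor accounts for the grid starting at x0 rather than 0
    return xi, dx * np.exp(-2j * np.pi * x0 * xi) * np.fft.fft(f_vals)

n, L = 1024, 40.0
dx = L / n
x = -L / 2 + np.arange(n) * dx
f = np.exp(-np.pi * x**2)
fprime = -2 * np.pi * x * f                      # exact derivative

xi, F = fourier(f, x[0], dx)
_, Fprime = fourier(fprime, x[0], dx)

# differentiation in x <-> multiplication by 2*pi*i*xi in frequency
assert np.allclose(Fprime, 2j * np.pi * xi * F, atol=1e-8)
```

As a byproduct, `F` also matches the known self-transform of this Gaussian, $$\hat f(\xi) = e^{-\pi\xi^2}$$.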

Fourier transform spectroscopy
The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.

Quantum mechanics and signal processing
In quantum mechanics, Fourier transforms of solutions to the Schrödinger equation are known as momentum space (or k space) wave functions. They give the amplitudes for the possible momenta, and their absolute square gives the probability density of momenta. This is valid also for classical waves treated in signal processing, such as in swept-frequency radar, where data are acquired in the frequency domain and transformed to the time domain, yielding range. The absolute square is then the power.

Other notations
Other common notations for $$\hat f(\xi)$$ include:


 * $$\tilde{f}(\xi),\ \tilde{f}(\omega),\  F(\xi),\  \mathcal{F}\left(f\right)(\xi),\  \left(\mathcal{F}f\right)(\xi),\  \mathcal{F}(f),\  \mathcal F(\omega),\ F(\omega),\  \mathcal F(j\omega),\  \mathcal{F}\{f\},\  \mathcal{F} \left(f(t)\right),\ \mathcal{F} \{f(t)\}.$$

Denoting the Fourier transform by a capital letter corresponding to the letter of the function being transformed (such as f(x) and F(ξ)) is especially common in the sciences and engineering. In electronics, omega (ω) is often used instead of ξ due to its interpretation as angular frequency; the transform is sometimes written as F(jω), where j is the imaginary unit, to indicate its relationship with the Laplace transform, and sometimes written informally as F(2πf) in order to use ordinary frequency.

The interpretation of the complex function $$\hat f(\xi)$$ may be aided by expressing it in polar coordinate form


 * $$\hat f(\xi) = A(\xi) e^{i\varphi(\xi)}$$

in terms of the two real functions A(ξ) and φ(ξ) where:


 * $$A(\xi) = |\hat f(\xi)|,$$

is the amplitude and


 * $$\varphi (\xi) = \arg \big( \hat f(\xi) \big), $$

is the phase (see arg function).

Then the inverse transform can be written:


 * $$f(x) = \int _{-\infty}^{\infty} A(\xi)\ e^{ i(2\pi \xi x +\varphi (\xi))}\,d\xi,$$

which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form $$e^{2\pi ix\xi}$$ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).
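In NumPy terms, the amplitude and phase of transform values can be read off with `np.abs` and `np.angle`; the sample values below are arbitrary illustrations:

```python
import numpy as np

# Split complex transform values into amplitude A and phase phi, then
# recombine them: F = A * e^{i phi}.
F = np.array([3 + 4j, -1j, 2.0])
A = np.abs(F)               # amplitude A(xi) = |F(xi)|
phi = np.angle(F)           # phase phi(xi) = arg F(xi), in (-pi, pi]

assert np.allclose(A * np.exp(1j * phi), F)
assert np.isclose(A[0], 5.0)          # |3 + 4i| = 5
```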

The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted $$\mathcal F$$ and $$\mathcal F(f)$$ is used to denote the Fourier transform of the function f. This mapping is linear, which means that $$\mathcal F$$ can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write $$\mathcal F f$$ instead of $$\mathcal F(f)$$. Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as $$\mathcal{F} f(\xi)$$ or as $$(\mathcal F f)(\xi)$$. Notice that in the former case, it is implicitly understood that $$\mathcal F$$ is applied first to f and then the resulting function is evaluated at ξ, not the other way around.

In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like $$\mathcal{F}(f(x))$$ formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed.

For example, $$\mathcal F( \mathrm{rect}(x) ) = \mathrm{sinc}(\xi)$$ is sometimes used to express that the Fourier transform of a rectangular function is a sinc function,

or $$\mathcal F(f(x + x_0)) = \mathcal F(f(x)) e^{2\pi i \xi x_0}$$ is used to express the shift property of the Fourier transform.

Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0.

Other conventions
The Fourier transform can also be written in terms of angular frequency $$\omega = 2\pi\xi$$, whose units are radians per second.

The substitution ξ = ω/(2π) into the formulas above produces this convention:


 * $$\hat{f}(\omega) = \int_{\mathbf R^n} f(x) e^{-i\omega\cdot x}\,dx.$$

Under this convention, the inverse transform becomes:


 * $$f(x) = \frac{1}{(2\pi)^n} \int_{\mathbf R^n} \hat{f}(\omega)e^{i\omega \cdot x}\,d\omega.$$

Unlike the convention followed in this article, when the Fourier transform is defined this way, it is no longer a unitary transformation on L2(Rn). There is also less symmetry between the formulas for the Fourier transform and its inverse.

Another convention is to split the factor of (2π)n evenly between the Fourier transform and its inverse, which leads to definitions:


 * $$ \hat{f}(\omega) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbf{R}^n} f(x) e^{- i\omega\cdot x}\,dx $$
 * $$f(x) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbf{R}^n} \hat{f}(\omega) e^{ i\omega \cdot x}\,d\omega. $$

Under this convention, the Fourier transform is again a unitary transformation on L2(Rn). It also restores the symmetry between the Fourier transform and its inverse.

Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Other than that, the choice is (again) a matter of convention.
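The three conventions can be compared numerically. The sketch below evaluates each on the one-dimensional Gaussian $$f(x) = e^{-x^2/2}$$, whose transform is known in closed form under every convention; the grid parameters and test frequency are illustrative choices:

```python
import numpy as np

# Evaluate f(x) = e^{-x^2/2} under the three conventions at one test
# frequency and compare with the closed-form transforms (n = 1 case).
x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)
w = 0.5                                     # test frequency (omega or xi)

# non-unitary angular frequency: no prefactor, kernel e^{-i w x}
nonunitary = (f * np.exp(-1j * w * x)).sum() * dx
# unitary angular frequency: split the factor of 2*pi evenly
unitary = nonunitary / np.sqrt(2 * np.pi)
# ordinary frequency (this article's convention): kernel e^{-2 pi i x xi}
ordinary = (f * np.exp(-2j * np.pi * w * x)).sum() * dx

assert np.isclose(nonunitary, np.sqrt(2 * np.pi) * np.exp(-w**2 / 2))
assert np.isclose(unitary, np.exp(-w**2 / 2))
assert np.isclose(ordinary, np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * w**2))
```

Note that only the unitary convention maps this Gaussian to another function with the same L2 norm.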

As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined as $$E(e^{it\cdot X})=\int e^{it\cdot x}\,d\mu_X(x)$$.

As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears neither in front of the integral nor in the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent.
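This convention can be verified numerically. The sketch below assumes a standard normal X, for which the characteristic function is $$E(e^{itX}) = e^{-t^2/2}$$ in closed form; the grid parameters are illustrative:

```python
import numpy as np

# For X ~ N(0, 1), the characteristic function E[e^{itX}] equals e^{-t^2/2};
# note the +i in the exponent and the absence of any factor of 2*pi.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # standard normal density

t = np.array([0.0, 0.5, 1.0, 2.0])
phi = (np.exp(1j * np.outer(t, x)) * pdf).sum(axis=1) * dx

assert np.allclose(phi, np.exp(-t**2 / 2), atol=1e-8)
```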

Tables of important Fourier transforms
The following tables record some closed-form Fourier transforms. For functions f(x), g(x) and h(x) denote their Fourier transforms by $$\hat{f}$$, $$\hat{g}$$, and $$\hat{h}$$ respectively. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.

Functional relationships
The Fourier transforms in this table may be found in or.

Square-integrable functions
The Fourier transforms in this table may be found in, , or the appendix of.

Distributions
The Fourier transforms in this table may be found in or the appendix of.

Two-dimensional functions

 * Remarks

To 400: The variables ξx, ξy, ωx, ωy, νx and νy are real numbers. The integrals are taken over the entire plane.

To 401: Both functions are Gaussians, which may not have unit volume.

To 402: The function is defined by circ(r) = 1 for 0 ≤ r ≤ 1, and is 0 otherwise. This is the Airy distribution, and is expressed using J1 (the order-1 Bessel function of the first kind).

Formulas for general n-dimensional functions

 * Remarks

To 501: The function χ[0, 1] is the indicator function of the interval [0, 1]. The function Γ(x) is the gamma function. The function Jn/2 + δ is a Bessel function of the first kind, with order n/2 + δ. Taking n = 2 and δ = 0 produces 402.

To 502: See Riesz potential. The formula also holds for all α ≠ −n, −n − 1, … by analytic continuation, but then the function and its Fourier transforms need to be understood as suitably regularized tempered distributions. See homogeneous distribution.

To 503: This is the formula for a multivariate normal distribution normalized to 1 with a mean of 0. Bold variables are vectors or matrices. Following the notation of the aforementioned page, $$\boldsymbol \Sigma = \boldsymbol \sigma \boldsymbol \sigma^{\mathrm T}$$ and $$\boldsymbol \Sigma^{-1} = \boldsymbol \sigma^{-\mathrm T} \boldsymbol \sigma^{-1}$$.

To 504: Here $$c_n=\Gamma((n+1)/2)/\pi^{(n+1)/2}$$. See.