Fluctuation-dissipation theorem

The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem shows that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance (understood in their general sense, not only in electromagnetic terms) of the same physical variable (such as voltage or temperature difference), and vice versa. The fluctuation–dissipation theorem applies to both classical and quantum mechanical systems.

The fluctuation–dissipation theorem was proven by Herbert Callen and Theodore Welton in 1951 and expanded by Ryogo Kubo. There are antecedents to the general theorem, including Einstein's explanation of Brownian motion during his annus mirabilis and Harry Nyquist's explanation in 1928 of Johnson noise in electrical resistors.

Qualitative overview and examples
The fluctuation–dissipation theorem says that when there is a process that dissipates energy, turning it into heat (e.g., friction), there is a reverse process related to thermal fluctuations. This is best understood by considering some examples:


 * Drag and Brownian motion
 * If an object is moving through a fluid, it experiences drag (air resistance or fluid resistance). Drag dissipates kinetic energy, turning it into heat. The corresponding fluctuation is Brownian motion. An object in a fluid does not sit still, but rather moves around with a small and rapidly-changing velocity, as molecules in the fluid bump into it. Brownian motion converts heat energy into kinetic energy—the reverse of drag.
 * Resistance and Johnson noise
 * If electric current is running through a wire loop with a resistor in it, the current will rapidly go to zero because of the resistance. Resistance dissipates electrical energy, turning it into heat (Joule heating). The corresponding fluctuation is Johnson noise. A wire loop with a resistor in it does not actually have zero current; it has a small and rapidly-fluctuating current caused by the thermal fluctuations of the electrons and atoms in the resistor. Johnson noise converts heat energy into electrical energy—the reverse of resistance.
 * Light absorption and thermal radiation
 * When light impinges on an object, some fraction of the light is absorbed, making the object hotter. In this way, light absorption turns light energy into heat. The corresponding fluctuation is thermal radiation (e.g., the glow of a "red hot" object). Thermal radiation turns heat energy into light energy—the reverse of light absorption. Indeed, Kirchhoff's law of thermal radiation confirms that the more effectively an object absorbs light, the more thermal radiation it emits.

Examples in detail
The fluctuation–dissipation theorem is a general result of statistical thermodynamics that quantifies the relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations.

Brownian motion
For example, Albert Einstein noted in his 1905 paper on Brownian motion that the same random forces that cause the erratic motion of a particle in Brownian motion would also cause drag if the particle were pulled through the fluid. In other words, the fluctuation of the particle at rest has the same origin as the dissipative frictional force against which one must do work if one tries to perturb the system in a particular direction.

From this observation Einstein was able to use statistical mechanics to derive the Einstein–Smoluchowski relation


 * $$ D = \mu \, k_{\rm B} T $$

which connects the diffusion constant D and the particle mobility μ, the ratio of the particle's terminal drift velocity to an applied force. kB is the Boltzmann constant, and T is the absolute temperature.
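As a rough numerical illustration of this relation, one can estimate the diffusion constant of a micron-sized sphere in water from its Stokes mobility μ = 1/(6πηr). All parameter values in the sketch below are illustrative assumptions, not taken from the text:

```python
import math

# Physical constants and assumed illustrative parameters
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # absolute temperature, K (room temperature)
eta = 8.9e-4         # viscosity of water near 25 degrees C, Pa*s
r = 0.5e-6           # particle radius (1 micron diameter sphere), m

# Stokes mobility of a sphere: terminal drift velocity per unit force
mu = 1.0 / (6.0 * math.pi * eta * r)

# Einstein-Smoluchowski relation: D = mu * k_B * T
D = mu * k_B * T
print(f"D = {D:.3e} m^2/s")
```

The result is on the order of 0.5 μm²/s, the scale typically quoted for the Brownian diffusivity of micron-sized colloids in water.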

Thermal noise in a resistor
In 1928, John B. Johnson discovered and Harry Nyquist explained Johnson–Nyquist noise. With no applied current, the mean-square voltage depends on the resistance $$R$$, $$k_{\rm B}T$$, and the bandwidth $$\Delta\nu$$ over which the voltage is measured:


 * $$ \langle V^2 \rangle \approx 4Rk_{\rm B}T\,\Delta\nu. $$
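Plugging representative values into this formula shows how small the effect is; the resistance, temperature, and bandwidth below are assumed for illustration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
R = 1.0e3            # resistance, ohms (assumed)
T = 300.0            # temperature, K (assumed)
delta_nu = 1.0e4     # measurement bandwidth, Hz (assumed)

# Johnson-Nyquist mean-square voltage: <V^2> ~ 4 R k_B T * bandwidth
V_rms = math.sqrt(4.0 * R * k_B * T * delta_nu)
print(f"V_rms = {V_rms:.2e} V")
```

For a 1 kΩ resistor at room temperature measured over a 10 kHz bandwidth, the root-mean-square noise voltage comes out to a few tenths of a microvolt.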



This observation can be understood through the lens of the fluctuation-dissipation theorem. Take, for example, a simple circuit consisting of a resistor with a resistance $$R$$ and a capacitor with a small capacitance $$C$$. Kirchhoff's voltage law yields


 * $$V=-R\frac{dQ}{dt}+\frac{Q}{C}$$

and so the response function for this circuit is


 * $$\chi(\omega)\equiv\frac{Q(\omega)}{V(\omega)}=\frac{1}{\frac{1}{C}-i\omega R}$$

In the low-frequency limit $$\omega\ll (RC)^{-1}$$, its imaginary part is simply


 * $$\text{Im}\left[\chi(\omega)\right]\approx \omega RC^2$$

which then can be linked to the power spectral density function $$S_V(\omega)$$ of the voltage via the fluctuation-dissipation theorem


 * $$S_V(\omega)=\frac{S_Q(\omega)}{C^2}\approx \frac{2k_{\rm B}T}{C^2\omega}\text{Im}\left[\chi(\omega)\right]=2Rk_{\rm B}T$$

The Johnson–Nyquist voltage noise $$\langle V^2 \rangle$$ was observed within a small frequency bandwidth $$\Delta \nu=\Delta\omega/(2\pi)$$ centered around $$\omega=\pm \omega_0$$. Hence


 * $$\langle V^2 \rangle\approx S_V(\omega)\times 2\Delta \nu\approx 4Rk_{\rm B}T\Delta \nu$$
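The chain of steps above can be checked numerically. The sketch below (with assumed illustrative values of R, C, and T) evaluates the exact response function of the circuit and confirms that, in the low-frequency limit, the spectrum reduces to 2Rk_BT:

```python
k_B = 1.380649e-23
R = 1.0e3        # ohms (assumed)
C = 1.0e-12      # farads (assumed small capacitance)
T = 300.0        # kelvin (assumed)
omega = 0.01 / (R * C)   # deep in the low-frequency limit, omega << 1/(RC)

# Exact response function chi(omega) = 1 / (1/C - i*omega*R)
chi = 1.0 / (1.0 / C - 1j * omega * R)

# Its imaginary part matches the low-frequency approximation omega*R*C^2
assert abs(chi.imag - omega * R * C**2) / (omega * R * C**2) < 1e-3

# Fluctuation-dissipation step: S_V = (2 k_B T / (C^2 omega)) * Im[chi]
S_V = 2.0 * k_B * T / (C**2 * omega) * chi.imag
print(S_V, 2.0 * R * k_B * T)   # the two agree in this limit
```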

General formulation
The fluctuation–dissipation theorem can be formulated in many ways; one particularly useful form is the following:

Let $$x(t)$$ be an observable of a dynamical system with Hamiltonian $$H_0(x)$$ subject to thermal fluctuations. The observable $$x(t)$$ will fluctuate around its mean value $$\langle x\rangle_0$$ with fluctuations characterized by a power spectrum $$S_x(\omega) = \langle \hat{x}(\omega)\hat{x}^*(\omega) \rangle$$. Suppose that we can switch on a time-varying, spatially constant field $$f(t)$$ which alters the Hamiltonian to $$H(x)=H_0(x)-f(t)x$$. The response of the observable $$x(t)$$ to a time-dependent field $$f(t)$$ is characterized to first order by the susceptibility or linear response function $$\chi(t)$$ of the system


 * $$ \langle x(t) \rangle = \langle x \rangle_0 + \int_{-\infty}^{t} \! f(\tau) \chi(t-\tau)\,d\tau, $$

where the perturbation is adiabatically (very slowly) switched on at $$\tau =-\infty$$.

The fluctuation–dissipation theorem relates the two-sided power spectrum (i.e. at both positive and negative frequencies) of $$x$$ to the imaginary part of the Fourier transform $$\hat{\chi}(\omega)$$ of the susceptibility $$\chi(t)$$:


 * $$ S_x(\omega) = -\frac{2 k_\mathrm{B} T}{\omega} \operatorname{Im}\hat{\chi}(\omega), $$

which holds under the Fourier transform convention $$\hat f(\omega)=\int_{-\infty}^\infty f(t) e^{-i\omega t}\, dt$$. The left-hand side describes the fluctuations in $$x$$; the right-hand side is closely related to the energy dissipated by the system when driven by an oscillatory field $$f(t) = F \sin(\omega t + \phi)$$. The spectrum of fluctuations reveals the linear response, because past fluctuations cause future fluctuations via a linear response of the system upon itself.
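The theorem can be verified in closed form for a simple model. The sketch below assumes an overdamped Brownian particle in a harmonic trap (the parameters γ, k, and k_BT are illustrative), for which both the susceptibility and the fluctuation spectrum are known exactly:

```python
import numpy as np

# Assumed model: overdamped particle in a harmonic trap,
#   gamma * dx/dt = -k * x + f(t) + thermal noise,
# for which both sides of the FDT are known in closed form.
gamma, k, kBT = 1.3, 2.0, 0.7            # illustrative parameters
omega = np.linspace(0.1, 10.0, 200)      # avoid omega = 0

# Susceptibility chi(t) = (1/gamma) exp(-k t / gamma) theta(t), so that
# chi_hat(omega) = 1 / (k + i omega gamma) under the e^{-i omega t} convention
chi_hat = 1.0 / (k + 1j * omega * gamma)

# Two-sided spectrum: Fourier transform of A(t) = (kBT/k) exp(-k |t| / gamma)
S_x = 2.0 * kBT * gamma / (k**2 + (omega * gamma) ** 2)

# Fluctuation-dissipation theorem: S_x(omega) = -(2 kBT / omega) Im[chi_hat]
rhs = -2.0 * kBT / omega * chi_hat.imag
assert np.allclose(S_x, rhs)
print("FDT verified for the harmonic Langevin model")
```

Here the two sides agree identically, not just approximately, because both are exact analytic expressions for the same Ornstein–Uhlenbeck process.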

This is the classical form of the theorem; quantum fluctuations are taken into account by replacing $$2 k_\mathrm{B} T / \omega$$ with $$\hbar \, \coth(\hbar\omega / 2k_\mathrm{B}T)$$ (whose limit for $$\hbar\to 0$$ is $$2 k_\mathrm{B} T/\omega$$). A proof can be found by means of the LSZ reduction, an identity from quantum field theory.

The fluctuation–dissipation theorem can be generalized in a straightforward way to the case of space-dependent fields, to the case of several variables or to a quantum-mechanics setting.

Classical version
We derive the fluctuation–dissipation theorem in the form given above, using the same notation. Consider the following test case: the field f has been on for an infinitely long time and is switched off at t=0


 * $$ f(t)=f_0 \theta(-t), $$

where $$ \theta(t)$$ is the Heaviside function. We can express the expectation value of $$x$$ by the probability distribution W(x,0) and the transition probability $$ P(x',t | x,0) $$


 * $$ \langle x(t) \rangle = \int dx' \int dx \, x' P(x',t|x,0) W(x,0) . $$

The probability distribution function W(x,0) is an equilibrium distribution and hence given by the Boltzmann distribution for the Hamiltonian $$ H(x) = H_0(x) - x f_0 $$


 * $$ W(x,0)= \frac{\exp(-\beta H(x))}{\int dx' \, \exp(-\beta H(x'))} \,, $$

where $$\beta^{-1} = k_{\rm B}T$$. For a weak field $$ \beta x f_0 \ll 1 $$, we can expand the right-hand side


 * $$ W(x,0) \approx W_0(x) [1+\beta f_0 (x-\langle x \rangle_0)], $$

here $$ W_0(x) $$ is the equilibrium distribution in the absence of a field. Plugging this approximation into the formula for $$ \langle x(t) \rangle $$ yields


 * $$ \langle x(t) \rangle = \langle x \rangle_0 + \beta f_0 A(t), \qquad (*) $$

where A(t) is the auto-correlation function of x in the absence of a field:


 * $$ A(t)=\langle [x(t)-\langle x \rangle_0][ x(0)-\langle x \rangle_0] \rangle_0. $$

Note that in the absence of a field the system is invariant under time-shifts. We can rewrite $$ \langle x(t) \rangle - \langle x \rangle_0 $$ using the susceptibility of the system and hence find with the above equation (*)


 * $$ f_0 \int_0^{\infty} d\tau \, \chi(\tau) \theta(\tau-t) = \beta f_0 A(t) $$

Consequently,


 * $$ -\chi(t) = \beta \frac{dA(t)}{dt}\, \theta(t). \qquad (**) $$

To make a statement about frequency dependence, it is necessary to take the Fourier transform of equation (**). By integrating by parts, it is possible to show that
 * $$ -\hat\chi(\omega) = i\omega\beta \int_0^\infty e^{-i\omega t} A(t)\, dt -\beta A(0).$$

Since $$A(t)$$ is real and symmetric, it follows that
 * $$ 2 \operatorname{Im}[\hat\chi(\omega)] = -\omega\beta \hat A(\omega).$$

Finally, for stationary processes, the Wiener–Khinchin theorem states that the two-sided spectral density is equal to the Fourier transform of the auto-correlation function:
 * $$ S_x(\omega) = \hat{A}(\omega).$$

Therefore, it follows that
 * $$ S_x(\omega) = -\frac{2k_\text{B} T}{\omega} \operatorname{Im}[\hat\chi(\omega)].$$
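The classical relation can also be probed by direct simulation. The sketch below integrates a hypothetical Ornstein–Uhlenbeck observable (γ = k = k_BT = 1 in assumed units) with the Euler–Maruyama method and checks that the measured auto-correlation A(t) matches the equilibrium prediction (k_BT/k) e^{-kt/γ}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped Langevin (Ornstein-Uhlenbeck) dynamics in assumed units:
#   gamma dx/dt = -k x + sqrt(2 gamma kBT) xi(t)
gamma = k = kBT = 1.0
dt, n_steps = 0.01, 400_000

x = np.empty(n_steps)
x[0] = 0.0
noise = rng.standard_normal(n_steps - 1) * np.sqrt(2.0 * kBT * dt / gamma)
for i in range(n_steps - 1):
    x[i + 1] = x[i] - (k / gamma) * x[i] * dt + noise[i]

x = x[n_steps // 10:]   # discard the initial transient

# A(0) = <x^2> should equal the equipartition value kBT / k = 1
var = np.mean(x * x)

# A(t) at lag t = 1 should be close to (kBT/k) * exp(-k/gamma)
lag = int(1.0 / dt)
A1 = np.mean(x[:-lag] * x[lag:])

print(f"A(0) = {var:.3f} (theory 1.0),  A(1) = {A1:.3f} (theory {np.exp(-1):.3f})")
```

The agreement is statistical: with this run length the estimates typically land within a few percent of the theoretical values.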

Quantum version
The fluctuation-dissipation theorem relates the correlation function of the observable of interest $$\langle \hat{x}(t)\hat{x}(0)\rangle$$ (a measure of fluctuation) to the imaginary part of the response function $$\text{Im}\left[\chi(\omega)\right]=\left[\chi(\omega)-\chi^*(\omega)\right]/2i$$ in the frequency domain (a measure of dissipation). A link between these quantities can be found through the so-called Kubo formula


 * $$\chi(t-t')=\frac{i}{\hbar}\theta(t-t')\langle [\hat{x}(t),\hat{x}(t')] \rangle$$

which follows, under the assumptions of the linear response theory, from the time evolution of the ensemble average of the observable $$\langle\hat{x}(t)\rangle$$ in the presence of a perturbing source. Once Fourier transformed, the Kubo formula allows writing the imaginary part of the response function as


 * $$\text{Im}\left[\chi(\omega)\right]=\frac{1}{2\hbar}\int_{-\infty}^{+\infty}\langle \hat{x}(t)\hat{x}(0)-\hat{x}(0)\hat{x}(t)\rangle e^{i\omega t}dt.$$

In the canonical ensemble, the second term can be re-expressed as


 * $$\langle \hat{x}(0) \hat{x}(t)\rangle=\text{Tr } e^{-\beta \hat{H}}\hat{x}(0)\hat{x}(t)=\text{Tr } \hat{x}(t) e^{-\beta \hat{H}}\hat{x}(0)=\text{Tr } e^{-\beta \hat{H}}\underbrace{e^{\beta \hat{H}}\hat{x}(t) e^{-\beta \hat{H}}}_{\hat{x}(t-i\hbar\beta)}\hat{x}(0)=\langle \hat{x}(t-i\hbar\beta) \hat{x}(0)\rangle$$

where in the second equality we re-positioned $$\hat{x}(t)$$ using the cyclic property of the trace. In the third equality, we inserted the identity $$e^{-\beta \hat{H}}e^{\beta \hat{H}}$$ in front of $$\hat{x}(t)$$ and interpreted $$e^{-\beta\hat{H}}$$ as a time evolution operator $$e^{-\frac{i}{\hbar}\hat{H}\Delta t}$$ with imaginary time interval $$\Delta t=-i\hbar\beta$$. The imaginary time shift turns into an $$e^{-\beta\hbar\omega}$$ factor after the Fourier transform


 * $$\int_{-\infty}^{+\infty}\langle \hat{x}(t-i\hbar\beta)\hat{x}(0)\rangle e^{i\omega t}dt=e^{-\beta\hbar\omega}\int_{-\infty}^{+\infty}\langle \hat{x}(t)\hat{x}(0)\rangle e^{i\omega t}dt$$

and thus the expression for $$\text{Im}\left[\chi(\omega)\right]$$ can be easily rewritten as the quantum fluctuation-dissipation relation


 * $$S_{x}(\omega)=2\hbar\left[n_{\rm BE}(\omega)+1\right]\text{Im}\left[\chi(\omega)\right]$$

where the power spectral density $$S_{x}(\omega)$$ is the Fourier transform of the auto-correlation $$\langle \hat{x}(t) \hat{x}(0)\rangle$$ and $$n_{\rm BE}(\omega)=\left(e^{\beta\hbar\omega}-1\right)^{-1}$$ is the Bose-Einstein distribution function. The same calculation also yields


 * $$S_{x}(-\omega)=e^{-\beta\hbar\omega}S_{x}(\omega) = 2\hbar\left[n_{\rm BE}(\omega)\right]\text{Im}\left[\chi(\omega)\right]\neq S_{x}(+\omega)$$

thus, in contrast to the classical case, the power spectral density is not frequency-symmetric in the quantum regime. Consistently, $$\langle \hat{x}(t)\hat{x}(0)\rangle$$ has an imaginary part originating from the commutation rules of operators. The additional "$$+1$$" term in the expression of $$S_x(\omega)$$ at positive frequencies can also be thought of as linked to spontaneous emission. An often cited result is also the symmetrized power spectral density


 * $$\frac{S_x(\omega)+S_x(-\omega)}{2}=2\hbar\left[n_{\rm BE}(\omega)+\frac{1}{2}\right]\text{Im}\left[\chi(\omega)\right]=\hbar\coth\left(\frac{\hbar\omega}{2k_BT}\right)\text{Im}\left[\chi(\omega)\right].$$

The "$$+1/2$$" can be thought of as linked to quantum fluctuations, or to zero-point motion of the observable $$\hat{x}$$. At high enough temperatures, $$n_{\rm BE}\approx (\beta\hbar\omega)^{-1}\gg 1$$, i.e. the quantum contribution is negligible, and we recover the classical version.

Violations in glassy systems
While the fluctuation–dissipation theorem provides a general relation between the fluctuations and the response of systems obeying detailed balance, when detailed balance is violated the comparison of fluctuations to dissipation is more complex. Below the so-called glass temperature $$T_{\rm g}$$, glassy systems are not equilibrated and slowly approach their equilibrium state. This slow approach to equilibrium is synonymous with the violation of detailed balance. Thus these systems require long time-scales to be studied while they slowly move toward equilibrium.

To study the violation of the fluctuation-dissipation relation in glassy systems, particularly spin glasses, researchers have performed numerical simulations of macroscopic systems (i.e. large compared to their correlation lengths) described by the three-dimensional Edwards-Anderson model using supercomputers. In these simulations, the system is initially prepared at a high temperature, rapidly cooled to a temperature $$T=0.64 T_{\rm g}$$ below the glass temperature $$T_{\rm g}$$, and left to equilibrate for a very long time $$t_{\rm w}$$ under a magnetic field $$H$$. Then, at a later time $$t + t_{\rm w}$$, two dynamical observables are probed, namely the response function $$\chi(t+t_{\rm w},t_{\rm w})\equiv\left.\frac{\partial m(t+t_{\rm w})}{\partial H}\right|_{H=0}$$ and the spin-temporal correlation function $$C(t+t_{\rm w},t_{\rm w})\equiv \frac{1}{V}\left.\sum_{x}\langle S_x(t_{\rm w}) S_x(t+t_{\rm w})\rangle\right|_{H=0}$$, where $$S_x=\pm 1$$ is the spin living on the node $$x$$ of the cubic lattice of volume $$V$$ and $$m(t)\equiv \frac{1}{V} \sum_{x} \langle S_{x}(t) \rangle$$ is the magnetization density. The fluctuation-dissipation relation in this system can be written in terms of these observables as $$T\chi(t+t_{\rm w}, t_{\rm w})=1-C(t+t_{\rm w}, t_{\rm w})$$.

Their results confirm the expectation that as the system is left to equilibrate for longer times, the fluctuation-dissipation relation is more closely satisfied.

In the mid-1990s, in the study of dynamics of spin glass models, a generalization of the fluctuation–dissipation theorem was discovered that holds for asymptotic non-stationary states, where the temperature appearing in the equilibrium relation is substituted by an effective temperature with a non-trivial dependence on the time scales. This relation is proposed to hold in glassy systems beyond the models for which it was initially found.