Density functional theory

Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure) (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals, i.e. functions of another function. In the case of DFT, these are functionals of the spatially dependent electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry.

DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has also become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, which are used to understand the origin of specific electric field gradients in crystals.

Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors. The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic. Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids.

Despite the current popularity of these alterations and additional terms, they are reported to depart from the search for the exact functional. Further, DFT potentials obtained with adjustable parameters are no longer true DFT potentials, given that they are not functional derivatives of the exchange–correlation energy with respect to the charge density. Consequently, it is not clear whether the second theorem of DFT holds in such conditions.

Overview of method
In the context of computational materials science, ab initio (from first principles) DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, without requiring higher-order parameters such as fundamental material properties. In contemporary DFT techniques the electronic structure is evaluated using a potential acting on the system's electrons. This DFT potential is constructed as the sum of an external potential $V_{ext}$, determined solely by the structure and the elemental composition of the system, and an effective potential $V_{eff}$, which represents interelectronic interactions. Thus, a problem for a representative supercell of a material with $n$ electrons can be studied as a set of $n$ one-electron Schrödinger-like equations, which are also known as Kohn–Sham equations.

Origins
Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the framework of the two Hohenberg–Kohn theorems (HK). The original HK theorems held only for non-degenerate ground states in the absence of a magnetic field, although they have since been generalized to encompass these.

The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It set down the groundwork for reducing the many-body problem of $N$ electrons with $3N$ spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.

The second HK theorem defines an energy functional for the system and proves that the ground-state electron density minimizes this energy functional.

In work that later won them the Nobel Prize in Chemistry, the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of noninteracting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve, as the wavefunction can be represented as a Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The exchange–correlation part of the total energy functional remains unknown and must be approximated.

Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original HK theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the noninteracting system.

Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential $V$, in which the electrons are moving. A stationary electronic state is then described by a wavefunction $Ψ(r_{1}, …, r_{N})$ satisfying the many-electron time-independent Schrödinger equation


 * $$ \hat H \Psi = \left[\hat T + \hat V + \hat U\right]\Psi = \left[\sum_{i=1}^N \left(-\frac{\hbar^2}{2m_i} \nabla_i^2\right) + \sum_{i=1}^N V(\mathbf r_i) + \sum_{i<j}^N U\left(\mathbf r_i, \mathbf r_j\right)\right] \Psi = E \Psi, $$

where, for the $N$-electron system, $\hat H$ is the Hamiltonian, $E$ is the total energy, $\hat T$ is the kinetic energy, $\hat V$ is the potential energy from the external field due to positively charged nuclei, and $\hat U$ is the electron–electron interaction energy. The operators $\hat T$ and $\hat U$ are called universal operators, as they are the same for any $N$-electron system, while $\hat V$ is system-dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term $\hat U$.

There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.

Here DFT provides an appealing alternative, being much more versatile, as it provides a way to systematically map the many-body problem, with $Û$, onto a single-body problem without $Û$. In DFT the key variable is the electron density $n(r)$, which for a normalized $Ψ$ is given by


 * $$n(\mathbf r) = N \int{\mathrm d}^3 \mathbf r_2 \cdots \int{\mathrm d}^3 \mathbf r_N \, \Psi^*(\mathbf r, \mathbf r_2, \dots, \mathbf r_N) \Psi(\mathbf r, \mathbf r_2, \dots, \mathbf r_N).$$

This relation can be reversed, i.e., for a given ground-state density $n_{0}(r)$ it is possible, in principle, to calculate the corresponding ground-state wavefunction $Ψ_{0}(r_{1}, …, r_{N})$. In other words, $Ψ$ is a unique functional of $n_{0}$,


 * $$\Psi_0 = \Psi[n_0],$$

and consequently the ground-state expectation value of an observable $Ô$ is also a functional of $n_{0}$:


 * $$ O[n_0] = \big\langle \Psi[n_0] \big| \hat O \big| \Psi[n_0] \big\rangle.$$

In particular, the ground-state energy is a functional of $n_{0}$:


 * $$E_0 = E[n_0] = \big\langle \Psi[n_0] \big| \hat T + \hat V + \hat U \big| \Psi[n_0] \big\rangle,$$

where the contribution of the external potential $$\big\langle \Psi[n_0] \big| \hat V \big| \Psi[n_0] \big\rangle$$ can be written explicitly in terms of the ground-state density $$n_0$$:


 * $$V[n_0] = \int V(\mathbf r) n_0(\mathbf r) \,\mathrm d^3 \mathbf r.$$

More generally, the contribution of the external potential $$\big\langle \Psi \big| \hat V \big| \Psi \big\rangle$$ can be written explicitly in terms of the density $$n$$:


 * $$V[n] = \int V(\mathbf r) n(\mathbf r) \,\mathrm d^3 \mathbf r.$$

The functionals $T[n]$ and $U[n]$ are called universal functionals, while $V[n]$ is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified $\hat V$, one then has to minimize the functional


 * $$E[n] = T[n] + U[n] + \int V(\mathbf r) n(\mathbf r) \,\mathrm d^3 \mathbf r$$

with respect to $n(r)$, assuming one has reliable expressions for $T[n]$ and $U[n]$. A successful minimization of the energy functional will yield the ground-state density $n_{0}$ and thus all other ground-state observables.
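To make the minimization concrete, the sketch below performs it for a hydrogen atom ($V = -1/r$, atomic units, one electron so $U[n] = 0$) over a one-parameter family of trial densities, with the approximate Thomas–Fermi form standing in for the unknown exact $T[n]$. The trial family, the grid, and the choice of kinetic functional are illustrative assumptions, not part of the formalism.

```python
import numpy as np

# Direct minimization of an approximate E[n] = T[n] + ∫ V n d^3r over a
# one-parameter family of trial densities n_a(r) = a^3 exp(-2 a r)/pi for
# hydrogen (V = -1/r, atomic units). The Thomas-Fermi kinetic functional is
# used as a stand-in for the unknown exact T[n]; purely illustrative.
C_F = (3 / 10) * (3 * np.pi**2)**(2 / 3)
r = np.linspace(1e-6, 40, 40000)
dr = r[1] - r[0]

def energy(a):
    n = a**3 * np.exp(-2 * a * r) / np.pi
    T = C_F * np.sum(n**(5/3) * 4 * np.pi * r**2) * dr   # Thomas-Fermi kinetic
    V = np.sum(-n / r * 4 * np.pi * r**2) * dr           # external (nuclear) term
    return T + V

a_grid = np.linspace(0.5, 3.0, 251)
E = np.array([energy(a) for a in a_grid])
a_min = a_grid[E.argmin()]
print(round(a_min, 2), round(E.min(), 3))
# the approximate T[n] overbinds: E_min ≈ -0.865 Ha vs the exact -0.5 Ha
```

The result illustrates why reliable expressions for $T[n]$ matter: an approximate kinetic functional distorts the variational minimum, here landing well below the exact hydrogen ground-state energy.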

The variational problems of minimizing the energy functional $E[n]$ can be solved by applying the Lagrangian method of undetermined multipliers. First, one considers an energy functional that does not explicitly have an electron–electron interaction energy term,


 * $$E_s[n] = \big\langle \Psi_\text{s}[n] \big| \hat T + \hat V_\text{s} \big| \Psi_\text{s}[n] \big\rangle,$$

where $$\hat{T}$$ denotes the kinetic-energy operator, and $$\hat{V}_\text{s}$$ is an effective potential in which the particles are moving. Based on $$E_s$$, Kohn–Sham equations of this auxiliary noninteracting system can be derived:


 * $$\left[-\frac{\hbar^2}{2m} \nabla^2 + V_\text{s}(\mathbf r)\right] \varphi_i(\mathbf r) = \varepsilon_i \varphi_i(\mathbf r),$$

which yields the orbitals $φ_{i}$ that reproduce the density $n(r)$ of the original many-body system


 * $$n(\mathbf r ) = \sum_{i=1}^N \big|\varphi_i(\mathbf r)\big|^2.$$

The effective single-particle potential can be written as


 * $$V_\text{s}(\mathbf r) = V(\mathbf r) + \int \frac{n(\mathbf r')}{|\mathbf r - \mathbf r'|} \,\mathrm d^3 \mathbf r' + V_\text{XC}[n(\mathbf r)],$$

where $$V(\mathbf r)$$ is the external potential, the second term is the Hartree term describing the electron–electron Coulomb repulsion, and the last term $V_{XC}$ is the exchange–correlation potential. Here, $V_{XC}$ includes all the many-particle interactions. Since the Hartree term and $V_{XC}$ depend on $n(r)$, which depends on the $φ_{i}$, which in turn depend on $V_{s}$, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for $n(r)$, then calculates the corresponding $V_{s}$ and solves the Kohn–Sham equations for the $φ_{i}$. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
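The self-consistency cycle just described can be sketched in a few lines. The model below is a minimal illustration, not a production scheme: two electrons in a one-dimensional harmonic external potential, a soft-Coulomb Hartree kernel, the three-dimensional LDA exchange formula applied purely illustratively, and simple linear mixing; all numerical choices here are assumptions.

```python
import numpy as np

# Minimal 1D Kohn-Sham self-consistency loop (illustrative sketch).
# Assumptions: 2 electrons, harmonic external potential, soft-Coulomb
# Hartree kernel, 3D LDA exchange formula used illustratively; atomic units.
M = 201
x = np.linspace(-8, 8, M)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                          # external potential V(x)
n = np.exp(-x**2)
n *= 2 / (n.sum() * dx)                     # initial density guess, 2 electrons

# kinetic operator -1/2 d^2/dx^2 via central finite differences
T = (-0.5 / dx**2) * (np.diag(np.full(M - 1, 1.0), -1)
                      - 2 * np.eye(M) + np.diag(np.full(M - 1, 1.0), 1))
soft = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # soft Coulomb kernel

for it in range(200):
    v_h = soft @ n * dx                     # Hartree potential from current n
    v_x = -(3 * n / np.pi)**(1/3)           # LDA-style exchange potential
    H = T + np.diag(v_ext + v_h + v_x)      # effective Kohn-Sham Hamiltonian
    eps, phi = np.linalg.eigh(H)
    phi0 = phi[:, 0] / np.sqrt(dx)          # lowest orbital, grid-normalized
    n_new = 2 * phi0**2                     # both electrons in the lowest orbital
    if np.max(np.abs(n_new - n)) < 1e-8:
        break
    n = 0.5 * n + 0.5 * n_new               # linear mixing for stability

print(round(n.sum() * dx, 6))               # density integrates to 2 electrons
```

Each pass recomputes $V_{s}$ from the current density, solves the Kohn–Sham eigenproblem, and mixes old and new densities until the density stops changing, which is exactly the iterative loop described above.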


 * Notes
 * 1) The one-to-one correspondence between the electron density and the single-particle potential is not smooth: it contains non-analytic structure, and $E_{s}[n]$ exhibits singularities, cuts and branches. This may indicate a limitation of the hope of representing the exchange–correlation functional in a simple analytic form.
 * 2) It is possible to extend the DFT idea to the case of the Green function $G$ instead of the density $n$. The result is called the Luttinger–Ward functional (or a similar functional), written as $E[G]$. However, $G$ is determined not by its minimum, but by its extremum, which may lead to theoretical and practical difficulties.
 * 3) There is no one-to-one correspondence between the one-body density matrix $n(r, r′)$ and the one-body potential $V(r, r′)$. (All the eigenvalues of $n(r, r′)$ are 1.) In other words, this approach ends up with a theory similar to Hartree–Fock (or hybrid) theory.

Relativistic formulation (ab initio functional forms)
The same theorems can be proven in the case of relativistic electrons, thereby providing a generalization of DFT to the relativistic case. Unlike in the nonrelativistic theory, in the relativistic case it is possible to derive a few exact and explicit formulas for the relativistic density functional.

Consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. The Hamiltonian $H$ for a relativistic electron moving in the Coulomb potential can be chosen in the following form (atomic units are used):


 * $$H= c (\boldsymbol \alpha \cdot \mathbf p) + eV + mc^2\beta,$$

where $V = −eZ/r$ is the Coulomb potential of a pointlike nucleus, $p$ is the momentum operator of the electron, $e$, $m$ and $c$ are the elementary charge, electron mass and speed of light respectively, and finally $α$ and $β$ are Dirac matrices, written here as 2 × 2 block matrices in terms of the Pauli matrices $σ$ and the identity $I$:


 * $$\begin{align}

\boldsymbol\alpha &= \begin{pmatrix} 0 & \boldsymbol\sigma \\ \boldsymbol\sigma & 0 \end{pmatrix}, \\ \beta &= \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}. \end{align}$$

To find the eigenfunctions and corresponding energies, one solves the eigenfunction equation


 * $$H\Psi = E\Psi,$$

where $Ψ = (Ψ(1), Ψ(2), Ψ(3), Ψ(4))^{T}$ is a four-component wavefunction, and $E$ is the associated eigenenergy. It is demonstrated in Brack (1983) that application of the virial theorem to the eigenfunction equation produces the following formula for the eigenenergy of any bound state:


 * $$E = mc^2 \langle \Psi | \beta | \Psi \rangle

= mc^2 \int \big|\Psi(1)\big|^2 + \big|\Psi(2)\big|^2 - \big|\Psi(3)\big|^2 - \big|\Psi(4)\big|^2 \,\mathrm{d}\tau,$$

and analogously, the virial theorem applied to the eigenfunction equation with the square of the Hamiltonian yields


 * $$E^2 = m^2 c^4 + emc^2 \langle \Psi | V\beta | \Psi \rangle.$$

It is easy to see that both of the above formulae represent density functionals. The former formula can be easily generalized for the multi-electron case.

Neither of the functionals written above has extremals, of course, if a reasonably wide set of functions is allowed for variation. Nevertheless, it is possible to design a density functional with the desired extremal properties out of them, in the following way:


 * $$F[n] = \frac{1}{mc^2} \left(mc^2 \int n \,d\tau - \sqrt{m^2 c^4 + emc^2 \int Vn \,d\tau} \right)^2 + \delta_{n, n_e} mc^2 \int n \,d\tau,$$

where $n_{e}$ in the Kronecker delta of the second term denotes any extremal of the functional represented by the first term of $F$. The second term vanishes for any function that is not an extremal of the first term of $F$. To proceed further, we seek the Lagrange equation for this functional. To do this, we isolate the linear part of the functional's increment when the argument function is altered:


 * $$F[n_e + \delta n] = \frac{1}{mc^2} \left(mc^2 \int (n_e + \delta n) \,d\tau - \sqrt{m^2 c^4 + emc^2 \int V(n_e + \delta n) \,d\tau} \right)^2.$$

From the equation above, one finds the following formula for the functional derivative:


 * $$\frac{\delta F[n_e]}{\delta n} = 2A - \frac{2B^2 + AeV(\tau_0)}{B} + eV(\tau_0),$$

where $A = mc^2 \int n_e \,\mathrm d\tau$, $B = \sqrt{m^2 c^4 + emc^2 \int V n_e \,\mathrm d\tau}$, and $V(\tau_{0})$ is the value of the potential at some point specified by the support of the variation function $\delta n$, which is supposed to be infinitesimal. To advance toward the Lagrange equation, we set the functional derivative to zero and, after simple algebraic manipulations, arrive at the following equation:


 * $$2B(A - B) = eV(\tau_0)(A - B).$$

Apparently, this equation can have a solution only if $A = B$. This last condition provides the Lagrange equation for the functional $F$, which can finally be written in the following form:


 * $$\left(mc^2 \int n \,d\tau \right)^2 = m^2 c^4 + emc^2 \int Vn \,d\tau.$$

Solutions of this equation represent extremals of the functional $F$. It is easy to see that all real densities, that is, densities corresponding to the bound states of the system in question, are solutions of this equation, which could be called the Kohn–Sham equation in this particular case. Looking back at the definition of the functional $F$, we clearly see that it produces the energy of the system for the appropriate density, because the first term vanishes for such a density and the second delivers the energy value.

Approximations (exchange–correlation functionals)
The major problem with DFT is that the exact functionals for exchange and correlation are not known, except for the free-electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately. One of the simplest approximations is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:


 * $$E_\text{XC}^\text{LDA}[n] = \int \varepsilon_\text{XC}(n) n(\mathbf r) \,\mathrm d^3 \mathbf r.$$

The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin:


 * $$E_\text{XC}^\text{LSDA}[n_\uparrow, n_\downarrow] = \int \varepsilon_\text{XC}(n_\uparrow, n_\downarrow) n(\mathbf r) \,\mathrm d^3 \mathbf r.$$

In LDA, the exchange–correlation energy is typically separated into the exchange part and the correlation part: $ε_{XC} = ε_{X} + ε_{C}$. The exchange part is called the Dirac (or sometimes Slater) exchange, which takes the form $ε_{X} ∝ n^{1/3}$. There are, however, many mathematical forms for the correlation part. Highly accurate formulae for the correlation energy density $ε_{C}(n_{↑}, n_{↓})$ have been constructed from quantum Monte Carlo simulations of jellium. A simple first-principles correlation functional has been recently proposed as well. Although unrelated to the Monte Carlo simulation, the two variants provide comparable accuracy.
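The Dirac exchange form can be evaluated directly. The sketch below applies $E_X^\text{LDA} = -\tfrac{3}{4}(3/\pi)^{1/3}\int n^{4/3}\,\mathrm d^3r$ (atomic units) to the exact hydrogen 1s density, a standard test case; the grid parameters are arbitrary illustrative choices.

```python
import numpy as np

# Dirac (Slater) LDA exchange, eps_X ∝ n^(1/3), evaluated for the exact
# hydrogen-atom ground-state density (atomic units):
# E_X^LDA = -(3/4) (3/pi)^(1/3) ∫ n(r)^(4/3) d^3r
r = np.linspace(1e-6, 30, 20000)
dr = r[1] - r[0]
n = np.exp(-2 * r) / np.pi                   # hydrogen 1s density
integrand = n**(4/3) * 4 * np.pi * r**2      # radial volume element
E_x = -(3/4) * (3/np.pi)**(1/3) * np.sum(integrand) * dr
print(round(E_x, 4))   # ≈ -0.2127 Ha; exact exchange for hydrogen is -5/16 = -0.3125 Ha
```

The roughly 30% shortfall relative to the exact value illustrates the LDA's tendency to underestimate exchange for strongly inhomogeneous densities such as atoms.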

The LDA treats the density locally as that of a homogeneous electron gas. Because of this, the LDA has a tendency to underestimate the exchange energy and overestimate the correlation energy. The errors due to the exchange and correlation parts tend to compensate each other to a certain degree. To correct for this tendency, it is common to expand in terms of the gradient of the density in order to account for the non-homogeneity of the true electron density. This allows corrections based on the changes in density away from the coordinate. These expansions are referred to as generalized gradient approximations (GGA) and have the following form:


 * $$E_\text{XC}^\text{GGA}[n_\uparrow, n_\downarrow] = \int \varepsilon_\text{XC}(n_\uparrow, n_\downarrow, \nabla n_\uparrow, \nabla n_\downarrow) n(\mathbf r) \,\mathrm d^3 \mathbf r.$$

Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.
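As an illustration of the GGA form, the sketch below evaluates a gradient-corrected exchange energy for the hydrogen 1s density using the PBE exchange enhancement factor $F_x(s) = 1 + \kappa - \kappa/(1 + \mu s^2/\kappa)$. The choice of test density, the grid, and the restriction to the exchange part only (a real GGA also includes gradient-corrected correlation) are simplifying assumptions.

```python
import numpy as np

# GGA-style exchange using the PBE enhancement factor, applied to the
# hydrogen 1s density (atomic units). Exchange part only; illustrative.
kappa, mu = 0.804, 0.21951                    # PBE exchange parameters
r = np.linspace(1e-6, 30, 20000)
dr = r[1] - r[0]
n = np.exp(-2 * r) / np.pi
grad_n = np.abs(np.gradient(n, r))            # |dn/dr| for a spherical density
s = grad_n / (2 * (3 * np.pi**2)**(1/3) * n**(4/3))   # reduced density gradient
eps_x_unif = -(3/4) * (3/np.pi)**(1/3) * n**(1/3)     # uniform-gas exchange density
F_x = 1 + kappa - kappa / (1 + mu * s**2 / kappa)     # PBE enhancement factor
E_x_lda = np.sum(eps_x_unif * n * 4 * np.pi * r**2) * dr
E_x_gga = np.sum(eps_x_unif * F_x * n * 4 * np.pi * r**2) * dr
print(E_x_gga < E_x_lda)   # gradient enhancement deepens the exchange energy
```

Since $F_x(s) \ge 1$, the gradient correction always deepens the (negative) exchange energy relative to the LDA, partially repairing the LDA's underestimate for atoms.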

Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA. In its original form, a meta-GGA functional includes the second derivative of the electron density (the Laplacian), whereas GGA includes only the density and its first derivative in the exchange–correlation potential.

Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density.

Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals.

Generalizations to include magnetic fields
The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt, the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris, the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally.

Applications


In general, density functional theory finds increasingly broad application in chemistry and materials science for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behavior in dilute magnetic semiconductor materials, and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors. It has also been shown that DFT gives good results in the prediction of sensitivity of some nanostructures to environmental pollutants like sulfur dioxide or acrolein, as well as prediction of mechanical properties.

In practice, Kohn–Sham theory can be applied in several distinct ways, depending on what is being investigated. In solid-state calculations, the local density approximations are still commonly used along with plane-wave basis sets, as an electron-gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron-gas approximation; however, they must reduce to LDA in the electron-gas limit. Among physicists, one of the most widely used functionals is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized gradient parameterization of the free-electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation functionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a "training set" of molecules. Although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). 
In the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments.

Density functional theory is generally highly accurate but computationally expensive. In recent years, DFT has been used with machine learning techniques, especially graph neural networks, to create machine learning potentials. These graph neural networks approximate DFT, with the aim of achieving similar accuracies with much less computation, and are especially beneficial for large systems. They are trained using DFT-calculated properties of a known set of molecules. Researchers have been trying to approximate DFT with machine learning for decades, but have only recently made good estimators. Breakthroughs in model architecture and data preprocessing that more heavily encoded theoretical knowledge, especially regarding symmetries and invariances, have enabled huge leaps in model performance. Using backpropagation, the process by which neural networks learn from training errors, to extract meaningful information about forces and densities has similarly improved the accuracy of machine learning potentials. By 2023, for example, the DFT approximator Matlantis could simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20,000,000 times faster than DFT with similar accuracy, showcasing the power of DFT approximators in the artificial intelligence age. ML approximations of DFT have historically faced substantial transferability issues, with models failing to generalize potentials from some types of elements and compounds to others; improvements in architecture and data have slowly mitigated, but not eliminated, this issue. For very large systems, electrically nonneutral simulations, and intricate reaction pathways, DFT approximators often remain insufficiently lightweight or insufficiently accurate.

Thomas–Fermi model
The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Llewellyn Thomas and Enrico Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space with two electrons in every $$h^3$$ of volume. For each element of coordinate space volume $$\mathrm d^3 \mathbf r$$ we can fill out a sphere of momentum space up to the Fermi momentum $$p_\text{F}$$


 * $$\tfrac43 \pi p_\text{F}^3(\mathbf r).$$

Equating the number of electrons in coordinate space to that in phase space gives


 * $$n(\mathbf r) = \frac{8\pi}{3h^3} p_\text{F}^3(\mathbf r).$$

Solving for $p_{F}$ and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:


 * $$\begin{align}

t_\text{TF}[n] &= \frac{p^2}{2m_e} \propto \frac{(n^{1/3})^2}{2m_e} \propto n^{2/3}(\mathbf r), \\ T_\text{TF}[n] &= C_\text{F} \int n(\mathbf r) n^{2/3}(\mathbf r) \,\mathrm d^3 \mathbf r = C_\text{F} \int n^{5/3}(\mathbf r) \,\mathrm d^3 \mathbf r, \end{align}$$

where


 * $$C_\text{F} = \frac{3h^2}{10m_e} \left(\frac{3}{8\pi}\right)^{2/3}.$$

As such, they were able to calculate the energy of an atom using this kinetic-energy functional combined with the classical expressions for the nucleus–electron and electron–electron interactions (which can both also be represented in terms of the electron density).
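The kinetic-energy functional above can be evaluated directly. The sketch below computes $C_\text{F}$ in atomic units, where it reduces to $(3/10)(3\pi^2)^{2/3}$, and applies $T_\text{TF}[n]$ to the exact hydrogen 1s density; the test density and grid are illustrative choices.

```python
import numpy as np

# Thomas-Fermi kinetic-energy functional T_TF[n] = C_F ∫ n^(5/3) d^3r,
# evaluated in atomic units for the exact hydrogen 1s density and
# compared with the exact kinetic energy of 0.5 Ha.
C_F = (3 / 10) * (3 * np.pi**2)**(2 / 3)     # ≈ 2.871 in atomic units
r = np.linspace(1e-6, 30, 20000)
dr = r[1] - r[0]
n = np.exp(-2 * r) / np.pi                   # hydrogen 1s density
T_TF = C_F * np.sum(n**(5/3) * 4 * np.pi * r**2) * dr
print(round(C_F, 3), round(T_TF, 3))         # ≈ 2.871 and ≈ 0.289 (exact: 0.5 Ha)
```

The large shortfall (about 0.29 Ha against the exact 0.5 Ha) illustrates why the kinetic energy is the dominant source of error in Thomas–Fermi theory.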

Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic-energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle. An exchange-energy functional was added by Paul Dirac in 1928.

However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy and those due to the complete neglect of electron correlation.

Edward Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic-energy functional.

The kinetic-energy functional can be improved by adding the von Weizsäcker (1935) correction:


 * $$T_\text{W}[n] = \frac{\hbar^2}{8m} \int \frac{|\nabla n(\mathbf r)|^2}{n(\mathbf r)} \,\mathrm d^3 \mathbf r.$$
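The effect of this correction can be checked on a single-orbital density, for which the von Weizsäcker term alone reproduces the exact kinetic energy. The sketch below (atomic units, $\hbar = m = 1$) uses the hydrogen 1s density as the test case.

```python
import numpy as np

# von Weizsäcker correction T_W[n] = (1/8) ∫ |∇n|^2 / n d^3r (atomic units).
# For a one-orbital density such as hydrogen 1s it gives the exact kinetic
# energy, 0.5 Ha, illustrating why it improves on the Thomas-Fermi form.
r = np.linspace(1e-6, 30, 20000)
dr = r[1] - r[0]
n = np.exp(-2 * r) / np.pi
grad_n = np.gradient(n, r)                   # dn/dr (spherically symmetric density)
T_W = (1/8) * np.sum(grad_n**2 / n * 4 * np.pi * r**2) * dr
print(round(T_W, 3))   # ≈ 0.5 Ha
```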

Hohenberg–Kohn theorems
The Hohenberg–Kohn theorems relate to any system consisting of electrons moving under the influence of an external potential.

Theorem 1. The external potential (and hence the total energy) is a unique functional of the electron density.


 * If two systems of electrons, one trapped in a potential $$v_1(\mathbf r)$$ and the other in $$v_2(\mathbf r)$$, have the same ground-state density $$n(\mathbf r)$$, then $$v_1(\mathbf r) - v_2(\mathbf r)$$ is necessarily a constant.


 * Corollary 1: the ground-state density uniquely determines the potential and thus all properties of the system, including the many-body wavefunction. In particular, the HK functional, defined as $$F[n] = T[n] + U[n]$$, is a universal functional of the density (not depending explicitly on the external potential).
 * Corollary 2: Since the sum of the occupied-state energies provides the energy content of the Hamiltonian, which is a unique functional of the ground-state charge density, the spectrum of the Hamiltonian is also a unique functional of the ground-state charge density.

Theorem 2. The functional that delivers the ground-state energy of the system gives the lowest energy if and only if the input density is the true ground-state density.


 * In other words, the energy content of the Hamiltonian reaches its absolute minimum, i.e., the ground state, when the charge density is that of the ground state.


 * For any positive integer $$N$$ and potential $$v(\mathbf r)$$, a density functional $$F[n]$$ exists such that
 * $$E_{(v,N)}[n] = F[n] + \int v(\mathbf r) n(\mathbf r) \,\mathrm d^3 \mathbf r$$
 * reaches its minimal value at the ground-state density of $$N$$ electrons in the potential $$v(\mathbf r)$$. The minimal value of $$E_{(v,N)}[n]$$ is then the ground-state energy of this system.

Pseudo-potentials
The many-electron Schrödinger equation can be greatly simplified if the electrons are divided into two groups: valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that the inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce into calculations, they remained forgotten until the late 1950s.



Ab initio pseudo-potentials
A crucial step toward more realistic pseudo-potentials was given by Topp and Hopfield, who suggested that the pseudo-potential should be adjusted so that it describes the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance $r_{l}$. The pseudo-wavefunctions are also forced to have the same norm (i.e., the so-called norm-conserving condition) as the true valence wavefunctions and can be written as


 * $$\begin{align} R_l^\text{PP}(r) &= R_{nl}^\text{AE}(r), \quad \text{for } r > r_l,\\ \int_0^{r_l} \big|R_l^\text{PP}(r)\big|^2 r^2 \,\mathrm{d}r &= \int_0^{r_l} \big|R_{nl}^\text{AE}(r)\big|^2 r^2 \,\mathrm{d}r, \end{align}$$

where $R_{l}(r)$ is the radial part of the wavefunction with angular momentum $l$, and PP and AE denote the pseudo-wavefunction and the true (all-electron) wavefunction respectively. The index $n$ in the true wavefunctions denotes the valence level. The distance $r_{l}$ beyond which the true and the pseudo-wavefunctions are equal is also dependent on $l$.
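The norm-conserving condition can be illustrated numerically. The sketch below uses the hydrogen 2s all-electron radial function together with a made-up nodeless shape inside an assumed cut-off radius $r_l$; the pseudo part is simply rescaled so the core charge matches. A real pseudopotential construction also enforces smooth matching of value and derivative at $r_l$, which is omitted here.

```python
import numpy as np

# Illustrative check of the norm-conserving condition (made-up functions, not
# a real pseudopotential generation; smooth matching at r_l is omitted).
r = np.linspace(1e-6, 30.0, 60001)
dr = r[1] - r[0]
r_l = 2.0                                       # assumed matching radius

# Hydrogen 2s all-electron radial function (atomic units); it has a node at r = 2.
R_ae = (1.0 / np.sqrt(2.0)) * (1.0 - r / 2.0) * np.exp(-r / 2.0)

inside = r <= r_l
q_ae = np.sum(R_ae[inside]**2 * r[inside]**2) * dr   # all-electron core charge

# Nodeless trial shape inside r_l (hypothetical), rescaled so that it carries
# exactly the same charge inside r_l as the all-electron function.
shape = r * np.exp(-r)
q_shape = np.sum(shape[inside]**2 * r[inside]**2) * dr
R_pp = np.where(inside, np.sqrt(q_ae / q_shape) * shape, R_ae)  # = R_ae outside r_l

q_pp = np.sum(R_pp[inside]**2 * r[inside]**2) * dr
print(q_ae, q_pp)   # equal core charges: the norm-conserving integral condition holds
```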

Electron smearing
The electrons of a system will occupy the lowest Kohn–Sham eigenstates up to a given energy level according to the Aufbau principle. This corresponds to the steplike Fermi–Dirac distribution at absolute zero. If there are several degenerate or nearly degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. to allow fractional occupancies. One approach is to assign a finite temperature to the electron Fermi–Dirac distribution; other approaches assign a cumulative Gaussian distribution to the electrons or use a Methfessel–Paxton method.
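A minimal sketch of the finite-temperature Fermi–Dirac approach (the eigenvalues and electron count below are assumed toy values): the Fermi level is located by bisection so that the fractional occupancies sum to the electron count, and two degenerate states at the Fermi level end up sharing their charge evenly instead of causing occupation oscillations.

```python
import numpy as np

# Fermi-Dirac smearing on a toy spectrum (assumed eigenvalues): fractional
# occupancies f_i = 1 / (exp((e_i - mu)/kT) + 1), with the Fermi level mu
# found by bisection so that the occupancies sum to the electron count N.

def fermi_dirac(eps, mu, kT):
    # clip the exponent to avoid overflow for states far from the Fermi level
    return 1.0 / (np.exp(np.clip((eps - mu) / kT, -60.0, 60.0)) + 1.0)

def find_mu(eps, N, kT, lo=-100.0, hi=100.0):
    for _ in range(200):               # bisection on the monotone occupancy sum
        mid = 0.5 * (lo + hi)
        if fermi_dirac(eps, mid, kT).sum() < N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = np.array([-5.0, -1.0, 0.0, 0.0, 3.0])  # two degenerate states at the Fermi level
N = 3.0                                      # electrons to place (spinless toy model)
mu = find_mu(eps, N, kT=0.1)
occ = fermi_dirac(eps, mu, kT=0.1)
print(mu, occ)   # the degenerate pair shares one electron, occupations near 0.5 each
```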

Classical density functional theory
Classical density functional theory is a classical statistical method to investigate the properties of many-body systems consisting of interacting molecules, macromolecules, nanoparticles or microparticles. The classical non-relativistic method is correct for classical fluids with particle velocities much less than the speed of light and a thermal de Broglie wavelength smaller than the distance between particles. The theory is based on the calculus of variations of a thermodynamic functional, which is a function of the spatially dependent density function of particles, thus the name. The same name is used for quantum DFT, which is the theory to calculate the electronic structure of electrons based on spatially dependent electron density with quantum and relativistic effects. Classical DFT is a popular and useful method to study fluid phase transitions, ordering in complex liquids, physical characteristics of interfaces and nanomaterials. Since the 1970s it has been applied to the fields of materials science, biophysics, chemical engineering and civil engineering. Computational costs are much lower than for molecular dynamics simulations, which provide similar data and a more detailed description but are limited to small systems and short time scales. Classical DFT is valuable to interpret and test numerical results and to define trends, although details of the precise motion of the particles are lost due to averaging over all possible particle trajectories. As in electronic systems, there are fundamental and numerical difficulties in using DFT to quantitatively describe the effect of intermolecular interaction on structure, correlations and thermodynamic properties.

Classical DFT addresses the difficulty of describing thermodynamic equilibrium states of many-particle systems with nonuniform density. Classical DFT has its roots in theories such as the van der Waals theory for the equation of state and the virial expansion method for the pressure. In order to account for correlations in the positions of particles, the direct correlation function was introduced by Leonard Ornstein and Frits Zernike in 1914 as the effective interaction between two particles in the presence of a number of surrounding particles. The connection to the density pair distribution function was given by the Ornstein–Zernike equation. The importance of correlation for thermodynamic properties was explored through density distribution functions. The functional derivative was introduced to define the distribution functions of classical mechanical systems. Theories were developed for simple and complex liquids using the ideal gas as a basis for the free energy and adding molecular forces as a second-order perturbation. A term in the gradient of the density was added to account for non-uniformity in density in the presence of external fields or surfaces. These theories can be considered precursors of DFT.

To develop a formalism for the statistical thermodynamics of non-uniform fluids, functional differentiation was used extensively by Percus and Lebowitz (1961), which led to the Percus–Yevick equation linking the density distribution function and the direct correlation. Other closure relations were also proposed, such as the classical-map hypernetted-chain method and the BBGKY hierarchy. In the late 1970s classical DFT was applied to the liquid–vapor interface and the calculation of surface tension. Other applications followed: the freezing of simple fluids, formation of the glass phase, the crystal–melt interface and dislocation in crystals, properties of polymer systems, and liquid crystal ordering. Classical DFT was applied to colloid dispersions, which were discovered to be good models for atomic systems. By assuming local chemical equilibrium and using the local chemical potential of the fluid from DFT as the driving force in fluid transport equations, equilibrium DFT is extended to describe non-equilibrium phenomena and fluid dynamics on small scales.

Classical DFT allows the calculation of the equilibrium particle density and prediction of thermodynamic properties and behavior of a many-body system on the basis of model interactions between particles. The spatially dependent density determines the local structure and composition of the material. It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble. The grand potential is evaluated as the sum of the ideal-gas term with the contribution from external fields and an excess thermodynamic free energy arising from interparticle interactions. In the simplest approach the excess free-energy term is expanded about a system of uniform density using a functional Taylor expansion. The excess free energy is then a sum of the contributions from s-body interactions with density-dependent effective potentials representing the interactions between s particles. In most calculations the terms in the interactions of three or more particles are neglected (second-order DFT). When the structure of the system to be studied is not well approximated by a low-order perturbation expansion with a uniform phase as the zero-order term, non-perturbative free-energy functionals have also been developed. The minimization of the grand potential functional over arbitrary local density functions for fixed chemical potential, volume and temperature provides self-consistent thermodynamic equilibrium conditions, in particular, for the local chemical potential. The functional is not in general a convex functional of the density; solutions may not be local minima. Limiting the expansion to low-order corrections in the local density is a well-known problem, although the results agree reasonably well with experiment.

A variational principle is used to determine the equilibrium density. It can be shown that for constant temperature and volume the correct equilibrium density minimizes the grand potential functional $$\Omega$$ of the grand canonical ensemble over density functions $$n(\mathbf r)$$. In the language of functional differentiation (Mermin theorem):


 * $$\frac{\delta \Omega}{\delta n(\mathbf r)} = 0.$$

The Helmholtz free energy functional $$F$$ is defined as $$F = \Omega + \int d^3 \mathbf r\, n(\mathbf r) \mu(\mathbf r)$$. The functional derivative with respect to the density function determines the local chemical potential: $$\mu(\mathbf r) = \delta F / \delta n(\mathbf r)$$. In classical statistical mechanics the partition function is a sum over the probabilities of the microstates of $N$ classical particles, as measured by the Boltzmann factor of the Hamiltonian of the system. The Hamiltonian splits into kinetic and potential energy, which includes interactions between particles, as well as external potentials. The partition function of the grand canonical ensemble defines the grand potential. A correlation function is introduced to describe the effective interaction between particles.

The s-body density distribution function is defined as the statistical ensemble average $$\langle\dots\rangle$$ of particle positions. It measures the probability of finding s particles at points in space $$\mathbf r_1, \dots, \mathbf r_s$$:


 * $$n_s(\mathbf r_1, \dots, \mathbf r_s) = \frac{N!}{(N - s)!} \big\langle \delta(\mathbf r_1 - \mathbf r'_1) \dots \delta(\mathbf r_s - \mathbf r'_s) \big\rangle.$$

From the definition of the grand potential, the functional derivative with respect to the local chemical potential is the density; higher-order density correlations for two, three, four or more particles are found from higher-order derivatives:


 * $$\frac{\delta^s \Omega}{\delta \mu(\mathbf r_1) \dots \delta\mu(\mathbf r_s)} = (-1)^s n_s(\mathbf r_1, \dots, \mathbf r_s).$$

The radial distribution function with s = 2 measures the change in the density at a given point for a change of the local chemical potential at a distant point.

In a fluid the free energy is a sum of the ideal free energy and the excess free-energy contribution $$\Delta F$$ from interactions between particles. In the grand ensemble the functional derivatives in the density yield the direct correlation functions $$c_s$$:


 * $$\frac{1}{kT} \frac{\delta^s \Delta F}{\delta n(\mathbf r_1) \dots \delta n(\mathbf r_s)} = c_s(\mathbf r_1, \dots, \mathbf r_s).$$

The one-body direct correlation function plays the role of an effective mean field. The functional derivative of the one-body direct correlation with respect to the density yields the direct correlation function between two particles, $$c_2$$. The direct correlation function is the correlation contribution to the change of local chemical potential at a point $$\mathbf r$$ for a density change at $$\mathbf r'$$ and is related to the work of creating density changes at different positions. In dilute gases the direct correlation function is simply the pairwise interaction between particles (Debye–Hückel equation). The Ornstein–Zernike equation between the pair and the direct correlation functions is derived from the equation


 * $$\int d^3 \mathbf r''\, \frac{\delta \mu(\mathbf r)}{\delta n(\mathbf r'')} \frac{\delta n(\mathbf r'')}{\delta\mu(\mathbf r')} = \delta(\mathbf r - \mathbf r').$$

Various assumptions and approximations adapted to the system under study lead to expressions for the free energy. Correlation functions are used to calculate the free-energy functional as an expansion about a known reference system. If the non-uniform fluid can be described by a density distribution that is not far from uniform, a functional Taylor expansion of the free energy in density increments leads to an expression for the thermodynamic potential using known correlation functions of the uniform system. In the square gradient approximation a strongly non-uniform density contributes a term in the gradient of the density. In a perturbation theory approach the direct correlation function is given by the sum of the direct correlation in a known system such as hard spheres and a term in a weak interaction such as the long-range London dispersion force. In a local density approximation the local excess free energy is calculated from the effective interactions with particles distributed at uniform density of the fluid in a cell surrounding a particle. Other improvements have been suggested, such as the weighted density approximation for a direct correlation function of a uniform system, which distributes the neighboring particles with an effective weighted density calculated from a self-consistent condition on the direct correlation function.

The variational Mermin principle leads to an equation for the equilibrium density, and system properties are calculated from the solution for the density. The equation is a non-linear integro-differential equation, and finding a solution is not trivial; except for the simplest models, numerical methods are required. Classical DFT is supported by standard software packages, and specific software is currently under development. Assumptions can be made to propose trial functions as solutions, and the free energy is expressed in the trial functions and optimized with respect to parameters of the trial functions. Examples are a localized Gaussian function centered on crystal lattice points for the density in a solid and the hyperbolic tangent $$\tanh(r)$$ for interfacial density profiles.
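As an illustration of the trial-function approach (model and parameters assumed for illustration): for a square-gradient free energy with a double-well local term, a $$\tanh$$ profile for a planar interface has a single width parameter that can be optimized directly; for this particular model the exact minimizer is known to have width $$\sqrt 2$$.

```python
import numpy as np

# Trial-function sketch for an interfacial profile (assumed model): excess free
# energy F[rho] = integral of (1/2) rho'(x)^2 + (1/4) (rho^2 - 1)^2 dx, with a
# tanh(x/w) trial profile.  The exact minimizer of this model is w = sqrt(2).

x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]

def excess_free_energy(w):
    rho = np.tanh(x / w)                      # trial interfacial profile
    drho = np.gradient(rho, dx)
    grad_term = 0.5 * drho**2                 # square-gradient contribution
    well_term = 0.25 * (rho**2 - 1.0)**2      # local double-well contribution
    return np.sum(grad_term + well_term) * dx

widths = np.linspace(0.8, 2.5, 171)
F_vals = [excess_free_energy(w) for w in widths]
w_best = widths[int(np.argmin(F_vals))]
print(w_best, min(F_vals))   # minimum near w = sqrt(2), F = 2*sqrt(2)/3
```

Analytically the trial energy is $$F(w) = 2/(3w) + w/3$$, so the one-parameter optimization reproduces both the exact width and the exact surface free energy of this model.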

Classical DFT has found many applications, for example:
 * developing new functional materials in materials science, in particular nanotechnology;
 * studying the properties of fluids at surfaces and the phenomena of wetting and adsorption;
 * understanding life processes in biotechnology;
 * improving filtration methods for gases and fluids in chemical engineering;
 * fighting pollution of water and air in environmental science;
 * generating new procedures in microfluidics and nanofluidics.

The extension of classical DFT towards nonequilibrium systems is known as dynamical density functional theory (DDFT). DDFT describes the time evolution of the one-body density $$\rho(\boldsymbol{r},t)$$ of a colloidal system, which is governed by the equation


 * $$\frac{\partial \rho}{\partial t} = \Gamma \nabla \cdot \left(\rho\nabla \frac{\delta F}{\delta \rho} \right)$$

with the mobility $$\Gamma $$ and the free energy $$ F $$. DDFT can be derived from the microscopic equations of motion for a colloidal system (Langevin equations or Smoluchowski equation) based on the adiabatic approximation, which corresponds to the assumption that the two-body distribution in a nonequilibrium system is identical to that in an equilibrium system with the same one-body density. For a system of noninteracting particles, DDFT reduces to the standard diffusion equation.
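The noninteracting limit can be checked numerically. In the sketch below (grid, mobility and temperature are assumed values), the ideal-gas free energy $$F[\rho] = k_\text{B}T \int \rho (\ln \rho - 1)$$ gives $$\delta F/\delta\rho = k_\text{B}T \ln\rho$$, so the DDFT equation should spread an initial Gaussian like diffusion with $$D = \Gamma k_\text{B}T$$.

```python
import numpy as np

# DDFT with the ideal-gas free energy (assumed parameters): since
# delta F / delta rho = kT ln(rho), we have rho * grad(delta F / delta rho)
# = kT grad(rho), and the DDFT equation reduces to diffusion with D = Gamma*kT.

L_box, M = 40.0, 800
x = np.linspace(-L_box / 2, L_box / 2, M, endpoint=False)
dx = x[1] - x[0]
Gamma, kT = 1.0, 1.0
D = Gamma * kT

rho = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi) + 1e-12  # Gaussian + tiny floor
dt = 0.2 * dx**2 / D                                      # stable explicit step
steps = int(round(1.0 / dt))
for _ in range(steps):
    mu_loc = kT * np.log(rho)                   # delta F / delta rho (ideal gas)
    flux = rho * np.gradient(mu_loc, dx)        # rho * grad(mu)
    rho = rho + dt * Gamma * np.gradient(flux, dx)   # explicit DDFT update

t = steps * dt
sigma2 = 1.0 + 2.0 * D * t                      # variance of the diffusing Gaussian
rho_exact = np.exp(-x**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
err = np.max(np.abs(rho - rho_exact))
print(err)                                      # small discretization error
```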

Lists

 * List of quantum chemistry and solid state physics software
 * List of software for molecular mechanics modeling