User:Braydenbekker/Interatomic Potentials

Machine-learning potentials
Current research in interatomic potentials involves using machine learning methods. The total energy is then written as $$V_\mathrm{TOT} = \sum_i^N E(\mathbf{q}_i),$$ where $$\mathbf{q}_i$$ is a mathematical representation of the atomic environment surrounding the atom $$i$$, known as the descriptor, and $$E$$ is a machine-learning model that predicts the energy of atom $$i$$ from that descriptor. An accurate machine-learning potential requires both a robust descriptor and a suitable machine-learning framework. It is also possible to use a linear combination of multiple descriptors with associated machine-learning models. Potentials have been constructed using a variety of machine-learning methods, including neural networks, Gaussian process regression, and linear regression.
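The decomposition above can be illustrated with a minimal sketch. The descriptor below is a hypothetical, heavily simplified radial fingerprint (in the spirit of a Behler-Parrinello symmetry function, but not any production descriptor), and the energy model is the simplest machine-learning choice, a linear function with made-up weights:

```python
import numpy as np

def descriptor(positions, i, cutoff=4.0, etas=(0.5, 1.0, 2.0)):
    """Toy radial descriptor q_i: Gaussian-weighted neighbour counts
    inside a smooth cutoff. Purely illustrative, not a real descriptor."""
    r = np.linalg.norm(positions - positions[i], axis=1)
    r = r[(r > 0) & (r < cutoff)]                   # neighbour distances
    fc = 0.5 * (np.cos(np.pi * r / cutoff) + 1.0)   # smooth cutoff function
    return np.array([np.sum(np.exp(-eta * r**2) * fc) for eta in etas])

def total_energy(positions, model):
    """V_TOT = sum_i E(q_i), with E supplied by any regression model."""
    return sum(model(descriptor(positions, i)) for i in range(len(positions)))

# Linear model E(q) = w . q + b with arbitrary example weights.
w, b = np.array([-1.0, 0.3, 0.1]), 0.05
linear_E = lambda q: w @ q + b

atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.6, 0.0]])
V = total_energy(atoms, linear_E)
```

Swapping `linear_E` for a neural network or a Gaussian-process regressor changes only the model, not the sum over atomic environments.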

Like most modern potentials, a machine-learning potential is trained on total energies, forces, and possibly stresses obtained from quantum-level calculations, such as density functional theory. Unlike analytical models, however, its accuracy can be systematically converged to be comparable with the underlying quantum calculations. Machine-learned potentials are therefore, in general, more accurate than traditional analytical potentials, but they are less able to extrapolate. Further, owing to the complexity of the machine-learning model and the descriptors, they are computationally far more expensive than their analytical counterparts.
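For a linear model the fit to reference energies reduces to ordinary least squares, which the following sketch shows on synthetic data (the descriptor matrix, true weights, and noise level are all invented for illustration; real training sets come from DFT and usually include forces as well):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is the summed descriptor of one
# structure, since for a linear model E(q) = w . q the total energy is
# V = w . sum_i q_i.
Q = rng.normal(size=(20, 4))              # 20 structures, 4 descriptor terms
w_true = np.array([1.0, -0.5, 0.2, 0.8])  # "unknown" physics to recover
E_ref = Q @ w_true + 0.01 * rng.normal(size=20)  # mock reference energies

# Fit the weights by least squares and measure the residual error.
w_fit, *_ = np.linalg.lstsq(Q, E_ref, rcond=None)
rmse = np.sqrt(np.mean((Q @ w_fit - E_ref) ** 2))
```

With a nonlinear model (a neural network, say) the same residual is minimized iteratively rather than in closed form, and force terms are typically added to the objective with their own weight.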

Machine-learning potentials may also be combined with analytical potentials, for example, to include known physics such as the screened Coulomb repulsion, or to impose physical constraints on the predictions.
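One common way to do this is to blend a short-range analytical repulsion into the ML prediction with a smooth switching function. The sketch below uses a single-exponential simplification of the ZBL screened Coulomb potential (the real ZBL screening function is a sum of four exponentials) and a stand-in `ml_pair` callable; the switching radii are arbitrary example values:

```python
import numpy as np

def screened_coulomb(r, Z1=14, Z2=14):
    """Simplified ZBL-style screened Coulomb repulsion in eV (r in Angstrom).
    Single-exponential screening; the true ZBL form uses four terms."""
    a = 0.4685 / (Z1**0.23 + Z2**0.23)          # screening length, Angstrom
    return 14.3996 * Z1 * Z2 / r * np.exp(-r / a)  # e^2/(4 pi eps0) ~ 14.4 eV*A

def switch(r, r_in=1.0, r_out=2.0):
    """Smoothstep: 1 (pure analytical) below r_in, 0 (pure ML) above r_out."""
    t = np.clip((r - r_in) / (r_out - r_in), 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

def pair_energy(r, ml_pair):
    """Blend the known short-range physics with the ML prediction."""
    s = switch(r)
    return s * screened_coulomb(r) + (1.0 - s) * ml_pair(r)

ml_pair = lambda r: -2.0   # stand-in for a fitted ML pair interaction
```

At very short separations, where training data are scarce, the physically motivated repulsion dominates; at bonding distances the ML model takes over entirely.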