Empirical dynamic modeling

Empirical dynamic modeling (EDM) is a framework for analysis and prediction of nonlinear dynamical systems. Applications include population dynamics, ecosystem services, medicine, neuroscience, dynamical systems, geophysics, and human-computer interaction. EDM was originally developed by Robert May and George Sugihara. It can be considered a methodology for data modeling, predictive analytics, dynamical system analysis, machine learning and time series analysis.

Description
Mathematical models have tremendous power to describe observations of real-world systems. They are routinely used to test hypotheses, explain mechanisms, and predict future outcomes. However, real-world systems are often nonlinear and multidimensional, in some instances rendering explicit equation-based modeling problematic. Empirical models, which infer patterns and associations from the data instead of using hypothesized equations, represent a natural and flexible framework for modeling complex dynamics.

Donald DeAngelis and Simeon Yurek illustrated that canonical statistical models are ill-posed when applied to nonlinear dynamical systems. A hallmark of nonlinear dynamics is state dependence: current states are related to the previous states that govern the transition from one state to the next. EDM operates in this space, the multidimensional state-space of the system dynamics, rather than on one-dimensional observational time series. EDM does not presume relationships among states, for example, a functional dependence, but projects future states from localised, neighboring states. EDM is thus a state-space, nearest-neighbors paradigm where system dynamics are inferred from states derived from observational time series. This provides a model-free representation of the system that naturally encompasses nonlinear dynamics.

A cornerstone of EDM is the recognition that time series observed from a dynamical system can be transformed into higher-dimensional state-spaces by time-delay embedding, following Takens's theorem. The state-space models are evaluated based on in-sample fidelity to observations, conventionally with the Pearson correlation between predictions and observations.
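
The delay embedding itself is straightforward to construct. The sketch below is illustrative (the function name `delay_embed` and the NumPy implementation are not from any particular EDM package): it builds the $$E$$-dimensional embedding of a scalar series, where each row collects the current value and its $$E-1$$ lagged copies.

```python
import numpy as np

def delay_embed(x, E, tau=1):
    """Return the E-dimensional time-delay embedding of a scalar series:
    row t is (x[t], x[t - tau], ..., x[t - (E - 1) * tau])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (E - 1) * tau  # number of fully populated rows
    return np.column_stack(
        [x[(E - 1 - j) * tau : (E - 1 - j) * tau + n] for j in range(E)]
    )
```

For example, `delay_embed([0, 1, 2, 3, 4], E=3)` returns the rows `(2, 1, 0)`, `(3, 2, 1)`, `(4, 3, 2)`.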

Methods
EDM is continuing to evolve. As of 2022, the main algorithms are Simplex projection, Sequential locally weighted global linear maps (S-Map) projection, Multivariate embedding in Simplex or S-Map, Convergent cross mapping (CCM), and Multiview Embedding, described below.

Nearest neighbors are found according to: $$\text{NN}(y, X, k) = \{N_1, \dots, N_k\} \text{ such that } \| X_{N_i}^{E} - y\| \leq \| X_{N_j}^{E} - y\| \text{ for } 1 \leq i \leq j \leq k$$
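
Assuming the embedded library states are stored as rows of a NumPy array, this neighbor search can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def nearest_neighbors(y, X, k):
    """Indices N_1..N_k of the k rows of X closest to the query point y,
    ordered so that their distances to y are non-decreasing."""
    dist = np.linalg.norm(X - y, axis=1)
    return np.argsort(dist, kind="stable")[:k]
```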

Simplex
Simplex projection is a nearest neighbor projection. It locates the $$k$$ nearest neighbors to the location in the state-space from which a prediction is desired. To minimize the number of free parameters, $$k$$ is typically set to $$E+1$$, defining an $$E+1$$ dimensional simplex in the state-space. The prediction is computed as the weighted average of the simplex vertices projected $$T_p$$ points ahead in the state-space. Each neighbor is weighted by an exponentially decaying function of its distance from the prediction origin in the state-space.


 * 1) Find $$k$$ nearest neighbors: $$N \gets \text{NN}(y, X, k)$$
 * 2) Define the distance scale: $$d \gets  \| X_{N_1}^{E} - y\|$$
 * 3) Compute weights: For{$$i=1,\dots,k$$} : $$w_i \gets \exp (-\| X_{N_i}^{E} - y\| / d )$$
 * 4) Average of state-space simplex: $$\hat{y} \gets \sum_{i = 1}^{k} \left(w_iX_{N_i+T_p}\right) / \sum_{i = 1}^{k} w_i$$
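
The four steps above can be sketched as follows; the array names `X` (embedded library states) and `targets` (values observed $$T_p$$ steps after each state) are illustrative, and the small constant guarding against a zero distance scale is an added assumption:

```python
import numpy as np

def simplex_forecast(X, targets, y, k):
    """Simplex projection: weighted average of the T_p-ahead values of the
    k nearest neighbors of query state y.
    X       : (n, E) array of embedded library states
    targets : (n,) values observed T_p steps after each library state"""
    dist = np.linalg.norm(X - y, axis=1)
    nn = np.argsort(dist)[:k]                   # 1) k nearest neighbors
    d = max(dist[nn[0]], 1e-12)                 # 2) distance scale (guard d = 0)
    w = np.exp(-dist[nn] / d)                   # 3) exponential weights
    return np.sum(w * targets[nn]) / np.sum(w)  # 4) weighted average
```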

S-Map
S-Map extends the state-space prediction in Simplex from an average of the $$E+1$$ nearest neighbors to a linear regression fit to all neighbors, localised with an exponential decay kernel. The exponential localisation function is $$F(\theta) = \exp(-\theta d/D)$$, where $$d$$ is the neighbor distance and $$D$$ the mean distance. In this way, depending on the value of $$\theta$$, neighbors close to the prediction origin have a higher weight than those further from it, such that a local linear approximation to the nonlinear system is reasonable. This localisation allows one to identify an optimal local scale, in effect quantifying the degree of state dependence, and hence the nonlinearity of the system.

Another feature of S-Map is that for a properly fit model, the regression coefficients between variables have been shown to approximate the gradient (directional derivative) of variables along the manifold. These Jacobians represent the time-varying interaction strengths between system variables.

 * 1) Find $$k$$ nearest neighbors: $$N \gets \text{NN}(y, X, k)$$
 * 2) Mean distance: $$D \gets \frac{1}{k} \sum_{i=1}^k \| X_{N_i}^{E} - y\|$$
 * 3) Compute weights: For{$$i=1,\dots,k$$} : $$w_i \gets \exp (-\theta \| X_{N_i}^{E} - y\| / D )$$
 * 4) Reweighting matrix: $$W \gets \text{diag}(w_i)$$
 * 5) Design matrix: $$A \gets \begin{bmatrix} 1 & X_{N_1} & X_{N_1 - 1} & \dots & X_{N_1 - E + 1} \\ 1 & X_{N_2} & X_{N_2 - 1} & \dots & X_{N_2 - E + 1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & X_{N_k} & X_{N_k - 1} & \dots & X_{N_k - E + 1} \end{bmatrix}$$
 * 6) Weighted design matrix: $$A \gets WA$$
 * 7) Response vector at $$T_p$$: $$b \gets \begin{bmatrix} X_{N_1 + T_p} \\ X_{N_2 + T_p} \\ \vdots \\ X_{N_k + T_p} \end{bmatrix}$$
 * 8) Weighted response vector: $$b \gets Wb$$
 * 9) Least squares solution (SVD): $$\hat{c} \gets \text{argmin}_{c}\| Ac - b \|_2^2$$
 * 10) Local linear model $$\hat{c}$$ gives the prediction: $$\hat{y} \gets \hat{c}_0 + \sum_{i=1}^E\hat{c}_iy_i$$
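
The S-Map procedure can be sketched as a weighted least-squares solve; the array names are illustrative, and the guard against a zero mean distance is an added assumption:

```python
import numpy as np

def smap_forecast(X, targets, y, theta):
    """S-Map prediction at query state y.
    X       : (n, E) array of embedded library states
    targets : (n,) values observed T_p steps after each library state
    theta   : localisation parameter (theta = 0 -> global linear model)"""
    dist = np.linalg.norm(X - y, axis=1)          # distances to library states
    D = max(dist.mean(), 1e-12)                   # mean distance (guard D = 0)
    w = np.exp(-theta * dist / D)                 # exponential localisation weights
    A = np.column_stack([np.ones(len(X)), X])     # design matrix with intercept
    Aw, bw = A * w[:, None], w * targets          # reweight rows and responses
    c, *_ = np.linalg.lstsq(Aw, bw, rcond=None)   # SVD least squares solution
    return c[0] + c[1:] @ y                       # local linear prediction
```

With `theta = 0` all weights are equal and the fit reduces to a global linear regression; increasing `theta` localises the fit around the query state.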

Multivariate Embedding
Multivariate Embedding recognizes that time-delay embeddings are not the only valid state-space construction. In Simplex and S-Map one can generate a state-space from observational vectors, or time-delay embeddings of a single observational time series, or both.

Convergent Cross Mapping
Convergent cross mapping (CCM) leverages a corollary to the Generalized Takens Theorem that it should be possible to cross predict or cross map between variables observed from the same system. Suppose that in some dynamical system involving variables $$X$$ and $$Y$$, $$X$$ causes $$Y$$. Since $$X$$ and $$Y$$ belong to the same dynamical system, their reconstructions (via embeddings) $$M_{x}$$, and $$M_{y}$$, also map to the same system.

The causal variable $$X$$ leaves a signature on the affected variable $$Y$$, and consequently, the reconstructed states based on $$Y$$ can be used to cross predict values of $$X$$. CCM leverages this property to infer causality by predicting $$X$$ using the $$M_{y}$$ library of points (or vice versa for the other direction of causality), while assessing improvements in cross map predictability as larger and larger random samplings of $$M_{y}$$ are used. If the prediction skill of $$X$$ increases and saturates as the entire $$M_{y}$$ is used, this provides evidence that $$X$$ is causally influencing $$Y$$.
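
A minimal sketch of this procedure follows: it embeds the affected variable, simplex-predicts the putative cause from random library samples of increasing size, and reports the Pearson correlation as cross-map skill. The function name and sampling scheme are illustrative simplifications, not a definitive implementation.

```python
import numpy as np

def ccm_skill(source, target, E, lib_sizes, seed=0):
    """Skill (Pearson r) of cross mapping `source` from the delay embedding
    of `target`, for each library size. Rising, saturating skill is taken
    as evidence that `source` causally influences `target`."""
    rng = np.random.default_rng(seed)
    n = len(target) - (E - 1)
    My = np.column_stack(                             # shadow manifold of target
        [target[E - 1 - j : E - 1 - j + n] for j in range(E)]
    )
    src = np.asarray(source[E - 1 :], dtype=float)    # source aligned with My
    skills = []
    for L in lib_sizes:
        lib = rng.choice(len(My), size=L, replace=False)  # random library sample
        preds = np.empty(len(My))
        for i in range(len(My)):
            nbrs = lib[lib != i]                      # exclude the query itself
            d = np.linalg.norm(My[nbrs] - My[i], axis=1)
            order = np.argsort(d)[: E + 1]            # E+1 nearest library states
            w = np.exp(-d[order] / max(d[order][0], 1e-12))
            preds[i] = np.sum(w * src[nbrs[order]]) / np.sum(w)
        skills.append(float(np.corrcoef(preds, src)[0, 1]))
    return skills
```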

Multiview Embedding
Multiview Embedding is a dimensionality reduction technique in which a large number of candidate state-space vectors are assessed combinatorially to find the combination that maximizes model predictability.
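
A brute-force sketch of this combinatorial assessment follows, scoring every $$E$$-column view by leave-one-out simplex skill; the function name and scoring details are illustrative, and this sketch only ranks the candidate views:

```python
import numpy as np
from itertools import combinations

def rank_views(X, targets, E):
    """Score every E-column view of multivariate data X by leave-one-out
    simplex skill (Pearson r with targets); return (score, columns) pairs
    sorted best-first."""
    results = []
    for cols in combinations(range(X.shape[1]), E):
        V = X[:, cols]
        preds = np.empty(len(V))
        for i in range(len(V)):
            d = np.linalg.norm(V - V[i], axis=1)
            d[i] = np.inf                              # leave the query point out
            nn = np.argsort(d)[: E + 1]                # E+1 nearest neighbors
            w = np.exp(-d[nn] / max(d[nn[0]], 1e-12))
            preds[i] = np.sum(w * targets[nn]) / np.sum(w)
        results.append((float(np.corrcoef(preds, targets)[0, 1]), cols))
    return sorted(results, reverse=True)
```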

Extensions
Extensions to EDM techniques include:
 * Generalized Theorems for Nonlinear State Space Reconstruction
 * Extended Convergent Cross Mapping
 * Dynamic stability
 * S-Map regularization
 * Visual analytics with EDM
 * Convergent Cross Sorting
 * Expert system with EDM hybrid
 * Sliding windows based on the extended convergent cross-mapping
 * Empirical Mode Modeling
 * Variable step sizes with bundle embedding
 * Multiview distance regularised S-map