Direct coupling analysis

Direct coupling analysis or DCA is an umbrella term comprising several methods for analyzing sequence data in computational biology. The common idea of these methods is to use statistical modeling to quantify the strength of the direct relationship between two positions of a biological sequence, excluding effects from other positions. This contrasts with the usual measures of correlation, which can be large even if there is no direct relationship between the positions (hence the name direct coupling analysis). Such a direct relationship can for example be the evolutionary pressure for two positions to maintain mutual compatibility in the biomolecular structure of the sequence, leading to molecular coevolution between the two positions.

DCA has been used in the inference of protein residue contacts, RNA structure prediction, the inference of protein-protein interaction networks, the modeling of fitness landscapes, the generation of novel functional proteins, and the modeling of protein evolution.

Mathematical Model
The basis of DCA is a statistical model for the variability within a set of phylogenetically related biological sequences. When fitted to a multiple sequence alignment (MSA) of sequences of length $$ N $$, the model defines a probability for all possible sequences of the same length. This probability can be interpreted as the probability that the sequence in question belongs to the same class of sequences as the ones in the MSA, for example the class of all protein sequences belonging to a specific protein family.

We denote a sequence by $$ a = (a_1, a_2, \ldots, a_N) $$, with the $$ a_i $$ being categorical variables representing the monomers of the sequence (if the sequences are for example aligned amino acid sequences of proteins of a protein family, the $$ a_i $$ take as values any of the 20 standard amino acids). The probability of a sequence within a model is then defined as

\begin{align} P\left(a | J,h\right) = \frac{1}{Z} \exp{\left(\sum\limits_{i=1}^{N-1} \sum\limits_{j=i+1}^{N} J_{ij}(a_i,a_j) + \sum\limits_{i=1}^{N} h_i(a_i)\right)}, \end{align}

where
 * $$ J,h $$ are sets of real numbers representing the parameters of the model (more below)
 * $$ Z $$ is a normalization constant (a real number) to ensure $$ \sum\limits_{a} P(a | J,h) = 1 $$

The parameters $$ h_i(a_i) $$ depend on a single position $$ i $$ and the symbol $$ a_i $$ at this position. They are usually called fields and represent the propensity of a symbol to be found at a certain position. The parameters $$ J_{ij}(a_i,a_j) $$ depend on pairs of positions $$ i,j $$ and the symbols $$ a_i,a_j $$ at these positions. They are usually called couplings and represent an interaction, i.e. a term quantifying how compatible the symbols at the two positions are with each other. The model is fully connected, so there are interactions between all pairs of positions. It can be seen as a generalization of the Ising model, with spins taking not just two values but any value from a given finite alphabet; when the alphabet has only two symbols, it reduces to the Ising model. Since this generalization is known in statistical physics as the Potts model, the same name is commonly used for the model in DCA.
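As an illustration, the unnormalized log-probability defined by this model can be evaluated directly from a parameter set. The sketch below uses randomly drawn, purely hypothetical parameters; the normalization constant $$ Z $$ is intractable for realistic $$ N $$ and is deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(0)
N, q = 5, 20                                # sequence length, alphabet size
J = 0.01 * rng.normal(size=(N, N, q, q))    # couplings J_ij(a, b); only i < j is used
h = 0.1 * rng.normal(size=(N, q))           # fields h_i(a)

def log_weight(seq, J, h):
    """Log of the unnormalized probability, i.e. log P(a|J,h) + log Z."""
    N = len(seq)
    coupling = sum(J[i, j, seq[i], seq[j]]
                   for i in range(N - 1) for j in range(i + 1, N))
    field = sum(h[i, seq[i]] for i in range(N))
    return coupling + field

seq = rng.integers(0, q, size=N)
print(log_weight(seq, J, h))
```

Ratios of probabilities of two sequences can be computed from these log-weights alone, since $$ Z $$ cancels.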

Even knowing the probabilities of all sequences does not determine the parameters $$ J,h $$ uniquely. For example, a simple transformation of the parameters



$$ J_{ij}(a,b) \rightarrow J_{ij}(a,b) + R_{ij} $$

for any set of real numbers $$ R_{ij} $$ leaves the probabilities the same. The likelihood function is invariant under such transformations as well, so the data cannot be used to fix these degrees of freedom (although a prior on the parameters might do so).

A convention often found in the literature is to fix these degrees of freedom such that the Frobenius norm of the coupling matrix,

$$ F_{ij} = \sqrt{\sum\limits_{a,b} J_{ij}(a,b)^2}, $$

is minimized (independently for every pair of positions $$ i $$ and $$ j $$).
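The gauge freedom can be checked numerically on a toy model small enough for brute-force enumeration. The sketch below uses arbitrary illustrative parameters: shifting every coupling matrix by a constant leaves all sequence probabilities unchanged.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N, q = 3, 2
J = rng.normal(size=(N, N, q, q))
h = rng.normal(size=(N, q))

def probs(J, h):
    """Exact sequence probabilities by brute-force enumeration (tiny N only)."""
    seqs = list(product(range(q), repeat=N))
    w = np.array([np.exp(sum(J[i, j, s[i], s[j]]
                             for i in range(N - 1) for j in range(i + 1, N))
                         + sum(h[i, s[i]] for i in range(N))) for s in seqs])
    return w / w.sum()

# Shifting every coupling matrix J_ij by a constant R_ij leaves P unchanged:
R = rng.normal(size=(N, N))
assert np.allclose(probs(J, h), probs(J + R[:, :, None, None], h))

# Among such constant shifts, subtracting the mean of each J_ij matrix
# minimizes its Frobenius norm:
J_min = J - J.mean(axis=(2, 3), keepdims=True)
```

Note that the mean-subtraction step handles only the constant-shift freedom $$ R_{ij} $$ shown above, not the full gauge freedom of the model.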

Maximum Entropy Derivation
To justify the Potts model, it is often noted that it can be derived following a maximum entropy principle: For a given set of sample covariances and frequencies, the Potts model represents the distribution with the maximal Shannon entropy of all distributions reproducing those covariances and frequencies. For a multiple sequence alignment, the sample covariances are defined as



$$ C_{ij}(a,b) = f_{ij}(a,b) - f_i(a)f_j(b), $$

where $$ f_{ij}(a,b) $$ is the frequency of finding symbols $$a$$ and $$b$$ at positions $$ i $$ and $$ j $$ in the same sequence in the MSA, and $$ f_i(a) $$ the frequency of finding symbol $$ a $$ at position $$ i $$. The Potts model is then the unique distribution $$ P $$ that maximizes the functional



\begin{align} F[P] = &- \sum\limits_{a} P(a) \log P(a) \\ &+ \sum\limits_{i<j} \sum\limits_{x,y} \lambda_{ij}(x,y) \Big( P_{ij}(x,y) - f_{ij}(x,y) \Big) \\ &+ \sum\limits_{i}\sum\limits_{x} \lambda_{i}(x) \Big( P_i(x) - f_i(x) \Big) \\ &+ \Omega \left(1 - \sum\limits_{a} P(a)\right). \end{align}

The first term in the functional is the Shannon entropy of the distribution. The $$ \lambda $$ are Lagrange multipliers to ensure $$ P_{ij}(x,y) = f_{ij}(x,y) $$, with $$P_{ij}(x,y)$$ being the marginal probability to find symbols $$ x,y $$ at positions $$ i,j $$. The Lagrange multiplier $$ \Omega $$ ensures normalization. Maximizing this functional and identifying



\begin{align} &\lambda_{ij}(x,y) = J_{ij}(x,y) \\ &\lambda_{i}(x) = h_i(x) \\ &\Omega = \log Z - 1 \end{align}

leads to the Potts model above. This procedure only gives the functional form of the Potts model, while the numerical values of the Lagrange multipliers (identified with the parameters) still have to be determined by fitting the model to the data.
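The empirical frequencies and covariances entering this derivation can be computed from a one-hot encoding of the MSA. A minimal sketch with a hypothetical toy alignment:

```python
import numpy as np

# Toy MSA: M aligned sequences of length N over a q-letter alphabet,
# encoded as integers 0..q-1 (a real MSA would map amino acids to integers)
msa = np.array([[0, 1, 2],
                [0, 1, 2],
                [1, 0, 2],
                [1, 0, 0]])
M, N = msa.shape
q = 3

X = np.eye(q)[msa]                              # one-hot, shape (M, N, q)
f_i = X.mean(axis=0)                            # f_i(a),      shape (N, q)
f_ij = np.einsum('mia,mjb->ijab', X, X) / M     # f_ij(a, b),  shape (N, N, q, q)
C = f_ij - np.einsum('ia,jb->ijab', f_i, f_i)   # C_ij(a, b)
```

In practice, published DCA methods additionally reweight similar sequences and add pseudocounts to these frequencies before fitting.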

Direct Couplings and Indirect Correlation
The central point of DCA is to interpret the $$ J_{ij} $$ (which can be represented as a $$ q\times q$$ matrix if there are $$ q $$ possible symbols) as direct couplings. If two positions are under joint evolutionary pressure (for example to maintain a structural bond), one might expect these couplings to be large because only sequences with fitting pairs of symbols should have a significant probability. On the other hand, a large correlation between two positions does not necessarily mean that the couplings are large, since large couplings between e.g. positions $$ i,j $$ and $$ j,k $$ might lead to large correlations between positions $$ i $$ and $$ k $$, mediated by position $$ j $$. In fact, such indirect correlations have been implicated in the high false positive rate when inferring protein residue contacts using correlation measures like mutual information.
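This transitivity of correlations can be demonstrated on a three-position toy model with illustrative parameter values: coupling positions 0-1 and 1-2 while leaving 0-2 uncoupled still produces a clearly nonzero correlation between positions 0 and 2.

```python
import numpy as np
from itertools import product

q, N = 2, 3
J = np.zeros((N, N, q, q))
J[0, 1] = J[1, 2] = 2.0 * np.eye(q)   # direct couplings only for pairs 0-1 and 1-2

# Exact Potts probabilities by enumeration (no fields, for simplicity)
seqs = np.array(list(product(range(q), repeat=N)))
w = np.exp([sum(J[i, j, s[i], s[j]]
                for i in range(N - 1) for j in range(i + 1, N)) for s in seqs])
p = w / w.sum()

# Covariance between positions 0 and 2, which share NO direct coupling:
x, z = seqs[:, 0].astype(float), seqs[:, 2].astype(float)
cov_02 = (p * x * z).sum() - (p * x).sum() * (p * z).sum()
print(cov_02)   # clearly nonzero: the correlation is mediated by position 1
```

A correlation-based measure would flag the pair 0-2, whereas the couplings correctly identify only 0-1 and 1-2 as directly interacting.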

Inference
The inference of the Potts model on a multiple sequence alignment (MSA) using maximum likelihood estimation is usually computationally intractable, because computing the normalization constant $$ Z $$ requires summing over all $$ q^N $$ sequences of length $$ N $$ over $$ q $$ possible symbols (for a small protein domain family with 30 positions, for example, $$ 20^{30} $$ terms). Therefore, numerous approximations and alternatives have been developed:


 * mpDCA (inference based on message passing/belief propagation)
 * mfDCA (inference based on a mean-field approximation)
 * gaussDCA (inference based on a Gaussian approximation)
 * plmDCA (inference based on pseudo-likelihoods)
 * Adaptive Cluster Expansion

All of these methods lead to some form of estimate for the set of parameters $$ J,h $$ maximizing the likelihood of the MSA. Many of them include regularization or prior terms to ensure a well-posed problem or to promote a sparse solution.
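As an example of such an approximation, the core of mfDCA amounts to equating the couplings with the negative inverse of the empirical covariance matrix. The sketch below follows that idea but substitutes a simple ridge term for the frequency pseudocounts and sequence reweighting used in the published method:

```python
import numpy as np

def mf_couplings(msa, q, ridge=0.5):
    """Mean-field DCA sketch: J is approximated as the negative inverse of the
    covariance matrix of the one-hot encoded columns. One symbol per position
    is dropped so the matrix can be inverted (a gauge choice); the ridge term
    stands in for the pseudocount regularization of published mfDCA."""
    M, N = msa.shape
    X = np.eye(q)[msa][:, :, :q - 1].reshape(M, N * (q - 1))   # drop last symbol
    f = X.mean(axis=0)
    C = X.T @ X / M - np.outer(f, f)         # covariance matrix, (N(q-1))^2
    C += ridge * np.eye(N * (q - 1))         # regularize before inversion
    J = -np.linalg.inv(C)
    return J.reshape(N, q - 1, N, q - 1).transpose(0, 2, 1, 3)  # J[i, j, a, b]

msa = np.random.default_rng(2).integers(0, 3, size=(50, 6))    # hypothetical MSA
J = mf_couplings(msa, q=3)
print(J.shape)   # (6, 6, 2, 2)
```

The single matrix inversion replaces the intractable sum over $$ q^N $$ sequences, which is what makes the mean-field approach fast in practice.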

Protein Residue Contact Prediction
A possible interpretation of large values of couplings in a model fitted to a MSA of a protein family is the existence of conserved contacts between positions (residues) in the family. Such a contact can lead to molecular coevolution, since a mutation in one of the two residues, without a compensating mutation in the other residue, is likely to disrupt protein structure and negatively affect the fitness of the protein. Residue pairs for which there is a strong selective pressure to maintain mutual compatibility are therefore expected to mutate together or not at all. This idea (which was known in the literature long before the conception of DCA) has been used to predict protein contact maps, for example by analyzing the mutual information between protein residues.

Within the framework of DCA, a score for the strength of the direct interaction between a pair of residues $$ i,j $$ is often defined using the Frobenius norm $$ F_{ij} $$ of the corresponding coupling matrix $$ J_{ij} $$ and applying an average product correction (APC):



$$ F^{APC}_{ij} = F_{ij} - \frac{F_{i} F_{j}}{F}, $$

where $$ F_{ij} $$ has been defined above and



\begin{align} &F_{i} = \frac{1}{N}\sum\limits_{j \neq i}^{N} F_{ij} \\ &F = \frac{1}{N^2-N}\sum\limits_{i \neq j}^{N} F_{ij}. \end{align}

This correction term was first introduced for mutual information and is used to remove biases of specific positions to produce large $$ F_{ij} $$. Scores that are invariant under parameter transformations that do not affect the probabilities have also been used. Sorting all residue pairs by this score results in a list in which the top of the list is strongly enriched in residue contacts when compared to the protein contact map of a homologous protein. High-quality predictions of residue contacts are valuable as prior information in protein structure prediction.
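The score computation can be sketched as follows; the normalizations of $$ F_i $$ and $$ F $$ follow the formulas above, and only the $$ i<j $$ coupling matrices enter.

```python
import numpy as np

def apc_scores(J):
    """Frobenius-norm contact scores with the average product correction.
    J has shape (N, N, q, q); only the i < j coupling matrices are used."""
    N = J.shape[0]
    F = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            F[i, j] = F[j, i] = np.sqrt((J[i, j] ** 2).sum())
    F_i = F.sum(axis=1) / N            # (1/N) * sum over j != i (diagonal is 0)
    F_mean = F.sum() / (N * N - N)     # mean over all pairs with i != j
    return F - np.outer(F_i, F_i) / F_mean
```

Residue pairs would then be ranked by the $$ i<j $$ entries of the returned matrix, with the highest-scoring pairs predicted as contacts.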

Inference of protein-protein interaction
DCA can be used for detecting conserved interactions between protein families and for predicting which residue pairs form contacts in a protein complex. Such predictions can be used when generating structural models for these complexes, or when inferring protein-protein interaction networks involving more than two proteins.

Modeling of fitness landscapes
DCA can be used to model fitness landscapes and to predict the effect of a mutation in the amino acid sequence of a protein on its fitness.
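A common way to score a single-point mutation under the Potts model is the difference in statistical energy between mutant and wild type, in which the intractable normalization $$ Z $$ cancels. A minimal sketch (parameter arrays $$ J,h $$ are assumed to come from a previously fitted model):

```python
import numpy as np

def delta_energy(seq, pos, new_sym, J, h):
    """Change in statistical energy when seq[pos] is mutated to new_sym.
    Equals log P(mutant) - log P(wild type), since Z cancels in the ratio.
    Only terms involving position pos change, so the sum is O(N)."""
    old = seq[pos]
    dE = h[pos, new_sym] - h[pos, old]
    for j in range(len(seq)):
        if j == pos:
            continue
        i, k = min(pos, j), max(pos, j)          # couplings stored for i < k
        a_i = new_sym if i == pos else seq[i]
        a_k = new_sym if k == pos else seq[k]
        dE += J[i, k, a_i, a_k] - J[i, k, seq[i], seq[k]]
    return dE
```

Mutations with strongly negative $$ \Delta E $$ under the fitted model have been found to correlate with experimentally measured losses of fitness, which is the basis of DCA-derived mutation-effect predictors.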