Substitution model

In biology, a substitution model, also called a model of sequence evolution, is a Markov model that describes changes over evolutionary time. These models describe evolutionary changes in macromolecules, such as DNA or protein sequences, that can be represented as sequences of symbols (e.g., A, C, G, and T in the case of DNA, or the 20 "standard" proteinogenic amino acids in the case of proteins). Substitution models are used to calculate the likelihood of phylogenetic trees using multiple sequence alignment data; thus, substitution models are central to maximum likelihood estimation of phylogeny as well as to Bayesian inference in phylogeny. Estimates of evolutionary distances (numbers of substitutions that have occurred since a pair of sequences diverged from a common ancestor) are typically calculated using substitution models, and these distances are used as input for distance methods such as neighbor joining. Substitution models are also central to phylogenetic invariants because they are necessary to predict site pattern frequencies given a tree topology. Substitution models are also necessary to simulate sequence data for a group of organisms related by a specific tree.

Phylogenetic tree topologies and other parameters
Phylogenetic tree topologies are often the parameter of interest; thus, branch lengths and any other parameters describing the substitution process are often viewed as nuisance parameters. However, biologists are sometimes interested in the other aspects of the model. For example, branch lengths can be of direct interest, especially when they are combined with information from the fossil record and a model to estimate the timeframe for evolution. Other model parameters have been used to gain insights into various aspects of the process of evolution. The Ka/Ks ratio (also called ω in codon substitution models) is a parameter of interest in many studies. The Ka/Ks ratio can be used to examine the action of natural selection on protein-coding regions because it provides information about the relative rates of nucleotide substitutions that change amino acids (non-synonymous substitutions) and those that do not change the encoded amino acid (synonymous substitutions).

Application to sequence data
Most of the work on substitution models has focused on DNA/RNA and protein sequence evolution. Models of DNA sequence evolution, where the alphabet corresponds to the four nucleotides (A, C, G, and T), are probably the easiest models to understand. DNA models can also be used to examine RNA virus evolution; this reflects the fact that RNA also has a four-nucleotide alphabet (A, C, G, and U). However, substitution models can be used for alphabets of any size; the alphabet is the 20 proteinogenic amino acids for proteins and the sense codons (i.e., the 61 codons that encode amino acids in the standard genetic code) for aligned protein-coding gene sequences. In fact, substitution models can be developed for any biological characters that can be encoded using a specific alphabet (e.g., amino acid sequences combined with information about the conformation of those amino acids in three-dimensional protein structures).

The majority of substitution models used for evolutionary research assume independence among sites (i.e., the probability of observing any specific site pattern is identical regardless of where the site pattern is in the sequence alignment). This simplifies likelihood calculations because it is only necessary to calculate the probability of each site pattern that appears in the alignment and then use those values to calculate the overall likelihood of the alignment (e.g., the probability of three "GGGG" site patterns given some model of DNA sequence evolution is simply the probability of a single "GGGG" site pattern raised to the third power). This means that substitution models can be viewed as implying a specific multinomial distribution for site pattern frequencies. If we consider a multiple sequence alignment of four DNA sequences, there are 256 possible site patterns, so there are 255 degrees of freedom for the site pattern frequencies. However, it is possible to specify the expected site pattern frequencies using only five degrees of freedom if using the Jukes-Cantor model of DNA evolution, a simple substitution model that allows one to calculate the expected site pattern frequencies from only the tree topology and the branch lengths (given four taxa, an unrooted bifurcating tree has five branch lengths).
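The independence assumption can be illustrated with a short sketch (the per-pattern probabilities below are invented for illustration; a real analysis would compute each pattern's probability from the tree topology, branch lengths, and substitution model):

```python
from collections import Counter
from math import log

# A toy alignment of four sequences (rows); each column is a site pattern.
seqs = ["GGGTA",
        "GGGTA",
        "GGGCA",
        "GGGCA"]
patterns = ["".join(col) for col in zip(*seqs)]
counts = Counter(patterns)  # three "GGGG" columns, one "TTCC", one "AAAA"

# Hypothetical per-pattern probabilities (invented for illustration).
pattern_prob = {"GGGG": 0.05, "TTCC": 0.002, "AAAA": 0.06}

# Independence among sites: the alignment log-likelihood is the sum of
# each distinct pattern's log-probability weighted by its count.
log_likelihood = sum(n * log(pattern_prob[p]) for p, n in counts.items())
```

The repeated "GGGG" columns contribute the cube of a single pattern probability, exactly as described above.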

Substitution models also make it possible to simulate sequence data using Monte Carlo methods. Simulated multiple sequence alignments can be used to assess the performance of phylogenetic methods and generate the null distribution for certain statistical tests in the fields of molecular evolution and molecular phylogenetics. Examples of these tests include tests of model fit and the "SOWH test" that can be used to examine tree topologies.
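As a minimal sketch of such a simulation (assuming the Jukes-Cantor model, whose transition probabilities have a simple closed form), one can evolve a sequence along a single branch:

```python
import numpy as np

rng = np.random.default_rng(0)
bases = "ACGT"

def jc_transition_matrix(d):
    """Jukes-Cantor transition probabilities for a branch of length d
    (d in expected substitutions per site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * d / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * d / 3.0)
    P = np.full((4, 4), diff)
    np.fill_diagonal(P, same)
    return P

def evolve(seq, d):
    """Draw a descendant of `seq` after branch length d."""
    P = jc_transition_matrix(d)
    return "".join(bases[rng.choice(4, p=P[bases.index(c)])] for c in seq)

ancestor = "".join(rng.choice(list(bases), size=200))
descendant = evolve(ancestor, 0.1)
```

Simulating an entire tree just repeats `evolve` along each branch, starting from a root sequence drawn from the equilibrium frequencies.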

Application to morphological data
The fact that substitution models can be used to analyze any biological alphabet has made it possible to develop models of evolution for phenotypic datasets (e.g., morphological and behavioural traits). Typically, "0" is used to indicate the absence of a trait and "1" is used to indicate the presence of a trait, although it is also possible to score characters using multiple states. Using this framework, we might encode a set of phenotypes as binary strings (this could be generalized to k-state strings for characters with more than two states) before analysis using an appropriate model. This can be illustrated using a "toy" example: we can use a binary alphabet to score the following phenotypic traits: "has feathers", "lays eggs", "has fur", "is warm-blooded", and "capable of powered flight". In this toy example hummingbirds would have the sequence 11011 (most other birds would have the same string), ostriches would have the sequence 11010, cattle (and most other land mammals) would have 00110, and bats would have 00111. The likelihood of a phylogenetic tree can then be calculated using those binary sequences and an appropriate substitution model. The existence of these morphological models makes it possible to analyze data matrices with fossil taxa, either using the morphological data alone or a combination of morphological and molecular data (with the latter scored as missing data for the fossil taxa).
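The toy encoding can be written down directly. A real analysis would compute tree likelihoods under a two-state substitution model, but even simple difference counts on the encoded strings illustrate how the binary alphabet captures phenotypic similarity (sketch):

```python
# Trait order: feathers, lays eggs, fur, warm-blooded, powered flight.
traits = {
    "hummingbird": "11011",
    "ostrich":     "11010",
    "cow":         "00110",
    "bat":         "00111",
}

def hamming(a, b):
    """Number of trait characters that differ between two taxa."""
    return sum(x != y for x, y in zip(a, b))
```

For example, hummingbirds and ostriches differ in one character (powered flight), while hummingbirds and bats differ in three.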

There is an obvious similarity between use of molecular or phenotypic data in the field of cladistics and analyses of morphological characters using a substitution model. However, there has been a vociferous debate in the systematics community regarding the question of whether or not cladistic analyses should be viewed as "model-free". The field of cladistics (defined in the strictest sense) favors the use of the maximum parsimony criterion for phylogenetic inference. Many cladists reject the position that maximum parsimony is based on a substitution model and (in many cases) they justify the use of parsimony using the philosophy of Karl Popper. However, the existence of "parsimony-equivalent" models (i.e., substitution models that yield the maximum parsimony tree when used for analyses) makes it possible to view parsimony as a substitution model.

The molecular clock and the units of time
Typically, a branch length of a phylogenetic tree is expressed as the expected number of substitutions per site; if the evolutionary model indicates that each site within an ancestral sequence will typically experience x substitutions by the time it evolves to a particular descendant's sequence then the ancestor and descendant are considered to be separated by branch length x.

Sometimes a branch length is measured in terms of geological years. For example, a fossil record may make it possible to determine the number of years between an ancestral species and a descendant species. Because some species evolve at faster rates than others, these two measures of branch length are not always in direct proportion. The expected number of substitutions per site per year is often indicated with the Greek letter mu (μ).

A model is said to have a strict molecular clock if the expected number of substitutions per year μ is constant regardless of which species' evolution is being examined. An important implication of a strict molecular clock is that the number of expected substitutions between an ancestral species and any of its present-day descendants must be independent of which descendant species is examined.

Note that the assumption of a strict molecular clock is often unrealistic, especially across long periods of evolution. For example, even though rodents are genetically very similar to primates, they have undergone a much higher number of substitutions in the estimated time since divergence in some regions of the genome. This could be due to their shorter generation time, higher metabolic rate, increased population structuring, increased rate of speciation, or smaller body size. When studying ancient events like the Cambrian explosion under a molecular clock assumption, poor concurrence between cladistic and phylogenetic data is often observed. There has been some work on models allowing variable rate of evolution.

Models that can take into account variability of the rate of the molecular clock between different evolutionary lineages in the phylogeny are called “relaxed”, in contrast to “strict” clocks. In such models the rate can be assumed to be correlated or uncorrelated between ancestors and descendants, and rate variation among lineages can be drawn from many distributions, though usually exponential and lognormal distributions are applied. There is a special case, called the “local molecular clock”, in which a phylogeny is divided into at least two partitions (sets of lineages) and a strict molecular clock is applied in each, but with different rates.
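A minimal sketch of an uncorrelated relaxed clock (all numerical values invented for illustration): each branch receives its own rate drawn from a lognormal distribution, and branch lengths in substitutions per site are the product of rate and duration.

```python
import numpy as np

rng = np.random.default_rng(1)

# One rate per branch, drawn from a lognormal distribution (the median
# rate of 1e-3 substitutions/site/year and the sigma are invented values).
n_branches = 7
rates = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n_branches)

# Branch durations in years (also invented).
durations = np.array([2e6, 5e6, 1e6, 3e6, 4e6, 2e6, 6e6])

# Branch lengths in expected substitutions per site.
branch_lengths = rates * durations
```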

Time-reversible and stationary models
Many useful substitution models are time-reversible; in terms of the mathematics, the model does not care which sequence is the ancestor and which is the descendant so long as all other parameters (such as the number of substitutions per site that is expected between the two sequences) are held constant.

When an analysis of real biological data is performed, there is generally no access to the sequences of ancestral species, only to the present-day species. However, when a model is time-reversible, which species was the ancestral species is irrelevant. Instead, the phylogenetic tree can be rooted using any of the species, re-rooted later based on new knowledge, or left unrooted. This is because there is no 'special' species: all species will eventually derive from one another with the same probability.

A model is time reversible if and only if it satisfies the property (the notation is explained below)
 * $$ \pi_iQ_{ij} = \pi_jQ_{ji}$$

or, equivalently, the detailed balance property,
 * $$ \pi_iP(t)_{ij} = \pi_jP(t)_{ji}$$

for every i, j, and t.
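This condition is easy to verify numerically. As a sketch, a reversible rate matrix can be constructed by setting $$Q_{ij} = r_{ij}\pi_j$$ with symmetric exchangeabilities $$r_{ij}$$ (the values below are invented for illustration):

```python
import numpy as np

pi = np.array([0.1, 0.2, 0.3, 0.4])      # equilibrium frequencies
r = np.array([[0., 1., 4., 1.],          # symmetric exchangeabilities
              [1., 0., 1., 4.],
              [4., 1., 0., 1.],
              [1., 4., 1., 0.]])
Q = r * pi                                # Q_ij = r_ij * pi_j for i != j
np.fill_diagonal(Q, -Q.sum(axis=1))      # rows sum to zero

flux = pi[:, None] * Q                    # flux_ij = pi_i * Q_ij
```

Because `r` is symmetric, `flux` is a symmetric matrix, which is exactly the detailed balance condition; the same symmetry then carries over to $$\pi_i P(t)_{ij}$$.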

Time-reversibility should not be confused with stationarity. A model is stationary if Q does not change with time. The analysis below assumes a stationary model.

The mathematics of substitution models
Stationary, neutral, independent, finite sites models (assuming a constant rate of evolution) have two parameters, π, an equilibrium vector of base (or character) frequencies and a rate matrix, Q, which describes the rate at which bases of one type change into bases of another type; element $$Q_{ij}$$ for i ≠ j is the rate at which base i goes to base j. The diagonals of the Q matrix are chosen so that the rows sum to zero:


 * $$Q_{ii} = - {\sum_{\lbrace j \mid j\ne i\rbrace} Q_{ij}} \,,$$

The equilibrium row vector π must be annihilated by the rate matrix Q:
 * $$\pi \, Q = 0\,.$$
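For example (a Jukes-Cantor-style matrix with all off-diagonal rates equal to an arbitrary α), both defining properties can be checked directly:

```python
import numpy as np

alpha = 0.25                     # arbitrary common substitution rate
Q = np.full((4, 4), alpha)
np.fill_diagonal(Q, -3 * alpha)  # diagonal = minus the row's off-diagonal sum

pi = np.full(4, 0.25)            # uniform equilibrium frequencies
```

Each row of `Q` sums to zero, and the uniform frequency vector satisfies πQ = 0.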

The transition matrix function is a function from the branch lengths (in some units of time, possibly in substitutions) to a matrix of conditional probabilities. It is denoted $$P(t)$$. The entry in the ith row and jth column, $$P_{ij}(t)$$, is the probability, after time t, that there is a base j at a given position, conditional on there being a base i in that position at time 0. When the model is time-reversible, this probability can be computed for any two sequences, even if one is not the ancestor of the other, provided the total branch length between them is known.

The asymptotic properties of Pij(t) are such that Pij(0) = δij, where δij is the Kronecker delta function. That is, there is no change in base composition between a sequence and itself. At the other extreme, $$\lim_{t \rightarrow \infty} P_{ij}(t) = \pi_{j}\,,$$ or, in other words, as time goes to infinity the probability of finding base j at a position given there was a base i at that position originally goes to the equilibrium probability that there is base j at that position, regardless of the original base. Furthermore, it follows that $$ \pi P(t) = \pi $$ for all t.

The transition matrix can be computed from the rate matrix via matrix exponentiation:
 * $$P(t) = e^{Qt} = \sum_{n=0}^\infty Q^n\frac{t^n}{n!}\,,$$

where Qn is the matrix Q multiplied by itself enough times to give its nth power.

If Q is diagonalizable, the matrix exponential can be computed directly: let Q = U−1 Λ U be a diagonalization of Q, with
 * $$\Lambda = \begin{pmatrix}

\lambda_1 & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & \lambda_4 \end{pmatrix}\,, $$ where Λ is a diagonal matrix and where $$\lbrace \lambda_i \rbrace$$ are the eigenvalues of Q, each repeated according to its multiplicity. Then
 * $$P(t) = e^{Qt} = e^{U^{-1} (\Lambda t) U} = U^{-1} e^{\Lambda t}\,U\,,$$

where the diagonal matrix eΛt is given by
 * $$e^{\Lambda t} = \begin{pmatrix}

e^{\lambda_1 t} & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & e^{\lambda_4 t} \end{pmatrix}\,. $$
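For a symmetric Q (such as the Jukes-Cantor-style matrix used here), an orthogonal eigendecomposition exists, and the diagonalization route can be sketched with numpy (values arbitrary):

```python
import numpy as np
from math import exp

alpha, t = 0.3, 0.5
Q = np.full((4, 4), alpha)
np.fill_diagonal(Q, -3 * alpha)

# For a symmetric Q, eigh gives Q = V diag(lam) V^T, so
# P(t) = expm(Qt) = V diag(exp(lam * t)) V^T.
lam, V = np.linalg.eigh(Q)
P = V @ np.diag(np.exp(lam * t)) @ V.T
```

In the notation above, $$U = V^{T}$$ (equivalently $$U^{-1} = V$$), since $$V$$ is orthogonal.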

Generalised time reversible
Generalised time reversible (GTR) is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986. The GTR model is often called the general time reversible model in publications; it has also been called the REV model.

The GTR parameters for nucleotides consist of an equilibrium base frequency vector, $$\vec{\pi} = (\pi_1, \pi_2, \pi_3, \pi_4)$$, giving the frequency at which each base occurs at each site, and the rate matrix


 * $$Q = \begin{pmatrix} {-(x_1+x_2+x_3)} & x_1 & x_2 & x_3 \\ {\pi_1 x_1 \over \pi_2} & {-({\pi_1 x_1 \over \pi_2} + x_4 + x_5)} & x_4 & x_5 \\ {\pi_1 x_2 \over \pi_3} & {\pi_2 x_4 \over \pi_3} & {-({\pi_1 x_2 \over \pi_3} + {\pi_2 x_4 \over \pi_3} + x_6)} & x_6 \\ {\pi_1 x_3 \over \pi_4} & {\pi_2 x_5 \over \pi_4} & {\pi_3 x_6 \over \pi_4} & {-({\pi_1 x_3 \over \pi_4} + {\pi_2 x_5 \over \pi_4} + {\pi_3 x_6 \over \pi_4})} \end{pmatrix} $$

Because the model must be time reversible and must approach the equilibrium nucleotide (base) frequencies at long times, each rate below the diagonal equals the reciprocal rate above the diagonal multiplied by the equilibrium ratio of the two bases. As such, the nucleotide GTR requires 6 substitution rate parameters and 4 equilibrium base frequency parameters. Since the 4 frequency parameters must sum to 1, there are only 3 free frequency parameters. The total of 9 free parameters is often further reduced to 8 parameters plus $$\mu$$, the overall number of substitutions per unit time. When measuring time in substitutions ($$\mu$$=1) only 8 free parameters remain.

In general, to compute the number of parameters, you count the number of entries above the diagonal in the matrix, i.e. for n trait values per site $${{n^2-n} \over 2} $$, and then add n-1 for the equilibrium frequencies, and subtract 1 because $$\mu$$ is fixed. You get


 * $${{n^2-n} \over 2} + (n - 1) - 1 = {1 \over 2}n^2 + {1 \over 2}n - 2.$$

For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins), you would find there are 208 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are $$4^3 = 64$$ codons, resulting in 2078 free parameters. However, the rates for transitions between codons which differ by more than one base are often assumed to be zero, reducing the number of free parameters to only $${{20 \times 19 \times 3} \over 2} + 63 - 1 = 632$$ parameters. Another common practice is to reduce the number of codons by forbidding the stop (or nonsense) codons. This is a biologically reasonable assumption because including the stop codons would mean that calculating the probability of finding sense codon $$j$$ after time $$t$$, given that the ancestral codon is $$i$$, would involve the possibility of passing through a state with a premature stop codon.
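These counts follow directly from the formula above; a one-line helper reproduces them:

```python
def gtr_free_params(n):
    """Free parameters of a GTR-style model on an n-letter alphabet with
    the overall rate fixed: n(n-1)/2 exchangeabilities plus (n-1) free
    frequencies, minus 1 for fixing the rate."""
    return (n * n - n) // 2 + (n - 1) - 1
```

This gives 8 for nucleotides (n = 4), 208 for amino acids (n = 20), and 2078 for all 64 codons.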

An alternative (and commonly used) way to write the instantaneous rate matrix ($$Q$$ matrix) for the nucleotide GTR model is:

$$Q = \begin{pmatrix} {-(a\pi_C+b\pi_G+c\pi_T)} & a\pi_C & b\pi_G & c\pi_T \\ a\pi_A & {-(a\pi_A+d\pi_G+e\pi_T)} & d\pi_G & e\pi_T \\ b\pi_A & d\pi_C & {-(b\pi_A+d\pi_C+f\pi_T)} & f\pi_T \\ c\pi_A & e\pi_C & f\pi_G & {-(c\pi_A+e\pi_C+f\pi_G)} \end{pmatrix} $$

The $$Q$$ matrix is normalized so $$-\sum_{i=1}^4 \pi_i Q_{ii} = 1$$.

This notation is easier to understand than the notation originally used by Tavaré, because all model parameters correspond either to "exchangeability" parameters ($$a$$ through $$f$$, which can also be written using the notation $$r_{ij}$$) or to equilibrium nucleotide frequencies $$\vec{\pi} = (\pi_A, \pi_C, \pi_G, \pi_T)$$. Note that the nucleotides in the $$Q$$ matrix have been written in alphabetical order. In other words, the transition probability matrix for the $$Q$$ matrix above would be:

$$P(t) = e^{Qt} = \begin{pmatrix} p_\mathrm{AA}(t) & p_\mathrm{AC}(t) & p_\mathrm{AG}(t) & p_\mathrm{AT}(t) \\ p_\mathrm{CA}(t) & p_\mathrm{CC}(t) & p_\mathrm{CG}(t) & p_\mathrm{CT}(t) \\ p_\mathrm{GA}(t) & p_\mathrm{GC}(t) & p_\mathrm{GG}(t) & p_\mathrm{GT}(t) \\ p_\mathrm{TA}(t) & p_\mathrm{TC}(t) & p_\mathrm{TG}(t) & p_\mathrm{TT}(t) \end{pmatrix}$$

Some publications write the nucleotides in a different order (e.g., some authors choose to group two purines together and the two pyrimidines together; see also models of DNA evolution). These differences in notation make it important to be clear regarding the order of the states when writing the $$Q$$ matrix.

The value of this notation is that the instantaneous rate of change from nucleotide $$i$$ to nucleotide $$j$$ can always be written as $$r_{ij}\pi_j$$, where $$r_{ij}$$ is the exchangeability of nucleotides $$i$$ and $$j$$ and $$\pi_j$$ is the equilibrium frequency of the $$j^{th}$$ nucleotide. The matrix shown above uses the letters $$a$$ through $$f$$ for the exchangeability parameters in the interest of readability, but those parameters could also be written in a systematic manner using the $$r_{ij}$$ notation (e.g., $$a = r_{AC}$$, $$b = r_{AG}$$, and so forth).

Note that the ordering of the nucleotide subscripts for exchangeability parameters is irrelevant (e.g., $$r_{AC} = r_{CA}$$), but the ordering does matter for transition probability matrix entries (i.e., $$p_\mathrm{AC}(t)$$ is the probability of observing A in sequence 1 and C in sequence 2 when the evolutionary distance between those sequences is $$t$$, whereas $$p_\mathrm{CA}(t)$$ is the probability of observing C in sequence 1 and A in sequence 2 at the same evolutionary distance).

An arbitrarily chosen exchangeability parameter (e.g., $$f=r_{GT}$$) is typically set to a value of 1 to increase the readability of the exchangeability parameter estimates (since it allows users to express those values relative to the chosen exchangeability parameter). The practice of expressing the exchangeability parameters in relative terms is not problematic because the $$Q$$ matrix is normalized. Normalization allows $$t$$ (time) in the matrix exponentiation $$P(t) = e^{Qt}$$ to be expressed in units of expected substitutions per site (standard practice in molecular phylogenetics); this is equivalent to setting the mutation rate $$\mu$$ to 1 and reduces the number of free parameters to eight. Specifically, there are five free exchangeability parameters ($$a$$ through $$e$$, which are expressed relative to the fixed $$f=r_{GT}=1$$ in this example) and three equilibrium base frequency parameters (as described above, only three $$\pi_i$$ values need to be specified because $$\vec{\pi}$$ must sum to 1).

The alternative notation also makes it easier to understand the sub-models of the GTR model, which simply correspond to cases where exchangeability and/or equilibrium base frequency parameters are constrained to take on equal values. A number of specific sub-models have been named, largely based on their original publications. There are 203 possible ways that the exchangeability parameters can be restricted to form sub-models of GTR, ranging from the JC69 and F81 models (where all exchangeability parameters are equal) to the SYM model and the full GTR (or REV) model (where all exchangeability parameters are free). The equilibrium base frequencies are typically treated in two different ways: 1) all $$\pi_i$$ values are constrained to be equal (i.e., $$\pi_A = \pi_C = \pi_G = \pi_T = 0.25$$); or 2) all $$\pi_i$$ values are treated as free parameters. Although the equilibrium base frequencies can be constrained in other ways, most constraints that link some but not all $$\pi_i$$ values are unrealistic from a biological standpoint. The possible exception is enforcing strand symmetry (i.e., constraining $$\pi_A = \pi_T$$ and $$\pi_C = \pi_G$$ but allowing $$\pi_A + \pi_T \neq \pi_C + \pi_G$$).

The alternative notation also makes it straightforward to see how the GTR model can be applied to biological alphabets with a larger state-space (e.g., amino acids or codons). It is possible to write a set of equilibrium state frequencies as $$\pi_1$$, $$\pi_2$$, ... $$\pi_k$$ and a set of exchangeability parameters ($$r_{ij}$$) for any alphabet of $$k$$ character states. These values can then be used to populate the $$Q$$ matrix by setting the off-diagonal elements as shown above (the general notation would be $$Q_{ij}=r_{ij}\pi_j$$), setting the diagonal elements $$Q_{ii}$$ to the negative sum of the off-diagonal elements on the same row, and normalizing. Obviously, $$k=20$$ for amino acids and $$k=61$$ for codons (assuming the standard genetic code). However, the generality of this notation is beneficial because one can use reduced alphabets for amino acids. For example, one can use $$k=6$$ and encode amino acids by recoding the amino acids using the six categories proposed by Margaret Dayhoff. Reduced amino acid alphabets are viewed as a way to reduce the impact of compositional variation and saturation.
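A sketch of this recipe for an arbitrary alphabet (the exchangeability and frequency values below are invented for a nucleotide-sized example, but the function works for any k):

```python
import numpy as np

def gtr_rate_matrix(r, pi):
    """Build a normalized GTR rate matrix for k states from a symmetric
    exchangeability matrix r (diagonal ignored) and frequencies pi."""
    r = np.asarray(r, dtype=float)
    pi = np.asarray(pi, dtype=float)
    Q = r * pi                            # Q_ij = r_ij * pi_j for i != j
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows sum to zero
    Q /= -np.sum(pi * np.diag(Q))         # normalize: -sum_i pi_i Q_ii = 1
    return Q

# k = 4 example with exchangeabilities a..f and f = r_GT fixed at 1.
a, b, c, d, e, f = 2.0, 6.0, 1.5, 1.2, 5.0, 1.0
r = np.array([[0, a, b, c],
              [a, 0, d, e],
              [b, d, 0, f],
              [c, e, f, 0]])
pi = np.array([0.3, 0.2, 0.2, 0.3])
Q = gtr_rate_matrix(r, pi)
```

The same function applies unchanged with a 20 x 20 `r` for amino acids or a 61 x 61 `r` for sense codons.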

Importantly, evolutionary patterns can vary among genomic regions, and thus different genomic regions can fit different substitution models. Indeed, ignoring heterogeneous evolutionary patterns along sequences can lead to biases in the estimation of evolutionary parameters, including the Ka/Ks ratio. In this regard, the use of mixture models in phylogenetic frameworks is a convenient way to better mimic the molecular evolution observed in real data.

Mechanistic vs. empirical models
A main difference between evolutionary models is how many parameters are estimated anew for each data set under consideration and how many of them are estimated once on a large data set. Mechanistic models describe all substitutions as a function of a number of parameters which are estimated for every data set analyzed, preferably using maximum likelihood. This has the advantage that the model can be adjusted to the particularities of a specific data set (e.g., different composition biases in DNA). Problems can arise when too many parameters are used, particularly if they can compensate for each other (this can lead to non-identifiability); in such cases the data set is often too small to yield enough information to estimate all parameters accurately.

Empirical models are created by estimating many parameters (typically all entries of the rate matrix as well as the character frequencies, see the GTR model above) from a large data set. These parameters are then fixed and will be reused for every data set. This has the advantage that those parameters can be estimated more accurately. Normally, it is not possible to estimate all entries of the substitution matrix from the current data set only. On the downside, the parameters estimated from the training data might be too generic and therefore have a poor fit to any particular dataset. A potential solution for that problem is to estimate some parameters from the data using maximum likelihood (or some other method). In studies of protein evolution the equilibrium amino acid frequencies $$\vec{\pi} = (\pi_A, \pi_R, \pi_N, ... \pi_V)$$ (using the one-letter IUPAC codes for amino acids to indicate their equilibrium frequencies) are often estimated from the data while keeping the exchangeability matrix fixed. Beyond the common practice of estimating amino acid frequencies from the data, methods to estimate exchangeability parameters or adjust the $$Q$$ matrix for protein evolution in other ways have been proposed.

With the large-scale genome sequencing still producing very large amounts of DNA and protein sequences, there is enough data available to create empirical models with any number of parameters, including empirical codon models. Because of the problems mentioned above, the two approaches are often combined, by estimating most of the parameters once on large-scale data, while a few remaining parameters are then adjusted to the data set under consideration. The following sections give an overview of the different approaches taken for DNA, protein or codon-based models.

DNA substitution models
The first model of DNA evolution was proposed by Jukes and Cantor in 1969. The Jukes-Cantor (JC or JC69) model assumes equal rates for all substitutions as well as equal equilibrium frequencies for all bases, and it is the simplest sub-model of the GTR model. In 1980, Motoo Kimura introduced a model with two parameters (K2P or K80): one for the transition rate and one for the transversion rate. A year later, Kimura introduced a second model (K3ST, K3P, or K81) with three substitution types: one for the transition rate, one for the rate of transversions that conserve the strong/weak properties of nucleotides ($$A\leftrightarrow T$$ and $$C\leftrightarrow G$$, designated $$\beta$$ by Kimura), and one for the rate of transversions that conserve the amino/keto properties of nucleotides ($$A\leftrightarrow C$$ and $$G\leftrightarrow T$$, designated $$\gamma$$ by Kimura). In 1981, Joseph Felsenstein proposed a four-parameter model (F81) in which the substitution rate corresponds to the equilibrium frequency of the target nucleotide. Hasegawa, Kishino, and Yano unified the last two models into a five-parameter model (HKY). After these pioneering efforts, many additional sub-models of the GTR model were introduced into the literature (and common use) in the 1990s. Other models that move beyond the GTR model in specific ways were also developed and refined by several researchers.

Almost all DNA substitution models are mechanistic models (as described above). The small number of parameters that one needs to estimate for these models makes it feasible to estimate those parameters from the data. It is also necessary because the patterns of DNA sequence evolution often differ among organisms and among genes within organisms. The latter may reflect optimization by the action of selection for specific purposes (e.g., fast expression or messenger RNA stability) or it might reflect neutral variation in the patterns of substitution. Thus, depending on the organism and the type of gene, it is likely necessary to adjust the model to these circumstances.

Two-state substitution models
An alternative way to analyze DNA sequence data is to recode the nucleotides as purines (R) and pyrimidines (Y); this practice is often called RY-coding. Insertions and deletions in multiple sequence alignments can also be encoded as binary data and analyzed using a two-state model.

The simplest two-state model of sequence evolution is called the Cavender-Farris model or the Cavender-Farris-Neyman (CFN) model; the name of this model reflects the fact that it was described independently in several different publications. The CFN model is identical to the Jukes-Cantor model adapted to two states, and it has even been implemented as the "JC2" model in the popular IQ-TREE software package (using this model in IQ-TREE requires coding the data as 0 and 1 rather than R and Y; the popular PAUP* software package can interpret a data matrix comprising only R and Y as data to be analyzed using the CFN model). It is also straightforward to analyze binary data using the phylogenetic Hadamard transform. The alternative two-state model allows the equilibrium frequency parameters of R and Y (or 0 and 1) to take on values other than 0.5 by adding a single free parameter; this model is variously called CFu or GTR2 (in IQ-TREE).
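The CFN model is small enough to write out completely. Normalized to one expected substitution per unit time at equilibrium frequencies (1/2, 1/2), its rate matrix and transition probability have simple closed forms (sketch):

```python
import numpy as np
from math import exp

# CFN rate matrix over {R, Y} (or {0, 1}), normalized so that
# -sum_i pi_i * Q_ii = 1 with pi = (1/2, 1/2).
Q = np.array([[-1.0, 1.0],
              [1.0, -1.0]])

def cfn_p_same(t):
    """Probability of observing the same state after branch length t."""
    return 0.5 + 0.5 * exp(-2.0 * t)
```

At t = 0 the states match with certainty, and as t grows the probability decays toward the equilibrium value of 1/2.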

Amino acid substitution models
For many analyses, particularly for longer evolutionary distances, the evolution is modeled on the amino acid level. Since not all DNA substitutions also alter the encoded amino acid, information is lost when looking at amino acids instead of nucleotide bases. However, several advantages speak in favor of using the amino acid information: DNA is much more inclined to show compositional bias than amino acids; not all positions in the DNA evolve at the same speed (non-synonymous mutations are less likely to become fixed in the population than synonymous ones); and, probably most important, because of those fast-evolving positions and the limited alphabet size (only four possible states), DNA suffers from more back substitutions, making it difficult to accurately estimate longer evolutionary distances.

Unlike the DNA models, amino acid models have traditionally been empirical models. They were pioneered in the 1960s and 1970s by Dayhoff and co-workers, who estimated replacement rates from protein alignments with at least 85% identity (originally with very limited data, ultimately culminating in the Dayhoff PAM model of 1978). Restricting the alignments to closely related sequences minimized the chances of observing multiple substitutions at a site. From the estimated rate matrix, a series of replacement probability matrices were derived, known under names such as PAM250. Log-odds matrices based on the Dayhoff PAM model were commonly used to assess the significance of homology search results, although the BLOSUM matrices have superseded the PAM log-odds matrices in this context because the BLOSUM matrices appear to be more sensitive across a variety of evolutionary distances.

The Dayhoff PAM matrix was the source of the exchangeability parameters used in one of the first maximum-likelihood analyses of phylogeny that used protein data, and the PAM model (or an improved version called DCMut) continues to be used in phylogenetics. However, the limited number of alignments used to generate the PAM model (reflecting the limited amount of sequence data available in the 1970s) almost certainly inflated the variance of some rate matrix parameters (alternatively, the proteins used to generate the PAM model could have been a non-representative set). Regardless, it is clear that the PAM model seldom has as good a fit to most datasets as more modern empirical models (Keane et al. 2006 tested thousands of vertebrate, bacterial, and archaeal proteins and found that the Dayhoff PAM model had the best fit to fewer than 4% of the proteins).

Starting in the 1990s, the rapid expansion of sequence databases due to improved sequencing technologies led to the estimation of many new empirical matrices (see for a complete list). The earliest efforts used methods similar to those used by Dayhoff, applying large-scale matching of the protein database to generate a new log-odds matrix and the JTT (Jones-Taylor-Thornton) model. The rapid increases in computing power during this time (reflecting factors such as Moore's law) made it feasible to estimate parameters for empirical models using maximum likelihood (e.g., the WAG and LG models) and other methods (e.g., the VT and PMB models). The IQ-TREE software package allows users to infer their own time-reversible model using QMaker, or a non-time-reversible model using nQMaker.

The no common mechanism (NCM) model and maximum parsimony
In 1997, Tuffley and Steel described a model that they named the no common mechanism (NCM) model. The topology of the maximum likelihood tree for a specific dataset given the NCM model is identical to the topology of the optimal tree for the same data given the maximum parsimony criterion. The NCM model assumes all of the data (e.g., homologous nucleotides, amino acids, or morphological characters) are related by a common phylogenetic tree. Then $$2T-3$$ parameters are introduced for each homologous character, where $$T$$ is the number of sequences. This can be viewed as estimating a separate rate parameter for every character × branch pair in the dataset (note that the number of branches in a fully resolved phylogenetic tree is $$2T-3$$). Thus, the number of free parameters in the NCM model always exceeds the number of homologous characters in the data matrix, and the NCM model has been criticized as consistently "over-parameterized."
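The parameter count is simple arithmetic (sketch; T is the number of sequences, and a fully resolved unrooted tree has 2T - 3 branches):

```python
def ncm_free_params(T, n_characters):
    """One rate parameter per branch per character under the NCM model."""
    return (2 * T - 3) * n_characters
```

For any T of at least 3, this count exceeds the number of characters, which is the source of the over-parameterization criticism.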