Direct linear plot

In biochemistry, the direct linear plot is a graphical method for analysing enzyme kinetics data that follow the Michaelis–Menten equation. In this plot, observations are plotted not as points but as lines in parameter space with axes $$K_\mathrm{m}$$ and $$V$$: each observation of a rate $$v_i$$ at substrate concentration $$a_i$$ is represented by a straight line with intercept $$-a_i$$ on the $$K_\mathrm{m}$$ axis and intercept $$v_i$$ on the $$V$$ axis. Ideally (in the absence of experimental error) these lines intersect at a unique point $$(\hat{K}_\mathrm{m}, \hat{V})$$, whose coordinates provide the estimates $$\hat{K}_\mathrm{m}$$ and $$\hat{V}$$.
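The construction can be sketched in a few lines of code. The line for observation $$i$$ passes through $$(-a_i, 0)$$ and $$(0, v_i)$$, i.e. $$V = v_i + (v_i/a_i)K_\mathrm{m}$$, so two error-free observations intersect at the true parameter values. The numbers below ($$K_\mathrm{m} = 2$$, $$V = 10$$) are hypothetical, chosen only to illustrate the geometry:

```python
# Each observation (a_i, v_i) defines the line V = v_i + (v_i / a_i) * Km
# in (Km, V) parameter space, crossing the Km axis at -a_i and the V axis
# at v_i.  With error-free data, every pair of lines meets at (Km, V).

def intersection(a_i, v_i, a_j, v_j):
    """Intersection of the lines for observations i and j."""
    km = (v_j - v_i) / (v_i / a_i - v_j / a_j)
    v = v_i + (v_i / a_i) * km
    return km, v

# Error-free rates generated from assumed values Km = 2.0, V = 10.0:
km_true, v_true = 2.0, 10.0
a = [0.5, 1.0, 2.0, 4.0]
v = [v_true * ai / (km_true + ai) for ai in a]

km_hat, v_hat = intersection(a[0], v[0], a[1], v[1])
print(km_hat, v_hat)  # recovers (2.0, 10.0) up to floating-point rounding
```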

Comparison with other plots of the Michaelis–Menten equation
The best-known plots of the Michaelis–Menten equation, including the double-reciprocal plot of $$1/v$$ against $$1/a$$, the Hanes plot of $$a/v$$ against $$a$$, and the Eadie–Hofstee plot of $$v$$ against $$v/a$$, are all plots in observation space, with each observation represented by a point and the parameters determined from the slope and intercepts of the resulting line. The same is true of non-linear plots, such as the plot of $$v$$ against $$a$$, often wrongly called a "Michaelis–Menten plot", and the plot of $$v$$ against $$\log a$$ used by Michaelis and Menten. In contrast to all of these, the direct linear plot is a plot in parameter space, with observations represented by lines rather than points.

Effect of experimental error
The case illustrated above is idealized, because it ignores the effect of experimental error. In practice, with $$n$$ observations there is a family of $$n(n-1)/2$$ intersection points instead of a unique one, each giving separate estimates $$K_{\mathrm{m}_{ij}}$$ and $$V_{ij}$$ from the lines drawn for the $$i$$th and $$j$$th observations. Some of these, when the intersecting lines are almost parallel, are subject to very large errors, so the means (weighted or not) must not be taken as the estimates of $$\hat{K}_\mathrm{m}$$ and $$\hat{V}$$. Instead the medians of the two sets can be taken as estimates $$K_\mathrm{m}^*$$ and $$V^*$$.
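The median estimates can be computed directly from the pairwise intersections. The following is a minimal sketch (the function name and the data are illustrative, not taken from any published implementation):

```python
from itertools import combinations
from statistics import median

def direct_linear_estimates(a, v):
    """Median estimates Km*, V* from all n(n-1)/2 pairwise intersections."""
    kms, vs = [], []
    for (a_i, v_i), (a_j, v_j) in combinations(zip(a, v), 2):
        denom = v_i / a_i - v_j / a_j
        if denom == 0:            # exactly parallel lines: no intersection
            continue
        km_ij = (v_j - v_i) / denom
        v_ij = v_i + (v_i / a_i) * km_ij
        kms.append(km_ij)
        vs.append(v_ij)
    return median(kms), median(vs)

# Hypothetical observations with small errors:
a = [0.5, 1.0, 2.0, 4.0, 8.0]
v = [1.9, 3.4, 5.1, 6.5, 8.1]
km_star, v_star = direct_linear_estimates(a, v)
print(km_star, v_star)
```

With error-free data the function returns the true parameter values exactly, since every pair of lines then intersects at the same point.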

The great majority of intersection points should occur in the first quadrant (both $$K_{\mathrm{m}_{ij}}$$ and $$V_{ij}$$ positive). Intersection points in the second quadrant ($$K_{\mathrm{m}_{ij}}$$ negative and $$V_{ij}$$ positive) do not require any special attention. However, intersection points in the third quadrant (both $$K_{\mathrm{m}_{ij}}$$ and $$V_{ij}$$ negative) should not be taken at face value, because these can occur if both $$v$$ values are large enough to approach $$V$$, and indicate that both $$K_{\mathrm{m}_{ij}}$$ and $$V_{ij}$$ should be taken as infinite and positive: $$K_{\mathrm{m}_{ij}} \rightarrow +\infty, V_{ij} \rightarrow +\infty$$.
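The rule for third-quadrant points can be expressed as a small helper applied before the medians are taken (the function name is hypothetical; this is one assumed convention for ranking such points, not a prescribed implementation):

```python
def adjust_for_median(km_ij, v_ij):
    """Reinterpret a third-quadrant intersection as (+inf, +inf).

    Points with both coordinates negative arise when both rates approach V,
    and are treated as infinite and positive; second-quadrant points
    (Km < 0, V > 0) are kept as they are.
    """
    if km_ij < 0 and v_ij < 0:
        return float("inf"), float("inf")
    return km_ij, v_ij
```

Because medians depend only on rank order, replacing such points by $$+\infty$$ simply places them at the top of each sorted set.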

The illustration is drawn for just four observations, in the interest of clarity, but in most applications there will be many more than that. Determining the location of the medians by inspection becomes increasingly difficult as the number of observations increases, but that is not a problem if the data are processed computationally. In any case, if the experimental errors are reasonably small, as in Fig. 1b of a study of tyrosine aminotransferase with seven observations, the lines crowd closely enough together around the point $$(K_\mathrm{m}^*, V^*)$$ for it to be located with reasonable precision.

Resistance to outliers and incorrect weighting
The major merit of the direct linear plot is that median estimates based on it are highly resistant to the presence of outliers. If the underlying distribution of errors in $$v$$ is not strictly Gaussian, but contains a small proportion of observations with abnormally large errors, this can have a disastrous effect on many regression methods, whether linear or non-linear, but median estimates are very little affected.
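The effect can be seen with a toy numerical example (the numbers are purely illustrative): a single aberrant pairwise estimate, such as one produced by two nearly parallel lines, drags the mean far from the bulk of the values, while the median barely moves:

```python
from statistics import mean, median

km_pairs = [1.9, 2.0, 2.0, 2.1, 2.2]   # well-behaved pairwise Km estimates
km_with_outlier = km_pairs + [40.0]     # one outlier from near-parallel lines

print(mean(km_with_outlier))    # pulled far away from 2
print(median(km_with_outlier))  # still close to 2
```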

In addition, to give satisfactory results, regression methods require correct weighting: do the errors $$\varepsilon(v)$$ follow a normal distribution with uniform standard deviation, with uniform coefficient of variation, or something else? This is very rarely investigated, so the weighting is usually based on preconceptions. Atkins and Nimmo compared different methods of fitting the Michaelis–Menten equation and concluded: "We have therefore concluded that, unless the error is definitely known to be normally distributed and of constant magnitude, Eisenthal and Cornish-Bowden's method is the one to use."