GOR method

The GOR method (short for Garnier–Osguthorpe–Robson) is an information theory-based method for the prediction of secondary structures in proteins. It was developed in the late 1970s, shortly after the simpler Chou–Fasman method. Like Chou–Fasman, the GOR method is based on probability parameters derived from empirical studies of known protein tertiary structures solved by X-ray crystallography. However, unlike Chou–Fasman, the GOR method takes into account not only the propensities of individual amino acids to form particular secondary structures, but also the conditional probability that an amino acid forms a secondary structure given that its immediate neighbors have already formed that structure. The method is therefore essentially Bayesian in its analysis.

Method
The GOR method analyzes sequences to predict alpha helix, beta sheet, turn, or random coil secondary structure at each position based on 17-amino-acid sequence windows. The original description of the method included four scoring matrices of size 17×20, in which the 17 rows correspond to the positions in the window and the 20 columns to the amino acids; each entry is a log-odds score reflecting the probability of finding a given amino acid at a given position in the 17-residue window. The four matrices reflect the probabilities of the central, ninth amino acid being in a helical, sheet, turn, or coil conformation. In subsequent revisions to the method, the turn matrix was eliminated due to the high variability of sequences in turn regions (particularly over such a large window). The method was considered to perform best when at least four contiguous residues scoring as alpha helix were required to classify a region as helical, and at least two contiguous residues for a beta sheet.
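The windowed scoring scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the published method: the random placeholder matrices stand in for the empirically derived 17×20 log-odds tables, the turn state is omitted as in later revisions, and the contiguity requirements for helix and sheet regions are not applied.

```python
import random

random.seed(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
WINDOW = 17          # residues per window
HALF = WINDOW // 2   # 8 neighbors on each side of the central residue
STATES = ["helix", "sheet", "coil"]

# Placeholder 17x20 log-odds matrices; the real method derives these from
# observed frequencies in proteins of known structure.
matrices = {
    state: [[random.gauss(0.0, 1.0) for _ in AMINO_ACIDS] for _ in range(WINDOW)]
    for state in STATES
}

def predict(sequence):
    """Assign each residue the state whose summed window score is highest."""
    assignments = []
    for i in range(len(sequence)):
        scores = {s: 0.0 for s in STATES}
        for offset in range(-HALF, HALF + 1):
            j = i + offset
            if 0 <= j < len(sequence):  # windows are truncated at the chain ends
                col = AMINO_ACIDS.index(sequence[j])
                for s in STATES:
                    scores[s] += matrices[s][offset + HALF][col]
        assignments.append(max(scores, key=scores.get))
    return assignments

print(predict("MKVLAAGILLVS"))
```

Because the matrices here are random, the predicted states are arbitrary; only the mechanics of summing one score per window position per state follow the description above.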

Algorithm
The mathematics and algorithm of the GOR method were based on an earlier series of studies by Robson and colleagues reported mainly in the Journal of Molecular Biology and The Biochemical Journal. The latter describes the information theoretic expansions in terms of conditional information measures. The use of the word "simple" in the title of the GOR paper reflected the fact that the earlier methods provided proofs and techniques that were rather unfamiliar, and hence somewhat daunting, in protein science in the early 1970s; even Bayesian methods were then unfamiliar and controversial.

An important feature of these early studies, which survived in the GOR method, was the treatment of the sparse protein sequence data of the early 1970s by expected information measures: that is, expectations on a Bayesian basis considering the distribution of plausible information measure values given the actual frequencies (numbers of observations). The expectation measures resulting from integration over this and similar distributions may now be seen as composed of "incomplete" or extended zeta functions, e.g. z(s, observed frequency) − z(s, expected frequency), with the incomplete zeta function defined as z(s, n) = 1 + (1/2)^s + (1/3)^s + (1/4)^s + … + (1/n)^s. The GOR method used s = 1. Also, in the GOR method and the earlier methods, the measure for the state contrary to, e.g., helix H, i.e. ~H, was subtracted from that for H, and similarly for beta sheet, turns, and coil or loop. Thus the method can be seen as employing a zeta-function estimate of log predictive odds. An adjustable decision constant could also be applied, which implies a decision theory approach; the GOR method allowed the option to use decision constants to optimize predictions for different classes of protein.
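The scoring described above can be sketched as follows. The function names and the example counts are illustrative assumptions, not values or code from the GOR papers; the sketch only shows the zeta-difference measure for H, the subtraction of the measure for ~H, and an adjustable decision constant.

```python
def z(s, n):
    """Incomplete zeta function z(s, n) = 1 + (1/2)**s + ... + (1/n)**s."""
    return sum(1.0 / k**s for k in range(1, n + 1))

def info_score(obs_H, exp_H, obs_not_H, exp_not_H, decision_constant=0.0):
    """Zeta-function estimate of log predictive odds for helix (s = 1):
    the measure for the contrary state ~H is subtracted from that for H,
    then shifted by an adjustable decision constant."""
    i_H = z(1, obs_H) - z(1, exp_H)
    i_not_H = z(1, obs_not_H) - z(1, exp_not_H)
    return i_H - i_not_H - decision_constant

# Hypothetical counts: a residue observed 40 times in helix (30 expected)
# and 60 times outside helix (70 expected).
score = info_score(40, 30, 60, 70)
print(score > 0)  # a positive score favors the helix assignment
```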
The expected information measure used as a basis for the information expansion was less important by the time of publication of the GOR method, because protein sequence data had become more plentiful, at least for the terms considered at that time. For s = 1, the expression z(s, observed frequency) − z(s, expected frequency) approaches the natural logarithm of (observed frequency / expected frequency) as frequencies increase. However, this measure (including use of other values of s) remains important in later, more general applications with high-dimensional data, where data for more complex terms in the information expansion are inevitably sparse.