Classical test theory

Classical test theory (CTT) is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. It is a theory of testing based on the idea that a person's observed or obtained score on a test is the sum of a true score (error-free score) and an error score. Generally speaking, the aim of classical test theory is to understand and improve the reliability of psychological tests.

Classical test theory may be regarded as roughly synonymous with true score theory. The term "classical" refers not only to the chronology of these models but also contrasts with the more recent psychometric theories, generally referred to collectively as item response theory, which sometimes bear the appellation "modern" as in "modern latent trait theory".

Classical test theory as we know it today was codified by Novick (1966) and described in classic texts such as Lord & Novick (1968) and Allen & Yen (1979/2002). The description of classical test theory below follows these seminal publications.

History
Classical test theory was born only after the following three achievements or ideas were conceptualized:

1. a recognition of the presence of errors in measurements,

2. a conception of that error as a random variable,

3. a conception of correlation and how to index it.

In 1904, Charles Spearman worked out how to correct a correlation coefficient for attenuation due to measurement error and how to obtain the index of reliability needed to make that correction. Spearman's finding is regarded by some as the beginning of classical test theory (Traub, 1997). Others who influenced the framework over the quarter century following Spearman's initial findings include George Udny Yule; Truman Lee Kelley; Fritz Kuder and Marion Richardson, who developed the Kuder–Richardson formulas; Louis Guttman; and, most recently, Melvin Novick.

Definitions
Classical test theory assumes that each person has a true score, $$T$$, that would be obtained if there were no errors in measurement. A person's true score is defined as the expected number-correct score over an infinite number of independent administrations of the test. Unfortunately, test users never observe a person's true score, only an observed score, $$X$$. It is assumed that the observed score equals the true score plus some error:

 * $$\underbrace{X}_{\text{observed score}} = \underbrace{T}_{\text{true score}} + \underbrace{E}_{\text{error}}$$
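This decomposition can be illustrated with a small simulation (a hypothetical sketch; the true scores and the error standard deviation below are invented for illustration):

```python
import random

random.seed(1)

# Hypothetical true scores (error-free scores) for five examinees.
true_scores = [12.0, 15.0, 9.0, 20.0, 17.0]

def administer(true_score, error_sd=2.0):
    """One test administration: observed score X = T + E,
    with E drawn as independent random error with mean zero."""
    error = random.gauss(0.0, error_sd)
    return true_score + error

# Averaging many independent administrations recovers T,
# matching the definition of the true score as an expectation.
n = 10_000
for t in true_scores:
    mean_observed = sum(administer(t) for _ in range(n)) / n
    print(f"T = {t:5.1f}, mean X over {n} administrations = {mean_observed:5.2f}")
```

The printed means approach the true scores as the number of simulated administrations grows, since the error has expectation zero.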

Classical test theory is concerned with the relations between the three variables $$X$$, $$T$$, and $$E$$ in the population. These relations are used to say something about the quality of test scores. In this regard, the most important concept is that of reliability. The reliability of the observed test scores $$X$$, which is denoted as $${\rho^2_{XT}}$$, is defined as the ratio of true score variance $${\sigma^2_T}$$ to the observed score variance $${\sigma^2_X}$$:


 * $$\rho^2_{XT} = \frac{\sigma^2_T}{\sigma^2_X}$$

Because the variance of the observed scores can be shown to equal the sum of the variance of true scores and the variance of error scores, this is equivalent to


 * $$\rho^2_{XT} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T+\sigma^2_E}$$

This equation, which formulates a signal-to-noise ratio, has intuitive appeal: The reliability of test scores becomes higher as the proportion of error variance in the test scores becomes lower and vice versa. The reliability is equal to the proportion of the variance in the test scores that we could explain if we knew the true scores. The square root of the reliability is the absolute value of the correlation between true and observed scores.
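As a numerical sketch of the ratio above (the variance components are invented for illustration):

```python
# Hypothetical variance components for a population of test scores.
var_true = 16.0   # sigma^2_T, true score variance
var_error = 4.0   # sigma^2_E, error variance

# Observed score variance is the sum of the two components.
var_observed = var_true + var_error

# Reliability is the ratio of true score variance to observed score variance.
reliability = var_true / var_observed
print(f"Reliability: {reliability:.2f}")  # 16 / 20 = 0.80

# Its square root is the (absolute) correlation between true and observed scores.
print(f"Correlation of true and observed scores: {reliability ** 0.5:.3f}")
```

With these numbers, 80% of the observed score variance is attributable to true scores and 20% to error.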

Evaluating tests and scores: Reliability
Reliability cannot be estimated directly since that would require one to know the true scores, which according to classical test theory is impossible. However, estimates of reliability can be acquired by diverse means. One way of estimating reliability is by constructing a so-called parallel test. The fundamental property of a parallel test is that it yields the same true score and the same error variance as the original test for every individual. If we have parallel tests x and x', then this means that


 * $$\mathbb{E}[X_i]=\mathbb{E}[X'_i]$$

and


 * $$\sigma^2_{E_i} = \sigma^2_{E'_i} $$

Under these assumptions, it follows that the correlation between parallel test scores is equal to reliability (see Lord & Novick, 1968, Ch. 2, for a proof).



 * $$\rho_{XX'} = \frac{\sigma_{XX'}}{\sigma_X \sigma_{X'}} = \frac{\sigma^2_T}{\sigma^2_X} = \rho^2_{XT}$$

Using parallel tests to estimate reliability is cumbersome because parallel tests are very hard to come by. In practice the method is rarely used. Instead, researchers use a measure of internal consistency known as Cronbach's $${\alpha}$$. Consider a test consisting of $$k$$ items $$U_{j}$$, $$j=1,\ldots,k$$. The total test score is defined as the sum of the individual item scores, so that for individual $$i$$


 * $$X_i=\sum_{j=1}^k U_{ij}$$

Then Cronbach's alpha equals


 * $$ \alpha =\frac k {k-1}\left(1-\frac{\sum_{j=1}^k \sigma^2_{U_j}}{\sigma^2_X}\right)$$

Cronbach's $${\alpha}$$ can be shown to provide a lower bound for reliability under rather mild assumptions: the reliability of test scores in a population is always at least as high as the value of Cronbach's $${\alpha}$$ in that population. This method is empirically feasible and, as a result, very popular among researchers. Calculation of Cronbach's $${\alpha}$$ is included in many standard statistical packages such as SPSS and SAS.
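The formula above can be applied directly to an item-score matrix. The following sketch uses invented 0/1 item scores purely for illustration:

```python
import statistics

# Hypothetical 0/1 item scores: rows are examinees, columns are k = 4 items.
scores = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
]

k = len(scores[0])

# Population variance of each item's scores across examinees.
item_vars = [statistics.pvariance(col) for col in zip(*scores)]

# Variance of the total test scores X_i (sum of item scores per examinee).
totals = [sum(row) for row in scores]
total_var = statistics.pvariance(totals)

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.3f}")  # 0.356 for this toy data
```

The toy value is low because the invented items are few and weakly related; it is only meant to show the mechanics of the formula.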

As has been noted above, the entire exercise of classical test theory is aimed at arriving at a suitable definition of reliability. Reliability is supposed to say something about the general quality of the test scores in question: in general, the higher the reliability, the better. Classical test theory does not say how high reliability should be. Too high a value for $${\alpha}$$, say over .9, indicates redundancy of items; around .8 is recommended for personality research, while .9 or higher is desirable for individual high-stakes testing. These 'criteria' are not based on formal arguments, but are the result of convention and professional practice. The extent to which they can be mapped to formal principles of statistical inference is unclear.

Evaluating items: P and item-total correlations
Reliability provides a convenient index of test quality in a single number. However, it does not provide any information for evaluating single items. Item analysis within the classical approach often relies on two statistics: the P-value (proportion) and the item-total correlation (point-biserial correlation coefficient). The P-value represents the proportion of examinees responding in the keyed direction and is typically referred to as item difficulty. The item-total correlation provides an index of the discrimination or differentiating power of the item and is typically referred to as item discrimination. In addition, these statistics are calculated for each response option of the oft-used multiple-choice item, and are used to evaluate items and diagnose possible issues, such as a confusing distractor. Such valuable analysis is provided by specially designed psychometric software.

Alternatives
Classical test theory is an influential theory of test scores in the social sciences. In psychometrics, the theory has been superseded by the more sophisticated models of item response theory (IRT) and generalizability theory (G-theory). IRT is not included in standard statistical packages like SPSS, although SAS can estimate IRT models via PROC IRT and PROC MCMC, and there are IRT packages for the open-source statistical programming language R (e.g., ltm and mirt). While commercial packages routinely provide estimates of Cronbach's $${\alpha}$$, specialized psychometric software may be preferred for IRT or G-theory. However, general statistical packages often do not provide a complete classical analysis (Cronbach's $${\alpha}$$ is only one of many important statistics), and in many cases specialized software for classical analysis is also necessary.

Shortcomings
One of the most important and well-known shortcomings of classical test theory is that examinee characteristics and test characteristics cannot be separated: each can only be interpreted in the context of the other. Another shortcoming lies in the definition of reliability that exists in classical test theory, which states that reliability is "the correlation between test scores on parallel forms of a test". The problem with this is that there are differing opinions of what parallel tests are. Various reliability coefficients provide either lower bound estimates of reliability or reliability estimates with unknown biases. A third shortcoming involves the standard error of measurement. The problem here is that, according to classical test theory, the standard error of measurement is assumed to be the same for all examinees. However, as Hambleton explains in his book, scores on any test are unequally precise measures for examinees of different ability, thus making the assumption of equal errors of measurement for all examinees implausible (Hambleton, Swaminathan, & Rogers, 1991, p. 4). A fourth and final shortcoming of classical test theory is that it is test oriented rather than item oriented. In other words, classical test theory cannot help us make predictions of how well an individual or even a group of examinees might do on a test item.