Holistic grading

Holistic grading or holistic scoring, in standards-based education, is an approach to scoring essays using a simple grading structure that bases a grade on a paper's overall quality. This type of grading, which is also described as nonreductionist grading, contrasts with analytic grading, which takes more factors into account when assigning a grade. Holistic grading can also be used to assess classroom-based work. Rather than being scored by error counts, a paper is judged holistically and often compared to an anchor paper to evaluate whether it meets a writing standard. Holistic scoring differs from other methods of scoring written discourse in two basic ways: it treats the composition as a whole, not assigning separate values to different parts of the writing, and it uses two or more raters, with the final score derived from their independent scores. Holistic scoring has gone by other names: "non-analytic," "overall quality," "general merit," "general impression," and "rapid impression." Although the value and validation of the system are a matter of debate, holistic scoring of writing is still in wide use.

Definition
In holistic scoring, two or more raters independently assign a single score to a writing sample. Depending on the evaluative situation, the score will vary (e.g., "78," "passing," "deserves credit," "worthy of A-level," "very well qualified"), but each rating must be unitary. If raters are asked to consider or score separate aspects of the writing (e.g., organization, style, reasoning, support), their final holistic score is not mathematically derived from that initial consideration or those scores. Raters are first calibrated as a group so that two or more of them can independently assign the final score to a writing sample within a pre-determined degree of reliability. The final score lies along a pre-set scale of values, which scorers try to apply consistently. The final score for the piece of writing is derived from two or more independent ratings. Holistic scoring is often contrasted with analytic scoring.

Need
The composing of extended pieces of prose has been required of workers in many salaried walks of life, from science, business, and industry to law, religion, and politics. Competence in writing extended prose has also formed part of qualifying or certification tests for teachers, public servants, and military officers. Consequently, the teaching of writing is part of formal education in school and, in the US, in college. How can competence in composing extended prose best be evaluated? Isolated parts of it can be tested with "objective," short-answer items: correct spelling and punctuation, for instance. Such items are scored with high degrees of reliability. But how well do such items evaluate potential or accomplishment in writing coherent and meaningful extended passages? Testing candidates by having them write pieces of extended discourse seems a more valid evaluation method. That method, however, raises the issue of reliability. How reliably can the worth of a piece of writing be judged among readers and across assessment episodes? Teachers and other judges trust their knowledge of the subject and their understanding of good and bad writing, yet this trust in "connoisseurship" has long been questioned. Equally knowledgeable connoisseurs have been shown to give widely different marks to the same essays. Holistic scoring, with its attention to both reliability and validity, offers itself as a better method of judging writing competence. With attention to fairness, it can also focus on the consequences of score use.

Model
While analytic grading involves criterion-by-criterion judgments, holistic grading appraises student work as an integrated whole. In holistic grading, the learner's performance is treated as a unity that cannot be reduced or divided into component performances. Teachers may take note of specific aspects of the student's answer, but the grade reflects the quality of the whole.

Holistic grading operates by distinguishing overall levels of performance, for example separating outstanding work from work that is merely satisfactory or adequate.

Four kinds of scoring
Although a wide variety of procedures for holistic scoring have been tried, four forms have established distinct traditions.

Pooled-rater
Pooled-rater scoring typically uses three to five independent readers for each sample of writing. Although the scorers work from a common rating scale, and may have a set of sample papers illustrating that scale ("anchor papers"), they usually have had a minimum of training together. Their scores are simply summed or averaged to produce the sample's final score. In Britain, pooled-rater holistic scoring was first experimentally tested in 1934, employing ten teacher-raters per sample. It was first put into practice with 11+ examination scripts in Devon in 1939, using four teachers per essay. In the United States its rater reliability was validated from 1961 to 1966 by the Educational Testing Service, and it was used, sporadically, in the Educational Testing Service's English Composition Test from 1963 to 1992, employing from three to five raters per essay. A nearly synonymous term for "pooled-rater score" is "distributive evaluation."
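The arithmetic of pooled-rater scoring is simple enough to state in code. The sketch below is illustrative only; the function name and the 1-6 scale are assumptions, not part of any particular testing program. Each reader's score is collected independently, then summed or averaged.

```python
def pooled_rater_score(ratings, method="mean"):
    """Combine independent holistic ratings into one final score.

    Each reader scores the sample against the same scale; the
    final score is simply the sum or the mean of their ratings.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    if method == "sum":
        return sum(ratings)
    return sum(ratings) / len(ratings)

scores = [4, 5, 4]                              # three independent readers
final_mean = pooled_rater_score(scores)         # averaged final score
final_sum = pooled_rater_score(scores, "sum")   # summed final score
```

Whether the scores are summed or averaged makes no difference to the ranking of essays; it only changes the scale on which the final score is reported.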

Trait-informed
Trait-informed scoring trains raters to score to a scoring guide (also called a "rubric" or "checklist"): a short set of writing criteria, each scaled in grid format to the same number of accomplishment levels. For instance, the scoring guide used in a 1969 City University of New York study of student writing had five criteria (ideas, organization, sentence structure, wording, and punctuation/mechanics/spelling) and three levels (superior, average, unacceptable). The rationale for scoring guides is that they force scorers to attend to a spread of writing accomplishments and not give undue influence to one or two (the "halo effect"). Trait-informed scoring comes close to analytic scoring methods, which have raters score each trait independently of the others and then add up the scores for a final mark, as in the Diederich scale. Trait-informed holistic scoring, however, remains holistic at heart and asks raters only to take all the traits into some account before deciding on a single final score.

Adjusted-rater
Adjusted-rater scoring assumes that some scorers are more accurate than others. Each paper is read independently by two raters, and if their scores disagree beyond a certain extent, usually by more than one point on the rating scale, the paper is read by a third, more experienced reader. Scorers who cause too many third readings are sometimes re-trained during the scoring session and sometimes dropped from the reading corps. Adjusted-rater holistic scoring may have first been applied by the Board of Examiners for The College of the University of Chicago in 1943. Today large-scale commercial testing services sometimes use adjusted-rater scoring in which one rater for an essay is a trained human and the other a computer programmed for automated essay scoring, as in GRE testing.
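The adjusted-rater procedure can be sketched as a small decision rule. This is a hypothetical illustration: the text above does not specify how the third reading is combined with the first two, so the policy coded here (the third reading replaces whichever original score lies farther from it, and the final mark is the sum of the remaining pair) is one plausible choice among several in use.

```python
def adjusted_rater_score(score_a, score_b, third_reading, max_gap=1):
    """Resolve two independent holistic scores on the same essay.

    Scores that agree within `max_gap` points are summed directly.
    A wider gap triggers adjudication by a more experienced reader.
    """
    if abs(score_a - score_b) <= max_gap:
        return score_a + score_b
    # Keep whichever original score is closer to the third reading.
    closer = min((score_a, score_b), key=lambda s: abs(s - third_reading))
    return closer + third_reading

agreed = adjusted_rater_score(4, 5, third_reading=4)    # within one point: no adjudication
resolved = adjusted_rater_score(2, 5, third_reading=4)  # gap of 3: third reader decides
```

Tracking how often each rater's score is the one discarded is what lets a scoring session identify readers who need re-training.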

Single-rater
Single-rater monitored scoring trains raters as a group and may provide them with a detailed marking scheme. Each writing sample, however, is scored by only one rater unless, through periodic checking by a monitor, its score is deemed outside the range of acceptability, in which case it is re-rated, usually by the supervisor. This method, called "single marking" or "sampling," has long been standard in British school examinations, even though it has been shown to be less valid than double or multiple marking. In the United States, for the Writing Section of the TOEFL iBT, the Educational Testing Service now uses a combination of automated scoring and a certified human rater.

History
In Great Britain, formal pooled-rater holistic scoring was proposed as early as 1924 and formally tested in 1934–1935. It was first applied in 1939 by Chief Examiner R. K. Robertson to 11+ scripts in the Local Examination Authority of Devon, England, and continued there for ten years. Although other LEAs in Great Britain tried the system during the 1950s and 1960s and its reliability and validity were much studied by British researchers, it failed to take hold. Multiple marking of school scripts, usually written to show competence in subject areas, largely gave way to single-rater monitored scoring with analytical marking schemes.

In the US, the first applied holistic scoring of writing samples was administered by Paul B. Diederich at The College of the University of Chicago as a comprehensive examination for credit in the first-year writing course. The method was adjusted-rater scoring with teachers of the course as scorers and members of the Board of Examiners as adjusters. Around 1956 the Advanced Placement examination of the College Board began an adjusted-rater holistic system to score essays for advance English credit. Raters were high-school teachers, who brought the rating system back to their schools. One teacher was Albert Lavin, who installed similar holistic scoring at Sir Francis Drake High School in Marin County, California, 1966–1972, at grades 9, 10, 11, and 12 in order to show progress in school writing over those years. In 1973 teachers in the California State University and Colleges system used the Advanced Placement adjusted-rater system to score essays written by matriculating students for advance English composition credit. Pooled-rater holistic scoring was tested as early as 1950 by the Educational Testing Service (using the term "wholistic"). It was first applied in the College Board's 1963 English Composition Test. In higher education, the Georgia Regents' Testing Program, a rising-junior test for language skills, used it as early as 1972.

In the US an exponential spread of holistic scoring took place from around 1975 to 1990, fueled in part by the educational accountability movement. In 1980 assessment of school writing was being conducted in at least 24 states, the large majority by writing samples rated holistically. In post-secondary education, more and more colleges and universities were using holistic scoring for advance credit, placement into first-year writing courses, exit from writing courses, and qualification for junior status and for the undergraduate degree. Writing teachers were also instructing their students in holistic scoring so they could judge one another's writing, a pedagogy taught in National Writing Projects.

Beginning in the last two decades of the 20th century, use of holistic scoring somewhat declined. Other means of rating a student's writing competence, perhaps more valid, were becoming popular, such as portfolios. Colleges were turning more and more to testing agencies, such as ACT and ETS, to score writing samples for them, and by the first decade of the 21st century those agencies were doing some of that by automated essay scoring. But holistic scoring of essays by humans is still applied in large-scale commercial tests such as the GED, TOEFL iBT, and GRE General Test. It is also used for placement or academic progression in some institutions of higher education, for instance at Washington State University. For admission and placement into writing courses, however, most colleges now rely on the analytical scoring of writing skills in tests such as the ACT, SAT, CLEP, and International Baccalaureate.

Validation
Holistic scoring is often validated by its outcomes. Consistency among rater scores, or "rater reliability," has been computed by at least eight different formulas, among them percentage of agreement, Pearson's r correlation coefficient, the Spearman-Brown formula, Cronbach's alpha, and quadratic weighted kappa. Cost of scoring can be calculated by measuring the average time raters spend on scoring a writing sample, the percentage of samples requiring a third reading, or the expenditure on stipends for raters, salaries of session leaders, refreshments for raters, machine copying, room rental, etc. Occasionally, especially with high-impact uses such as standardized testing for college admission, efforts are made to estimate the concurrent validity of the scores. For instance, in an early study of the General Educational Development (GED) test, the American Council on Education compared an experimental holistic essay score with the existing multiple-choice score and found that the two scores measured somewhat different sets of skills. More often, predictive validity is measured by comparing a school student's holistic score with later achievement in college courses, usually first-semester GPA, end-of-course grade in a first-year writing course, or teacher opinion of the student's writing ability. These correlations are usually low to moderate.

Criticism
Holistic scoring of writing attracted adverse criticism almost from the beginning. In the 1970s and 1980s and beyond, the criticism grew.


 * 1) Cost. In the 1980s, when examinations were often scored entirely by humans, valid and reliable holistic scoring of a writing sample took more time, and therefore more money, than the scoring of objective items. For instance, it cost $0.75 per essay for the first and $0.53 for the second in the 1980-1981 Georgia Regents' Testing Program. Later, in terms of expense, holistic scoring of papers by humans could compete even less against machine-scored item tests or machine-rated essays, which cost from around half to a quarter as much as human scoring.
 * 2) Diagnosis. The most common complaint about holistic scoring is the paucity of diagnostic information it provides. Scores of "passing"—or of "3" on a 4-point, 6-point, or 9-point scale—provide little concrete guidance for the student, the teacher, or the researcher. In educational barrier exams, holistic scoring may serve administrators in locating which students did not pass but does little to help teachers get those students to pass on a second try. The need for fuller diagnostic information was the reason why, in the second round of the National Assessment of Educational Progress (1973-1974), the Education Commission of the States supplemented holistic scoring with primary-trait scoring of writing samples. The same reason prompted the International English Language Testing System, run by the British Council and Cambridge English Language Assessment for second-language speakers and writers, to adopt "profile scoring" in 1985.
 * 3) Rubrics. As a pre-set checklist of a few writing traits, each scaled equally on a few levels of accomplishment, the rubric has been criticized as simplistic, blind to cultural and developmental differences, and falsely premised. When a group of college composition teachers was asked for its "criteria for evaluation" of writing, it mentioned not 5 or 6 criteria but 124. While the rubric assumes that criteria are independent of one another, studies have shown that the scores readers give to one or two criteria influence the scores they give to the others (the halo effect). Pre-set and equally valued criteria also do not fit the development of young adult writers, a development that may be uneven, non-universal, and regressive. Most fundamentally, standardized rubrics propose a pre-determined language outcome, whereas language is never determined, never free of context. Rubrics use "deterministic formulas to predict outcomes for complex systems"—a critique that has been leveled at rubrics used for summative scores in large-scale testing as well as for formative feedback in the classroom.
 * 4) De-contextualization. Traditional holistic scoring may erase vital context of the composing, for instance the differing effects on writers of responding in a timed, impromptu draft to different topics and different genres of writing. From the point of view of contrastive rhetoric, vital cultural differences among the writers may also be erased. For instance, when researchers for the International Association for the Evaluation of Educational Achievement tried to create measures for rating essays composed by students from Finland, Korea, and the US, they found that "holistic scoring would be doomed at the outset because of the differences in communities". Holistic scoring—particularly trait-informed scoring with rater training strongly controlled to achieve high rater reliability—may also disregard the ecology of the scorers. The scoring system creates a set of readers artificially forced out of their natural reading response by an imposed consensus. Such concerns encouraged institutions such as Ohio University, the University of Louisville, and Washington State University to assess the writing competency of students with portfolios of essays written in past classes.
 * 5) Fairness. Although holistic scoring of writing has been defended as fairer for minorities and second-language writers than objective testing, evidence has also been gathered to show that holistic scoring has its own problems with fairness. Coaching was less affordable for low-income candidates. African American students had more problems with the essay portion of Florida's CLAST test. The essay prompts for the CUNY Writing Assessment Test were not "content-fair and culture-free" and posed more problems for Hispanic and other second-language writers. The Educational Testing Service has shown a long-standing concern about test fairness, although current research into unfair outcomes of holistic scoring probably lags behind the intuitions of practitioners and needs to apply more discriminant statistical analysis to document those outcomes.

Projects using holistic grading
Many institutions use holistic grading when evaluating student writing as part of a graduation requirement. Some examples include:


 * The National Certificate of Educational Achievement, New Zealand's graduation certificate, bases its scores on holistic grading.
 * In the United States, the Graduate Record Examination (GRE) uses holistic grading.