Writing center assessment

Writing center assessment refers to a set of practices used to evaluate writing center spaces. It builds on the larger body of writing assessment theory and methods by focusing on how those processes can be applied to writing center contexts. In many cases, writing center assessment, like the assessment of other academic support structures in university settings, also builds on programmatic assessment principles. As a result, writing center assessment can be considered a branch of programmatic assessment, and its methods and approaches can be applied to a range of academic support structures, such as digital studio spaces.

History
While writing centers have been prominent features of American higher education since the 1970s, questions remain about their role in improving student writing ability. In discussing the scarcity of writing center assessment, Casey Jones compares writing centers to Alcoholics Anonymous, claiming that "both AA and writing labs have similar features," yet "[while] the structure of AA complicates empirical research, the desired outcome, sobriety, can be clearly defined and measured. The clear-cut assessment of writing performance is a far more elusive task". Between 1985 and 1989 the Writing Lab Newsletter, a popular publication among writing center directors, contained little discussion of hard evaluation of writing centers, focusing instead on advice and how-to guides; this illustrates the early inattention to assessment in writing center scholarship. In many cases, writing center directors or writing program administrators (WPAs) are responsible for assessing writing centers and must communicate the results to academic administration and various stakeholders. Assessment is seen as beneficial for writing centers because it encourages the professional and ethical behaviors important not just for writing centers but for all of higher education.

Methods
One of the major sources of methods and approaches to writing center assessment is writing assessment at large, along with programmatic assessment. James Bell argues that directors of writing centers should "turn to educational program evaluation and select general types of evaluations most appropriate for writing centers". Writing center assessment methods can largely be divided into two forms: qualitative and quantitative. Qualitative methods are predicated on the desire to understand teaching and learning from the actions and perspectives of teachers and learners, and they have largely dominated knowledge making in composition studies, particularly in the last twenty years. Quantitative methods, meanwhile, stem from the belief that the world works in predictable patterns, ones that might be isolated in terms of their causes and effects or the strengths of their relationships (i.e., correlation). The use of quantitative methods in writing center contexts can introduce problems, however, such as misinterpreting data to support the work of the writing center, or choosing inappropriate data to measure student success, such as ACT writing test scores or course grades in first-year composition courses. Some writing scholars endorse quantitative methods more thoroughly than others and see them as most helpful when reframed within a postmodern epistemology, since most writing center directors subscribe to an epistemology that sees knowledge as constructed, tenuous, and relative. Writing center scholars such as Stephen North group these methodologies into three larger approaches: Reflections on Experience, or looking back on writing center events to help others in similar situations; Speculation, or theorizing how writing centers should work; and Surveys, or what he champions as enumeration. Blending these methods, several writing studies scholars have published articles on approaches to assessing different elements of writing centers, discussed in the sections below.
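
As an illustration of the correlational reasoning described above, the sketch below computes a Pearson correlation between writing center visits and course grades. This is a minimal sketch rather than a method drawn from the literature: the data, variable names, and the choice of course grades as an outcome are invented for illustration, and, as the paragraph cautions, a correlation alone cannot show that the center caused any change in grades.

```python
# Minimal sketch, assuming invented per-student data: visit counts and
# final course grades on a 4.0 scale. A Pearson correlation measures the
# strength of a linear relationship; it does not establish causation.
from math import sqrt

visits = [0, 1, 2, 2, 3, 4, 5, 6]                   # tutoring sessions per student
grades = [2.0, 2.3, 2.7, 3.0, 2.7, 3.3, 3.7, 3.3]   # final course grade (GPA)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson(visits, grades):.2f}")  # about 0.9 for this toy data
```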

Focus Groups
One method of assessment used in writing center contexts is the focus group. Using this method allows writing center directors to collect responses to specific questions and to use the social dynamics of the group to let participants build on one another's answers, yielding changes that can be implemented rapidly to make the organization more productive. For writing center assessment, focus groups should include about 7-12 people.

Surveys
Another common method of assessing writing centers is the survey, one of the most widely used quantitative instruments for gathering data in these spaces; it fits into the notion of enumeration mentioned by North above. Surveys are commonly used to gauge information such as student satisfaction with tutoring sessions, in the form of a post-session survey, or students' confidence as writers following their sessions in the writing center. Due to the nature of tutoring sessions, collecting this type of data in the middle of a session may prove difficult; writing in 1984, North claimed that "there is not a single published study of what happens in writing center tutorials". Typically, surveys determine the number of students seen, the number of hours tutored, students' reactions to the center, teachers' reactions to the center, and so on.
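
As a minimal sketch of how such tallies might be compiled, the code below aggregates hypothetical post-session survey records into the figures named above. The record fields, the 1-5 satisfaction scale, and the data are all assumptions for illustration, not a standard writing center reporting format.

```python
# Minimal sketch with invented post-session survey records. Field names
# and the 1-5 satisfaction scale are assumptions, not a standard format.
from statistics import mean

sessions = [
    {"student": "A", "hours": 1.0, "satisfaction": 5},
    {"student": "B", "hours": 0.5, "satisfaction": 4},
    {"student": "A", "hours": 1.5, "satisfaction": 4},
    {"student": "C", "hours": 1.0, "satisfaction": 3},
]

students_seen = len({s["student"] for s in sessions})  # unique students
hours_tutored = sum(s["hours"] for s in sessions)      # total contact hours
avg_satisfaction = mean(s["satisfaction"] for s in sessions)

print(f"Students seen: {students_seen}")                   # 3
print(f"Hours tutored: {hours_tutored:.1f}")               # 4.0
print(f"Mean satisfaction (1-5): {avg_satisfaction:.2f}")  # 4.00
```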

Recording Sessions
The recording of sessions is seen by some writing center scholars as a viable method of data gathering, one that answers critiques from the likes of Stephen North about the lack of research into what happens during tutoring sessions. Writing center directors using this method explicitly study what happens in a tutoring session by capturing it on audio or video and analyzing the transcripts.

Assessment Plans
Assessment plans are encouraged by some writing center scholars as a means of planning and enacting improvements to centers. Several writing center scholars advise directors to develop assessment plans and offer a series of approaches for doing so. These typically begin with determining what to measure, then validating the plan, and finally presenting the findings to the relevant stakeholders.

Developing Assessment Plans
One prominent example of an assessment plan can be seen in the Virginia Commonwealth Assessment Plan. In discussing the VCAP, Isabelle Thompson lists six general heuristics of program assessment that fit this context. According to her, program assessment and improvement should be:
 * Pragmatic, intending to be informative and, hence, improve conditions for student learning as well as summative and, hence, justify a program or service.
 * Systematic, orderly, and replicable.
 * Faculty-designed and led.
 * Multiply measured and sourced.
 * Mission-driven.
 * Ongoing and cumulative.

According to Thompson, in order to develop an assessment plan, writing center directors should:
 * 1) Prepare a mission statement for the writing center based on the services the center provides and aspires to provide.
 * 2) Develop goals, objectives, or intended educational outcomes for the center.
 * 3) Determine appropriate assessment methods for the writing center.
 * 4) Conduct the assessment of the writing center's services.
 * 5) Analyze the results of the assessment and draw conclusions about the results in terms of outcomes and the current strengths and weaknesses of the writing center.
 * 6) Use the results to bring about improvements in the center's services.

Others, like Neal Lerner, endorse frameworks for writing center assessment plans built on heuristics such as determining who participates in the writing center, what students need from it, and how satisfied students are with it; identifying campus environments and outcomes; finding comparable institutional assessments; analyzing nationally accepted standards; and measuring cost-effectiveness.

Validating Assessment Plans
Assessment of writing relies on the concept of validity, or ensuring that an assessment measures what it intends to measure. Chris Gallagher supports developing writing assessments locally, something that many scholars in writing assessment firmly endorse, but adds that assessment methods and choices should also be validated on a larger scale. He suggests the following steps in his Assessment Quality Review Heuristic:
 * 1) Briefly describe the writing program, including curricular and instructional goals, institutional constraints and opportunities (e.g. resources issues, labor conditions, professional development offerings), and student and teacher demographics. Append relevant documentation.
 * 2) Briefly describe the assessment and its relationship, if any, to other assessments conducted in the program. If this assessment is part of an overall assessment plan, append the plan.
 * 3) Answer the following questions about the assessment under review:
    * Meaningful: What are the purposes of this assessment? What are its intended uses? How were these purposes arrived at? Who formulated them? Why and to whom are those purposes significant? How were these purposes made known to students and teachers? How does the content of the assessment match its purpose?
    * Appropriate: How is the assessment suitable for this context, these participants, and its intended purposes and uses? How does the assessment reflect the values, beliefs, and aspirations of the participants and their immediate communities?
    * Useful: How does the assessment help students learn and help teachers teach? How does the assessment provide information that may be used to improve teaching and learning, curriculum, professional development, program policies, accountability, etc.? Who will use the information generated from this assessment and for what purposes?
    * Fair: How does the assessment ensure that all students are able to do and demonstrate their best work? How does the assessment contribute to the creation or maintenance of appropriate working conditions for teachers and students? How does it ensure adequate compensation and/or recognition for the labor required to produce it?
    * Trustworthy: How are the assessment results arrived at and by whom? How does the assessment ensure that these results represent the best professional judgment of educators? How does the assessment ensure that the results derive from a process that honors articulated differences even as it seeks common ground for decisions?
    * Just: What are the intended and unintended consequences of this assessment for students, teachers, administrators, the program, the institution, etc.? How does the assessment ensure that these consequences are in the best interest of participants, especially students and teachers?
 * 4) In light of this review, what changes, if any, do you plan to make to this assessment?

Presenting Findings to Stakeholders
After designing and implementing an assessment plan in a writing center context, assessment experts advise considering how the resulting information is presented to administrators in the university setting. Writing center practitioners recommend that directors balance the usefulness of assessment findings for improving the space itself with rhetorical appeals to the intended audience. Some administrators advise using quantifiable data and connecting that data to concepts important to a given university, like retention, persistence, and time-to-degree, though the factors worth assessing and presenting may vary depending on what a given university administration values.

In their book Building Writing Center Assessments that Matter, Ellen Schendel and William J. Macauley Jr. provide a set of heuristics for presenting information to stakeholders in the university setting:
 * Carefully presenting "good news" and "bad news" in the report, and discussing them ethically.
 * Articulating what we do in ways that non-experts will understand.
 * Creating a story about the writing center that is supported by the data and that is told clearly in the report.
 * Planning and articulating trajectories for our work and using the assessment reports to collaborate with others.
 * Using the report to inform our public communications about the writing center.

Some of this advice, such as the desire to tell a story about the writing center space, clashes directly with advice from administrators like Josephine Koster, who claims that "administrators don't want to read essays. Directors should use bulleted lists, headings, graphs, and charts, and executive summaries in documents sent to administrators". These clashes underscore the importance of local writing assessment practices in determining what a given group of administrators may expect.