User:Clee375/Data-driven Instructional Systems

Background and origins
Data-Driven Instructional Systems refers to a comprehensive system of structures that school leaders and teachers design in order to incorporate data into their instruction. Building on the organizational and school change literature, Richard Halverson, Jeffrey Grigg, Reid Prichett, and Chris Thomas developed the DDIS framework in an attempt to describe how relevant actors manage the relationship between school-level internal accountability and external accountability. Specifically, high-stakes external accountability policies such as the No Child Left Behind Act (NCLB) were implemented to hold schools accountable for reported standardized, summative assessment metrics. However, schools already had active internal accountability systems that placed high emphasis on an ongoing cycle of instructional improvement based on the use of data, including formative assessment results and behavioral information. Therefore, when high-stakes accountability was introduced, schools naturally went through a process of aligning different types of data serving different purposes, and of managing the resulting tension. Richard Halverson and his colleagues, employing case study approaches, explored leaders' efforts to coordinate and align the extant "central practices and cultures of schools" with "new accountability pressure" in pursuit of improved student achievement scores.

Key concepts
In their article, Richard Halverson, Jeffrey Grigg, Reid Prichett, and Chris Thomas suggest that the DDIS framework is composed of six organizational functions: data acquisition, data reflection, program alignment, program design, formative feedback, and test preparation.

Data Acquisition
Data acquisition includes the data collection, data storage, and data reporting functions. "Data" in the DDIS model is broadly conceptualized as any type of information that guides teaching and learning. In practice, schools collect academic data such as standardized assessment test scores, as well as non-academic data like student demographic information, community survey data, curricula, technological capacity, and behavioral records. In order to store such data, some schools develop their own local collection strategies using low-tech printouts and notebooks, whereas other schools rely on high-tech district storage systems, which generate large volumes of reports. School leaders discuss which data need to be reported and how to report the data in a way that can guide teaching practices.

Data Reflection
In the DDIS model, data reflection refers to collectively making sense of the reported data. District-level data retreats provide key opportunities for schools within a district to identify school-level strengths and weaknesses in terms of achievement data. Retreats also help districts develop district-level visions for instruction. In contrast, through local data reflection meetings, teachers hold conversations focused on individual students' progress by examining each student's performance on the assessed standards.

Program Alignment
Richard Halverson and his colleagues state that the program alignment function refers to "link[ing] the relevant content and performance standards with the actual content taught in classroom." For example, benchmark assessment results, acting as "problem-finding tools," help educators identify curricular standards that are not well aligned with current instructional programs.

Program Design
After identifying the main areas of concern in relation to students' learning needs and school goals, leaders and teachers design interventions of three kinds: faculty-based programs, curriculum-based programs, and student-based programs. In an effort to improve the faculty's data literacy, educators are provided with a variety of professional development opportunities and coaching focused on professional interaction (faculty-based programs). In addition, educators modify their curriculum as a whole-classroom approach (curriculum-based programs) or develop customized instructional plans that take into account individual students' needs (student-based programs).

Formative Feedback
Educators interact with each other around formative feedback on the local interventions implemented across classrooms and programs. Formative feedback systems comprise three main components: intervention, assessment, and actuation. Intervention artifacts include curriculum materials, such as textbooks and experiments, or programs, such as individualized education programs (intervention). The effects of these intervention artifacts can be evaluated through formative assessments, either commercial or self-created, in terms of whether they brought about the intended changes in teaching and learning (assessment). In the actuation space, educators interpret the assessment results in relation to the initial goals of the intervention and discuss how to modify the instructional delivery or the assessments as measurement tools, which lays the groundwork for new interventions (actuation).

Test Preparation
This function is not intended for teachers to "teach to the test." Rather, it encompasses the following activities: curriculum-embedded activities, test practice, environmental design, and community outreach. Teachers incorporate the content of standardized assessments into their day-to-day instruction (curriculum-embedded activities), help students practice and become accustomed to test-taking with similar types of tests (test practice), and establish a favorable test-taking environment (environmental design). Further, teachers communicate with parents and community members on topics ranging from test implementation to interpreting the test results (community outreach).