E-scape

E-scape is a project run by the Technology Education Research Unit (TERU) at Goldsmiths, University of London, England, that has developed an approach to the assessment of creativity and collaboration in a range of contexts.

E-scape assessment activities provide a mechanism for students to record the story of their learning using a variety of recording techniques. The focus is on the process of learning. The product of the exercise is a dynamic portfolio of evidence that can then be used as the basis for assessment.

Background
Project e-scape originated in a QCA (Qualifications and Curriculum Authority) project (2003-4) entitled 'Assessing Design Innovation'. Phase 1 of the project examined how ICT (information and communication technologies) within subject teaching and learning could be used to support the assessment of creativity and teamwork. The DfES (Department for Education and Skills) and the QCA supported the phase 1 'proof of concept'.

In e-scape phase 1 it was established that the use of digital peripheral tools could enable learners to create authentic, real-time, electronic portfolios of their performance. The value of peripheral tools lay in their 'back-pocket' potential. Learners were not tied to desktops and workstations, but could roam the classroom or workshop. The peripheral digital tools enabled them to build an authentic story of their designing through a combination of drawings, photos, voice files and text. Their story emerged as the trace left behind by their purposeful activity in the task. The focus of phase 2 was to integrate these techniques into a complete system.

In e-scape phase 2 a prototype system was built that enabled teachers to run design & technology test activities in 11 schools across England. This resulted in 250 performance portfolios on a website, which were then assessed using a comparative pairs methodology based on Thurstone's law of comparative judgment. Learners were enthusiastic about using the system in schools, and the reliability of the subsequent assessments was significantly higher than is possible using conventional marking approaches.
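The comparative pairs idea can be illustrated with a short sketch: judges repeatedly compare two portfolios and record which is better, and a quality scale is then estimated from the pattern of wins. The sketch below uses a simple Bradley-Terry model, a close relative of Thurstone's law of comparative judgment; the function name, data format and fitting method are illustrative assumptions, not the actual e-scape judging engine.

```python
from collections import defaultdict

def comparative_pairs_scores(judgements, iterations=200):
    """Estimate a quality scale from pairwise judgements.

    Illustrative Bradley-Terry fit (related to Thurstone's law of
    comparative judgment), NOT the actual e-scape algorithm.

    judgements: list of (winner, loser) tuples, one per paired
    comparison made by a judge.
    Returns a dict mapping each portfolio id to a strength score,
    normalised so the scores sum to 1.
    """
    items = set()
    wins = defaultdict(int)        # total wins per portfolio
    pair_count = defaultdict(int)  # comparisons per unordered pair
    for winner, loser in judgements:
        items.update((winner, loser))
        wins[winner] += 1
        pair_count[frozenset((winner, loser))] += 1

    # Iterative scaling (Zermelo/Ford algorithm) for Bradley-Terry:
    # repeatedly set p_i = wins_i / sum_j n_ij / (p_i + p_j).
    p = {i: 1.0 for i in items}
    for _ in range(iterations):
        new_p = {}
        for i in items:
            denom = 0.0
            for j in items:
                if i == j:
                    continue
                n_ij = pair_count[frozenset((i, j))]
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}
    return p
```

For example, if portfolio "A" wins most of its comparisons against "B" and "C", it receives the highest score. A known property of this simple fit is that a portfolio with no wins converges to a score of zero, so in practice each portfolio needs to win at least one comparison.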

However, there were two limitations with the phase 2 system:

Firstly, it operated only in design & technology, which raised the question of its transferability to other subjects.

Secondly, the phase 2 tests had been run as a research project, with the research team operating the system in schools. This was not a scalable model for national assessment: a national system would need to be operable by teachers in their own classrooms, and this became the focus of the third phase of the project.

The brief for phase 3 derived from these two imperatives.