Wikipedia:Ambassadors/Research/Spring 2012 burden analysis

This table is an attempt to record the burden on the editing community of the work done by students in the US & Canada Education Programs in the Spring 2012 semesters. Mike Christie (talk – contribs – library) 15:42, 21 October 2012 (UTC)

Students evaluated
The students were selected by the same method used for the quality analysis for spring 2012: the alphabetically first and last student in each class were chosen, using the course tool. When a student had no qualifying edits (made during the semester, and excluding certain edits such as those to their own user page), the next student was selected instead.

Summary
The following table summarizes the results. A score of 1 is given for each student for whom the assessment below found their contributions to be a net benefit to the encyclopedia; a score of −1 is given for each student whose contributions are not a net benefit. Marginal cases score 0.5 and −0.5; a neutral case is scored as zero.

Notes: It is possible that I have mis-evaluated some content additions in areas I'm not familiar with. If you see an incorrect evaluation, please either correct it or post a note on the talk page.

The "New article quality" and "Existing article quality change" columns refer to the assessments at Ambassadors/Research/Article quality/Completed and Ambassadors/Research/Article quality/Priority_2, and are calculated as an average of the numbers there. "None" means that no articles in that category were assessed for that class, not that none exist. "N/A" means that no assessments were completed for that course.

The "Burden questionnaire score" documents scores assessed at Ambassadors/Research/Editor impact. The scores are only included below if they relate to articles edited in the assessment table further down this page. "0,0" means two articles were assessed, each with a score of zero. "-1+" means that one article was assessed with a score of −1 but additional work remains to be done. The burden metric score is defined as follows (actually the numbers were positive, but I've converted them to negative here to fit the overall scheme of negative scores being bad, and positive scores being good):
 * 0 – No unproductive work to clean up
 * −1 – A few minutes of work needed
 * −2 – Between a few minutes and half an hour of work needed
 * −3 – Half an hour to an hour of work needed
 * −4 – More than an hour of work needed

Course quality is an attempt to combine all these numbers into a single number for ranking. A high score indicates a course where the students are doing good work. It's calculated as:

 * 3 × (net score) + 0.4 × (new article quality) + (existing article quality change) + sum(burden questionnaire scores)

Note that if the course has a score for both new article quality and existing article quality change, those scores are halved since otherwise those courses would benefit from double-counting.

This is not a scientific number – just my guess at how to add these numbers together. An asterisk indicates that so much data is missing that the number is not very helpful. Note that the burden metric and the burden questionnaire are both included, which double-counts the burden; however, this has only a minor impact and affects only two articles. I think this is preferable to ignoring the questionnaire results.
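For concreteness, the course quality calculation described above can be sketched as follows. This is my own illustrative sketch, not part of the original analysis; the function name and the use of `None` to represent a missing ("N/A") quality score are assumptions for the example.

```python
def course_quality(net_score, new_quality, existing_change, burden_scores):
    """Combine the per-course numbers into a single ranking score.

    net_score       -- sum of per-student scores (1, 0.5, 0, -0.5, -1)
    new_quality     -- average new article quality, or None if not assessed
    existing_change -- average existing article quality change, or None
    burden_scores   -- list of burden questionnaire scores (0 to -4)
    """
    new_term = 0.4 * new_quality if new_quality is not None else 0.0
    existing_term = existing_change if existing_change is not None else 0.0
    # If both quality numbers are present, halve each to avoid
    # double-counting article quality, as described above.
    if new_quality is not None and existing_change is not None:
        new_term /= 2
        existing_term /= 2
    return 3 * net_score + new_term + existing_term + sum(burden_scores)
```

For example, a course with a net score of 2, new article quality 3.0, existing article quality change 1.0, and burden questionnaire scores of −1 and 0 would score 3·2 + 0.6 + 0.5 − 1 = 6.1 (the two quality terms halved because both are present).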

Definitions
Columns are:
 * Course: link to the course tool page for each course.
 * Student: link to the student page for the alphabetically first and last student in that course. Students with no edits are ignored.
 * Article or talk page: a link to each page the student edited, excluding their own or a fellow student's user page or user talk page, course pages, their instructor's or ambassador's talk page, and sandboxes. Only edits made during the semester are included; in at least one case the student was already an editor with an existing edit history, and assessing those earlier edits would be outside the scope of this evaluation.
 * Diff links: diff link for each edit or set of edits to that article. If a long series of edits by the student includes a couple of minor edits by others, a single diff may be provided; this will be noted.
 * Diff description: a description of what the student did in that diff – add text, add references, delete text, format, etc.
 * Response: how other editors responded – e.g., by adding tags, reverting, or doing nothing.
 * Net benefit?: overall, were this user's edits a net benefit for the project? There is no distinction made between a user with hundreds of beneficial edits and a user with a single typo fix; this metric measures the impact on other editors, not the quality of the work.