Wikipedia:WikiProject User warnings/Testing

This page is the tracking page for efforts on the English Wikipedia to improve the quality of user talk warnings. Links to this project on other Wikipedias can be found on the cross-wiki hub on Meta.

Scope
The purpose of this page is to measure the efficacy of different user talk templates through randomized testing. This effort does not entail creating new categories of user warnings or organizing them more efficiently, but rather improving the quality of our current communication methods.

We're aiming to fine-tune the template messages we send to editors, in order to encourage more good faith contributors and discourage outright vandals, spammers, and other bad faith contributors.

Participants
Sign up here if you'd like to stay notified about testing updates.


 * 1) User:Steven (WMF)
 * 2) User:Maryana (WMF)
 * 3) User:Staeiou
 * 4) User:EpochFail
 * 5) User:Vitor Mazuco
 * 6) User:Philippe (WMF)
 * 7) User:Kudpung
 * 8) User:Kubigula
 * 9) User:DGG
 * 10) User:Fluffernutter
 * 11) User:Wikipelli
 * 12) Ebe123
 * 13) Σ
 * 14) User:The Blade of the Northern Lights
 * 15) User:Writ Keeper
 * 16) Eraserhead1 (talk) 21:53, 25 October 2011 (UTC)
 * 17) User:Jtmorgan
 * 18) User:Chzz
 * 19) Hurricanefan25
 * 20) Ljhenshall
 * 21) XLinkBot (forced by its operators)
 * 22) Beetstra (as one of the operators of XLinkBot)
 * 23)  Andy Mabbett ( Pigsonthewing )
 * 24) User:Racconish
 * 25) User:Recon Etc
 * 26) User:Jayen466
 * 27) User:gwickwire (talk)
 * 28) User:Thogo
 * 29) User:Logan
 * 30) User:Snowolf
 * 31) User:Johannes Rohr (WMDE)
 * 32) User:mabdul (AFC related)
 * 33) Katarighe
 * 34) Martijn Hoekstra (talk)
 * 35) User:Gaijin42
 * 36) User:DMacks
 * 37) User:Rcsprinter
 * 38) User:JohnCD
 * 39) User:Ciphers
 * 40) Nathan2055talk - contribs 19:49, 27 June 2012 (UTC)
 * 41) Rich Farmbrough, 02:49, 15 September 2012 (UTC).
 * 42) User:Lexein
 * 43) User:Rdicerb_(WMF)

Tasks
Any effort to make templates simpler, friendlier, and more accessible is welcome. Specifically, you can:


 * Draft new templates or assess the templates that are currently in the draft phase and suggest improvements in their content or structure. We aim to lessen the bitey-ness of warnings, but templates should still comply with the design guidelines and the usage and layout best practices.
 * Suggest which templates to test next and what changes should be made.
 * Help us analyze our data. We're using mixed methods, with both quantitative statistics and qualitative coding.
 * Recruit more participants.

Tests
Templates have been tested in the following places:


 * Huggle
 * Twinkle
 * Shared IPs
 * SDPatrolBot
 * XLinkBot
 * Articles for creation (Afc)
 * ImageTaggingBot
 * CorenSearchBot
 * 28bot

Each of those pages lists the templates tested, with dates and, where available, results. The following are tracking tables with start and end dates. For tests in projects other than English Wikipedia, please see our hub on Meta.

Testing method
The following are the requirements for conducting comparative A/B testing of any user talk template. Randomized experiments give us hard data about which kinds of content are most effective at achieving these goals. What you'll need is...
 * 1) A "randomizer" that delivers all the templates in your test. This is the template that should be included in the configuration of whichever bot or tool you are testing with, and it randomly delivers one of the templates via a parser function.
 * 2) A control, usually the existing default template. Note that you should replicate the default in a new template rather than use the current template page, in order to avoid including old instances of the default in your experiment.
 * 3) A new version or versions of the template you want to test. Try to use a canonical name that matches the type, purpose, and level of the warning you're interested in.
 * 4) A Z number tracking template for all templates being tested. If you do not include a separate Z number in each template, you will lose track of your test cases once they are substituted.
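Putting the pieces together, a randomizer might be sketched as follows. This is only an illustration: the template names (Uw-example1-control, Uw-example1-testA, and the matching Z tracking templates) are hypothetical placeholders, and the pseudo-random bucketing here simply takes the current Unix timestamp modulo the number of arms via the ParserFunctions extension.

```
{{#switch: {{#expr: {{#time:U}} mod 3 }}
| 0 = {{subst:Uw-example1-control|{{{1|}}}}} {{subst:Z-example1-control}}
| 1 = {{subst:Uw-example1-testA|{{{1|}}}}} {{subst:Z-example1-testA}}
| 2 = {{subst:Uw-example1-testB|{{{1|}}}}} {{subst:Z-example1-testB}}
}}
```

Each arm substitutes one warning template plus its own Z number tracker, so that delivered instances remain traceable after substitution. The bot or tool under test would be configured to substitute this randomizer page rather than any single warning template.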

In some cases, such as for bots where all contribs are by one account, this method can be greatly simplified.

Analysis
We have so far used a mixed method of both quantitative measurement and qualitative assessment. If you'd like to help sort and analyze tests, please sign up above.

Testing results from all projects are available here.