Wikipedia:Bots/Requests for approval/Hazard-Bot 33


 * The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Approved.

Hazard-Bot 33
Operator:

Time filed: 21:18, Wednesday, December 16, 2015 (UTC)

Automatic, Supervised, or Manual: Automatic

Programming language(s): Python

Source code available: GitHub

Function overview: Adding research help to articles from specific WikiProjects

Links to relevant discussions (where appropriate): bot request, WikiProject Medicine discussion, and WikiProject Military history discussion

Edit period(s): Batches of increasing sizes, about once a week

Estimated number of pages affected: thousands, eventually

Exclusion compliant (Yes/No): Yes

Already has a bot flag (Yes/No): Yes

Function details: A research help link will be added to articles from specific WikiProjects (starting with WikiProject Medicine and WikiProject Military history), directly above the general reflist (or equivalent references markup) on the page. Hazard SJ 21:20, 16 December 2015 (UTC)
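The placement rule described above can be sketched in Python. This is only an illustration of the stated logic, not the bot's actual source (which is on GitHub); the regex and the template name `{{Research help}}` are assumptions for the sketch.

```python
import re

# Matches the first reference-list markup in the article wikitext:
# either a {{reflist}}-style template or a <references /> tag.
# This pattern is a simplified stand-in for whatever the bot really uses.
REFLIST_RE = re.compile(r"\{\{[Rr]eflist[^}]*\}\}|<references\s*/?>")


def add_research_help(wikitext, template="{{Research help}}"):
    """Return wikitext with the help template inserted directly above
    the first reference-list markup found.

    If no reference list exists, or the template is already present,
    the text is returned unchanged.
    """
    if template in wikitext:
        return wikitext  # already tagged; keep the edit idempotent
    match = REFLIST_RE.search(wikitext)
    if match is None:
        return wikitext  # no reference section to anchor on; skip page
    pos = match.start()
    return wikitext[:pos] + template + "\n" + wikitext[pos:]
```

A real run would also need to honor exclusion compliance ({{bots}}/{{nobots}}) before saving, which this sketch omits.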

Discussion
- Speedy Trial - Initial trial can proceed, the discussion called for 100 pages to start - so trial can match. — xaosflux  Talk 21:33, 16 December 2015 (UTC)
 * Here are the edits. You might be interested in this. I didn't update Research help/Proposal since it was 50 of each, so not a full hundred of any. Hazard SJ 05:07, 17 December 2015 (UTC)
 * I accidentally restarted the script and did 4 more. Hazard SJ 05:22, 17 December 2015 (UTC)
 * I did a spot-check of most of the articles; it looks like it's doing the right thing, and I improved the talk page linked from the edit summary to give an explanatory header there, so from a social/execution standpoint it looks to be good. Astinson (WMF) (talk) 16:28, 17 December 2015 (UTC)
 * I looked at two at random. is a little amusing; are you really intending to add this to articles which don't cite any sources? In, should the template be going right below the section header or where it currently is? I'm not sure. Lastly, I adjusted this template to be a self-reference. —  Earwig   talk  08:01, 18 December 2015 (UTC)
 * Thanks for the self-reference link. As for the odd articles: I think the first is a fluke created by odd editing behavior. Most articles without footnotes are not going to have such tags in them. The WP:Research help page is designed to help readers figure out why information in articles might lack supporting sources or be missing entirely, so we might actually encourage people to add sources :)
 * In terms of machine rules to help with the second example: in theory it should be directly after the section header, though in practice I can't remember the last time I saw a footnote section that had free text before the reflist (there are the PD-text templates, like Template:ACMH, between which and the footnotes we would probably want our "research help" template): that section is also highly irregular, with full citations instead of footnotes. The MOS suggests that this kind of explanatory note shouldn't be in the same section as citations: see here. I think this might be an issue to discuss later if these kinds of inconsistencies show up regularly. When we do an eventual RFC, one of the points of comment will have to be where the template should be transcluded (my initial thought is at the top of "reflist" unless it uses named refs). Astinson (WMF) (talk) 15:28, 18 December 2015 (UTC)
 * To be perfectly frank, there is probably going to need to be a better solution to this when it comes to the following step from Research_help/Proposal:
 * My guess is it might involve something related to, or a modification of, mw:Extension:Cite to do something special with what it outputs, but I think it's probably less ideal to go through all pages with references on them and add a template. I'm not sure if now is necessarily the time to be worried about it, as this is early in the process and is clearly sort of a "trial run" to evaluate the effectiveness of the text in the first place, but it's still something to consider. At the very least, I'd suggest getting the ball rolling on it before raising it at RFC, as people like seeing what something will do and how it affects their workflow before agreeing to implement it everywhere.
 * -- slakr \ talk / 04:18, 19 December 2015 (UTC)
 * Yes, I wasn't clear in the last discussion: my thought is that the WP:Research help link will likely become part of reflist, transcluded alongside the rest of the template, reacting to the variable for named refs and/or having a separate variable allowing you to turn off the link. And yes, the bot is purely for the trial run: I don't expect more than 50-100,000 articles affected by the bot in the graduated trial; anything more, and we would need a much wider community consensus and a much wider conversation. In part, the trial is designed to gradually introduce the concept to the community, and partially to understand reader behavior and feedback on the page.
 * There are some other options for where the link could be included as well: about half of the people (n=10) who have responded to our survey (link at the top of WP:Research help) have recommended the link in the References section, and about half recommended a link in the left-hand Wikipedia menu. This is what needs to be discussed by the community: whether there is sufficient evidence of need for a page like this (which the initial feedback suggests there is) and where to place it.
 * BAGAssistanceNeeded Otherwise, is there anything else that needs to be reviewed for the trial? Astinson (WMF) (talk) 22:59, 20 December 2015 (UTC)
 * Sorry I'm not watching this one too closely, I knew the operator was competent enough to speedily approve the initial run, but will step aside to the other approvers right now. — xaosflux  Talk 23:03, 20 December 2015 (UTC)
 * Wait, 50,000–100,000 articles? That's a very large number. — Earwig   talk  23:04, 20 December 2015 (UTC)
 * You can see the scaling figures at Research_help/Proposal, and both WikiProjects indicated that we can do a pilot on their pages. I would be open to a slightly smaller group, if BAG wants to put a limit (say 30-40,000 total articles), but we need enough critical mass to deal with the 1% rule (Internet culture) in the survey and click-throughs. Astinson (WMF) (talk) 23:08, 20 December 2015 (UTC)
 * And we need to compensate for the unequal visibility of the different article classes and levels of demand: stubs often get far fewer views than featured articles, and we don't want the batches to be just the first couple letters of the alphabet from the bot's selection. Astinson (WMF) (talk) 23:14, 20 December 2015 (UTC)
 * Okay, I thought we were dealing with a few thousand at most. I'm going to ramble a bit...
 * To make relative figures clearer, WPMIL and WPMED have under 230,000 articles combined (just a bit under 5% of Wikipedia), ignoring any intersections, and a chunk of those are going to lack a references section. WP:LOCALCONSENSUS starts to come into play here, and I'm not sure I'm comfortable moving forward with this for tens of thousands of pages without broader community input on WP:VPR. We already have the "cite this page" link on the sidebar; as I understand it, this idea is essentially to make that more prominent and provide more detailed information. With that scale, a modification to reflist (e.g. render when ''pageid'' mod 100 = 0) may be easier. The difficulty is then in restricting it by project, but with such a large sample space, is per-project restriction necessary? Generally, I imagine that an "alpha test" would focus on a few thousand hand-picked and widely read articles, before moving on to a beta test of some percentage of the entire article space. You said the selection was based on pages where we can rely on a) significant research being available on articles and b) a significant public interest in researching these topics—I am once again curious if we should be tagging articles like from above, since it seems to go against this. What data are you specifically hoping to gather from this test? —  Earwig   talk  23:39, 20 December 2015 (UTC)

Hazard SJ 02:53, 28 December 2015 (UTC)
 * Many apologies; I was out of office for the holiday last week and saw the comment in a brief email, but inconsistent internet access and family activities kept me from responding. Your questions identify a couple more FAQ items that I need to clarify in the research proposal (research questions, etc.). One of my tasks this afternoon is to document the whole range of answers to the questions.
 * In brief, we are collecting different types of data that we hope will help us assess the following:
 * First, we need to figure out if people will click through -- this could be done with a number of different strategies; we have shifted towards a template with variables for a couple of reasons (see other discussion below).
 * We very much don't want this to be only on Featured and/or high-traffic articles, because we want to identify the typical behavior on our "normal" articles (particularly Start- and Stub-class articles, which constitute the majority of Wikipedia's pages); a cut across both MED and MILHIST gets us a much more consistent cross-section.
 * However, we have a theory that different topical areas might have different click-through rates -- and we want to see if there is evidence of a need for different types of communication strategy: scientific and humanities/social-science readers may behave differently, and we hope to proxy for this with WPMED and MILHIST articles. The redirects linked through the template variable allow us to look at the page views for each topic -- this could be done with a variable to reflist, but we didn't want to get bogged down in a Lua or core template -- right now it's a proof of concept.
 * We want to reach a threshold of click-throughs so that the sample is appropriate: right now we are getting between 4 and 10 click-throughs on 50 articles in each topic -- to get a ratio of different-class articles with different visibilities, we are going to need a larger sample.
 * We also want more self-selected survey respondents -- that only happens with a critical mass, but we don't want that critical mass to be overly biased towards any one type of article strength or visibility.
 * As for the question of why the template: we wanted something that wasn't buried deep in a Lua module and could be tweaked (and/or noincluded or blanked) with minimal need for modifying a core template and with minimal need for an admin to be "on call" -- we realize that the community might reach a point in the conversation where we need to put a full stop on the experimentation -- and we want to be able to do this nimbly.
 * As for the local-consensus concern -- I would be open to a broader conversation, but we wanted to start with smaller conversations so that we can incrementally identify gaps in our communication (as your questions have done) rather than get a barrage of concerns. 10-20,000 articles governed by local style guides seems like a small swath when there is a contingency in place for removal -- but I am happy to concede the need for a broader conversation if you feel it's important. However, if we can identify a number of articles that would not be too broad for the current bot, that would be awesome -- afterwards we could use the data for that sample as the alpha pilot data for a broader conversation. Astinson (WMF) (talk) 16:31, 28 December 2015 (UTC)
 * Just on one point here, I don't see how this approach is more nimble than a higher-level modification to a core template. If we decide to stop, we need to remove it from each page it was added to. (Yes, we can blank it temporarily, but we're not leaving a blank template on thousands of pages.) If we want to move the location, every page needs to be edited; etc. — Earwig   talk  22:56, 28 December 2015 (UTC)
 * Hazard SJ 01:13, 6 January 2016 (UTC)
 * Sorry for taking so long to respond, it has been a crazy couple weeks, both with helping navigate/work through the recent movement/WMF politics and the larger strategy conversation, and TWL’s Wikipedia15 Campaign #1lib1ref. I hadn’t really anticipated this conversation to overlap with those.
 * Basically, the reason why I don't feel modifying reflist is a good next step is that right now we're looking to test the impact of the link, on a small scale, comparing topic areas -- we think that there are going to be very different readerships for scientific/medical versus more historical/humanities topics, who might respond differently to the "how to use Wikipedia in research" information at WP:Research help. The only way to do this in reflist would be to include a parameter in that template -- which also requires modifying each page (if there is another possibility, please let me know; I'd love to find out more). You are right that we will need a different deployment method for a larger beta on a random sample of "English pages", and I would love to get your expertise in that discussion if/when we get to that point -- but for now, as you suggest, that's probably getting ahead of ourselves.
 * The point of the alpha test is not to master the technical implementation but rather to gauge reader response to the content -- for that purpose, you are absolutely correct that anything covering all WPMED or WPMILHIST articles is a big number, but that's not our current target. We're planning on a stepped approach of 500, 1,000, 5,000 and then ''x'' articles from each of the two projects -- and I'm very open to hearing your thoughts on what ''x'' should be for this trial; we would definitely cede the definition of our limit to wherever BAG caps the bot. Before moving on to a larger, less targeted beta test, we will of course be seeking wider feedback at VPR.
 * @HazardSJ: would you be willing to have your bot called on to remove the templates, if for whatever reason we decide to blank them based on community consensus?
 * Does that answer your questions? I also talked to recently about this, while working on another project. He might also have some insight.
 * Hope you had a great #Wikipedia15! Astinson (WMF) (talk) 22:14, 20 January 2016 (UTC)
 * Realized I was pinging the wrong Earwig. Sorry, Astinson (WMF) (talk) 15:08, 21 January 2016 (UTC)
 * Yes, I would be willing to make those removals. Hazard SJ 06:34, 22 January 2016 (UTC)

Saw you removed the template on two pages. I'm sure your commentary would be appreciated here. — Earwig   talk  02:43, 23 January 2016 (UTC)
 * Well, I haven't read the lengthy discussion above. First, I do not consider 1,4-Dioxin a medical article. Second, IMHO these templates, including the quite large question mark, are too striking. At the least, the template should be placed after the references or in the toolbox on the left-hand side. --Leyo 02:56, 23 January 2016 (UTC)
 * Well, the point of the template is to serve readers at the point of need - as they reach the references. Thus the placement at the beginning of the section makes the most sense, so that they can find this information portal at the time at which the information it contains will be most useful to them. Disclaimer: I'm one of the people working on building the portal. Nikkimaria (talk) 02:35, 26 January 2016 (UTC)
 * Sure, but the placement is also most distracting there. If we start to put such notices everywhere in the article, the readability will decrease. --Leyo 11:37, 26 January 2016 (UTC)
 * This is why I imagined that we would need a wider discussion on the village pump before going through with the full trial. By the way, Alex, I concede on my earlier point about there being an easier way to do this if we insist on targeting a specific group of articles; I've gone through a lot of candidate ideas, but the per-project targeting is really unfeasible to do without tagging each page... —  Earwig   talk 07:41, 27 January 2016 (UTC)
 * Thanks for clarifying what the outstanding issues are. The Earwig: our pageview figures for the existing links -- for the Military history articles and for the medical articles being linked -- suggest that our readers can discriminate whether they need that help. We are also seeing much higher click-through at WP-namespace pages which are linked (see the stats). But this data is far from representative: I would love to have a much wider conversation about this, BUT without any evidence on how readers will actually use it, the conversation will likely devolve into supposition, assumptions, or personal preferences about design, rather than whether or not our readers and new editors benefit from a tool designed around a problem well documented by the library and education communities. Since we have pretty substantial support from 30+ editors on the two projects, can we at least test the concept in these spaces, on an even smaller subsection of articles (10,000, 8,000, or 5,000 in total)? We would even be open to a specific timeframe for those to be live (say a month or two). With feedback like Leyo's (which we are asking for in the edit summary) alongside this data, we can approach the larger community in a more structured and well-informed consultation.
 * Per Leyo's comments about notices everywhere cluttering the design: yes, if the notices were everywhere, readability might be challenged; I agree completely. However, this link would be going in a section that is only useful if you have basic literacy in Wikipedia's policies and contextual information: knowing a) what the footnotes are for, b) how those references are used to verify content, and c) how to intelligibly use Wikipedia in the research process the reader is already doing (if they show up in the reference section). Besides, that is not a "reading" section per se: you only go to the References section if you are looking at the footnotes as part of a research process (or if you are a Wikipedia editor). Our survey for the page very clearly asks whether a) the page is useful and b) where it could be offered in the UI. We would love a closer examination of this data, once we have it.
 * Thank you for the welcome feedback, and if you decide to close this discussion until a community consultation, we will try that first. However, without data from a defined test, we think that full scale consultation will not be constructive, Astinson (WMF) (talk) 18:57, 27 January 2016 (UTC)
 * I don't feel comfortable with an uncapped number here. I'd say 10,000 is the extreme upper limit, as it's more than enough sample size for statistical significance when divided into 10 or so smaller groups (say 1k apiece). That still means 20,000 total edits (10k add, 10k remove) when all is said and done. Even at that rate, it will take a bot a long time to add or remove the template. The order of magnitude more (100k, which means 200k all said and done) is literally about 2% of all articles, big and small, and is an Extremely Bad Idea as far as I'm concerned. Not only would it take a bot that wayyy exceeds our normal edit rate limits over a week to deal with that many pages, it also is way past the point where there are better solutions (including possibly just modifying the site javascript to inject stuff, which itself <s>isn't a great idea</s> is probably the best idea; see below). On top of all of that -- and outside the scope of the issues being raised here -- "needing" a sample size that huge likely means the research methodology, as a whole, might need to be better refined. -- slakr  \ talk / 04:42, 28 January 2016 (UTC)
 * ...and actually, having given this a second thought, the ideal situation probably is just javascript + a node or nginx instance. Redis, on a single thread, can SISMEMBER in O(1) time, so even if you really want 100k pages, you'll likely be fine setting up a server at, e.g., http://uxtesting.wikipedia.org and pinging it with a page id to see if a visitor's page is participating in the experiment, and if so, then loading the injection code as needed. The number of hits to the server can further be reduced by a rand-modulo check (e.g., rand 1-100 % 2 for half the users), and you automagically filter out bots because they normally don't load javascript. The reflist yields a section (target with jQuery via $('.reflist')), usually under a header (findable from that via .parents('h2')), and you can append the same help text there, too, with the same net result. On top of all of this, you could even load code that monitors whether someone even sees the link and reports back. It's win-win. -- slakr  \ talk / 05:39, 28 January 2016 (UTC)
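The decision logic slakr sketches above could look something like the following; this is only an illustration, written in Python rather than server-side javascript, with an in-memory set standing in for Redis (membership checks are O(1) in both) and made-up page ids and sampling rate:

```python
import random

# Pages enrolled in the experiment; in slakr's design this would be a
# Redis set queried via SISMEMBER. The ids here are placeholders.
EXPERIMENT_PAGE_IDS = {1001, 1002, 1003}

# Rand-modulo check: serve the injection to roughly 1-in-N visitors,
# reducing load on the lookup server. N=2 means about half of users.
SAMPLE_ONE_IN = 2


def should_inject(page_id, rng=random):
    """Decide whether this visitor's page view gets the injected help link.

    Bots are already filtered out upstream, since they normally don't
    execute the client-side javascript that would call this endpoint.
    """
    if page_id not in EXPERIMENT_PAGE_IDS:
        return False  # page is not participating in the experiment
    return rng.randrange(SAMPLE_ONE_IN) == 0
```

If the check passes, the client-side code would then append the help text next to the reference list, as described above.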

Current ask
So Slakr's second solution and concerns, I think, respond to some of my earlier asks; his solution sounds great for the beta test at some point in the future. However, right now we are at the point of asking: can we pilot with all these constraints in mind, so that we collect data for a better-informed conversation about including a link to this help page in reference sections? Astinson (WMF) (talk) 18:56, 29 January 2016 (UTC)
 * We would be happy with a clear upper limit of 10,000 articles in total (5,000 for each WikiProject) for the alpha test.
 * Once we are able to collect data, any further scaling beyond that cap would require community consensus, and would have to use a technical (Lua- or software-based) solution rather than template transclusion.
 * The edit summary has a clear ask for feedback, and the page itself has a survey on it. We are monitoring these forums closely for feedback and issues.
 * If we need to, we can blank the Research help template and Hazard-SJ is willing to remove the template at that point with his bot.

We might as well go ahead, since I don't think arguing over the initial formulation is very productive for either side, and I am satisfied with the reduced test size. 10,000 pages maximum. —  Earwig   talk 04:31, 31 January 2016 (UTC)
 * The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.