Wikipedia:Bots/Requests for approval/CitationCleanerBot 2


 * The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was

CitationCleanerBot 2
Operator:

Time filed: 13:43, Saturday, March 25, 2017 (UTC)

Automatic, Supervised, or Manual: Semi-automated during development, automatic afterwards

Programming language(s): AWB

Source code available: Upon request. Regex-based

Function overview: Convert bare identifiers to templated instances, applying AWB genfixes along the way (but skipping the edit if only cosmetic/minor genfixes would be made). This will also have the benefit of standardizing appearance, as well as providing error-flagging and error-tracking. A list of common identifiers is available here, but others exist as well.

Links to relevant discussions (where appropriate): RFC, Bots/Requests_for_approval/PrimeBOT_13. While the issue of unlinked/raw identifiers wasn't directly addressed there, I know of no argument that a bare doi:10.1234/whatever is better than the linked/templated equivalent. If ISBNs/PMIDs/RFCs are to be linked (current behaviour) and templated (future behaviour), surely all the other identifiers should be linked as well.

I have notified the VP about this bot task, as well as other similar ones.

Edit period(s): Every month, after dumps

Estimated number of pages affected: ~5,400 for bare DOIs, probably a comparable number for the other common identifiers (e.g. PMC), and far fewer for the 'uncommon' identifiers like MR or JFM. This will duplicate Bots/Requests_for_approval/PrimeBOT_13 to a great extent. However, I will initially focus on non-magic words, while I believe PrimeBot_13 will focus on magic word conversions.

Exclusion compliant (Yes/No): Yes

Already has a bot flag (Yes/No): Yes

Function details: Because of the great number of identifiers out there, I'll be focusing on "uncommon" identifiers first (more or less defined as <100 instances of the bare identifier). I plan on running the bot semi-automatically while developing the regex, and only automating a run once the error rate for that identifier is zero, or errors are only due to unavoidable GIGO (garbage in, garbage out). For less popular identifiers, semi-automatic trialing might very well cover all instances. If no errors were found during the manual trial, I'll consider that code ready for automation in future runs.
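The core conversion described above can be illustrated with a small sketch. This is not the bot's actual AWB rules (the source is available only on request); it is a hypothetical regex-based substitution for one identifier (PMID), assuming a {{PMID}} wrapper template, and it ignores the exclusion logic (nowiki, external links, ref name attributes) the real task would need:

```python
import re

# Illustrative only: wrap a bare, well-delimited 'PMID 12345' or 'PMID: 12345'
# in a {{PMID|...}} template. Already-templated instances such as {{PMID|123}}
# are not matched, because '|' is not in the [: ] separator class.
BARE_PMID = re.compile(r'\bPMID[: ]\s*(\d{1,8})\b')

def template_bare_pmids(wikitext):
    """Replace bare PMID mentions with templated instances."""
    return BARE_PMID.sub(r'{{PMID|\1}}', wikitext)
```

The same pattern generalizes to other identifiers by swapping the prefix and the digit/character class for that identifier's format.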

However, for the 'major' identifiers (doi, pmc, issn, bibcode, etc.), assuming BAG is fine with this, I'd do an automated trial (one per 'major' identifier), because doing it all semi-automatically would take far too much time. So, more or less, I'm asking for:
 * Indefinite trial (semi-automated mode) to cover 'less popular identifiers'
 * Normal trial (automated mode) to cover 'popular identifiers'

Discussion
. Headbomb {talk / contribs / physics / books} 13:56, 25 March 2017 (UTC)

For cases where a particular ISBN/PMC/etc. should not be linked, for whatever reason, will this bot respect "nowiki" around the ISBN/PMC/etc. link? &mdash; Carl (CBM · talk) 18:39, 26 March 2017 (UTC)
 * Unless the "Ignore external/interwiki links, images, nowiki, math, and " option is malfunctioning, I don't see why it wouldn't respect nowiki tags. Headbomb {talk / contribs / physics / books} 18:58, 26 March 2017 (UTC)

This proposal looks like a good and useful idea. Thanks for taking the time to work on it! − Pintoch (talk) 11:14, 11 April 2017 (UTC)

I'd rather this task be explicit as to its scope; "identifiers" is too vague. Can you specify exactly which identifiers this will cover? Additional identifiers can always be addressed under a new task as needed. — xaosflux  Talk 01:38, 20 April 2017 (UTC)
 * Pretty much those at User:Headbomb/Sandbox. Focusing on the CS1/2 supported ones initially, then moving on to less common identifiers, if they are actually used in a "bare" format, like INIST:21945937 vs . Headbomb {t · c · p · b} 02:47, 20 April 2017 (UTC)
 * Based on past issues with overly-broad bot tasks, I try to think about degrees of freedom when I look at a bot task. The more degrees of freedom we have, the harder it is to actually catch every issue. You're asking for a lot of degrees of freedom. We've got code that's never been run on-wiki before, edits being made on multiple different types of citation templates for each identifier, a mostly silent consensus, different types of trials being requested, and an unknown/unspecified number of identifiers being processed. It's probably not a great idea to try to accomplish all that in one approval. Would you be willing to restrict the scope of this approval to a relatively small number of identifiers so we can focus on testing the code and ensuring the community has no issues with this task? In looking at your list, I think a manageable list of identifiers would be as follows: doi, ISBN, ISSN, JSTOR, LCCN, OCLC, PMID. These are likely the identifiers with the most instances; I may have missed a couple other high-use ones that I'm less familiar with. We could handle the rest (including less-used identifiers) in a later approval or approvals. Your thoughts? ~ Rob 13 Talk 04:09, 3 June 2017 (UTC)

I'm asking for lots of freedom, yes, but in a modular and controlled fashion. I'm fine with restricting myself to the popular identifiers at first, but it will make development a bit more annoying/complicated, since the lesser-used identifiers are the hardest to test on a wider scale. If BAG is comfortable with a possibly slightly higher false positive rate post-approval (a very marginal increase, basically until someone finds a false positive, if there are any), I'm fine with multiple BRFAs. The only change I'd ask to that initial list is that I'd rather have arxiv, bibcode, citeseerX, doi, hdl, ISBN, ISSN, JSTOR, PMID, and PMCID. OCLC/LCCN could be more used than arxiv/bibcode/citeseerx/hdl/PMCID, but they usually appear on different types of articles, which will make troubleshooting a bit trickier. Headbomb {t · c · p · b} 19:18, 6 June 2017 (UTC)
 * The list you provided is fine. As soon as we get those sorted and approved, I'm happy to quickly handle future BRFAs, so it shouldn't be too time-consuming of a process for you. Roughly 25 edits per identifier you listed above. Please update your task details to reflect the restricted list of identifiers before running the trial. ~ Rob 13 Talk 19:51, 6 June 2017 (UTC)
 * OperatorAssistanceNeeded Any update on this trial? — xaosflux  Talk 00:41, 19 June 2017 (UTC)
 * Still working on the code. I can't nail the DOI part, because I haven't yet found a reliable way to detect the end of a DOI string, and I've been focusing on that rather fruitlessly since it's the hard part of the bot. I've asked for help with that at the VP. The other identifiers are pretty easy to do, so I'll be working on those shortly. Worst case, I'll exclude DOIs from bot runs and do them semi-automatically. Headbomb {t · c · p · b} 15:49, 19 June 2017 (UTC)
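The difficulty mentioned above is real: a DOI's suffix may contain almost any printable character, so there is no unambiguous terminator. One common heuristic (a sketch, not the bot's code) is to match greedily and then strip trailing characters that are far more likely to be surrounding sentence punctuation than part of the DOI:

```python
import re

# DOIs start with '10.', a 4-9 digit registrant code, and a '/'; the suffix
# is essentially free-form, which is what makes end-detection hard.
DOI = re.compile(r'\b(10\.\d{4,9}/\S+)')

def extract_doi(text):
    """Best-effort extraction of the first DOI in a string."""
    m = DOI.search(text)
    if not m:
        return None
    # Known limitation: a DOI legitimately ending in ')' or '.' gets trimmed.
    return m.group(1).rstrip('.,;:)]}\'"')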

Headbomb {t · c · p · b} 17:13, 27 July 2017 (UTC)
 * 24 edits from the ISSN trial. No issues to report. Headbomb {t · c · p · b} 18:30, 19 June 2017 (UTC)
 * 25 edits from the DOI trial.
 * missed . While I'm planning on taking care of those, down the line, right now my brain is a bit fried from all the other corner cases I've dodged. Headbomb {t · c · p · b} 21:39, 19 June 2017 (UTC)
 * 25 edits for the JSTOR trial.
 * was due to regex order, which is now fixed.
 * is a case of GIGO.
 * has no JSTOR edit, but that's due to database filtering. Headbomb {t · c · p · b} 00:48, 21 June 2017 (UTC)
 * I do not think that instance of GIGO is a problem; replacing an incorrect mention of JSTOR with a broken template makes it easier to detect the issue. Jo-Jo Eumerus (talk, contributions) 15:35, 22 June 2017 (UTC)
 * 25 edits from the OCLC trial
 * , didn't touch an OCLC (filtering issues)
 * could be better, in the sense that it could make use of oclc, but that's what CitationCleanerBot 1 would do
 * touched a DOI, because the OCLC was in an external link which the bot is set to avoid. I plan on doing those manually.
 * shouldn't be done, I've yet to find a good solution for this however. (Follow up: This is now fixed most of the time. Corner cases such as will remain, but they are exceedingly rare). Headbomb {t · c · p · b} 12:43, 24 July 2017 (UTC)
 * 4 edits from the PMID/PMC trial. I've tested this substantially on my main account without issues, save for the same corner case as OCLC, which is a bit more common for PMIDs/PMCs than OCLCs, but I've cleaned most of them up manually and very few remain. PMIDs/PMCs are now getting hard to test because very few remain. During my testing, I found that plain PMC is problematic on its own, as many things other than PMCIDs share the same format. PMCID: PMC is safe and problem-free, as are things like PMC:0123456. I plan to exclude plain PMC from the bot and do those manually instead, and only take care of the safe ones via bot. Headbomb {t · c · p · b} 01:52, 27 July 2017 (UTC)
 * from the Zbl trial.
 * could be better, but didn't break anything.
 * is GIGO, but again the bot didn't break anything.
 * missed, but that's fine.
 * and are borked but have now been fixed.
 * from the JFM trial.
 * is borked, but I took care of it with a comment.
 * is GIGO, but the bot didn't break anything.
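Several of the trial notes above concern malformed identifiers (GIGO). For identifiers with a defined check digit, such as ISSN, garbage input can be flagged before templating. A minimal sketch of the standard ISSN mod-11 check (an illustration, not part of the bot's AWB rules):

```python
def valid_issn(issn):
    """Validate an ISSN such as '0378-5955' via its mod-11 check digit."""
    digits = issn.replace('-', '').upper()
    if len(digits) != 8:
        return False
    body, check = digits[:7], digits[7]
    if not body.isdigit() or check not in '0123456789X':
        return False
    # Weight the first seven digits 8..2, then compute the check digit.
    total = sum(int(d) * w for d, w in zip(body, range(8, 1, -1)))
    expected = (11 - total % 11) % 11
    return check == ('X' if expected == 10 else str(expected))
```

Identifiers without such structure (e.g. defunct JFM numbers, or JSTOR entries that are DOIs) can't be validated this way, which matches the discussion below about when format checks help.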

Unsafe for an automated bot (at least with my coding skills) Headbomb {t · c · p · b} 02:03, 27 July 2017 (UTC)
 * MR / LCCN / plain PMC
 * Deferring to CitationCleanerBot 3: arxiv/bibcode/citeseerx/hdl

I believe I'm ready for an extended trial, for doi, ISBN, ISSN, JFM, JSTOR, OCLC, PMID, PMCID, and Zbl. Headbomb {t · c · p · b} 17:22, 27 July 2017 (UTC)


 * Some comments:
 * Re: (mentioned above), I see this one down the line. Is that something the bot needs? Ideally the bot simply avoids JFM-tagging anything that's not within <ref>, cite, etc., as that's where it will be operating probably 99% of the time (e.g., you almost certainly won't encounter "And so it was said in JFM (id) that..." in the middle of normal wikitext, in a paragraph block). It seems odd and out of place to have to stick comments like that in the source otherwise.
 * Re: GIGO as a whole, and/or this one: is there an easy way to validate these? Either via their identifier format, an API to hit, or something? Or just excluding anything you're not certain meets the format? It seems unlikely a date is the identifier, or, more generally, anything with slashes for jstor. It might help to avoid false positives / making things worse.
 * Re: (and other issues related to parsing), it might be safer to parse the source independently as HTML/loose XML and iterate through it that way. Ref tags are fairly predictable as far as attributes go; your bot should definitely not apply a cleanup within a "name" attribute (for example), while it can feel safer applying a cleanup knowing it's in the tag content. That should take care of almost all instances where you'd otherwise risk breaking ref tags, which is where the bot is most likely to be operating. It could then be healthily and confidently suspicious when attempting to modify anything outside of a ref tag.
 * -- slakr \ talk / 04:58, 4 August 2017 (UTC)
 * 1. There's no real way of telling AWB to only look within ref tag citations, and that would miss 'further reading' and 'manual refs' bibliography sections, which are often the ones most in need of such bot maintenance. From database scans, that 100.4 Jazz FM article is the only article in need of that comment. This is both so I don't pick it up in database scans in the future, and so the bot doesn't touch it. Every other instance of JFM(:| )\d that does not refer to a JFM identifier can be bypassed by checking for \wJFM.
 * 2. Validation could be done at Help:CS1. It's a long-term project of mine, but validation helps when the identifier structure is known/well defined. I'm not saying those identifiers don't have a well-defined structure, but JFM is a defunct German identifier, and JSTOR can have DOIs as identifiers, which can have slashes in them. I could restrict the bot to purely numerical JSTORs, but in GIGO situations, the crap output often serves to flag the issue.
 * Actually, the formats for JfM and Zbl are well-defined. I can do the bad JfM/Zbl identifiers manually. I've updated the code, but since no instances remain, it can't really be tested. But it works in the sandbox. Headbomb {t · c · p · b} 20:09, 4 August 2017 (UTC)
 * 3. I certainly wish there were an easy way to tell the bot not to touch ref name tags. I've bypassed most instances with creative regex, but there's no easy way to avoid them generally with AWB.
 * Headbomb {t · c · p · b} 12:16, 4 August 2017 (UTC)
 * For number 3, try the following regex:  . That's a negative lookbehind that skips the edit if the replacement would occur after the string   but before the tag was closed out. ~ Rob 13 Talk 09:46, 6 August 2017 (UTC)
 * I can try that. I'll test it manually a few times, and then I'd like to proceed to bot trial phase 2. Headbomb {t · c · p · b} 17:11, 15 August 2017 (UTC)
 * Are you ready for a bot trial?— CYBERPOWER  (Around ) 06:56, 20 August 2017 (UTC)
 * I am yes. Headbomb {t · c · p · b} 19:03, 20 August 2017 (UTC)
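Rob's regex itself was elided from this archive, but the underlying idea, refusing to rewrite an identifier that sits inside an unclosed <ref name="..."> attribute, can be sketched in Python. Python's re module only supports fixed-width lookbehinds (unlike the .NET engine AWB uses), so this sketch checks the match's preceding context explicitly instead:

```python
import re

# If the text before the match ends with an opening <ref name=" whose quote
# is never closed, the match lies inside the attribute value: skip it.
REF_NAME_OPEN = re.compile(r'<ref\s+name\s*=\s*"[^">]*$', re.IGNORECASE)
BARE_PMID = re.compile(r'\bPMID[: ]\s*(\d{1,8})\b')  # example identifier

def safe_sub(wikitext):
    """Template bare PMIDs, but leave ones inside ref name attributes alone."""
    def repl(m):
        if REF_NAME_OPEN.search(wikitext[:m.start()]):
            return m.group(0)  # inside a name="..." attribute: no change
        return '{{PMID|%s}}' % m.group(1)
    return BARE_PMID.sub(repl, wikitext)
```

This only handles double-quoted name attributes; it is a sketch of the safety check, not a full wikitext parser.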

Phase 2

 * — CYBERPOWER  ( Chat ) 15:37, 21 August 2017 (UTC)
 * 500000000000000000000000? That's almost a mole of edits. :P -- slakr  \ talk / 02:45, 22 August 2017 (UTC)
 * Argh, my 0 key got stuck and I didn't even notice. :p— CYBERPOWER  ( Chat ) 07:18, 22 August 2017 (UTC)
 * Any update on this?— CYBERPOWER  ( Message ) 23:36, 18 September 2017 (UTC)
 * The User:Bibcode Bot revival took a bit of my time recently, as have improvements to User:JL-Bot and User:JCW-CleanerBot for WP:JCW/WP:MCW. But I should be able to give CitationCleanerBot 2 some love in the next week or two. It's just down on my list of priorities. Headbomb {t · c · p · b} 00:19, 19 September 2017 (UTC)
 * Ok. Take your time.  I'll revisit in 2 weeks. :-)— CYBERPOWER  ( Message ) 00:43, 19 September 2017 (UTC)
 * It's been 2 weeks BTW. Got any news?— CYBERPOWER  ( Message ) 23:06, 4 October 2017 (UTC)
 * Nope, but I'm hoping I'll get around to it this weekend. Still focusing on JCW-CleanerBot and Bibcode Bot for now. Headbomb {t · c · p · b} 23:09, 4 October 2017 (UTC)


 * I'm expiring this for lack of bot activity. When you're ready to proceed, you may re-open this.— CYBERPOWER  ( Chat ) 13:18, 13 October 2017 (UTC)
 * The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.