User:Razr Nation/Proposals

Articles for Improvement

 * In process.

A formal A-Class criteria
I have been searching and it's obvious that A-Class assessment is quite abandoned, so I would like to bring a fresh proposal to help develop the foundations of that class. The first thing A-Class articles need is a formal list of criteria that helps any user understand what needs to be done before an article reaches such status. Just as GAs have their criteria, and so does FA, this intermediary class should too. So, I propose the following criteria here at the Lab:

I would like to see community input on this draft for a future formal proposal. Thanks. — Hahc 21  21:11, 31 July 2012 (UTC)


 * Two thoughts:
 * The WP:1.0 team owns the assessments, so you should advertise this to them, if you haven't already.
 * IMO the biggest reason behind the abandonment of A-class is the requirement that at least two editors in the WikiProject approve the designation. Most assessment work is a solo activity.  WhatamIdoing (talk) 01:16, 1 August 2012 (UTC)


 * Personally I don't see much point in having separate A and GA classes. There's little motivation in working for an A when GA is just as satisfactory, and with a little more work the article may gain an FA. A-class is essentially a redundant classification, and it almost feels like you're losing status by trying to take it from GA class to A class. Perhaps the FAC could also be used to assess whether an article satisfies the A criteria... but I think that idea has been discussed before.


 * That being said, I think what really needs to be clarified is what exactly allows an article to clear the A bar without satisfying the FA criteria. Simply replicating WP:FACR doesn't really help in that regard.


 * Sorry if I'm coming off too negatively. Regards, RJH (talk) 14:35, 1 August 2012 (UTC)
 * I don't have a strong opinion, but I think that merging A-Class with either GA or FA would be the best way to go. Some projects do their own, most notably WPMILHIST, so you might want to drop a note there too. They would also be a good source for a good set of criteria. Kumioko (talk) 20:43, 1 August 2012 (UTC)


 * I believe that A-class provides a low-pressure aptitude test of FA-wannabee articles. There is a big gap between GA and FA standards, despite what has been said above. -  ʄɭoʏɗiaɲ  τ ¢  22:47, 1 August 2012 (UTC)
 * I believe the same. I have worked at both GA and FA and they are far too distant from each other, so deleting GA is a no-no and keeping only FA is a no-no. Also, A-Class is kind of distant from FA too, since FA may require "brilliant [prose], and of a professional standard", but A-class doesn't, and neither does GA. — Hahc 21  00:19, 2 August 2012 (UTC)
 * It depends on the wikiproject really. At WP:HWY/ACR, we provide a review that's held practically to the standards of FA, but more as a constructive approach than a raw support/oppose vote. -  ʄɭoʏɗiaɲ  τ ¢  00:26, 2 August 2012 (UTC)
 * Yes, but many wikiprojects have no A-class assessment whatsoever, and that's very sad. That's why I think we should have some sort of alternate formal way to make the assessment in case a wikiproject is not able to do so. — Hahc 21  02:12, 2 August 2012 (UTC)
 * Yes, I agree. The A-class is probably useful for projects where there is a heavy involvement by the community. For smaller projects it's not a practical approach. Regards, RJH (talk) 17:13, 2 August 2012 (UTC)

I'm not convinced of the value. I think the B and C distinction is close to useless as well. I think a four-stage categorization is just the right amount of refinement: -- SPhilbrick (Talk)  18:28, 4 August 2012 (UTC)
 * 1) Start/Stub
 * 2) Placeholder name for everything else
 * 3) GA
 * 4) FA
 * You're probably right. But I find the grainier classifications are more useful for articles rated high or top importance than they are for low importance. In the latter case, an A, B, or C class probably doesn't matter that much to a WikiProject. Regards, RJH (talk) 00:44, 6 August 2012 (UTC)


 * Personally, I like B and C. In my view, a C class article touches on most aspects of a subject but contains obvious deficiencies: lacking sourcing, or missing one key aspect of the subject (e.g. an athlete bio missing personal life, early life, etc.). B, in my view, is an article that is nearing completion. As far as A class goes, it has been completely supplanted by GA. That classification is dead, save for the military history project. There really isn't enough of a gap between B and GA, or GA and FA, to make A class useful again. This proposal is well considered, but unnecessary. Resolute 02:07, 6 August 2012 (UTC)

A proposal for future drives [if any]
As I've seen in the last drive, we had some problems with the quality of the reviews and the so-called "quick passes" and "rubber-stamp" reviews. To avoid such situations, I developed this new strategy to make sure the reviews go as usual and the articles passed really do meet the GA criteria. How will this work?


 * The revised changes are in italics.


 * Fixed proposal without the struck-out text

I consider that with some changes as above, we will improve the quality of the reviews without discouraging users from participating in the drive, and so they won't have to go through a big reassessment process next time. Oh, and by the way, this August the barnstars will be awarded to all participants of the last drive. Cheers! — Hahc 21  02:23, 1 August 2012 (UTC)
 * Additional comment for clarity: As I suppose will happen, I will clarify the role of the verifier a bit further. After the reviewer reviews the article and points out the issues, and the nominator fixes them, the verifier checks both the article and the review page (this need not be comprehensive) and gives the "good to go" signal for passing or failing the article. That means that the reviewer is the one who will close the review, not the verifier. The difference here is that the reviewer will need the approval of a verifier to close the review. This way, we avoid rubber-stamp reviews and quick passes, and automatically have a second opinion for all articles reviewed. Along with the 5-article limit, this will ensure that all articles are gradually verified within the drive's timespan, and so, as a consequence, we'll avoid the long GAR and delisting processes that nobody likes. — Hahc 21  02:39, 1 August 2012 (UTC)


 * I appreciate the thought behind this, but you're placing additional restrictions above and beyond what is normal on a regular day. I'd eliminate any "prize" for the person who does the greatest number of reviews.  Imzadi 1979  →   02:42, 1 August 2012 (UTC)
 * Well, I was thinking about that too. I think I will keep only the awards up to 25 reviews, with no special prize for "the winner". That may do the trick too. Thanks. — Hahc 21  02:56, 1 August 2012 (UTC)

I'm puzzled as to why there's a difference between the qualifications for verifiers and trusted users. Presumably, both should have excellent reviewing skills, enough so they're trusted both to do their own quality reviews and to check that other reviews are of appropriate quality. Having three categories, with the trusted one vaguely defined (and only there to allow its members to review without limit), seems redundant.

Incidentally, speaking of limits, might I suggest that each reviewer—even trusted ones—is allowed to have a maximum number of active reviews, in addition to whatever daily maximums apply? This doesn't count ones that are on hold or awaiting a second opinion, but ones that say "onreview" in the GA nominee template.

Finally, unless there's an active (and sufficiently large) crop of verifiers, the backlog could grow very quickly: if the recent drive is any indication, they'll have an average of 25 completions a day to verify. BlueMoonset (talk) 21:30, 1 August 2012 (UTC)
 * Actually, the qualifications for trusted users and verifiers are the same, and any trusted user may automatically be considered a verifier. About the 25/day, I think those are not good numbers. Look, over 30 days the drive took on some 500 articles, so if we had only one verifier, he might check some 16 articles per day (and only those that are awaiting approval). So if we had 16 active verifiers, each one would just need to check 1 article per day, which is great. — Hahc 21  00:43, 2 August 2012 (UTC)
 * Not sure where you get the 500 number. My count of the page was over 750, and that was made after the remaining uncompleted (on hold) reviews had been deleted. At 30 days, that's 25 a day, so you ought to plan for that number; odds are it'll be that busy for the first week or two at a minimum. Having 16 active verifiers would be great (and most would have to do two a day if they actually verified daily), but I'm not sure how you'd get so many, looking at the lists of reviewers this time; not everyone is going to want to verify (as witness this time, where you have five verifiers, three of whom are verifying at the level you'd need). If you're lucky, it'll be more like three verifications a day per person (with nine full-time-equivalent verifiers); unlucky, more. Do you think an hour per verifier every day sounds about right for three a day, on average? How long is it taking per review check now? BlueMoonset (talk) 01:21, 2 August 2012 (UTC)
 * Well, I was just rounding the numbers to a proposed standard of 500, as this drive had more reviews than the last three. I would expect to have 10 verifiers. As for the daily mark, I consider that checking each article would take no more than 10 minutes: verifiers are not bound to re-review the article but to verify that a proper review has been made, and that's not too complicated. I know that the first week will be hard to check, but verifiers are expected to be done by the last day of the drive, so I'm not working on the assumption that verifiers will check articles at the same rhythm at which they are reviewed. — Hahc 21  01:36, 2 August 2012 (UTC)
 * I don't think this will work if there's any significant delay in the verification step, which seems to be what your final sentence is implying. It's not fair to the people whose articles are being reviewed that they might have to wait several days or even weeks between that review seeming to be done and it actually being verified as done and the GA awarded. BlueMoonset (talk) 05:27, 3 August 2012 (UTC)

Revised proposal I designed a revised proposal with several changes to make the purpose of this methodology more explicit and avoid ambiguous interpretations. I want to know if, as it is, it can be implemented in all future drives (if any) that take place at GAN. Regards. — Hahc 21  02:22, 3 August 2012 (UTC)
 * Meh. Seems unnecessarily complicated. Who is reviewing, passing and failing articles is there for everyone to see at WikiProject Good articles/GAN backlog elimination drives/June-July 2012 — that's transparent reporting. I'd be more worried about the ones not being reported. You can have more active "Reviewers" next time but there is no need for a holding pen. How about something more positive, like a newcomer-award or a (self-identified) newbie category? Or if you want to go with the points route, give half point for having a review reviewed by one of the reviewers. That drive was a good learning experience and I hope it recruited some people who will stay on. maclean (talk) 03:36, 3 August 2012 (UTC)

Revised proposal #2 I made several changes to the proposal. It now consists only of the following: What I have changed is that reviewers will indeed close the review, and then the verifier will do the check, just as it works now. The main difference is that the reviewers group will have a review limit and the verifiers won't. — Hahc 21  17:11, 3 August 2012 (UTC)
 * 1) The drive will be divided into reviewers and verifiers.
 * 2) The reviewers will have a review limit of 5 reviews per day.
 * 3) The verifiers have no review limit.
 * I don't understand why the wording reads that all participants have a limit, and then contradicts the statement by adding that verifiers do not have a limit. Why not simply say that reviewers have a limit of five, but verifiers do not? And why not add a limit to the number of open reviews (not including held or second opinion) that anyone, even verifiers, can have? That keeps folks from hoarding "easy" reviews that they wouldn't be getting to for several days. It does add complexity, which is the best reason against it, but I don't see why people can load up with even five a day if they haven't finished what they've already signed up for. BlueMoonset (talk) 19:59, 3 August 2012 (UTC)
 * I fixed the proposal to avoid confusion and rewrote it without the stroke text for better reading. I will add the specification of open reviews. How is it now? — Hahc 21  20:17, 3 August 2012 (UTC)
 * I am sorry, but I don't feel that this proposal does anything to address the perennial problem of bad reviews. Hahc21, I appreciate that you are trying to improve things, but this doesn't do it for me. The problem can only be solved by encouraging reviewers who show signs of promise and exposing those with other agendas. Increased bureaucracy won't do this. Jezhotwells (talk) 22:19, 4 August 2012 (UTC)
 * What might be more to the point is to make sure that verifiers take on new or quick reviewers right away to see if their work meets reviewing standards. It took forever before some of the more voluminous reviewers last time got a single verification, and only then after complaints were made here about the quality of reviews from the people being reviewed. If some of those red X marks, along with negative points, had been given in the first week, it could have saved a lot of grief later on. BlueMoonset (talk) 02:18, 5 August 2012 (UTC)
 * Didn't we just go through a similar discussion a month ago? I think it's time to list this on Perennial proposals OhanaUnitedTalk page 02:52, 5 August 2012 (UTC)
 * It may not be useful, but we should make some kind of test to see how it works. I mean, a test drive of 3 days using the method, just to see if it is usable or not. If we don't evaluate the proposal in practice, we will never know if it really works. Sorry for the late response, I'm on vacation. Regards. — Hahc 21  17:19, 7 August 2012 (UTC)
 * I'll add a proposal of my own based on what I saw; Hahc and I are going back and forth, and I think keeping future drives bare bones will help cut down the backlog while minimizing those abusing it for personal gain. Wizardman  Operation Big Bear 01:17, 8 August 2012 (UTC)

Proposal #3 I've made a radical twist on the two previous versions: — Dmitrij D. Czarkoff (talk) 23:08, 7 August 2012 (UTC)
 * 1) Eliminate the backlog elimination drive.
 * 2) Twice a year the interested parties would gather wherever appropriate and award somebody they subjectively consider the best in GA backlog elimination.
 * Although I consider your proposal a good one, drives bring new reviewers who will potentially stay. We cannot be the same reviewers all the time. This last drive showed that. — Hahc 21  23:20, 7 August 2012 (UTC)
 * Awards are still there, just with no pressing period of time and no reason to outperform others on count in particular. Making editors compete for an award is good, but the exact period of time is a pressing factor that harms us more than it helps us. In effect, the threads above are devoted to discussing how many editors should duplicate one's work: two or three. With plainly subjective awards and an "Upon thoughtful and thorough analysis of everyone's reviews we decided ..." approach, the goal of making more people compete is reached without the side effect of "I like it (and my +1 in the table). Go!" reviews. — Dmitrij D. Czarkoff (talk) 23:34, 7 August 2012 (UTC)


 * The "event" aspect brings people in, especially people who are responsive to deadlines. WhatamIdoing (talk) 23:36, 7 August 2012 (UTC)
 * Deadlines are also there (see the "twice a year" part). The lacking element is count. — Dmitrij D. Czarkoff (talk) 23:37, 7 August 2012 (UTC)
 * The event aspect also brings in people who are only interested in collecting awards. I like this idea as it can be used to reward good reviewers not just fast reviewers. AIR corn (talk) 01:24, 8 August 2012 (UTC)
 * Actually, the coordinators have reached the conclusion that removing the drives is not an option yet. Events as such are needed to bring in new editors, although new changes will be implemented in the next drive to improve the quality of the reviews. The event aspect helps us find potential new great reviewers, and we need them. We cannot just erase the drives because they prompt rubber-stamp reviews: some articles have to wait for months to be reviewed, even if they met the GA criteria from the moment they were nominated. The verifier-reviewer mode allowed us to avoid such bad reviews, and people claimed it was unfair to make the nominator wait to get their article passed (and it would be just several days); well, I consider it unfair that we have to wait two to three months to get our articles reviewed: that's one reason for the drives to exist. Bad issues are fixed with proposals and improvements to the process, not by scrapping it. I appreciate Czarkoff's proposals, although I think those awards would only be given to those who participate the whole year, and then only longtime collaborators would be awarded, no newcomers. Awards are a way to encourage participation, not quantity of reviews. — Hahc 21  02:54, 8 August 2012 (UTC)
 * Who are the new reviewers? I am not saying there aren't any, but an example or two would help. Most of the names there I have seen around GA for a while. I really believe we have to be careful not to sacrifice quality for quantity, and if the price we pay for that is a backlog, then it is a relatively small one. I commend your enthusiasm and am genuinely impressed that you are attempting to work through issues brought up over the drive rather than sweeping them under the carpet, but I do not feel that the drives are accomplishing that much in the long run. They seem to burn out our best reviewers, and we end up back where we started in a few weeks (there are already nominations approaching four months waiting to be reviewed). The added verifier level is only likely to drain them even more. Maybe the best way to bring in new reviewers would be to specifically seek out potential reviewers and encourage them to get involved. Has a recruitment drive been attempted before? As to the awards proposal, if it takes off it should include an award for the "best new reviewer". AIR corn (talk) 08:46, 8 August 2012 (UTC)
 * Of course we have. I never saw TBrandley or Rp0211 before the drive. Even myself: I just started working at GAN in February, and it was my first drive ever. I still believe that drives are not as bad as some might think; they just need polish. Also, I see drives as an opportunity for the reviewers to gather together in a proper event, all together, not as an "elimination of nominations". I consider it a reunion event rather than a drive proper. Maybe, in the future, we could develop our own event, different from a drive, that could work. I like the idea of a gathering event that gives some awards to the best reviewers, but six months is too long; maybe every 3 months would be better. I don't know. I'm still developing the next drive because, as we all know, we cannot just scrap the drives: we have to develop a good substitute before removing what is implemented now. I hope that the modifications planned for the next drive will greatly improve the quality of the reviews; that's my goal. — Hahc 21  04:32, 9 August 2012 (UTC)
 * Rp0211 has been reviewing for a while (although he only resumed again recently, hence why you might not have seen him). TBrandley is probably an example of the potential problems with encouraging new reviewers to join a drive. His reviews have improved substantially since the drive, but not before 60 articles had been given superficial treatment. AIR corn  (talk) 07:00, 9 August 2012 (UTC)


 * The events remind me to consider picking up a review, rather than just commenting/clerking on others. I only review a couple of articles a year, but I am more likely to pick one up around the time of an event than during other times of the year.  WhatamIdoing (talk) 22:21, 13 August 2012 (UTC)

The GA criteria and copyright violations/close paraphrasing
Ok. I've seen this in the drive, and I think we need a talk about this matter. The Good article criteria say that an article is subject to a quick fail if it "contains significant close paraphrasing or copyright violations." Now, in the 6 points an article must meet to become a Good Article, nothing about copyright violations and close paraphrasing is mentioned. The issue is this: it is obvious that an article must not contain close paraphrasing without proper quotation and inline citation. Now, if the sources are offline and the contributor decided to copy the exact information they're citing within quotations as part of the references, is it still considered a copyright violation? I went in and overlooked the fact that the references on some articles such as Big Painting No. 6 and Drowning Girl were a bit extensive, and only thought that the contributor wanted all readers to be able to read the original text for verifiability purposes. WP:NFC says that "Extensive quotation of copyrighted text is prohibited", but does that apply to the entire article or just the article body? Does the references section fall under this provision? Also, WP:REF says that "A footnote may also contain a relevant exact quotation from the source, if this may be of interest (this is particularly useful if the source is not easily accessible)." So, on a GAN basis, how should we proceed? Regards. — Hahc 21  21:30, 28 July 2012 (UTC)


 * Well, the template does (one of the many reasons for using it).

GA review – see WP:WIAGA for criteria (and here for what they are not)


 * 1) Is it reasonably well written?
 * a. prose: clear and concise, respects copyright laws, correct spelling and grammar:
 * b. complies with MoS for lead, layout, words to watch, fiction, and list incorporation:
 * 2) Is it factually accurate and verifiable?
 * a. provides references to all sources in the section(s) dedicated to footnotes/citations according to the guide to layout:
 * b. provides in-line citations from reliable sources where necessary:
 * c. no original research:
 * 3) Is it broad in its coverage?
 * a. it addresses the main aspects of the topic:
 * b. it remains focused and does not go into unnecessary detail (see summary style):
 * 4) Does it follow the neutral point of view policy?
 * fair representation without bias:
 * 5) Is it stable?
 * no edit wars, etc:
 * 6) Does it contain images to illustrate the topic?
 * a. images are copyright tagged, and non-free images have fair use rationales:
 * b. images are provided where possible and appropriate, with suitable captions:
 * 7) Overall:
 * Pass or Fail:

Maybe GA needs to decide some of these issues that recur. MathewTownsend (talk) 21:41, 28 July 2012 (UTC)


 * Generally, reproduction of copyrighted material for the purpose of quotation isn't considered a copyright violation if the portion of reproduced material is adequate for the purpose. That leaves some amount of judgment, which is the reviewer's duty to make, but overall there is nothing wrong with quoting sources with proper attribution. — Dmitrij D. Czarkoff (talk) 23:22, 28 July 2012 (UTC)


 * Agree with Matthew: copyright violations are alluded to right at the beginning. 1a (in the reasonably well written section) says "respects copyright laws" with a link to WP:COPYVIO: it's perhaps a bit obscurely phrased, but it's definitely in there. Maybe it needs to be spelled out more clearly, although WP:RGA and WP:GACR both list copyright violations and close paraphrasing as reasons to quickfail. The amount of quoted material is another matter, which gets into fair use and how necessary the material is to the article. BlueMoonset (talk) 23:29, 28 July 2012 (UTC)
 * If I'm not mistaken, this seems like a recent addition; people might not be aware of it yet. --Rschen7754 23:56, 28 July 2012 (UTC)
 * The 1a clause on copyvio was added last summer; the quickfail criteria added this April. New reviewers absolutely should have known about it, and anyone using templates like the above, or having templates used on reviews of their own GANs, should have as well. I first encountered it on GAN submissions last August. BlueMoonset (talk) 04:38, 29 July 2012 (UTC)


 * The purpose of respecting copyright is to refrain from infringing on the creator's ability to profit from his/her work. If large quotes from an author's work on a particular subject are used, and especially when other large quotes from the same author are used across several articles, then I believe it violates the spirit and the letter of copyvio. In cases such as I Can See the Whole Room...and There's Nobody in It!, Look Mickey, Golf Ball, Girl with Ball, Torpedo...Los!, Portrait of Madame Cézanne, Artist's Studio—Look Mickey, and other related articles, the quotes seem as large as the article, especially as there are two non-free images in these articles. There is no reason why the editor could not have taken the time to understand the quotes and reworded them in his own words. MathewTownsend (talk) 23:52, 28 July 2012 (UTC)
 * Yes, but my question is still unanswered. How do we evaluate the existence of copyvio in references as long as those in these articles? Is the fact that the quotes are rather long a reason to quickfail the article, a reason to fail it per 1(a), or no reason to fail it? My confusion lies in the fact that such extensive quotes appear in the references section, or is that irrelevant? — Hahc 21  16:53, 29 July 2012 (UTC)
 * I'd be hard pressed to say some of those examples respect copyright law, or the idea that the use of the quotes is to reference information. The first example I looked at is at Big Painting No. 6, cite 4.  The text of the article is:
The source for the entire Brushstrokes series was Charlton Comics' Strange Suspense Stories 72 (October 1964) by Dick Giordano.[4][5]


 * The citation quotes:
 * "Begun in the autumn of 1965, Lichtenstein's series of Brushstroke paintings was initiated after he saw a cartoon in Charlton Comics' Strange Suspense Stories. 72 (October 1964). One scene shows an exhausted yet relieved artist who has just completed a painting. This depicts two massive brushstrokes that take up the entire surface area. The absurdity of using a small paintbrush to create an image of two monumental brushstrokes was explored in many different variations. Transforming an expressive act that was mythologized for its immediacy and primal origins into a cartoon-like, mechanically produced-lookiing image. Lichtenstein created a reflexive commentary on gestural painting."


 * Three of the four sentences are plainly unnecessary in terms of providing verification. Speaking far more in someone else's voice rather than our own is often a signal of a copyright issue. Whether it's a quick or slow fail, I'll leave to the GA regulars to decide, but it's surely a problem. --j⚛e deckertalk 17:11, 29 July 2012 (UTC)


 * IMO if the quotes are obviously more extensive than needed for the purpose of citation (that is: to verify the statement), this is an issue with the 1(a) criterion. Otherwise it is not an issue at all. In this particular case it is an issue, as all unnecessary parts of the text should be replaced with [...]. — Dmitrij D. Czarkoff (talk) 17:21, 29 July 2012 (UTC)


 * Ok, some of my concerns have been explained already, but some others have not. The first one: when Clause 1(a) defines prose, does it include the references or only the article body? Are the references only covered by Clause No. 2? — Hahc 21  01:27, 31 July 2012 (UTC)