Wikipedia talk:Historical archive/Policy/Approval mechanism

Let ordinary users approve
Let ordinary users approve articles first. Some people might abuse the system, but we can also have approved approvers and unapproved approvers.

-Hemanshu 20:06, 14 Dec 2003 (UTC)

Genuine experts may not be interested
Basically, I am not sure that we can generate enough interest, yet, on the part of "genuine experts" to act as reviewers for Wikipedia. That is my one big misgiving about this whole project. What do you think? --LMS

Dumb question: why do we need reviewers? So far, quality control seems to be a potential problem that as of yet shows no sign of turning into an actual problem. ---

See "advantages," above. :-) --LMS ---

Approvals and revisions
This is not supposed to freeze the article, but what is approved is a particular revision of the article. Will the approval be of revision n of the article, so that the viewer of version n+m can check it? --AN --- Yes, and yes, or that's my notion of the thing. --LMS

---

Will any experts be interested?
I think this could be useful, but I have some vague misgivings about how well it will work in practice. Will we be able to get enough reviewers, who are actively involved? Who in the world is going to come up with the reviewer validation criteria? The problem here is that some articles are about SF authors, others are cookbook type recipes, and what makes an expert cook does not make an expert on Jamaican Jerk Chicken...

Beyond the logistical questions, I'm a bit worried that this may have some effects on wikipedia productivity. I'm sure one of the reasons that wikipedia thrives is just because it is easy to use. But I also think that there are delicate aspects to the way the community works and is structured which are just as important. If good people with lots of real knowledge feel like they are second class citizens, they will feel less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects. On the other hand, I'm not certain that it will have adverse effects either... MRC --- Related to this is the idea that if I write an article on Jamaican Jerk Chicken, and do a thorough web search which supports what I write, why can't I be considered an expert for the purposes of review? After all, I may have a more open mind than a lot of cooks out there. --- To quote Lee Daniel Crocker from another page on Wikipedia, "Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. Facts are facts, no matter who writes about them."

I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding. There are examples in almost any field - this generation's experts scoffs at the silly ideas of a previous generation, while destined to be scoffed at themselves by a future generation.

- Tim

Replies to the above:
 * Will we be able to get enough reviewers, who are actively involved?

That's an excellent question. I just don't know.
 * Who in the world is going to come up with the reviewer validation criteria?

That obviously would be a matter of some deliberation.
 * If good people with lots of real knowledge feel like they are second class citizens, they will feel less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects.

I agree that this is a very, very legitimate concern, and I think we probably shouldn't take any steps until we're pretty sure that the project wouldn't suffer in this way.
 * Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. ... I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding.

I think there is some merit to these claims. But I'm wondering what this has to do with the proposal. Is the idea that we cannot trust authorities, or experts, to reliably state what constitutes a clear, accurate statement of human knowledge on the subjects within their expertise? Or is it, perhaps, that fine articles would be given a thumbs-down because they do not toe the party line sufficiently, whatever it is? Well, I don't know about that. Anyway, I'm not sure what the point is here.

Gotta go, the dog is taking me for a walk. --LMS

Another possible problem is that even the experts can be wrong. How can we verify accuracy, then? Even if the Wikipedia were internally consistent, it could still be wrong. Not only that, but an article can change 30 seconds after it has been reviewed. For starters, we'll have to introduce another concept from programming: the code freeze. What we can do is analyze the change frequency on articles (via a program of some kind), and when the changes in an article stabilize so that they are minor and infrequent, we copy the article, add all the named authors as authors, date it, and make it a static page on the wikipedia. Then you'll have two articles: one is the latest "stable" revision, and the other is open to flux (the rest are archived, available by request or some other equivalent).

The tough part is determining "accuracy". We could go democratic and add a voting system, but that has problems, since the majority can be wrong about as easily as an individual. Any verification system either requires money to hire people to check on authors' claims of expertise, or the creation of an elite class of authors. The alternative is to foster the sense of community and work on the trust a wikipedian can earn from fellow wikipedians, but that opens the door to any mistakes made by a trusted wikipedian being tougher to correct. So I would tend to think that the best argument for the validity of an article is stability in the face of hits. Perhaps do something like this:

A = (nr / (nh · %Δ)) × (√na) / T

where A is the accuracy factor, nr is the number of revisions (since some time), %Δ is the median (or mean if you must) % change in the article per revision, nh is the total number of hits, T is the technical factor (it could be determined by the number of authors involved, etc.; it is essentially an attempt at estimating the odds that a hit will know enough to revise the article), and na is the number of authors involved (under a radical so that it won't increase linearly). This equation is frightfully arbitrary except in the factors it considers, and a statistician should come up with a better form.
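For concreteness, the formula above can be sketched in code. This is only an illustration of the stability heuristic as just described, not an implementation anyone has agreed on; the function name and argument names are invented for the example.

```python
import math

def accuracy_factor(n_revisions, n_hits, pct_change, n_authors, technical_factor):
    """Accuracy factor A = (nr / (nh * %change)) * sqrt(na) / T.

    Argument names are illustrative; the formula itself is, as noted above,
    frightfully arbitrary except in the factors it considers.
    """
    if n_hits <= 0 or pct_change <= 0 or technical_factor <= 0:
        raise ValueError("hits, percent change and technical factor must be positive")
    return (n_revisions / (n_hits * pct_change)) * math.sqrt(n_authors) / technical_factor
```

An article with many revisions relative to its hits, many authors, and only small percentage changes per revision scores higher, matching the "stability in the face of hits" intuition.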

Magnus' implementation of my proposal looks good, but it doesn't quite implement my proposal. The role of a moderator is to approve that a reviewer has the billed (and necessary) qualifications. I don't want anyone standing over the reviewers in the sense of saying, "Yes, you were right to approve this article." In fact, a moderator could very well know nothing about the subject the reviewer addresses, but the moderator can check to see whether the reviewer does have the necessary qualifications (by visiting homepages and matching up e-mail addresses, etc.). In other words, the role of reviewers is quality assurance, whereas the role of moderators would be anti-reviewer-fraud assurance.

Otherwise, the implementation looks pretty good. This advantage is important: "By having reviewers and moderators not chosen for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references etc." That's exactly why I wanted it designed this way. Someone could be an ad hoc expert about his pet subject, and a moderator might be able to spot this.

I think ...'s (got to change that nickname, guy! :-) ) proposal really pales beside Bryce's.  If we are going to have a "community approval" process, Bryce's is far superior, because it allows us to "approve the approvers."  Frankly, I couldn't give a rat's patoot whether lots of people would approve of a given article.  I want to know whether people who know about the subject (i.e., by definition, the people I'm calling experts) approve of it.

If we did go my route, as opposed to Bryce's, I think we should have an in-depth discussion of criteria for reviewers. Basically, I think we should use criteria similar to those used by Nupedia, but modified to allow for specific expertise on specific subjects--where such expertise might not be codified in degrees, certificates, etc. Nevertheless, I think that the expertise even in those cases must be genuine. If you've read a half-dozen books on a subject of the breadth of, say, World War II, then you know a heck of a lot about WWII, and you can contribute mightily, but you ain't an expert on WWII (probably). Essentially, if we want to adopt a review mechanism in order to achieve the goals of attracting more, well, experts, and in order to have Wikipedia's content used by various reputable online sources, then we must work with the concept of expertise that they use. One rough-and-ready conception of expertise goes like this: you identify people who are experts on a subject on anybody's conception; then you determine who those people consider colleagues worth speaking to seriously and professionally. Those are the experts on that subject.

Frankly, this whole thing is starting to give me a bit of a headache, and I'm not sure we should do anything at all anytime soon. :-) --User:Larry Sanger

On Bryce's proposal: this is very interesting and I think we should think more about it. Maybe it would end up being a roaring success. In the context of Wikipedia, I can pretty easily imagine how it could be. There is one main problem with it, though, and that is that it isn't going to make the project any more attractive for high-powered academic types to join the club. They would very likely look on such a system as a reflection of a sort of hopeless amateurism that will doom Wikipedia to mediocrity. We know better, of course--but we would like to have the participation of such people. Or I would, anyway. They know stuff. Stuff that we don't know, because they're smart and well-educated. If we can do something that's otherwise non-intrusive to the community to attract them, we should. Another problem, related to this, is that the world isn't going to be as excited about this cool system as we are. They'll want to see stuff that is approved, period--presented by Wikipedia as approved by genuine experts on the subjects. If they can see this, they'll be a lot more apt to use and distribute Wikipedia's content, which in the end is what we really, really want--because it means world domination. :-)

After some more thought, I'm now thinking, "Why can't we just adapt Bryce's proposal for these (elitist) purposes?" It would go something like this. We all have our own locked pages on which we can list articles of which we approve. There is a general rule that we should not approve of articles in areas on which we aren't experts. Then, as Bryce says, people can choose who to listen to and who not to listen to when it comes to approvals. But as for presenting the Wikipedia-approved articles, it would be pretty straightforward: some advisory board of some sort chooses which people are to be "listened to" as regards approvals. This would be determined based on some criteria of expertise and whether the approvals the person renders are in that person's areas of expertise. Then we could present one set of articles as the "Wikipedia-approved" articles. Other people could choose a different set of reviewers and get a different set of approved articles.

Moreover, we could conceivably make Bryce's system attractive to "experts." We could say: "Hey, you join us and start approving articles, and your approval list will definitely be one to help define the canonical set of Wikipedia-approved articles."

This looks very promising to me. Right now, I'd have to say I like it better than Sanger's proposal! --Sanger

First, let me say that there's no technical problem in implementing both the Bryce and Sanger/Manske proposal. Just as different category schemes can coexist peacefully at wikipedia, these could too. We could even use the "expert verification" (no matter how this will work in the end) for both approaches.

For the difference between the Sanger and the Manske proposal about what moderators do, you should think about this: Say I get to become a reviewer because I know biology and a little about computers ;) So, a moderator made me a reviewer. What's going to stop me from approving a two-line article about "Fiddle Traditions in General"? If I were restricted to biology and computers, we'd have to put all articles into categories (which we don't want); otherwise the moderators would have to check every approved article, which is what I suggested in the first place. Maybe I wasn't clear on this point: I don't want the moderators to check articles that were approved by reviewers for scientific correctness; they should just act as another filter, basically approving every article they get from the reviewers, except for those with obvious errors, or with "unfitting" topics, such as foobar... --Magnus Manske - Actually, Magnus, I later came around to your thinking and neglected to mention it. I.e., I think it would be better to have the moderators always be working to check adequate qualifications.

The other possibility is to have some way to "undo" illicit approvals. This would be an enormous headache, though--anyone whose approval was undone would probably quit. --User:LMS

I lean toward something like Bryce's suggestion as well where "approved-ness" is just another piece of metadata about an article that can be used to select it, but I'd simplify it even further with a little software support. Let's not forget Wikipedia's strength: it's easy to create and edit stuff. Because of that, we have a lot of stuff that's been created and edited. We need to make it absolutely trivially easy to provide metadata about an article using the same simple Web interface. For example, have an "Approvals" or "Moderation" link which takes the user to a fill-in form where he checks boxes to answer questions like "Is this article factually accurate in your opinion?", "Is this article clear and well-written in your opinion?", "Does this article cover all major aspects of its topic in your opinion?", etc. That information can be stored in the database associated with the appropriate revisions (perhaps the software could even retain article versions for longer if they have a certain level of approval). Storing that info under the page of the Wikipedian who filled in the form as suggested by Bryce (except under program control) is a good way to do it. Then, users can judge for themselves whose opinions they value and whose they don't. This option is available to anyone who is logged in as a specific user (and not to anonymous users), so the software would know to update the "Lee Daniel Crocker/Well written" page when I checked that box.

This software could be very simple--just present the form, and add lines to the appropriate page, which is just an ordinary Wiki page. The "Lee Daniel Crocker/Well written" page, the "Larry Sanger/Copyright status verified" page, the "Magnus Manske/Factually accurate" page, and the "Bryce Harrington/Interesting subject" page are themselves subject to approval by anyone, and their value can be judged by that.

--User:Lee Daniel Crocker
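Crocker's scheme could be sketched as a simple data structure: per-user "approval pages" keyed by criterion, appended to under program control when a logged-in user submits the form. Everything here (the criteria list, names, and in-memory storage) is a hypothetical sketch for illustration, not how any existing wiki software works.

```python
from collections import defaultdict

# Hypothetical store: one "approval page" per (user, criterion), each listing
# the (article, revision) pairs that user has signed off on.
approval_pages = defaultdict(list)

CRITERIA = ["Factually accurate", "Well written", "Covers major aspects"]

def record_approval(user, article, revision, answers):
    """Record a filled-in approval form. `answers` maps criterion -> bool."""
    for criterion, approved in answers.items():
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        if approved:
            approval_pages[(user, criterion)].append((article, revision))

def approvals_for(article, revision, criterion):
    """List users who approved this revision under the given criterion."""
    return [user for (user, crit), entries in approval_pages.items()
            if crit == criterion and (article, revision) in entries]
```

Because the data lives on per-user pages, readers can decide for themselves whose checkmarks they trust, exactly as described above.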

Implementing an internal equivalent of something like Google's PageRank might be a useful alternative to the approval system. This would effectively rate an article based on how well linked it is and how well ranked the items that link to it are. Not only does this scale well (as Google has shown) and work automatically, it is effectively the equivalent of the mental heuristic humans use to establish authority (that is, an expert is one because other people say he is).
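As a sketch of that idea, the standard PageRank power iteration could be run over the internal link graph. This is a toy in-memory version under simplifying assumptions (a uniform damping factor and dangling pages distributing their rank evenly), not a proposal for the actual software.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank. `links` maps article -> list of linked articles."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for t in pages:
                    new_rank[t] += damping * rank[page] / n
        rank = new_rank
    return rank
```

A well-linked article ends up ranked above an orphan, with no human review step at all.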

In my opinion there should be a separate project that is quality controlled and has approved articles. By having a separate domain and project name, people on wikipedia automatically see the latest revision, while the people at the new site always see the approved revision. With my idea, to make it simpler for newbies, there would be just one affiliated approving project. The staff would consist of administrators, qualifiers, and experts.

To be an expert, you would send in your qualifications such as your degree, job experience, articles written, etc. The necessary qualifications would be fairly small, but your qualifications are assigned a number from 1-5 (which is always displayed in front of your name), and people may filter out articles approved only by barely qualified experts.

The qualifiers contact the university where the expert got their degree, check the archives to see if the expert really wrote the article they claimed to write, etc. The administrator's sole duty is to check the work of the qualifiers and to approve new qualifiers.

Under my idea, any articles that are "stable" (as defined by the formula above) are considered community approved, and as such, are sent to a page on the project (much like the recent changes page) which does not show up in search results and is out of the way, but accessible to anyone.

To actually get approved, the article must be checked for plagiarism by a program, grammar checked by an expert in the language the article is in, and finally, get checked by experts in the subject matter. When the article finally gets approved, all of the registered users who edited the article are invited to add their real or screen name to the list of authors, and the experts get their names and qualifications added in the experts' section at the top of the article.

Even when the article gets approved, people can still interact with it. Any user can submit a complaint about an article if they feel it is plagiarizing or is biased, or they can submit a request to the grammar checkers to fix a typo, and experts may and are encouraged to put their stamp of approval on an already approved article.

An article is rated the same as the highest expert that approves the article. For example, one searching for Brownies might see an article that is titled like this:

4:Recipe for Brownies

This means that at least one expert rated a four approved of the article. A person searching for an article specifies how low they are willing to let the ratings go.

-- Techieman
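Techieman's rating rule ("an article is rated the same as the highest expert that approves the article") and the reader-side filter could be sketched like this; the function names and data layout are invented for illustration:

```python
def article_rating(expert_approvals):
    """An article's rating is the highest rating among experts who approved it."""
    return max(expert_approvals) if expert_approvals else 0

def search(articles, min_rating):
    """Return '<rating>:<title>' lines for articles meeting the reader's floor.

    `articles` maps title -> list of ratings (1-5) of the experts who
    approved it; `min_rating` is how low the searcher will let ratings go.
    """
    results = []
    for title, approvals in articles.items():
        rating = article_rating(approvals)
        if rating >= min_rating:
            results.append(f"{rating}:{title}")
    return results
```

A searcher who sets the floor at 3 would see "4:Recipe for Brownies" but not an article approved only by barely qualified experts.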

First, I clicked on the edit link for "Why wikipedia doesn't need an additional approval mechanism" (to fix a typo), and got what appeared to be the TOP of the article, not the section that I was trying to link. I had to edit the full article to make the change.

Second, someone may have suggested this already (since I only read Talk halfway down), but we might want to test multiple approval techniques simultaneously, although I like the heuristics idea. You might then have a user option or button for showing the heuristically-approved revision of an article. Or, perhaps you could specify the approval level that you want, and the system would show you the most recent revision that meets or exceeds that level (or the best revision, if the level isn't met). Scott McNay 07:18, 2004 Feb 11 (UTC)

Scoring articles
I have a proposal to address this. It seems that the mess in VfD and cleanup is due to the disparities among wikipedians about the quality of articles in wikipedia. Those called inclusionists, including me, tend to defend keeping articles that are even less than stubs, while deletionists are inclined to maintain the quality of wikipedia as a whole even if it takes getting rid of articles that are adequate stubs but contribute to making wikipedia look cheesy. It seems to me that the problem is rather that every single article is treated equally. Some articles are brilliant prose while some are crap or bot-generated.

Anyway, my proposal is to evaluate all articles on a range of 1-5 scores: 5 means brilliant prose, 4 a peer-reviewed copyedited article, 3 a draft, 2 a stub, and 1 less than a stub or nonsense. This puts a lot more burden on wikipedians, but we really need some kind of approval system. The growth in numbers has not come with a matching growth in quality. I am afraid that the vast number of nonsense and bot-generated articles make wikipedia look like trash. It is important to remember that readers might make a quick guess about the quality of wikipedia only by seeing stubs or less than stubs. -- Taku 23:04, Oct 11, 2003 (UTC)


 * See Wikipedia approval mechanism for prior discussion of this point. See bug reports to submit a feature request. See wikitech-l to volunteer to help develop MediaWiki and code your request yourself. Please don't submit feature requests to the village pump. Martin 00:48, 12 Oct 2003 (UTC)

This is not a feature request and I have already read Wikipedia approval mechanism. -- Taku


 * The argument isn't usually about the quality of the text, it's usually about the appropriateness of the topic. These kinds of debates will go on until we formally decide whether or not Wikipedia is the appropriate place for every postal code in the world, or any random elementary school, or any professor who has written a paper, or any subway station in any town, or anyone who gets 20 hits on Google, etc. I'd try to organize some sort of formal decisions on these topics but I'm not sure I have the energy... Axlrosen 14:49, 12 Oct 2003 (UTC)

We have had this debate on the mailing list several times (this username is just a pseudonym for another username). Most people don't want an approval mechanism. That would ruin the wiki-ness of it. ++Liberal 16:26, 12 Oct 2003 (UTC)


 * The proposal is not yet another approval mechanism, but simple editorial information. Scoring is intended only to improve poorly written articles.

I also favor listing the primary author and reviewers of the article. I often check the page history to find out who is primarily responsible for the content. It is often convenient to contact such a person to discuss facts or POV issues. The article would look like this:


 * Takuya Murata is bahaba

-- The author:Taku, reviewed by Taku. The article is scored 4.

However, I guess people just don't like things that sound like approval at first glance, without looking at the details. -- Taku


 * Do I understand correctly, that the primary purpose of the scoring would be to alert other users that an article needs help? And that the score would be visible on Recent Changes and on Watchlist? Or would you want it to only be visible once you click the link to the article?


 * Hmm. If one were to have a range of scores on different aspects of the article, it would in fact amount to a software fix which would merge Cleanup into Recent Changes, thus making it (Cleanup) obsolete. I could definitely get behind that, if the developers have enough time and think it worth their while.


 * Double-hmm. While we are waiting for a software feature to allow this, why don't we try to implement this on a trial basis at Cleanup, add a score element to the comment tags, eh? I know it isn't quite what you originally suggested, but it could provide some guidance as to how such a feature would be used by editors, eh? -- Jussi-Ville Heiskanen 07:16, Oct 15, 2003 (UTC)

I think you are along the same lines as my idea. It seems the problems seen particularly in VfD originate from the situation where every article is treated equally. The truth is some articles are very poorly written and some are completely ready to be read by general readers. I would love to see features like low-scored articles not popping up in Google results. Such features would allow us to keep articles of low quality without making wikipedia look like trash.

I also think it is important to store some editorial information in an article itself to avoid duplicated information. Many articles are listed in VfD over and over again, and the reason is quite often the low quality of the article rather than a question of its existence. Unfortunately, tons of articles remain stubs for months, which however cannot be used as justification for deleting such articles.

Scoring is very similar to the stub caveat, with more extended and extensive use.

-- Taku

Keep it simple...
To me, the issue that we are trying to prevent is abuse of trust (e.g., Vandalism). It would seem to me that to do that, it would be better to improve the means by which improper changes are detected.

Rather than a complex approval process, simply make it possible for one or more users to "sign off" on any given version, and allow filtering in the Recent Changes for articles that haven't been signed off by at least a given number of registered users. Then, users on recent changes patrol can "sign off" on what appear to be valid changes, allowing reviewers who are primarily concerned about combating malicious changes to more readily identify which articles still need to be looked at, and which have already been found to be ok.

While malicious edits do occur from logged-in users as well, this does seem to be much less frequent, and if you show who has signed off on a change in Recent Changes, malicious edits become very easy to spot.

There's no need for a "disapprove" on a revision. If you disapprove, you should fix the article, or in case of a dispute, start a discussion.

- Triona 23:08, 8 Sep 2004 (UTC)
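Triona's sign-off filter could be sketched as follows, assuming a hypothetical mapping from each (article, revision) pair to the set of registered users who have signed off on it; nothing here reflects actual MediaWiki behavior.

```python
def needs_review(recent_changes, min_signoffs=1):
    """Filter Recent Changes down to revisions not yet signed off by enough
    registered users.

    `recent_changes` maps (article, revision) -> set of users who have
    signed off on that revision; `min_signoffs` is the patrol threshold.
    """
    return [change for change, signers in recent_changes.items()
            if len(signers) < min_signoffs]
```

Patrollers would then work only from this filtered list, knowing every other change has already been found to be OK by at least `min_signoffs` registered users.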

There are too many pages on this topic. I've started a list at Edit approval. I've moved that to the introduction. Brianjd 10:25, 2005 Jan 29 (UTC)

Let ordinary users approve
Let ordinary users approve articles first. some people might abuse the system. but we can also have approved approvers and unapproved approvers.

-Hemanshu 20:06, 14 Dec 2003 (UTC)

Genuine experts may not be interested
Basically, I am not sure that we can generate enough interest, yet, on the part of "genuine experts" to act as reviewers for Wikipedia. That is my one big misgiving about this whole project. What do you think? --LMS

Dumb question: why do we need reviewers? So far, quality control seems to be a potential problem that as of yet shows no sign of turning into an actual problem. ---

See "advantages," above. :-) --LMS ---

Approvals and revisions
This is not supposed to freeze the article, but, what is approved is a particular revision of the article. Will the approval be to revision n of the article, that the viewer of version n+m can check? --AN --- Yes, and yes, or that's my notion of the thing. --LMS

---

Will any experts be interested?
I think this could be useful, but I have some vague misgivings about how well it will work in practice. Will we be able to get enough reviewers, who are actively involved? Who in the world is going to come up with the reviewer validation criteria? The problem here is that some articles are about SF authors, others are cookbook type recipes, and what makes an expert cook does not make an expert on Jamaican Jerk Chicken...

Beyond the logistical questions, I'm a bit worried that this may have some effects on wikipedia productivity. I'm sure one of the reasons that wikipedia thrives is just because it is easy to use. But I also think that there are delicate aspects to the way the community works and is structured which are just as important. If good people with lots of real knowledge feel like they are second class citizens, they will fell less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects. On the other hand I'm not certain that it will have adverse effects either...MRC --- Related to this, is the idea that if I write an article on Jamaican Jerk Chicken, and do a thorough web-search which supports what I write, why can't I be considered an expert for the purposes of review. After all, I may have a more open mind than a lot of cooks out there. --- To quote Lee Daniel Crocker from another page on Wikipedia, "Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. Facts are facts, no matter who writes about them"

I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding. There are examples in almost any field - this generation's experts scoffs at the silly ideas of a previous generation, while destined to be scoffed at themselves by a future generation.

- Tim

Replies to the above:
 * Will we be able to get enough reviewers, who are actively involved?

That's an excellent question. I just don't know.
 * Who in the world is going to come up with the reviewer validation criteria?

That obviously would be a matter of some deliberation.
 * If good people with lots of real knowledge feel like they are second class citizens, they will fell less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects.

I agree that this is a very, very legitimate concern, and I think we probably shouldn't take any steps until we're pretty sure that the project wouldn't suffer in this way.
 * Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. ... I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding.

I think there is some merit to these claims. But I'm wondering what this has to do with the proposal. Is the idea that we cannot trust authorities, or experts, to reliably state what constitutes a clear, accurate statement of human knowledge on the subjects within their expertise? Or is it, perhaps, that fine articles would be given a thumbs-down because they do not toe the party line sufficiently, whatever it is? Well, I don't know about that. Anyway, I'm not sure what the point is here.

Gotta go, the dog is taking me for a walk. --LMS

Another possible problem is that even the experts can be wrong. How can we verify accuracy, then? Even if the Wikipedia was internally consistent, it can still be wrong. Not only that, but an article can change 30 seconds after it has been reviewed. For starters, we'll have to introduce another concept from programming, the code freeze. What we can do is analyze the change frequency on articles (via a program of some kind), and when the changes in an article stabilize so that changes are minor and infrequent, we copy the article, add all the named authors as authors, date it, and make it a static page on the wikipedia. Then you'll have two articles, one is the latest "stable" revision, and the other is open to flux (the rest are archived, available by request or some other equivalent). The tough part is determining "accuracy". We could go democratically and add a voting system, but that has problems since the majority can be wrong about as easily as an individual. Any verification system either requires money to hire people to check on authors' claims of expertise, or the creation of an elite class of authors. The alternative is to foster the sense of community, and work on the trust a wikipedian can earn from fellow wikipedians, but that opens up the door to any mistakes made by a trusted wikipedian being tougher to correct. So I would tend to think that the best argument for the validity of an article is stability in the face of hits. Perhaps do something like this:

A = (nr/(nh * %&Delta;)) * (&radic;na)/T

where A is the accuracy factor, nr is the number of revisions (since some point in time), %&Delta; is the median (or mean, if you must) % change in the article per revision, nh is the total number of hits, T is the technical factor (which could be determined by the number of authors involved, etc.; it is essentially an attempt at determining the odds that a hit will know enough to revise the article), and na is the number of authors involved (under a radical so that it won't increase linearly). This equation is frightfully arbitrary except in the factors it considers, and a statistician should come up with a better form.
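The formula above can be sketched directly, assuming the symbol definitions just given; the inputs in the sketch are purely illustrative, and (as noted) the form of the equation itself is arbitrary:

```python
import math

def accuracy_factor(n_revisions, n_hits, pct_change, n_authors, technical_factor):
    """A = (nr / (nh * %delta)) * sqrt(na) / T

    n_revisions      -- nr, revisions since some point in time
    n_hits           -- nh, total page hits
    pct_change       -- %delta, median percent change per revision
    n_authors        -- na, distinct authors (under a radical so the
                        factor does not grow linearly)
    technical_factor -- T, the odds that a visitor knows enough to revise
    """
    return (n_revisions / (n_hits * pct_change)) * math.sqrt(n_authors) / technical_factor
```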

Magnus' implementation of my proposal looks good, but it doesn't quite implement my proposal. The role of a moderator is to approve that a reviewer has the billed (and necessary) qualifications. I don't want anyone standing over the reviewers in the sense of saying, "Yes, you were right to approve this article." In fact, a moderator could very well know nothing about the subject the reviewer addresses, but the moderator can check to see whether the reviewer does have the necessary qualifications (by visiting homepages and matching up e-mail addresses, etc.). In other words, the role of reviewers is quality assurance, whereas the role of moderators would be anti-reviewer-fraud assurance.

Otherwise, the implementation looks pretty good. This advantage is important: "By having reviewers and moderators not chosen for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references etc." That's exactly why I wanted it designed this way. Someone could be an ad hoc expert about his pet subject, and a moderator might be able to spot this.

I think ...'s (got to change that nickname, guy! :-) ) proposal really pales beside Bryce's. If we are going to have a "community approval" process, Bryce's is far superior, because it allows us to "approve the approvers." Frankly, I couldn't give a rat's patoot whether lots of people would approve of a given article. I want to know whether people who know about the subject (i.e., by definition, the people I'm calling experts) approve of it.

If we did go my route, as opposed to Bryce's, I think we should have an in-depth discussion of criteria for reviewers. Basically, I think we should use criteria similar to those used by Nupedia, but modified to allow for specific expertise on specific subjects--where such expertise might not be codified in degrees, certificates, etc. Nevertheless, I think that the expertise even in those cases must be genuine. If you've read a half-dozen books on a subject of the breadth of, say, World War II, then you know a heck of a lot about WWII, and you can contribute mightily, but you ain't an expert on WWII (probably). Essentially, if we want to adopt a review mechanism in order to achieve the goals of attracting more, well, experts, and in order to have Wikipedia's content used by various reputable online sources, then we must work with the concept of expertise that they use. One rough-and-ready conception of expertise goes like this: you identify people who are experts on a subject on anybody's conception; then you determine who those people consider colleagues worth speaking to seriously and professionally. Those are the experts on that subject.

Frankly, this whole thing is starting to give me a bit of a headache, and I'm not sure we should do anything at all anytime soon. :-) --User:Larry Sanger

On Bryce's proposal: this is very interesting and I think we should think more about it. Maybe it would end up being a roaring success. In the context of Wikipedia, I can pretty easily imagine how it could be. There is one main problem with it, though, and that is that it isn't going to make the project any more attractive for high-powered academic types to join the club. They would very likely look on such a system as a reflection of a sort of hopeless amateurism that will doom Wikipedia to mediocrity. We know better, of course--but we would like to have the participation of such people. Or I would, anyway. They know stuff. Stuff that we don't know, because they're smart and well-educated. If we can do something that's otherwise non-intrusive to the community to attract them, we should. Another problem, related to this, is that the world isn't going to be as excited about this cool system as we are. They'll want to see stuff that is approved, period--presented by Wikipedia as approved by genuine experts on the subjects. If they can see this, they'll be a lot more apt to use and distribute Wikipedia's content, which in the end is what we really, really want--because it means world domination. :-)

After some more thought, I'm now thinking, "Why can't we just adapt Bryce's proposal for these (elitist) purposes?" It would go something like this. We all have our own locked pages on which we can list articles of which we approve. There is a general rule that we should not approve of articles in areas on which we aren't experts. Then, as Bryce says, people can choose who to listen to and who not to listen to when it comes to approvals. But as for presenting the Wikipedia-approved articles, it would be pretty straightforward: some advisory board of some sort chooses which people are to be "listened to" as regards approvals. This would be determined based on some criteria of expertise and whether the approvals the person renders are in that person's areas of expertise. Then we could present one set of articles as the "Wikipedia-approved" articles. Other people could choose a different set of reviewers and get a different set of approved articles.

Moreover, we could conceivably make Bryce's system attractive to "experts." We could say: "Hey, you join us and start approving articles, and your approval list will definitely be one to help define the canonical set of Wikipedia-approved articles."

This looks very promising to me. Right now, I'd have to say I like it better than Sanger's proposal! --Sanger

First, let me say that there's no technical problem in implementing both the Bryce and Sanger/Manske proposal. Just as different category schemes can coexist peacefully at wikipedia, these could too. We could even use the "expert verification" (no matter how this will work in the end) for both approaches.

For the difference between the Sanger and the Manske proposals about what moderators do, you should think about this: Say I get to become a reviewer because I know biology and a little about computers ;) So, a moderator made me a reviewer. What's going to stop me from approving a two-line article about "Fiddle Traditions in General"? If I were restricted to biology and computers, we'd have to put all articles into categories (which we don't want); otherwise the moderators would have to check every approved article, which is what I suggested in the first place. Maybe I wasn't clear on this point: I don't want the moderators to check articles that were approved by reviewers for scientific correctness; they should just act as another filter, basically approving every article they get from the reviewers, except for those with obvious errors, or with "unfitting" topics, such as foobar... --Magnus Manske - Actually, Magnus, I later came around to your thinking and neglected to mention it. I.e., I think it would be better to have the moderators always be working to check adequate qualifications.

The other possibility is to have some way to "undo" illicit approvals. This would be an enormous headache, though--anyone whose approval was undone would probably quit. --User:LMS

I lean toward something like Bryce's suggestion as well, where "approved-ness" is just another piece of metadata about an article that can be used to select it, but I'd simplify it even further with a little software support. Let's not forget Wikipedia's strength: it's easy to create and edit stuff. Because of that, we have a lot of stuff that's been created and edited. We need to make it absolutely trivially easy to provide metadata about an article using the same simple Web interface.

For example, have an "Approvals" or "Moderation" link which takes the user to a fill-in form where he checks boxes to answer questions like "Is this article factually accurate in your opinion?", "Is this article clear and well-written in your opinion?", "Does this article cover all major aspects of its topic in your opinion?", etc. That information can be stored in the database associated with the appropriate revisions (perhaps the software could even retain article versions for longer if they have a certain level of approval). Storing that info under the page of the Wikipedian who filled in the form, as suggested by Bryce (except under program control), is a good way to do it. Then, users can judge for themselves whose opinions they value and whose they don't. This option is available to anyone who is logged in as a specific user (and not to anonymous users), so the software would know to update the "Lee Daniel Crocker/Well written" page when I checked that box.

This software could be very simple--just present the form, and add lines to the appropriate page, which is just an ordinary Wiki page. The "Lee Daniel Crocker/Well written" page, the "Larry Sanger/Copyright status verified" page, the "Magnus Manske/Factually accurate" page, and the "Bryce Harrington/Interesting subject" page are themselves subject to approval by anyone, and their value can be judged by that.

--User:Lee Daniel Crocker
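Crocker's scheme is simple enough to sketch: each checked box on the form is appended, under program control, to an ordinary wiki subpage named "User/Criterion". The criteria names and the in-memory page store below are illustrative stand-ins, not anything in the actual software:

```python
from collections import defaultdict

# question key -> subpage suffix (criteria names are illustrative)
CRITERIA = {
    "factually_accurate": "Factually accurate",
    "well_written": "Well written",
    "comprehensive": "Covers all major aspects",
}

pages = defaultdict(list)  # stand-in for the wiki's page database

def record_approval(user, article, revision, checked):
    """Append one line per checked box to the user's criterion subpage."""
    for key in checked:
        pages[f"{user}/{CRITERIA[key]}"].append(
            f"[[{article}]] (revision {revision})")

record_approval("Lee Daniel Crocker", "Jamaican Jerk Chicken", 17,
                ["well_written", "factually_accurate"])
```

Since each subpage is an ordinary wiki page, it can itself be watched, discussed, and judged, exactly as proposed.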

Implementing an internal equivalent of something like Google's PageRank might be a useful alternative to the approval system. This would effectively rate an article based on how well linked it is and how well ranked the items that link it are. Not only does this scale well (as Google has shown), and work automatically, it is effectively the equivalent to the mental heuristic humans use to establish authority (that is, an expert is one because other people say he is).
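As a rough sketch of that suggestion, here is a toy power-iteration rank over internal article links; the damping factor and link data are illustrative, not anything Google publishes about its internals:

```python
def page_rank(links, damping=0.85, iterations=50):
    """links: {article: [articles it links to]}; returns {article: rank}."""
    articles = set(links) | {t for ts in links.values() for t in ts}
    n = len(articles)
    rank = {a: 1.0 / n for a in articles}
    for _ in range(iterations):
        new = {a: (1 - damping) / n for a in articles}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling article: spread its rank evenly
                for a in articles:
                    new[a] += damping * rank[src] / n
        rank = new
    return rank
```

An article linked from many well-ranked articles ends up with a high rank, which is the automated analogue of "an expert is one because other people say he is".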

In my opinion there should be a separate project that is quality controlled and has approved articles. By having a separate domain and project name, people on wikipedia automatically see the latest revision, while the people at the new site always see the approved revision. With my idea, to make it simpler for newbies, there would be just one affiliated approving project. The staff would consist of administrators, qualifiers, and experts.

To be an expert, you would send in your qualifications such as your degree, job experience, articles written, etc. The necessary qualifications would be fairly small, but your qualifications are assigned a number from 1-5 (which is always displayed in front of your name), and people may filter out articles approved only by barely qualified experts.

The qualifiers contact the university where the expert got their degree, check the archives to see if the expert really wrote the article they claimed to write, etc. The administrator's sole duty is to check the work of the qualifiers and to approve new qualifiers.

Under my idea, any articles that are "stable" (as defined by the formula above) are considered community approved, and as such, are sent to a page on the project (much like the recent changes page) which does not show up in search results and is out of the way, but accessible to anyone.

To actually get approved, the article must be checked for plagiarism by a program, grammar checked by an expert in the language the article is in, and finally, get checked by experts in the subject matter. When the article finally gets approved, all of the registered users who edited the article are invited to add their real or screen name to the list of authors, and the experts get their names and qualifications added in the experts' section at the top of the article.

Even when the article gets approved, people can still interact with it. Any user can submit a complaint about an article if they feel it is plagiarizing or is biased, or they can submit a request to the grammar checkers to fix a typo, and experts may and are encouraged to put their stamp of approval on an already approved article.

An article is rated the same as the highest expert that approves the article. For example, one searching for Brownies might see an article that is titled like this:

4:Recipe for Brownies

This means that at least one expert rated a four approved of the article. A person searching for articles specifies how low they are willing to let the ratings go.

-- Techieman
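Techieman's display rule (an article is rated the same as the highest-rated expert who approved it, and searchers set a floor) could look like the sketch below; the article data is made up:

```python
def article_rating(expert_ratings):
    """Highest rating (1-5) among the experts who approved the article."""
    return max(expert_ratings) if expert_ratings else 0

def search(articles, minimum_rating):
    """articles: {title: [approving experts' ratings]}; shown as 'N:Title'."""
    return [f"{article_rating(r)}:{title}"
            for title, r in articles.items()
            if article_rating(r) >= minimum_rating]

search({"Recipe for Brownies": [4, 2], "History of Brownies": [1]}, 3)
# -> ["4:Recipe for Brownies"]
```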

First, I clicked on the edit link for "Why wikipedia doesn't need an additional approval mechanism" (to fix a typo), and got what appeared to be the TOP of the article, not the section that I was trying to link. I had to edit the full article to make the change.

Second, someone may have suggested this already (since I only read Talk halfway down), but we might want to test multiple approval techniques simultaneously, although I like the heuristics idea. You might then have a user option or button for showing the heuristically-approved revision of an article. Or, perhaps you could specify the approval level that you want, and the system would show you the most recent revision that meets or exceeds that level (or the best revision, if the level isn't met). Scott McNay 07:18, 2004 Feb 11 (UTC)

Scoring articles
I have a proposal to address this. It seems that the mess in VfD and cleanup is due to the disparities among wikipedians about the quality of articles in wikipedia. Those called inclusionists, including me, tend to defend keeping articles that are even less than stubs, while deletionists are inclined to maintain the quality of wikipedia as a whole, even if it takes getting rid of articles that are adequate stubs but contribute to making wikipedia look cheesy. It seems to me that the problem is rather that every single article is treated equally. Some articles are brilliant prose while some are crap or bot-generated.

Anyway, my proposal is to evaluate all articles on a scale of 1-5. 5 means brilliant prose, 4 a peer-reviewed, copyedited article, 3 a draft, 2 a stub and 1 less than a stub or nonsense. This puts a lot more burden on wikipedians, but we really need some kind of approval system. The growth in numbers does not coincide with growth in quality. I am afraid that the vast number of nonsense and bot-generated articles makes wikipedia look like trash. It is important to remember that readers might make a quick guess about the quality of wikipedia only by seeing stubs or less than stubs. -- Taku 23:04, Oct 11, 2003 (UTC)


 * See Wikipedia approval mechanism for prior discussion of this point. See bug reports to submit a feature request. See wikitech-l to volunteer to help develop MediaWiki and code your request yourself. Please don't submit feature requests to the village pump. Martin 00:48, 12 Oct 2003 (UTC)

This is not a feature request and I have already read Wikipedia approval mechanism. -- Taku


 * The argument isn't usually about the quality of the text, it's usually about the appropriateness of the topic. These kinds of debates will go on until we formally decide whether or not Wikipedia is the appropriate place for every postal code in the world, or any random elementary school, or any professor who has written a paper, or any subway station in any town, or anyone who gets 20 hits on Google, etc. I'd try to organize some sort of formal decisions on these topics but I'm not sure I have the energy... Axlrosen 14:49, 12 Oct 2003 (UTC)

We have had this debate on the mailing list several times (this username is just a pseudonym for another username). Most people don't want an approval mechanism. That would ruin the wiki-ness of it. ++Liberal 16:26, 12 Oct 2003 (UTC)


 * The proposal is not yet another approval mechanism, but simple editorial information. Scoring is intended only to improve poorly written articles.

I also favor listing the primary author and reviewers of the article. I often check the page history to find who is primarily responsible for the content. It is often convenient to contact such a person to discuss facts or POV issues. The article would look like this:


 * Takuya Murata is bahaba

-- The author:Taku, reviewed by Taku. The article is scored 4.

However, I guess people just don't like things that sound like approval at first glance, without looking at the details. -- Taku


 * Do I understand correctly, that the primary purpose of the scoring would be to alert other users that an article needs help? And that the score would be visible on Recent Changes and on Watchlist? Or would you want it to only be visible once you click the link to the article?


 * Hmm. If one were to have a range of scores on different aspects of the article, it would in fact amount to a software fix which would merge Cleanup into Recent Changes, thus making it (Cleanup) obsolete. I could definitely get behind that, if the developers have enough time and they think it worth their while.


 * Double-hmm. While we are waiting for a software feature to allow this, why don't we try to implement this on a trial basis at Cleanup, add a score element to the comment tags, eh? I know it isn't quite what you originally suggested, but it could provide some guidance as to how such a feature would be used by editors, eh? -- Jussi-Ville Heiskanen 07:16, Oct 15, 2003 (UTC)

I think you are thinking along the same lines as my idea. It seems the problems seen particularly in VfD originate from the situation where every article is treated equally. The truth is, some articles are in very poor shape and some are completely ready to be read by general readers. I would love to see features like low-scored articles not popping up in Google results. Such features allow us to keep articles of low quality without making wikipedia look like a heap of crap.

I also think it is important to store some editorial information in an article itself, to avoid duplicated information. Many articles are listed in VfD over and over again, and the reason is quite often the low quality of the article, rather than the question of its existence. Unfortunately, tons of articles remain as stubs for months, which however cannot be used as justification for deleting such articles.

Scoring is very similar to the stub caveat, with more extended and extensive use.

-- Taku

Keep it simple...
To me, the issue that we are trying to prevent is abuse of trust (e.g., [Vandalism]). It would seem to me that to do that, it would be better to improve the means by which improper changes are detected.

Rather than a complex approval process, simply make it possible for one or more users to "sign off" on any given version, and allow filtering in the Recent Changes for articles that haven't been signed off by at least a given number of registered users. Then, users on recent changes patrol can "sign off" on what appear to be valid changes, allowing reviewers who are primarily concerned about combating malicious changes to more readily identify which articles still need to be looked at, and which have already been found to be ok.

While malicious edits do occur from logged-in users as well, this does seem to be much less frequent, and if you show who has signed off on a change in the recent changes, such edits are very easy to spot.

There's no need for a "disapprove" on a revision. If you disapprove, you should fix the article or, in case of a dispute, start a discussion.

- Triona 23:08, 8 Sep 2004 (UTC)
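Triona's filter is easy to sketch: a revision drops off the patrol queue once enough registered users have signed off on it. The threshold and data below are illustrative:

```python
def needs_review(changes, signoffs, threshold=1):
    """changes: list of (article, revision) pairs;
    signoffs: {(article, revision): set of users who signed off}.
    Returns the changes still awaiting enough sign-offs."""
    return [c for c in changes
            if len(signoffs.get(c, set())) < threshold]

pending = needs_review([("Foo", 5), ("Bar", 2)],
                       {("Foo", 5): {"Alice"}})
# -> [("Bar", 2)]
```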

Approval
We already have an approval mechanism. It's called a wiki. This measure will, I hope, never pass.Dr Zen 04:54, 14 Dec 2004 (UTC)
 * Agreed. Awful idea as is. Maybe if certain pages had changes on an hour delay where they would automatically be "approved" without human intervention. Even then it's a pretty bad idea that seems just to try to appease a few critics rather than deal with a genuine issue. Just IMHO. I'd rather see no approval system. --Sketchee 02:24, Dec 30, 2004 (UTC)

A thought: some want to defer to "expert" approval. If that's what we wanted, wouldn't we get an expert to write each article in the first place? --Sketchee 01:46, Jan 30, 2005 (UTC)

If you want approval, go to some other encyclopedia. As far as I can see, all encyclopedias are comparable to each other - the wiki is a good enough approval mechanism. Brianjd 05:33, 2005 Feb 1 (UTC)

"Better"?
"Generally, Wikipedia will become comparable to nearly any encyclopedia, once enough articles are approved. It need not be perfect, just better than Britannica and Encarta and ODP and other CD or Web resources."

Huh? Since when are we in competition with others? Brianjd 11:06, 2005 Jan 27 (UTC)

Meta
See Category talk:Editorial validation. Brianjd 05:36, 2005 Feb 1 (UTC)