Wikipedia talk:Wikipedia Signpost/2016-01-13/Editorial


 * To do this systematically (as we should) requires a little planning. We need a way of recording that references have been checked, and by whom. Secondly, it would be valuable to use quotations, to speed up the verification process. Thirdly, we need an automated system to evaluate the trust that should be placed in the verifiers.
 * It's important to understand the problems that these three facilities would solve:
 * It is no good verifying references at random and not recording the verification: that is inefficient, duplicating effort and missing items.
 * Given a quotation it is easy to see if the quotation supports the statement it references. And it is trivial (if an electronic source is used) to verify that the quotation is accurate.
 * Fake verification is as bad as, or worse than, fake references. However, fake verifications are likely to follow specific patterns, notably a small clique of new users or single-purpose accounts supporting relatively few editors, probably part of the same clique.
 * All the best: Rich Farmbrough, 17:04, 16 January 2016 (UTC).
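Rich Farmbrough's third point, an automated trust signal for verifiers, could be prototyped with a very simple heuristic. The sketch below is plain Python over an invented in-memory data model (a real system would query the wiki database, and the thresholds and names here are assumptions, not anything implemented): it flags verifiers who have few edits of their own and whose verifications all concentrate on a tiny set of editors, i.e. the clique pattern described above.

```python
from collections import defaultdict

# Each verification event is (verifier, verifier's total edit count, editor whose
# claim was checked). This data model is invented purely for illustration.

def suspicious_verifiers(events, min_edits=300, max_distinct_editors=2):
    """Flag verifiers that are both low-edit accounts AND only ever
    vouch for a small clique of editors."""
    editors_checked = defaultdict(set)
    edit_counts = {}
    for verifier, edit_count, editor in events:
        editors_checked[verifier].add(editor)
        edit_counts[verifier] = edit_count
    return sorted(
        v for v, editors in editors_checked.items()
        if edit_counts[v] < min_edits and len(editors) <= max_distinct_editors
    )

events = [
    ("NewAcct1", 12, "EditorX"),
    ("NewAcct1", 12, "EditorX"),
    ("NewAcct2", 40, "EditorX"),
    ("Veteran", 5000, "EditorX"),
    ("Veteran", 5000, "EditorY"),
    ("Veteran", 5000, "EditorZ"),
]
print(suspicious_verifiers(events))  # flags the two new accounts, not Veteran
```

A real scoring system would of course need more signals (account age, overlap between flagged accounts, revert history), but even this two-condition filter captures the "small clique of new users" pattern.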

All of Rich Farmbrough's very sensible suggestions are based on an unstated assumption, namely that references can be seen to apply to a specific claim (such as a sentence, or perhaps a whole paragraph). However, the encyclopedia still contains many thousands of older articles with a list of unattached citations, listed at the end of the article with no connection to any paragraph. Therefore, a vital precursor to verification is to convert unattached citations to specific (attached) references, using inline ref tags to create traceable "little blue numbers". Chiswick Chap (talk) 19:46, 16 January 2016 (UTC)


 * Another aspect is that it would be useful not just to link to an entire work (e.g. "to the entirety of the sixteen volume collection", to use the Thoreau example) but to deep-link into the relevant section, phrase or paragraph instead. For out-of-copyright and openly licensed references, this could be achieved by importing the referenced texts into the relevant Wikisource and using section links or anchors. For scholarly open-access articles, the OA signalling project is trying to build a workflow that would facilitate that. -- Daniel Mietchen (talk) 22:30, 16 January 2016 (UTC)

Demur. By far the bigger problem is that some editors see "ambiguity" in a source and parse it to reach their own preferred result. Short "quotations" do not necessarily represent the view of any author properly, and the result is that an "automated system" using such quotations will simply rubber-stamp horrid misinterpretations of what the work actually states as fact. The same goes for the acceptance of "facts in headlines" as proof that the claim appears in the body of a source.

Rather, I suggest that:


 * Edits by "flagged authors" including, but not limited to, IPs, authors with under 300 article edits not marked as "minor", and any editors found to have committed plagiarism or multiple major copyright violations, should be placed in "pending" status until a reviewer examines them for accuracy and copyright status. All edits citing foreign-language sources, video sources without accurate transcripts, and "not findable" sources should also be placed in that limbo.

Which, I suggest, is more readily doable than the initial suggestion above. Collect (talk) 14:06, 17 January 2016 (UTC)


 * We find a problem, and the first reaction is to use a few more terabytes of memory to provide a global resource against which unnamed virtual editors will run complex SQL matches. No. Let's see what is practical: in my everyday editing I verify dozens of references using online sources as I clean up sections or zap cn's. I use sfn in the main and place the simple tag |p=34 or |pp=34-35 as needs be; I could add four tildes, or |v=exact, |v=dubious, |v=POV or |v=fail, at the same time. We need a simple traffic-light code to inform the reader of the quality of the reference. So with a little help from our coding bunnies we could start to address the problem. But it is a tool we need before we can start. Clem Rutter (talk) 23:32, 17 January 2016 (UTC)
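Clem Rutter's traffic-light idea amounts to a small, fixed mapping from a proposed |v= parameter value to a display status. The |v= parameter does not exist in the sfn template; it, the value names and the colours below are all assumptions taken from the comment above, sketched here in Python only to make the proposal concrete.

```python
# Hypothetical traffic-light mapping for a proposed |v= parameter on sfn.
# None of these parameter names or colours are implemented anywhere.
VERIFICATION_COLOURS = {
    "exact":   "green",  # quotation verified word-for-word against the source
    "dubious": "amber",  # checked, but the source only weakly supports the claim
    "POV":     "amber",  # checked, but the reading of the source is contested
    "fail":    "red",    # checked, and the source does not support the claim
}

def traffic_light(v_param):
    """Return a display colour for a |v= value; grey means not yet checked."""
    return VERIFICATION_COLOURS.get(v_param, "grey")

print(traffic_light("exact"))  # green
print(traffic_light(None))     # grey
```

The appeal of the scheme is exactly this simplicity: a verifier records one extra parameter while already editing the citation, and readers get an at-a-glance quality signal without any separate database.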

While we also need improved tools, I think it very important to point out that verification foremost requires detailed work by editors who have some domain knowledge of the subject they are checking, or who are at least willing to do the legwork to pick it up along the way. Those editors cannot check just the validity of the references; they actually need to look each source up, read its content and compare it to the Wikipedia content using it as a source. This is a very tedious and time-consuming process, which also requires a different attitude from the "tag and forget" approach. With regard to the latter, we probably also need a more sensible (or at least prioritized) approach to tagging. Right now I still see a lot of purely "formalistic" tagging, meaning editors, often without any domain knowledge of the material they are surveying, simply tag any paragraph or statement that doesn't come with a footnote. They do not seem to spend time on the article to figure out whether the paragraph just contains obvious domain knowledge, where sourcing isn't necessarily needed, or whether a neighbouring footnote might cover that material as well. This can result in unnecessarily long maintenance lists containing many articles that upon closer inspection have no real sourcing or correctness issues. So we sometimes need a more sensible approach and probably also a way to prioritize sourcing/verification issues better.

There is one tool which has not been all that popular on the English Wikipedia but which may be important for verification/consolidation attempts, and that is flagged revisions. So far they are mostly used as protection against vandalism. However, the original idea also included a "verified" flag indicating that a particular version of an article was fully fact-checked/verified/sourced at that point. So it might be a good idea to revisit the idea and options of flagged revisions. Maybe an even more fine-grained tool is necessary, where you can break it down to individual sections or footnotes. However, tools which are very complex and have a lot of options can in practice become more of an obstacle than a help for the majority of editors.

Another important aspect of a verification drive is getting a large number of editors to help with it. For that, (online) access to the sources is critical. So we probably need to extend (and possibly simplify) the use of the Wikipedia Library and the resource exchange request. Ideally all regular Wikipedia editors should have access to all (online) resources.

Finally, there isn't necessarily a contradiction or conflict of goals between expansion and consolidation/maintenance; they go hand in hand, as we verify articles while we expand them.--Kmhkmh (talk) 02:50, 20 January 2016 (UTC)