User:Charles Matthews/Facto Post/Issue 5 – 17 October 2017

Facto Post – Issue 5 – 17 October 2017
Editorial: Annotations
Annotation is nothing new. The glossators of medieval Europe annotated between the lines, or in the margins, of legal manuscripts of texts going back to Roman times, and created a new discipline. In the form of web annotation the idea is back, with texts being marked up inline or with a stand-off system. Where could it lead?

ContentMine operates in the field of text and data mining (TDM), where annotation, simply put, can add value to mined text. It now sees annotation as a possible advance in semi-automation: the use of human judgement assisted by bot editing, which already plays a large part in Wikidata tools.

While a human yes/no judgement call on the addition of a statement to Wikidata is usually taken as decisive, it need not be. The human assent may be passed into an annotation system and stored. This idea is standard on Wikisource, for example, where text is considered "validated" only when two different accounts have stated that the proof-reading is correct. A typical application would be to require more than one person to agree that what is said in the reference translates correctly into the formal Wikidata statement. Rejections are also potentially useful to record, for machine learning.

As a contribution to data integrity on Wikidata, annotation has much to offer. Some "hard cases" in importing data are much more difficult than average. There are, for example, biographical puzzles: whether person A in one context is really identical with person B, of the same name, in another. In science, clinical medicine requires special attention to sourcing (WP:MEDRS) and is challenging in terms of connecting findings with the methodology employed. Currently, decisions in areas such as these, on Wikipedia and Wikidata, are often made ad hoc. In particular, there may be no audit trail for those who want to check what was decided.

Annotations are subject to a World Wide Web Consortium standard, and behind the terminology constitute a simple JSON data structure. What WikiFactMine proposes to do with them is to implement the MEDRS guideline, as a formal algorithm, on bibliographical and methodological data. The structure will integrate with those inputs the human decisions on the interpretation of scientific papers that underlie claims on Wikidata. What is added to Wikidata will therefore be supported by a transparent and rigorous system that documents decisions.
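To make the "simple JSON data structure" concrete, here is a minimal annotation following the W3C Web Annotation Data Model. The IRIs and the reviewer note are placeholder values for illustration, not real WikiFactMine data:

```python
import json

# A minimal Web Annotation per the W3C Web Annotation Data Model.
# The target IRI and the body text are hypothetical placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        # A textual body recording the human judgement call.
        "type": "TextualBody",
        "value": "Reviewer confirms the source supports the statement.",
    },
    # The resource being annotated, e.g. a paper abstract.
    "target": "http://example.org/paper-abstract",
}

print(json.dumps(annotation, indent=2))
```

Because the structure is plain JSON, a yes/no assent (or a rejection) can be stored, exchanged, and audited like any other web resource.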

An example of the possible future scope of annotation, for medical content, is in the first link below. That sort of detailed abstract of a publication adds great value, can be a target for TDM, and could be presented in machine-readable form. You are invited to discuss the detailed proposal on Wikidata, via its talk page.

Links

 * Jon Udell, blog post "Annotating to extract findings from scientific papers", 15 December 2015
 * TDM and Libraries, Virginia Tech report
 * Magnus Manske, The Whelming: Scaling up Wikidata editing
 * OCLC and Internet Archive collaborate to expand library access to digital collections, metadata and linking exchange
 * GLOW week in November: Wikidata workshops on politician info

Editor: Charles Matthews. Please leave feedback for him. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all MassMessage mailings, you may add Category:Opted-out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery.