User talk:Jheald/Archive 12

January 2017
Please don't add or restore poorly sourced material on living people that is sourced only to tabloids. Per WP:BLPSOURCES this is not permitted. As regards your edit summary, the RfC is about a blanket ban on the Daily Mail as a source on Wikipedia. There is an existing consensus that it should not be used on BLPs, which this is. In addition to the Mail, the Daily Record is also a poor source. Thanks for your understanding. --John (talk) 19:55, 20 January 2017 (UTC)
 * WP:BLPSOURCES specifically refers to tabloid journalism, not 'journalism in tabloids'. There is a difference. "Tabloid journalism is a style of journalism that emphasizes sensational crime stories, gossip columns about celebrities and sports stars, junk food news and astrology" as our article puts it.
 * The facts I'm citing are mundane and routine, not sensationalist or gossipy. There's no reason to believe anything but that they are straightforward and accurate.  Isn't it better to indicate where they came from, so people can judge for themselves, rather than to suppress their provenance?  Jheald (talk) 20:46, 20 January 2017 (UTC)
 * I do see what you mean; most of the material is not especially controversial. I had a quick look and there really is not much material on this person. The Daily Mail is definitely and unequivocally tabloid journalism, but just as a liar occasionally tells the truth, there may be no harm in using this mundane material. On the other hand, if this stuff is only available in these two low-quality sources, is it really important that we carry it? Could we compromise on restoring the material but with a source improvement tag, giving editors a month or two to find better sources? I'm uncomfortable having an article on a BLP so largely based on such poor sources. I did find one short article in the Scotsman which might help a bit. Thanks for negotiating in good faith, and I am sure we will work something out. --John (talk) 10:24, 21 January 2017 (UTC)
 * One starts to run into problems of potential citogenesis. I strongly suspect the material on Mary Marquis in that Scotsman article was in fact drawn heavily from our article as it stood in January 2011  -- for example the Lorraine Kelly quote (since removed from our article), the "first person seen on Border TV", "but it became her career for the next twenty-seven years", etc.  Amusingly, somebody offered a correction a day later, which was then rebutted a day or two after that, because our own article had had it right all along.
 * Similarly this 2016 Daily Record piece is almost an exact retread of our article.  (Not that you like the Record, but there you go).
 * Even if this Wikipedia-involved citogenesis weren't the case, I think one could make quite a general proposition that an actual interview with the subject herself, even in the Mail, is actually a rather cleaner / more informative source for where material may originally have come from, than a subsequent "cuttings piece" in a later newspaper, whose own original sources may be opaque at best.
 * Obviously one needs to be absolutely careful with living people to "first do no harm", and it's a good thing that WP:BLP is now a lot more rigorous than it was in 2009. But in this case, I do feel some reassurance that the article appears to have had an edit from a close relative, possibly MM's husband (diff) -- who removed the Lorraine Kelly quote (and a record cover) -- but seems to have been comfortable enough with all the rest.  Jheald (talk) 11:45, 21 January 2017 (UTC)
 * I've posted at article talk and I suggest we continue the conversation there. --John (talk) 16:09, 21 January 2017 (UTC)

ArtUK links
Hi, thank you for updating the BBC Your Paintings links to the new ArtUK format on so many artist pages. Obviously this is a welcome improvement, but are you aware that for a handful of articles this seems to have generated an error message, which is displayed before the link? Even with the error message the link still works, so this is a matter of article appearance rather than functionality. The pages affected, from my Watchlist, are listed below. Apologies if this has nothing to do with your edits, but I thought it best to contact you as a first step to finding a solution to this. 14GTR (talk) 14:04, 20 February 2017 (UTC)
 * Vivian Pitchforth
 * Flora Lion
 * James Boswell (artist)
 * Phoebe Anna Traquair
 * Lucy Kemp-Welch
 * Anna Zinkeisen
 * Olive Edis
 * Mary Adshead
 * Thomas Freeth
 * Norah Neilson Gray
 * James Cowie (artist)
 * Charles Mahoney (artist)
 * Rodrigo Moynihan


 * Hi this happened because I made a change to the Art UK bio template, but my change didn't work.  I've reverted the template, but some of the wiki pages have been cached using the bad version.  To make the red message go away, hit edit on the External links section & preview & save, without changing any text.  That will refresh the cache for the page, using the good version.  In time, the red messages will also go away by themselves, as other edits are made to the page, or the page cache is routinely updated.  But I don't know how long that may take to happen.
 * I've learnt my lesson, and am now developing updates for the template in Art UK bio/sandbox -- specifically, one to draw and display the number of paintings automatically from Wikidata.
 * I'll go through the pages above with the null edit trick, and hope that that will account for most of them. Sorry to have messed up like this!  (I thought I had reverted the bad version of the template within a minute, so was very surprised to see it so much propagated.)  Jheald (talk) 14:17, 20 February 2017 (UTC)


 * I just checked your contribution history and saw the edits you made at Module talk:WikidataIB - apologies for not checking that first before posting here. I wouldn't say your change to the template didn't work - I have hundreds of British artists on my watchlist and only a handful have the error message. Many thanks. 14GTR (talk) 14:28, 20 February 2017 (UTC)

Wikidata
Hi Jheald, I noticed your question at Module talk:Wikidata. You might want to try out Wikidata as well. Cheers! thayts 💬  14:00, 11 March 2017 (UTC)

Woughton on the Green - really?
Are you sure that the part of the original Woughton parish left behind when Old Woughton left is really called "Woughton on the Green"? The councils' websites [BC and PC] both still say "Woughton" or "Woughton Community Council" - see http://www.woughtoncommunitycouncil.gov.uk/council-information/who-are-we/ for example. The name you give is hard to believe, since WotG itself is in Old Woughton. Citation please. --John Maynard Friedman (talk) 11:44, 30 March 2017 (UTC)
 * It does seem crazy, doesn't it. The source I was following was this page in the ONS database, which is the central official repository for such names and boundaries (and which the Ordnance Survey also follows). The downloadable version of the database gives a precise date for the name-change (from "Woughton"): it apparently came into effect on 22 May 2014.


 * According to the rows in the downloadable database, the name for the original parish was apparently officially "Woughton on the Green". With effect from 1 April 2011 there was a modification to the boundary (under The Milton Keynes (Reorganisation of Community Governance) Order 2011), but the name remained "Woughton on the Green".  Then there was the split, with effect from 1 April 2012 (under The Milton Keynes (Reorganisation of Community Governance) Order 2012), and the name of the new CP was "Woughton".  Then, with effect from 22 May 2014, the boundaries apparently remained the same, but the name changed back to "Woughton on the Green".


 * Unhelpfully, the ONS database cites The Milton Keynes (Electoral Changes) Order 2014 as authority for the name change -- but the name doesn't appear to be in the PDF.


 * Conceivably, there might be an 'official' name for legal purposes that might have needed continuity, for the purposes of some legal trust or bequest or something. Or the ONS may just have got it wrong, I don't know.


 * Commons (I think) tries to have category names that quite closely track the Ordnance Survey names, so that it can find and use official boundaries to move incoming pictures that have coordinates (eg from Geograph, or from smartphones) into the right categories. So I was trying to make sense of "Woughton on the Green" there, and then reflect what I had learnt to Wikipedia and Wikidata.
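 * (Deciding whether a photo's coordinates fall inside a given boundary is, at bottom, a point-in-polygon test. A minimal ray-casting sketch in Python -- the "parish" coordinates here are made up for illustration, not real Ordnance Survey boundary data:)

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside the polygon?

    polygon is a list of (lon, lat) vertices; edges wrap around.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle for each edge a horizontal ray from the point crosses.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular "parish" boundary
parish = [(-0.5, 51.5), (0.5, 51.5), (0.5, 52.5), (-0.5, 52.5)]
print(point_in_polygon(0.0, 52.0, parish))   # inside -> True
print(point_in_polygon(1.0, 52.0, parish))   # outside -> False
```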


 * But I agree it does seem very confusing, and if you can make any more sense as to what's been going on, I'd be very grateful. Jheald (talk) 12:34, 30 March 2017 (UTC)
 * This is the same ONS that thinks that the population of Milton Keynes is two-thirds of its actual figure because they divided it into Bletchley and ... Milton Keynes! So a definite cock-up. Pending a more reliable source (at least Borough of Milton Keynes), I will do a partial revert. Meanwhile, would you repost your reply to me at talk: Woughton (parish), please? --John Maynard Friedman (talk) 12:59, 30 March 2017 (UTC)

Orphaned non-free image File:The Changes closing titles.jpg
 Thanks for uploading File:The Changes closing titles.jpg. The image description page currently specifies that the image is non-free and may only be used on Wikipedia under a claim of fair use. However, the image is currently not used in any articles on Wikipedia. If the image was previously in an article, please go to the article and see why it was removed. You may add it back if you think that that will be useful. However, please note that images for which a replacement could be created are not acceptable for use on Wikipedia (see our policy for non-free media).

Note that any non-free images not used in any articles will be deleted after seven days, as described in section F5 of the criteria for speedy deletion. Thank you. Neil S. Walker (t@lk) 15:19, 1 July 2017 (UTC)


 * Fair enough. For some reason it was the closing titles, over the cave, that I found particularly enigmatic and haunting when I remembered back to the series, which is why it was the name card from the end credits that I used.  (Probably also it being the last thing to be seen from each episode, which therefore stayed in the mind, like an old-school DW cliffhanger).  But the opening titles with their suddenly frozen stills were also very striking, and probably more in line with WP conventions (as well as being perhaps a brighter, more engaging, and more dynamic image to stand for the series) -- so I'm happy enough to see the change made.  Jheald (talk) 16:10, 1 July 2017 (UTC)

Wikidata question
Reading your comments in a couple of places, it looks as though your feeling on Wikidata is that it could be a boon to the project but is currently too hard to use. Where exactly on the spectrum are you between thinking it's far too hard to use, and thinking it's almost ready? I ask because I think this will be a key point in any RfC that will come up. I could see some people arguing that they shouldn't have to leave the wikitext editing window to modify Wikidata items; that seems too insular to me -- we don't think that way about Commons, after all. However, at the other end of the spectrum, Alsee's reaction to Cite_Q that he describes here seems like the reaction most editors would have; it's hard to imagine many editors finding that a natural way to handle citations, though many of the computer people among us would applaud the goal of centralizing the data. Do you think an RfC that asks something like "Before edits to Wikidata are allowed to modify an article, how easy must it be for en-wiki editors to observe and modify those Wikidata items" is the right way to address this point? Mike Christie (talk - contribs - library) 11:31, 12 September 2017 (UTC)


 * Hi Mike. Tricky question.  I don't know whether you saw the bit in the comment I made at WP:VPP about the Art UK bio template (diff).  There is no way I would want to switch that template back to using predominantly locally-held data, and I would resist to my utmost any attempt to force that.


 * Similarly, I'd have to say that I think Cite_Q is a step forward. Yes, the citation can't be edited locally in Wikitext any more.  But on the other hand, it's hard to see how the form-based editing on Wikidata could be made any easier given what it is.  It's easily accessible from the formatted citation via the Q-number link, and for new editors it may even be easier than citation templates to deal with -- it's not so different after all from Visual Editor, whose citations editor is increasingly popular. (It wouldn't surprise me if the Cite_Q method was already integrated into VE).  Of course a lot of people hate VE as well, and it isn't something I ever tend to use myself, so that may or may not be such a helpful point.


 * And of course there's also the technology coming along to modify templated values directly from the reading screen.


 * I do have worries about potential vandalism; and that wikidata edits aren't well-enough integrated into change tracking tools, so that e.g. watchlists/page histories/recent changes seemingly can't be configured to show (just) the relevant wikidata edits.


 * But I think I'd say that there does need to be some use of Wikidata on Wikipedia if progress is to be made in these areas; and that a blanket ban is the wrong approach. And that there are areas of the project, like Art UK bio and perhaps Cite_Q as well, with data that can be patrolled regularly on Wikidata to ensure continued conformity with external sites, where the downside risks are limited but there is real upside value to holding the data in the more structured form.


 * I appreciate that's only a partial answer to your question; but I hope it's a start. Jheald (talk) 12:31, 12 September 2017 (UTC)


 * Also worth noting that Cite_Q is an experimental template with only 225 uses so far. Jheald (talk) 12:33, 12 September 2017 (UTC)
 * Thanks; I'll have a think about your comments. I think I'll try putting Cite_Q in an article and see how it goes. Mike Christie (talk - contribs -  library) 23:23, 12 September 2017 (UTC)
 * Well.... creating Wikidata citations (in my limited experience) is a real pain -- because you have to create Wikidata items for the values of all or most of the fields (except the ones where you don't). Maybe I'm just being uncourageous, or haven't yet dared experiment enough, but it's one part of Wikidata that I would leave to automatic creation & the machines, at least so far.  I suspect Cite_Q is okay for minor edits, if somebody else has already created the basic citation; but I'm not sure I'd want to populate one from scratch.  Jheald (talk) 23:30, 12 September 2017 (UTC)


 * How would a machine create a cite, though? E.g. I want to cite John Smith's birthdate; I have the physical book in front of me with the cite. Are you saying it's not going to be feasible for most users to build a Cite_Q from scratch? Or if there's a Cite_Q in the article, but I can see they have some details wrong so I want to update the Cite_Q to refer to my edition and correct the information, is that feasible for most users? I don't see a bot ever being able to do either of those things, and surely those represent most of the tasks we face when citing. Mike Christie (talk - contribs - library) 23:34, 12 September 2017 (UTC)


 * Okay, so the key pages here are probably cite_Q and d:Help:Sources, with d:Wikidata:WikiProject Books and d:Wikidata:WikiProject Periodicals also useful.
 * The main thing you're probably going to have to do is to create a Wikidata item for the book (if the item doesn't already exist), or for the article, if you wanted to cite an article. The wikidata items linked from the cite_Q page give some good examples to work from.  The relevant "book" and "article" sections of the Help:Sources page also cover the ground, as these are also items that need to be created for internal cites from within Wikidata.  I shouldn't worry too much about the distinction between a "work" and an "edition" for a book unless you know there were multiple editions and there is one that you particularly want to cite.  The WikiProjects give a more complete reference list of the sort of properties that could be put on a book or article item.
 * You might also need to create an item for the book's publisher or the article's journal, if they do not already exist.
 * It's not necessary to create an item for a book or article author if one doesn't already exist: d:Property:P2093 (author name string) can be used instead, for authors that are not notable; but I think there's no such alternative (at least not yet) for a book editor.
 * That's quite a lot of work to do manually for one cite. But just as there are already tools on WP to build citations from an ISBN or a PubMed number, so I think there's been quite an effort to try to build machinery to handle at least some of the above item creation automatically, particularly for scientific journal papers (in partnership with ContentMine, I believe).  Given existing templated citations, that machinery may be able to identify the journal titles, publishers, even authors etc and automatically populate the relevant fields of the right items.
 * Once you've got the item for the book or the article created (and all the subsidiary items needed to populate the fields of those items), then Cite_Q would appear to make it reasonably easy to generate a cite for the book as a whole, or a particular chapter or page number.
 * Let me know how you get on! Jheald (talk) 00:33, 13 September 2017 (UTC)
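 * To summarise the steps above as a sketch: a minimal book item boils down to a handful of property/value pairs, with P2093 (author name string) as the fallback when no author item exists. The property IDs below are real Wikidata properties, but the helper function itself is purely illustrative, not part of any Wikidata tool:

```python
# Sketch of the minimal statements for a Wikidata book item, as described
# above. The helper is hypothetical; only the property IDs are real.

def book_item_statements(title, author_qid=None, author_name=None,
                         publisher_qid=None, year=None):
    """Return {property: value} pairs for a minimal book item."""
    stmts = {"P31": "Q47461344",       # instance of: written work
             "P1476": title}           # title
    if author_qid:
        stmts["P50"] = author_qid      # author (linked item)
    elif author_name:
        stmts["P2093"] = author_name   # author name string (no item needed)
    if publisher_qid:
        stmts["P123"] = publisher_qid  # publisher
    if year:
        stmts["P577"] = str(year)      # publication date
    return stmts

# Author with no Wikidata item of their own: fall back to the name string.
print(book_item_statements("An Example Book", author_name="A. N. Other",
                           year=1999))
```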
 * Will do; thanks for the help. I did create a book and some associated data on Wikidata some time ago in my first attempt to understand it; I don't think Cite_Q existed back then, but I recall asking for something like it. Mike Christie (talk - contribs -  library) 01:00, 13 September 2017 (UTC)

Another Wikidata comment
Hi Jheald, thanks for your responses to my comment about Wikidata. I hope it did not come across that I was dismissive of what Wikidata is trying to do, because I understand its potential quite well; I've even contributed a few hundred edits to the project myself. I believe the mistrust on en.wikipedia towards Wikidata has two causes, one solvable & the other not easily solvable. The solvable one should be obvious: people at the Foundation are trying to force Wikidata on other projects like en.wikipedia, without any regard to the opinions of volunteers here. (That is a recurring theme with WMF employees, sad to admit.) Were there a Wikidata advocate visibly working on Wikipedia to explain how Wikidata can make their editing easier, that would help things along. (And for all I know, you may be doing that this very moment.) The less solvable one is based on mentalities: contributors at Wikipedia tend to be people taking facts & combining them into an intelligible narrative, while contributors at Wikidata are taking narratives & systematically extracting facts from them. These goals are obviously at cross-purposes, & it's no surprise that members of one group are frustrated at how the other organizes content. Not to mention that after one has finished their work, their results reveal much of value that does not easily migrate to the other's domain.

But to my primary topic, about consuls & Wikidata. I mentioned this more as an example of how a non-Wikidata person comes to that project with a problem to solve or a goal to achieve, is easily baffled, & decides to just walk away. Wikidata could provide a lot of insights on Roman social history, for example, if one knew how to enter the data -- let alone query & present it correctly. But to do that, one would need to sift through the vocabulary of Wikidata & hopefully find it before boredom or distraction blocked further progress. The more baffling a project, the more likely only nerds with a specific interest in the subject will take it on. To return to my post in the Village Pump, I mentioned OpenStack for a reason: in over 20 years of working with computers, it is the most complex software I have encountered, yet the worst explained. I spent a month learning how to install it so it works, & might have learned sooner had I the benefit of even a screen shot to see the result of a successful install. While I had help with it, the tendency with computer people is to fix the problem without taking the time to explain how they did it. And I have much the same feeling with Wikidata: beyond a few simple things, it remains a dark forest full of grues.

(Yeah, I know, ask more questions. That works when I'm confident I'll be answered -- or I'll understand the answer.)

What I have been thinking of doing is to write up a use case for Roman consuls & present it. Start with a description of what the object would include, such as the need to indicate a colleague. And include the known exceptions to that rule -- some had more than one colleague, while some were sole consuls. Then the nature of dates: while a consul held office between fixed dates of a given year -- which changed thru history, another complication -- a large number do not have definite dates: see List of undated Roman consuls. This means there must be support for "fuzzy" dates. (Actually, if one could plot out all of these "fuzzy" dates, then overlay them with a grid of known dates of consuls, the result might make a decent amount of progress towards defining the dates of some of these undated consuls. But I'm getting ahead of myself here.) And that a given assertion in the object may not be reliable: many parent-child relationships are not attested in a primary source, but are deduced, inferred, or a speculative guess. Of course, this would need to be presented so that the experts in the Wikidata thesaurus & data tables would be encouraged to help.

And then, there's the matter of providing a source for this data. It exists in the Wikipedia, but I haven't figured out how to translate it over to Wikidata. Cite_Q, which you mention to Mike Christie above, might have the answer.

Until I draft a use case, I am keeping Wikidata in mind: I'm trying to write navigation boxes for the consuls of imperial Rome in a predictable format, & explain things in a standardized manner so a script could scrape the articles & populate the relevant fields correctly in Wikidata's objects. (It's amazing how many stock sentences one can accumulate in writing dozens of similar articles.)

Anyway, such are my thoughts. When I complete my current task of cleaning up the consular list & related articles, then I'll probably turn my attention to Wikidata & populate the data over there. Hopefully Wikidata won't seem as forbidding then as it was when I poked around there a couple of years ago. -- llywrch (talk) 21:36, 13 September 2017 (UTC)


 * Hi llywrch, sorry to be slow to get back to you.
 * Interestingly, I see that there's a d:Wikidata:WikiProject Ancient Rome over at Wikidata. At first glance it's not the busiest or most developed of projects, but I see that somebody did start a thread last year on the talk page about systematically adding some data for the consuls - see d:Wikidata_talk:WikiProject_Ancient_Rome.  It looks like they didn't take it any further though. (Or at least, haven't yet).


 * I think you're right that it is a bit of a different job, building up Wikidata, than building up Wikipedia -- a bit more like working on a list article here, perhaps. I described how my own feelings and instincts are a bit different when I'm on each of the projects in one of my earlier posts at WP:VPP.


 * You're also right that there's a lot to get to grips with, and there's a crying need for a lot more & a lot better documentation -- both introductory how-to and documenting best practice. Reference pages like the front page of d:Wikidata:WikiProject Books are quite useful; but can often be quite hard to find, or may not be entirely up to date; and there are certainly a lot more of them (and "showcase items" to go with them) needed in many more subject areas.  d:Wikidata:WikiProject British Politicians is quite an interesting one, that is trying to build up a full database of all the current and past members of Parliament and the Lords.  EveryPolitician is also useful on a wider canvas.  Both of these may turn up similar issues to the consuls -- eg joint office holders etc -- so can be useful to see and discuss how people may be thinking about similar issues in parallel settings.


 * It's a useful exercise to gather together a reference page of the key properties and qualifiers that may be most relevant to a project, eg the consuls.


 * In a lot of cases, there probably are already ways to get quite a long way towards what you're looking for, as seen already in the items returned in the query I posted. The qualifier d:Property:P1706 "together with"  seems to be the established way to indicate the 'other' holder of an office.  It can be given the special value "no value" to indicate distinctly that there was no other (i.e. not just that the data hasn't been populated).


 * Fuzzy dates are possible. "circa" can be indicated with the qualifier d:Property:P1480 "sourcing circumstances" with the value d:Q5727902 "circa".  There are also some more values to indicate "disputed", "uncertain" etc.
 * It's also possible to specify just a century (or even a millennium), together with qualifiers for "earliest date" and "latest date".
 * Or one can even just leave the value for a date as the special value "some value", and give more information through qualifiers.
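 * For what it's worth, the "unknown value plus qualifiers" pattern above can be sketched as a data structure. The dict below loosely follows Wikidata's JSON claim model (a mainsnak with snaktype "somevalue", qualified by P1319 "earliest date" / P1326 "latest date" / P1480 "sourcing circumstances"); it's an illustration of the shape, not the exact API format:

```python
# Illustrative sketch of a Wikidata-style date claim whose main value is
# the special "somevalue" (unknown), constrained by qualifiers. The
# property and item IDs are real; the dict layout is a simplification.

def fuzzy_date_claim(prop, earliest=None, latest=None, circa=False):
    claim = {"mainsnak": {"snaktype": "somevalue", "property": prop},
             "qualifiers": {}}
    if earliest:
        claim["qualifiers"]["P1319"] = earliest    # earliest date
    if latest:
        claim["qualifiers"]["P1326"] = latest      # latest date
    if circa:
        claim["qualifiers"]["P1480"] = "Q5727902"  # sourcing circumstances: circa
    return claim

# A consul whose start of office (P580) is only known to fall in a range:
print(fuzzy_date_claim("P580", earliest="-0050-01-01", latest="-0040-01-01"))
```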


 * ... leaving this for now because I have to go; but more to add in due course. Jheald (talk) 17:36, 14 September 2017 (UTC)

Girth of the Chest listed at Redirects for discussion
An editor has asked for a discussion to address the redirect Girth of the Chest. Since you had some involvement with the Girth of the Chest redirect, you might want to participate in the redirect discussion if you have not already done so. Steel1943 (talk) 05:07, 15 September 2017 (UTC)

Wikimeet
Good to get re-acquainted at the meetup yesterday. FYI, I got the tyre changed before my hands froze (Arthur Beale is recommended for arctic clothing). This morning, my garage found that there was a nail in the tyre and explained that it often happens after cars are serviced there as there are lots of building works in the nearby streets and so lots of nails too. Hmm... Andrew D. (talk) 15:14, 11 December 2017 (UTC)
 * Very glad to hear you got home safely. Did feel a bit guilty as I walked on towards the nice warm tube...  Hope you didn't get any more nails after they replaced the tyre... Jheald (talk) 15:20, 11 December 2017 (UTC)

Facto Post – Issue 7 – 15 December 2017
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color: #7FFFD4; border: 2px solid #00FFFF; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Facto Post – Issue 7 – 15 December 2017

 

A new bibliographical landscape
At the beginning of December, Wikidata items on individual scientific articles passed the 10 million mark. This figure contrasts with the state of play in early summer, when there were around half a million. In the big picture, Wikidata is now documenting the scientific literature at a rate that is about eight times as fast as papers are published. As 2017 ends, progress is quite evident.

Behind this achievement are a technical advance (fatameh), and bots that do the lifting. Much more than dry migration of metadata is potentially involved, however. If paper A cites paper B, both papers having an item, a link can be created on Wikidata, and the information presented to both human readers, and machines. This cross-linking is one of the most significant aspects of the scientific literature, and now a long-sought open version is rapidly being built up. The effort for the lifting of copyright restrictions on citation data of this kind has had real momentum behind it during 2017. WikiCite and the I4OC have been pushing hard, with the result that on CrossRef over 50% of the citation data is open. Now the holdout publishers are being lobbied to release rights on citations.
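The mechanics of the cross-linking are simple enough to sketch: given forward "cites work" (P2860) statements, the reverse "cited by" view falls out automatically. A toy example in Python, with made-up item IDs:

```python
# Toy illustration of deriving reverse citation links from forward
# "cites work" statements. Item IDs are invented for the example.

from collections import defaultdict

cites = {
    "Q_paperA": ["Q_paperB", "Q_paperC"],
    "Q_paperB": ["Q_paperC"],
}

cited_by = defaultdict(list)
for paper, targets in cites.items():
    for target in targets:
        cited_by[target].append(paper)

print(dict(cited_by))
# {'Q_paperB': ['Q_paperA'], 'Q_paperC': ['Q_paperA', 'Q_paperB']}
```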

But all that is just the beginning. Topics of papers are identified, authors disambiguated, with significant progress on the use of the four million ORCID IDs for researchers, and proposals formulated to identify methodology in a machine-readable way. P4510 on Wikidata has been introduced so that methodology can sit comfortably on items about papers.

More is on the way. OABot applies the unpaywall principle to Wikipedia referencing. It has been proposed that Wikidata could assist WorldCat in compiling the global history of book translation. Watch this space.

And make promoting #1lib1ref one of your New Year's resolutions. Happy holidays, all!



Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery MediaWiki message delivery (talk) 14:54, 15 December 2017 (UTC)
 * WikidataCon: Giving more people more access to more knowledge, report by Peter Kraker of Open Knowledge Maps
 * This is a story of my knowledge adventure in New Zealand moths via Wikicommons, Wikipedia and Wikidata, @SiobhanLeachman
 * Wikidata and Arabic dialects, research paper, DOI: 10.1109/AICCSA.2017.115
 * c:Commons:British Library/Mechanical Curator collection/georeferencing status, Mechanical Curator project on Commons hits 50K maps milestone
 * Historical dataset on the provenance of Wikipedia text: Who wrote this?, by Tilman Bayer, WMF blogpost
 * "Anyone can edit", not everyone does: Wikipedia and the gender gap (PDF), journal paper, Heather Ford and Judy Wajcman
 * Alpha Zero’s "Alien" Chess Shows the Power, and the Peculiarity, of AI, MIT Technology Review, by Will Knight, December 8, 2017
|}

Yo Ho Ho


 Ϣere Spiel  Chequers  is wishing you Season's Greetings! Whether you celebrate your hemisphere's Solstice or Christmas, Diwali, Hogmanay, Hanukkah, Lenaia, Festivus or even the Saturnalia, this is a special time of year for almost everyone!

Spread the holiday cheer by adding ~ to your friends' talk pages.

 Ϣere Spiel  Chequers  20:32, 21 December 2017 (UTC)

Facto Post – Issue 8 – 15 January 2018
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color: #7FFFD4; border: 2px solid #00FFFF; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Facto Post – Issue 8 – 15 January 2018

 

Metadata on the March
From the days of hard-copy liner notes on music albums, metadata have stood outside a piece or file, while adding to understanding of where it comes from, and some of what needs to be appreciated about its content. In the GLAM sector, the accumulation of accurate metadata for objects is key to the mission of an institution, and its presentation in cataloguing.

Today Wikipedia turns 17, with worlds still to conquer. Zooming out from the individual GLAM object to the ontology in which it is set, one such world becomes apparent: GLAMs use custom ontologies, and those introduce massive incompatibilities. From a recent article, we quote the observation that "vocabularies needed for many collections, topics and intellectual spaces defy the expectations of the larger professional communities." A job for the encyclopedist, certainly. But the data-minded Wikimedian has the advantages of Wikidata, starting with its multilingual data, and facility with aliases. The controlled vocabulary — sometimes referred to as a "thesaurus", as a term of art — simplifies search: if a "spade" must be called that, rather than "shovel", it is easier to find all spade references. That control comes at a cost. Case studies in that article show what can lie ahead. The schema crosswalk, in jargon, is a potential answer to the GLAM Babel of proliferating and expanding vocabularies. Even if you have no interest in Wikidata as such, but simply in vocabularies V and W: if both V and W are matched to Wikidata, then a "crosswalk" arises from term v in V to w in W, whenever v and w both match to the same item d in Wikidata.
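The crosswalk construction described in that last sentence is literally a composition of mappings. A toy Python example (vocabulary terms and Q-ids invented for illustration):

```python
# Toy schema crosswalk: two vocabularies each mapped to (invented)
# Wikidata items; composing the mappings links terms that hit the
# same item.

v_to_wd = {"spade": "Q7395", "easel": "Q183262"}
w_to_wd = {"shovel": "Q7395", "paintbrush": "Q189095"}

# Invert W's mapping, then compose it with V's.
wd_to_w = {qid: term for term, qid in w_to_wd.items()}
crosswalk = {v: wd_to_w[qid]
             for v, qid in v_to_wd.items() if qid in wd_to_w}

print(crosswalk)   # {'spade': 'shovel'}
```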

For metadata mobility, match to Wikidata. It's apparently that simple: infrastructure requirements have turned out, so far, to be challenges that can be met.

Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery MediaWiki message delivery (talk) 12:38, 15 January 2018 (UTC)
 * 1lib1ref campaign starts today, see The Wikipedia Library/1Lib1Ref: also #1lib1ref introductory video by
 * Funders should mandate open citations, article 9 January 2018 in Nature by David Shotton
 * From snowflake to avalanche: Possibilities of using free citation data in libraries, translation from the German original of Annette Klein, Mannheim University Library
 * GLAM/Newsletter/December 2017/Contents/WMF GLAM report
 * Why Mickey Mouse’s 1998 copyright extension probably won't happen again: Copyrights from the 1920s will start expiring next year if Congress doesn't act, Timothy B. Lee, 8 January 2018, Arstechnica
|}

Art UK links
Hi, I just noticed a couple of your Art UK additions on my watchlist (Henry Bone and Henry Howard (artist)). When I click on the links generated I get "database error" messages - do you know if this is a fault with the template or is the problem at the Art UK end? DuncanHill (talk) 23:07, 22 January 2018 (UTC)
 * Hmm. Looks like they have a systems problem -- even the site front page https://artuk.org/ is falling over.  As far as I know the IDs are all sound.  (It's possible a couple might have been modified since I last checked -- Art UK sometimes do that -- but I should catch those when I next scan the whole lot).  Jheald (talk) 23:12, 22 January 2018 (UTC)
 * But I might hold off adding the remaining templates until tomorrow. Jheald (talk) 23:14, 22 January 2018 (UTC)
 * OK, thanks - I didn't think to check their front page. I think the links to Art UK are a great idea, and should be very useful (once they get their site fixed!) DuncanHill (talk) 23:16, 22 January 2018 (UTC)
 * Art UK site all back in order again now. Jheald (talk) 12:05, 23 January 2018 (UTC)

Facto Post – Issue 9 – 5 February 2018
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color: #7FFFD4; border: 2px solid #00FFFF; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Facto Post – Issue 9 – 5 February 2018

 

m:Grants:Project/ScienceSource is the new ContentMine proposal: please take a look.

Wikidata as Hub
One way of looking at Wikidata relates it to the semantic web concept, around for about as long as Wikipedia, and realised in dozens of distributed Web institutions. It sees Wikidata as supplying central, encyclopedic coverage of linked structured data, and looks ahead to greater support for "federated queries" that draw together information from all parts of the emerging network of websites. Another perspective might be likened to a photographic negative of that one: Wikidata as an already-functioning Web hub. Over half of its properties are identifiers on other websites. These are Wikidata's "external links", to use Wikipedia terminology: one type for the DOI of a publication, another for the VIAF page of an author, with thousands more such. Wikidata links out to sites that are not nominally part of the semantic web, effectively drawing them into a larger system. The crosswalk possibilities of the systematic construction of these links were covered in Issue 8.

The guideline on external links speaks of them as kept "minimal, meritable, and directly relevant to the article." Here Wikidata finds more of a function. On viaf.org one can type a VIAF author identifier into the search box, and find the author page. The Wikidata Resolver tool, these days including Open Street Map, Scholia etc., allows this kind of lookup. The hub tool takes a major step further, allowing both lookup and crosswalk to be encoded in a single URL.

Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery MediaWiki message delivery (talk) 11:50, 5 February 2018 (UTC)
 * What galleries, libraries, archives, and museums can teach us about multimedia metadata on Wikimedia Commons, Wikimedia Foundation blogpost, 29 January 2018, by Jonathan Morgan and Sandra Fauconnier
 * The Wikipedia Library/1Lib1Ref/Connect, 2018 institutional participation in the #1lib1ref campaign
 * Newspeak House queries, created at 3 February 2018 event in London led by
 * Cochrane–Wikipedia Initiative, Wikipedia Signpost special report 5 February 2018, by
 * What is the Last Question?, 5 February 2018
|}

Piccadilly network
Hi. Just an FYI about your wikidata map of the Piccadilly Line: it's missing Terminal 5. I confess I'm not entirely sure what the purpose of this map is, but I like it and shall show it to my colleagues at LU. Assuming our browsers are recent enough to run it! -mattbuck (Talk) 06:59, 7 February 2018 (UTC)
 * Thanks for that! It seems the data source I was importing the station adjacency data from was a bit out of date -- it was missing Wood Lane tube station, and the whole of the Docklands Light Railway.  There may be a few more glitches too, but I'm working through it.


 * The purpose of having the data in Wikidata is really to support connections boxes for stations at the bottom of articles in other Wikis that may not yet have them -- either to help generate them, or perhaps (in time) to populate them automatically, or to compare the local version against a central one. But having the data in Wikidata also makes it possible to write queries, such as displaying the maps, which makes it easier to spot missing links, missing stations, and other errors; or classic queries such as 'which route has the fewest stops between A and B'; and it gives a dataset that can be extracted for people to play with other algorithms on, such as  (which is actually where I got the data from).


 * But I'm glad you liked it! Jheald (talk) 10:51, 7 February 2018 (UTC)
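
The "fewest stops" query mentioned above is, at heart, breadth-first search over the station-adjacency data. A toy sketch, with invented stations and links rather than the real network:

```python
from collections import deque

# Invented station-adjacency data, standing in for a network extracted
# from Wikidata "adjacent station" statements.
adjacency = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def fewest_stops(start, goal):
    """Return one shortest route from start to goal as a list of stations."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        route = queue.popleft()
        if route[-1] == goal:
            return route
        for nxt in adjacency[route[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None  # no route exists
```

The same adjacency data also makes the map-style consistency checks easy: a station with an empty or one-sided adjacency list is a candidate missing link.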

AWB and other semi-automated systems
Hi Jheald, Are you skilled with AWB or bots or similar methods for systematically producing short descriptions with fewer keystrokes? Cheers, &middot; &middot; &middot; Peter (Southwood) (talk): 06:25, 19 February 2018 (UTC)


 * I have a licence to drive AWB, but in reality I only take it out for a spin once every couple of years or so, so I wouldn't call myself an expert.


 * Something that it makes very easy is adding the same template to a lot of pages. You just use the find/replace function to slot it in where you want it.  So with AWB you could certainly create a list of pages to go through, and have it open the wikitext of the next page with short description already added at the top.  You could then fill in the template by hand, hit 'save', and AWB will automatically open the wikitext of the next page on the list.  This is the classic mode of AWB use.


 * What's more challenging, using AWB, is to see whether it can also help you fill in something different inside the template, specific to each page.


 * One possibility is if you can subst a creation template, that will draw information from somewhere, put it in the right slots of the creation template, and then output the correct wikitext. That way you can use AWB to write the same text string to each page, but it ends up creating different wikitext.  It would be fairly easy, for example, to make a creation template that pulled the description from Wikidata, and output short description with this inserted.  Subst'ing the template could therefore copy over the descriptions from Wikidata for every page in the run, but in a way that was semi-automated rather than a fully automated bot script, allowing you the user the chance to modify each one before hitting the save button and going on to the next one.


 * Similarly, one might have a version that added possible drafts from various sources, and let the user keep or delete or edit whichever they wanted. But the content would need to come from sources that either AWB (with its search & replace functions) or the template being subst'd (with access to Wikidata for the item) could access.


 * Another way to use AWB is to have pre-prepared the right values to slot in, sitting in a text editor -- so AWB opens the page, you copy/paste in the pre-written text, hit save, AWB automatically opens the next page. That's how I've used it eg if I need to add template parameters, and I can somehow pre-generate those template parameters.


 * What I don't know (but have long wondered) is whether it is possible to get rid of the manual cut and paste from the text editor -- whether one can just give AWB a file, and tell it to take n lines at a time, for n template parameters. This I don't know; and I also don't know about what other semi-automated tools that are out there might be able to do. Jheald (talk) 22:36, 19 February 2018 (UTC)


 * A different aspect would be a process to see a new short description had been added to the article page, and write a message to the talk page. I have limited experience with bots, but this one I think ought to be quite easy.  The bot simply gets the list of pages in the category with short descriptions once every few hours, compares it with an offline list of pages it has already notified, and substs a template message, with appropriate added parameters, to the bottom of the talk page.


 * There might be ways to make this more efficient, as the size of the category with short descriptions gets very large -- eg there might be a service (I'm not sure) that one can use to get recent changes in a category membership. But this one, I think, should not be hard, as a fully automated (but easily customisable) bot service. Jheald (talk) 22:45, 19 February 2018 (UTC)
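
The core of the notification bot described above is just a set difference between the current category membership and the offline record of pages already notified. A rough sketch (the page titles are invented; a real bot would fetch the membership list via the MediaWiki API and then subst the message onto each talk page):

```python
def pages_to_notify(category_members, already_notified):
    """Pages in the category that have not yet had a talk-page message."""
    return sorted(set(category_members) - set(already_notified))

# Invented example data: current category membership vs. offline record.
current = ["Apple", "Banana", "Cherry"]
notified = ["Apple"]
todo = pages_to_notify(current, notified)
```

After posting, the bot would append the newly handled pages to its offline list, so each run only touches the difference.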


 * It would seem that you are ahead of me, but maybe not by too far. I also have authorisation to run AWB, but have only used it to find and replace within an article. I may try some more complicated stuff (complicated for me, that is) and see what I can make happen. &middot; &middot; &middot; Peter (Southwood) (talk): 08:42, 20 February 2018 (UTC)

Another Daily Mail RfC
There is an RfC at Talk:Daily Mail. Your input would be most helpful. --Guy Macon (talk) 12:20, 1 March 2018 (UTC)

Facto Post – Issue 10 – 12 March 2018
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color: #7FFFD4; border: 2px solid #00FFFF; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Facto Post – Issue 10 – 12 March 2018

 

Milestone for mix'n'match
Around the time in February when Wikidata clicked past item Q50000000, another milestone was reached: the mix'n'match tool uploaded its 1000th dataset. Concisely defined by its author, it works "to match entries in external catalogs to Wikidata". The total number of entries is now well into eight figures, and more are constantly being added: a couple of new catalogs each day is normal.

Since the end of 2013, mix'n'match has gradually come to play a significant part in adding statements to Wikidata, particularly in areas with the flavour of the digital humanities; but datasets can of course be about practically anything. There is a catalog on skyscrapers, and two on spiders.

These days mix'n'match can be used in numerous modes, from the relaxed gamified click through a catalog looking for matches, with prompts, to the fantastically useful and often demanding search across all catalogs. I'll type that again: you can search 1000+ datasets from the simple box at the top right. The drop-down menu top left offers "creation candidates", Magnus's personal favourite. See Mix'n'match/Manual for more.

For the Wikidatan, a key point is that these matches, however carried out, add statements to Wikidata if, and naturally only if, there is a Wikidata property associated with the catalog. For everyone, however, the hands-on experience of deciding what is a good match is an education in a scholarly area, with biographical catalogs being particularly fraught. Underpinning recent rapid progress is an open infrastructure for scraping and uploading.

Congratulations to Magnus, our data Stakhanovite!

Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery MediaWiki message delivery (talk) 12:26, 12 March 2018 (UTC)
 * Wikipedia goes 3D allowing users to upload .STLs for digital reference, Beau Jackson for 3dprintingindustry.com, February 22 2018
 * WikiCite report (video)
 * Formal publication and announcement of ISBN citation dataset, see Twitter post, February 23 2018
 * Plotting the Course Through Charted Waters, workshop on data visualization literacy from Mikhail Popov, Wikimedia Foundation
 * Using Wikidata to build an authority list of Holocaust-era ghettos, Nancy Cooey, United States Holocaust Memorial Museum, February 12 2018
 * Why Should You Learn SPARQL? Wikidata! Mark Longair, blogpost November 29 2017
 * Back to the future: Does graph database success hang on query language?, George Anadiotis for Big on Data, March 5 2018
|}

Passover
Regarding, Passover was listed on Selected anniversaries/March 31. Regards, — howcheng  {chat} 01:17, 1 April 2018 (UTC)

I'll add that Jewish holy days used to be posted on the day before, as you expected for Pesach two days ago, with the notation begins at sundown, but this was changed. I don't know why this change was made nor can I suggest how you might find the discussion that preceded it. Dyspeptic skeptic (talk) 07:40, 1 April 2018 (UTC)
 * There's a follow-up discussion at Wikipedia talk:WikiProject Judaism/Archive 36, but I can't remember where the original discussion was. WP:ERRORS or Talk:Main Page are the likely candidates. — howcheng  {chat} 07:14, 3 April 2018 (UTC)
 * Thanks, Howard. I started a thread at Wikipedia_talk:WikiProject_Judaism, but nobody seems at all interested, so ... whatever, I suppose. Jheald (talk) 09:08, 3 April 2018 (UTC)

Facto Post – Issue 11 – 9 April 2018
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color: #7FFFD4; border: 2px solid #00FFFF; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Facto Post – Issue 11 – 9 April 2018

 

The 100 Skins of the Onion
Open Citations Month, with its eminently guessable hashtag, is upon us. We should be utterly grateful that in the past 12 months, so much data on which papers cite which other papers has been made open, and that Wikidata is playing its part in hosting it as "cites" statements. At the time of writing, there are 15.3M Wikidata items that can do that.

Pulling back to look at open access papers in the large, though, there is less reason for celebration. Access in theory does not yet equate to practical access. A recent LSE IMPACT blogpost puts that issue down to "heterogeneity": a useful euphemism to save us from thinking that the whole concept doesn't fall into the realm of the oxymoron.

Some home truths: aggregation is not content management, if it falls short on reusability. The PDF file format is wedded to how humans read documents, not how machines ingest them. The salami-slicer is our friend in the current downloading of open access papers, but for a better metaphor, think about skinning an onion, laboriously, 100 times with diminishing returns. There are of the order of 100 major publisher sites hosting open access papers, and the predominant offer there is still a PDF. From the discoverability angle, Wikidata's bibliographic resources combined with the SPARQL query are superior in principle, by far, to existing keyword searches run over papers. Open access content should be managed into consistent HTML, something that is currently strenuous. The good news, such as it is, would be that much of it is already in XML. The organisational problem of removing further skins from the onion, with sensible prioritisation, is certainly not insuperable. The CORE group (the bloggers in the LSE posting) has some answers, but actually not all that is needed for the text and data mining purposes they highlight. The long tail, or in other words the onion heart when it has become fiddly beyond patience to skin, does call for a pis aller. But the real knack is to do more between the XML and the heart.

Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery MediaWiki message delivery (talk) 16:25, 9 April 2018 (UTC)
 * Crossref as a new source of citation data: A comparison with Web of Science and Scopus, CWTS blogpost 17 January 2018, Nees Jan van Eck, Ludo Waltman, Vincent Larivière, Cassidy Sugimoto
 * Citations with identifiers in Wikipedia, figshare dataset
 * Making women more visible online—with Wikidata tools!, Wikimedia blogpost 29 March 2018 by Sandra Fauconnier
 * Village pump discussion, Turn on mapframe? We’re ready if you are reaches conclusions
 * The Power of the Wikimedia Movement beyond Wikimedia, Forbes 28 March 2018, Michael Bernick
 * Tracing stolen bitcoin, blogpost 26 March 2018 by Ross J. Anderson
|}

Wikidata linking to Wikipedia
Just a reminder, you phrased your message as a comment and not as a !vote with oppose or support. --RAN (talk) 13:03, 10 May 2018 (UTC)

Facto Post – Issue 12 – 28 May 2018
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color: #7FFFD4; border: 2px solid #00FFFF; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Facto Post – Issue 12 – 28 May 2018

 

ScienceSource funded
The Wikimedia Foundation announced full funding of the ScienceSource grant proposal from ContentMine on May 18. See the ScienceSource Twitter announcement and 60 second video.

The proposal includes downloading 30,000 open access papers, aiming (roughly speaking) to create a baseline for medical referencing on Wikipedia. It leaves open the question of how these are to be chosen.
 * A medical canon?

The basic criteria of WP:MEDRS include a concentration on secondary literature. Attention has to be given to the long tail of diseases that receive less current research. The MEDRS guideline supposes that edge cases will have to be handled, and the premature exclusion of publications that would be in those marginal positions would reduce the value of the collection. Prophylaxis misses the point that gate-keeping will be done by an algorithm.

Two well-known but rather different areas where such considerations apply are tropical diseases and alternative medicine. There are also a number of potential downloading troubles, and these were mentioned in Issue 11. There is likely to be a gap, even with the guideline, between conditions taken to be necessary but not sufficient, and conditions sufficient but not necessary, for candidate papers to be included. With around 10,000 recognised medical conditions in standard lists, being comprehensive is demanding. With all of these aspects of the task, ScienceSource will seek community help.

Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. ScienceSource pages will be announced there, and in this mass message. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery MediaWiki message delivery (talk) 10:16, 28 May 2018 (UTC)
 * d:Wikidata:Lexicographical data, Wikidata's multi-lingual dictionary project gets going
 * Ordia tool, a basic search interface for Wikidata lexemes and forms
 * OpenRefine tool 3.0, May update allows wrangling of tabular information into Wikidata
 * d:Wikidata:WikiProject British Politicians pushes ahead with data modelling and imports
 * #1Lib1Ref Returns for a Second Time in 2018, IFLA blogpost 25 May 2018, second chance this year to participate in referencing Wikipedia
|}

Facto Post – Issue 13 – 29 May 2018
MediaWiki message delivery (talk) 18:19, 29 June 2018 (UTC)

Sphinx pinastri Commons/Wikidata error
Hello! I've just created an article in Czech Wiki about Sphinx pinastri, linked it to its Wikidata item... and when I checked the Commons gallery for this species a few hours later, it showed an error. I tried to fix it, but then I noticed your bot doing something about it already, so I stopped. But the error message still stays in place. Is it just some kind of time delay before it recognizes the changes your bot has done, or did I do something wrong? I'm not very experienced here, but I've done such linking with multiple articles before, and never got this error... Do you have any explanation, please? --GeXeS (talk) 19:22, 29 June 2018 (UTC)
 * Sorry you got caught up with this. I think it is all fixed now.
 * The warning messages you saw were all down to the batch edits that I ran last night. The messages were mostly to do with various reciprocal relations that there ought to be on Wikidata if there are items both for a category and for an article or gallery on the same subject.  My batch last night was making some new items on Wikidata corresponding to the categories (rather than the galleries) on Commons.  Usually this isn't necessary, because usually a Commons category can be linked straight to the main Wikidata item for the corresponding thing;  but it can be helpful when there is a Commons gallery as well, because the rule is that when there's a gallery, that is the page that should be linked directly.  So making these new category items meant there was then a dedicated Wikidata item for the Commons category to be linked to as well, which the templates on the Commons category page can use to draw information from Wikidata.
 * But the job has to be done in two steps, because it was only this morning that I could find out what the Q-numbers were for the new category-items that had been created overnight, and only when I had those Q-numbers could I then put in the statements on Wikidata in the other direction, pointing from the main item to the new Q number.
 * The taxonomy template is quite clever, and can tell when those reciprocal statements are missing. So when there's a category-item that has a statement to the main item, but the main item doesn't have a statement pointing back to the category item, the taxonomy template shows a big red warning.
 * As I said, I am now in the process of filling in those missing reverse links, in a batch run that's about 60% of the way through.
 * However, it can sometimes take time for a page with a template that draws from Wikidata to update, following a change on Wikidata. Instead, for a time, the previous version that has been saved in the server cache can continue to be shown.  This is what you were seeing with c:Category:Sphinx_pinastri.  Even after the missing reciprocal statements had been added on Wikidata, the old version of the category page continued to be shown, with its red warnings.  In this case, just pressing 'edit' and then 'preview' was enough to force the system to rebuild the page and show a current up-to-date version -- in this case with no red warnings any more, because the problem had now been fixed on Wikidata.
 * I am sorry you got caught up with this, and that the pages ended up with these warning messages on them today, which was all down to my edits. Unfortunately, I still have about 4 more batches of the same size still to do, which will trigger similar problems (albeit transient problems). I do have one idea that I'm going to try that might help to avoid them, but it might not work, so we will see.  But it shouldn't be too long before everything is fixed and normal again.
 * Sorry again for having caused you this confusion, but I hope it is all sorted out now. Best regards, Jheald (talk) 20:19, 29 June 2018 (UTC)
 * Oh, I understand now. Glad to hear that I found the proper string to pull - and thank you for the broad explanation. As you say, the Edit+Preview returns an errorless page, so I guess it will all calm down after the server cache catches up. Thank you and good night! --GeXeS (talk) 21:20, 29 June 2018 (UTC)

Facto Post – Issue 14 – 21 July 2018
MediaWiki message delivery (talk) 06:10, 21 July 2018 (UTC)

Facto Post – Issue 15 – 21 August 2018
MediaWiki message delivery (talk) 13:23, 21 August 2018 (UTC)

UK Electoral Wards
I'm working on adding information for a set of UK council electoral wards to Wikidata. I'm doing this work in my job as a Data Analyst at mySociety and it is part of a larger project to collect information on elected representatives across all countries. I'm currently working on electoral wards in London boroughs and I can see you have done some work on these previously.

In terms of entering the data I'm currently following the model from WikiProject_every_politician, and for wards with changing boundaries I'm trying to follow the advice at https://www.wikidata.org/wiki/Wikidata:WikiProject_British_Politicians#Constituencies. In general this means creating separate items for Wards vs the general geographical area and creating new items where the Ward boundaries change even when the name stays the same.

I've come across some situations where the Ward and a geographic area have the same name e.g., and I'm trying to work out if I should separate these into two separate entries. I actually went ahead and did this in the case of leading to the creation of  and the movement of some values from one to the other. I wondered if you had any views on the best or preferred approach in these cases? Owenpatel (talk) 16:49, 24 August 2018 (UTC)


 * In general I think I *would* separate them into different entities. The City of London wards are fairly historic, so perhaps there I might be less convinced that the term had a separate distinct significance as a general area outwith the identity of the ward.  But for the rest, where the term has had a long previous use for the area as a whole before the creation of the ward, then I would think it would make sense to have separate entities for the ward and for the older area.
 * In terms of what to do re: boundary changes, it's a good question. Pinging also  for thoughts.  For civil parishes, which typically have (or are considered to have) a long continuity of history, and where boundary changes are typically no more than fairly minor adjustments, we have tended to have only a single item.  For constituencies, as you say, Andrew has tended to the view that where there are substantial changes, it makes sense to have new items.  My guess is that for wards the balance of convenience will be somewhere between the two -- in most cases, as fairly artificial units without much historical continuity, distinct new items might well be more appropriate.  But where boundary changes have been only minor adjustments, and the wards have a particular tradition and identity, then a single item might make sense.
 * To a large extent, also, it depends what you want to use them for. If having separate items makes it easier to record and analyse data, then have separate items. If it doesn't, and there is appreciable continuity, then maybe don't.
 * My apologies if that's all been a bit woolly, and not as cut-and-dried as perhaps might be preferable. Jheald (talk) 22:54, 24 August 2018 (UTC)


 * Fun question. With the UK constituencies, we haven't yet done any significant splitting by "change of shape" - the model does support it but as we don't store geoshapes, it doesn't make much difference, so I'm taking "same name" as "unchanged". At the moment splitting is done by:
 * change of name from XYZ to ABC
 * dissolution of XYZ as part of boundary changes (a later XYZ gets a new item with new dates, so we have one 1832-1885 and one 1918-1974)
 * changing the number of seats associated with a constituency even if name & boundary are consistent (I'm in the middle of rolling this out)
 * I agree it depends on quite what you want to model. If geographical association is important and you plan to import geoshapes and marry wards up to what's inside them, then definitely new items for new boundaries. Otherwise, though, you can probably get by with a single item if it keeps the same name and remains in existence. Andrew Gray (talk) 09:24, 25 August 2018 (UTC)


 * Thought I would chip in, though basically to agree with what @Andrew Gray says. I just wanted to draw attention to GSS codes, which are identifiers for the geography, as opposed to the name/number of seats/electoral term. I assume (though I haven't checked) that a ward where the only change is the number of seats would keep the same GSS code; however, if there is a change to the boundary with no change to anything else, the GSS code would change. Depending on what modelling route is taken it could lead to Wikidata identifiers and GSS codes having a many-to-many relationship. I'm not experienced enough on the Wikidata side of things to understand all the implications of this, but thought it worth flagging. --Wroper (talk) 08:27, 28 August 2018 (UTC)
 * The relationship shouldn't be many-to-many because I think there will only ever be one Wikidata item for a particular GSS code. But it can be one-to-many, as shown by eg the results of this query:  .  In such cases, for civil parishes as at d:Q895512 I have tended to set the old ID to have "deprecated rank", together with a qualifier  and a qualifier  = .  Where this has been done, there should only be one entry in the "preferred ID" column of the query.  However, because the re-ranking can't be done by the most common semi-automated tool, it looks like there are quite a few where  has been added, but the change of rank to deprecated has never actually been done. (Soz!) It looks like the same also applies for constituencies (E14...), which have been more 's domain.  Jheald (talk) 09:30, 28 August 2018 (UTC)


 * Thanks all for the discussion. Geographical association is important to us, so I think generally I'll create a new item for new boundaries. I'll have a look at the City of London wards - it sounds like there might be a good reason to treat these differently. I'll also have a look at wards with a  and/or  qualifier to see if I can mark values as deprecated.
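The check described above (finding items where a superseded GSS code was added but never re-ranked as deprecated) can be sketched as a query against the Wikidata Query Service. This is a minimal illustration, not the query linked in the thread: it assumes P836 is the Wikidata property for GSS codes and uses the standard WDQS vocabulary (p:/ps:/wikibase:rank) to list items that still carry more than one non-deprecated GSS code.

```python
import urllib.parse

# Items with more than one GSS code (assumed property P836) where the old
# value has NOT been re-ranked as deprecated -- i.e. candidates for cleanup.
GSS_QUERY = """
SELECT ?item (COUNT(?code) AS ?n) WHERE {
  ?item p:P836 ?st .
  ?st ps:P836 ?code ;
      wikibase:rank ?rank .
  FILTER(?rank != wikibase:DeprecatedRank)
}
GROUP BY ?item
HAVING(COUNT(?code) > 1)
"""

def wdqs_url(query: str) -> str:
    """Build a GET URL for the Wikidata Query Service, requesting JSON results."""
    return ("https://query.wikidata.org/sparql?format=json&query="
            + urllib.parse.quote(query))
```

Fetching `wdqs_url(GSS_QUERY)` would return the offending items as JSON; each row in the result is an item whose superseded code still needs its rank changed to deprecated, as described for the civil parishes at d:Q895512.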

Facto Post – Issue 16 – 30 September 2018
MediaWiki message delivery (talk) 17:57, 30 September 2018 (UTC)

Nomination of Rotation group (disambiguation) for deletion
A discussion is taking place as to whether the article Rotation group (disambiguation) is suitable for inclusion in Wikipedia according to Wikipedia's policies and guidelines or whether it should be deleted.

The article will be discussed at Articles for deletion/Rotation group (disambiguation) until a consensus is reached, and anyone, including you, is welcome to contribute to the discussion. The nomination will explain the policies and guidelines which are of concern. The discussion focuses on high-quality evidence and our policies and guidelines.

Users may edit the article during the discussion, including to improve the article to address concerns raised in the discussion. However, do not remove the article-for-deletion notice from the top of the article. Nowak Kowalski (talk) 16:48, 17 October 2018 (UTC)

Facto Post – Issue 17 – 29 October 2018
MediaWiki message delivery (talk) 15:01, 29 October 2018 (UTC)

Facto Post – Issue 18 – 30 November 2018
MediaWiki message delivery (talk) 11:20, 30 November 2018 (UTC)

Yo Ho Ho


 Ϣere Spiel  Chequers  is wishing you Season's Greetings! Whether you celebrate your hemisphere's Solstice or Christmas, Diwali, Hogmanay, Hanukkah, Lenaia, Festivus or even the Saturnalia, this is a special time of year for almost everyone!

Spread the holiday cheer by adding ~ to your friends' talk pages.

 Ϣere Spiel  Chequers  21:45, 22 December 2018 (UTC)

WikiProject Genealogy - newsletter No.6
{| style="position: relative; margin-left: 2em; margin-right: 2em; padding: 0.5em 1em; background-color:Cornsilk; border: 2px solid #bddff2; border-color: rgba( 109, 193, 240, 0.75 ); border-radius: 8px; box-shadow: 8px 8px 12px rgba( 0, 0, 0, 0.7 );"
 * Newsletter Nr 6, 2018-12-25, for  WikiProject Genealogy (and Wikimedia genealogy project on Meta)

 <hr style="border-bottom: 1px solid rgba( 109, 193, 240, 0.75 );" /> Participation:

This is the sixth newsletter, sent by mass mail to members of WikiProject Genealogy, to everyone who voted in support of establishing a potential Wikimedia genealogy project on Meta, and to anyone who has shown an interest in genealogy over the years on talk pages and the like.

(To discontinue receiving Project Genealogy newsletters, please see below)

Now 100 supporters
On 3 December 2018, the list of users who support the potential Wikimedia genealogy project reached 100!

A demo wiki is up and running!
You can already try out the demo genealogy wiki at https://tools.wmflabs.org/genealogy/wiki/Main_Page and explore its functions. You will find parts of the 18th Pharaoh dynasty and other records submitted by the first 7 users, and it would be great if you would add some records.

And with that great news we want to wish you a creative New Year 2019!

'''Don't want newsletters? If you wish to opt out of future mailings, please remove yourself from the mailing list; alternatively, to opt out of all massmessage mailings, you may add Category:Opted-out of message delivery to your user talk page.'''

Cheers from your WikiProject Genealogy coordinator. To discontinue receiving Project Genealogy newsletters, please remove your name from our mailing list. Newsletter delivered by MediaWiki message delivery
|}

Merry Christmas
<div style="border-style:solid; border-color:#01902a; background-color:#fff; border-width:3px; text-align:left; padding:2px;"><div style="border-style:solid; border-color:red; background-color:#fff; border-width:2px; text-align:left; padding:6px;" class="plainlinks">

Red rose64 &#x1f339; (talk) is wishing you a Merry Christmas! This greeting (and season) promotes WikiLove and hopefully this note has made your day a little better. Spread the WikiLove by wishing another user a Merry Christmas, whether it be someone you have had disagreements with in the past, a good friend, or just some random person. Happy New Year!

Spread the cheer by adding {{subst:Xmas6}} to their talk page with a friendly message. -- Red rose64 &#x1f339; (talk) 12:32, 25 December 2018 (UTC)

Dahn Y'Israel Nokeam listed at Redirects for discussion
An editor has asked for a discussion to address the redirect Dahn Y'Israel Nokeam. Since you had some involvement with the Dahn Y'Israel Nokeam redirect, you might want to participate in the redirect discussion if you have not already done so. Catrìona (talk) 03:52, 26 December 2018 (UTC)

Facto Post – Issue 19 – 27 December 2018
MediaWiki message delivery (talk) 19:08, 27 December 2018 (UTC)