User talk:Maxlath

A belated welcome!
Here's wishing you a belated welcome to Wikipedia, Zorglub27. I see that you've already been around a while, and wanted to thank you for your contributions. Though you seem to have been successful in finding your way around, you may benefit from following some of the links below, which help editors get the most out of Wikipedia. Also, when you post on talk pages, you should sign your name using four tildes (~~~~); that should automatically produce your username and the date after your post.
 * Introduction
 * The five pillars of Wikipedia
 * How to edit a page
 * Help pages
 * How to write a great article

I hope you enjoy editing here and being a Wikipedian! If you have any questions, feel free to leave me a message on my talk page, consult Wikipedia:Questions, or place {{helpme}} on your talk page and ask your question there.

Again, welcome! -- Irn (talk) 17:46, 9 November 2011 (UTC)

September 2013
Hello, I'm BracketBot. I have automatically detected that [//en.wikipedia.org/w/index.php?diff=573566857 your edit] to Mediapart may have broken the syntax by modifying 2 "[]"s. If you have, don't worry: just edit the page again to fix it. If I misunderstood what happened, or if you have any questions, you can leave a message on [//en.wikipedia.org/w/index.php?action=edit&preload=User:A930913/BBpreload&editintro=User:A930913/BBeditintro&minor=&title=User_talk:A930913&preloadtitle=BracketBot%20-%20&section=new my operator's talk page].
 * List of unpaired brackets remaining on the page:

Thanks, BracketBot (talk) 22:56, 18 September 2013 (UTC)
 * Article by Scott Sayare, published by the New York Times - March 19, 2013.

Merger discussion for Vitalik Buterin
An article that you have been involved in editing, Vitalik Buterin, has been proposed for merging with another article. If you are interested, please participate in the merger discussion. Thank you. Jtbobwaysf (talk) 16:15, 29 April 2016 (UTC)

JavaScript RegExp problem
I noticed you have experience in JavaScript. I'm hoping you can help me with a problem I've run into writing a userscript.

Please see my post at Wikipedia talk:WikiProject JavaScript.

Thank you. The Transhumanist 12:22, 5 May 2017 (UTC)

Wikidata-cli
Just a couple of questions about wikidata-cli. An example use case could be P3602 (candidacy in election) on that page. I used pywikibot to save that, but it would be nice to have a more general solution. --Zache (talk) 07:26, 25 May 2017 (UTC) (edit: no Wikidata P-template on enwiki, fixed)
 * Does wikidata-cli support qualifiers or multirow sources?
 * What datatypes does it support as values?
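For context on the qualifier and reference question: whatever interface a CLI tool exposes, the Wikibase API ultimately expects statements as nested "snak" JSON, with qualifiers keyed by property and references grouped as snak bundles. A minimal sketch in Python of building such a payload — the target item IDs here are placeholders, not real data, and the field names follow the Wikibase statement data model:

```python
import json

def entity_snak(prop, numeric_id):
    # A "snak" is Wikibase's unit of property-value assertion.
    return {
        "snaktype": "value",
        "property": prop,
        "datavalue": {
            "type": "wikibase-entityid",
            "value": {"entity-type": "item", "numeric-id": numeric_id},
        },
    }

# A statement with one qualifier and one reference, in the shape the
# Wikibase API serialises: qualifiers are a property-keyed map of snak
# lists, and each reference is a bundle of snaks.
statement = {
    "mainsnak": entity_snak("P3602", 1141),      # candidacy in election (placeholder target item)
    "type": "statement",
    "rank": "normal",
    "qualifiers": {
        "P2937": [entity_snak("P2937", 1141)],   # parliamentary term (placeholder target item)
    },
    "references": [
        {"snaks": {"P143": [entity_snak("P143", 328)]}}  # imported from: English Wikipedia (Q328)
    ],
}

print(json.dumps(statement, indent=2))
```

Any tool that supports "multirow" sources just needs to put several snaks (or several properties) inside one `references` entry, rather than appending separate reference bundles.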

Facto Post – Issue 9 – 5 February 2018

m:Grants:Project/ScienceSource is the new ContentMine proposal: please take a look.

Wikidata as Hub
One way of looking at Wikidata relates it to the semantic web concept, around for about as long as Wikipedia, and realised in dozens of distributed Web institutions. It sees Wikidata as supplying central, encyclopedic coverage of linked structured data, and looks ahead to greater support for "federated queries" that draw together information from all parts of the emerging network of websites. Another perspective might be likened to a photographic negative of that one: Wikidata as an already-functioning Web hub. Over half of its properties are identifiers on other websites. These are Wikidata's "external links", to use Wikipedia terminology: one type for the DOI of a publication, another for the VIAF page of an author, with thousands more such. Wikidata links out to sites that are not nominally part of the semantic web, effectively drawing them into a larger system. The crosswalk possibilities of the systematic construction of these links were covered in Issue 8.
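The "federated queries" mentioned above can be made concrete. A sketch, in Python, of a SPARQL query that joins Wikidata's own data (here via the DOI identifier property, P356) with a pattern evaluated by a remote SPARQL endpoint — the remote endpoint URL and its property are placeholders, and running such a query for real requires the remote service to be permitted by the query service:

```python
# A federated SPARQL query: the outer pattern runs on the local query
# service, while the SERVICE block is evaluated live by a remote endpoint.
# The remote endpoint and its property below are placeholders.
REMOTE_ENDPOINT = "https://example.org/sparql"

query = f"""
SELECT ?item ?doi ?extra WHERE {{
  # Wikidata side: items carrying a DOI (P356), one of the many
  # external-identifier properties described above.
  ?item wdt:P356 ?doi .
  # Remote side: data fetched from the other SPARQL service.
  SERVICE <{REMOTE_ENDPOINT}> {{
    ?item <https://example.org/prop/extra> ?extra .
  }}
}}
LIMIT 10
"""

print(query)
```

The point of the "hub" framing is that the join key — the shared identifier — is exactly what Wikidata's external-identifier properties provide.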

The guideline on external links speaks of keeping them "minimal, meritable, and directly relevant to the article." Here Wikidata finds more of a function. On viaf.org one can type a VIAF author identifier into the search box and find the author page. The Wikidata Resolver tool, these days including Open Street Map, Scholia etc., allows this kind of lookup. The hub tool takes a major step further, allowing both lookup and crosswalk to be encoded in a single URL.

Links
To subscribe to Facto Post go to Facto Post mailing list. For the ways to unsubscribe, see below. Editor, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page. Newsletter delivered by MediaWiki message delivery (talk) 11:50, 5 February 2018 (UTC)
 * What galleries, libraries, archives, and museums can teach us about multimedia metadata on Wikimedia Commons, Wikimedia Foundation blogpost, 29 January 2018, by Jonathan Morgan and Sandra Fauconnier
 * The Wikipedia Library/1Lib1Ref/Connect, 2018 institutional participation in the #1lib1ref campaign
 * Newspeak House queries, created at 3 February 2018 event in London led by
 * Cochrane–Wikipedia Initiative, Wikipedia Signpost special report 5 February 2018, by
 * What is the Last Question?, 5 February 2018

Facto Post – Issue 10 – 12 March 2018

Milestone for mix'n'match
Around the time in February when Wikidata clicked past item Q50000000, another milestone was reached: the mix'n'match tool uploaded its 1000th dataset. Concisely defined by its author, it works "to match entries in external catalogs to Wikidata". The total number of entries is now well into eight figures, and more are constantly being added: a couple of new catalogs each day is normal.

Since the end of 2013, mix'n'match has gradually come to play a significant part in adding statements to Wikidata, particularly in areas with the flavour of digital humanities; but datasets can of course be about practically anything. There is a catalog on skyscrapers, and two on spiders.

These days mix'n'match can be used in numerous modes, from the relaxed gamified click through a catalog looking for matches, with prompts, to the fantastically useful and often demanding search across all catalogs. I'll type that again: you can search 1000+ datasets from the simple box at the top right. The drop-down menu top left offers "creation candidates", Magnus's personal favourite. See Mix'n'match/Manual for more.

For the Wikidatan, a key point is that these matches, however carried out, add statements to Wikidata if, and naturally only if, there is a Wikidata property associated with the catalog. For everyone, however, the hands-on experience of deciding what is a good match is an education in a scholarly area; biographical catalogs are particularly fraught. Underpinning recent rapid progress is an open infrastructure for scraping and uploading.

Congratulations to Magnus, our data Stakhanovite!

Links
Newsletter delivered by MediaWiki message delivery (talk) 12:26, 12 March 2018 (UTC)
 * Wikipedia goes 3D allowing users to upload .STLs for digital reference, Beau Jackson for 3dprintingindustry.com, February 22 2018
 * WikiCite report (video)
 * Formal publication and announcement of ISBN citation dataset, see Twitter post, February 23 2018
 * Plotting the Course Through Charted Waters, workshop on data visualization literacy from Mikhail Popov, Wikimedia Foundation
 * Using Wikidata to build an authority list of Holocaust-era ghettos, Nancy Cooey, United States Holocaust Memorial Museum, February 12 2018
 * Why Should You Learn SPARQL? Wikidata! Mark Longair, blogpost November 29 2017
 * Back to the future: Does graph database success hang on query language?, George Anadiotis for Big on Data, March 5 2018

Discovering inventaire.io
I had some ideas for improving inventaire.io to attract more users.

When I first visit a book page, I only see the basic catalog record. The site only provides information on book sharing among your friends, which, for a new user like me stumbling in, means it doesn't show anything about book sharing at all. While I understand this focus on friend groups may be part of your design (and it makes a lot of sense to localize physical books to a geographic area), it makes the site of limited value for newcomers.

Perhaps it could show a stat, e.g. "242 users have copies of this book.", to lure passers-by into becoming registered users. Maybe you could affiliate with a bookseller to give users a small discount if they buy a book through your site and add it to their library there.

Try to consider the user experience of a new user who discovers your site from a book page rather than the home page, and how to demonstrate your site's value to them right away.

Hope you appreciate my unsolicited advice and best of luck to you! There's not a ton of competition in the book info webpage market. Daask (talk) 10:08, 16 March 2018 (UTC)
 * thank you for the advice! Yes, there is definitely room for improvement in helping new users discover what this is all about :) -- Maxlath (talk) 19:05, 16 March 2018 (UTC)

Facto Post – Issue 11 – 9 April 2018

The 100 Skins of the Onion
Open Citations Month, with its eminently guessable hashtag, is upon us. We should be utterly grateful that in the past 12 months, so much data on which papers cite which other papers has been made open, and that Wikidata is playing its part in hosting it as "cites" statements. At the time of writing, there are 15.3M Wikidata items that can do that.

Pulling back to look at open access papers in the large, though, there is less reason for celebration. Access in theory does not yet equate to practical access. A recent LSE IMPACT blogpost puts that issue down to "heterogeneity": a useful euphemism to save us from concluding that the whole concept falls into the realm of the oxymoron.

Some home truths: aggregation is not content management, if it falls short on reusability. The PDF file format is wedded to how humans read documents, not how machines ingest them. The salami-slicer is our friend in the current downloading of open access papers, but for a better metaphor, think about skinning an onion, laboriously, 100 times with diminishing returns. There are of the order of 100 major publisher sites hosting open access papers, and the predominant offer there is still a PDF.

From the discoverability angle, Wikidata's bibliographic resources combined with the SPARQL query are superior in principle, by far, to existing keyword searches run over papers. Open access content should be managed into consistent HTML, something that is currently strenuous. The good news, such as it is, would be that much of it is already in XML.

The organisational problem of removing further skins from the onion, with sensible prioritisation, is certainly not insuperable. The CORE group (the bloggers in the LSE posting) has some answers, but actually not all that is needed for the text and data mining purposes they highlight. The long tail, or in other words the onion heart when it has become fiddly beyond patience to skin, does call for a pis aller. But the real knack is to do more between the XML and the heart.

Links
Newsletter delivered by MediaWiki message delivery (talk) 16:25, 9 April 2018 (UTC)
 * Crossref as a new source of citation data: A comparison with Web of Science and Scopus, CWTS blogpost 17 January 2018, Nees Jan van Eck, Ludo Waltman, Vincent Larivière, Cassidy Sugimoto
 * Citations with identifiers in Wikipedia, figshare dataset
 * Making women more visible online—with Wikidata tools!, Wikimedia blogpost 29 March 2018 by Sandra Fauconnier
 * Village pump discussion "Turn on mapframe? We're ready if you are" reaches conclusions
 * The Power of the Wikimedia Movement beyond Wikimedia, Forbes 28 March 2018, Michael Bernick
 * Tracing stolen bitcoin, blogpost 26 March 2018 by Ross J. Anderson

Facto Post – Issue 12 – 28 May 2018

ScienceSource funded
The Wikimedia Foundation announced full funding of the ScienceSource grant proposal from ContentMine on May 18. See the ScienceSource Twitter announcement and 60 second video.

The proposal includes downloading 30,000 open access papers, aiming (roughly speaking) to create a baseline for medical referencing on Wikipedia. It leaves open the question of how these are to be chosen.
 * A medical canon?

The basic criteria of WP:MEDRS include a concentration on secondary literature. Attention has to be given to the long tail of diseases that receive less current research. The MEDRS guideline supposes that edge cases will have to be handled, and the premature exclusion of publications that would be in those marginal positions would reduce the value of the collection. Prophylaxis misses the point that gate-keeping will be done by an algorithm.

Two well-known but rather different areas where such considerations apply are tropical diseases and alternative medicine. There are also a number of potential downloading troubles, and these were mentioned in Issue 11. There is likely to be a gap, even with the guideline, between conditions taken to be necessary but not sufficient, and conditions sufficient but not necessary, for candidate papers to be included. With around 10,000 recognised medical conditions in standard lists, being comprehensive is demanding. With all of these aspects of the task, ScienceSource will seek community help.

Links
Reminder: WikiFactMine pages on Wikidata are at WD:WFM. ScienceSource pages will be announced there, and in this mass message. Newsletter delivered by MediaWiki message delivery (talk) 10:16, 28 May 2018 (UTC)
 * d:Wikidata:Lexicographical data, Wikidata's multi-lingual dictionary project gets going
 * Ordia tool, a basic search interface for Wikidata lexemes and forms
 * OpenRefine tool 3.0, May update allows wrangling of tabular information into Wikidata
 * d:Wikidata:WikiProject British Politicians pushes ahead with data modelling and imports
 * #1Lib1Ref Returns for a Second Time in 2018, IFLA blogpost 25 May 2018, second chance this year to participate in referencing Wikipedia

Facto Post – Issue 13 – 29 May 2018
MediaWiki message delivery (talk) 18:19, 29 June 2018 (UTC)

Facto Post – Issue 14 – 21 July 2018
MediaWiki message delivery (talk) 06:10, 21 July 2018 (UTC)

Ways to improve Manalisco
Thanks for creating Manalisco.

A New Page Patroller Boleyn just tagged the page as having some issues to fix, and wrote this note for you:

"Please add your references."

The tags can be removed by you or another editor once the issues they mention are addressed. If you have questions, you can reply over here and ping me. Or, for broader editing help, you can talk to the volunteers at the Teahouse.

Delivered via the Page Curation tool, on behalf of the reviewer.

Boleyn (talk) 19:57, 24 November 2018 (UTC)

Manalisco moved to draftspace
An article you recently created, Manalisco, does not have enough sources and citations as written to remain published. It needs more citations from reliable, independent sources. Information that can't be referenced should be removed (verifiability is of central importance on Wikipedia). I've moved your draft to draftspace (with a prefix of "Draft:" before the article title) where you can incubate the article with minimal disruption. When you feel the article meets Wikipedia's general notability guideline and thus is ready for mainspace, please click on the "Submit your draft for review!" button at the top of the page. Best, Barkeep49 (talk) 23:16, 24 November 2018 (UTC)

Draft:Manalisco


Hello, Maxlath. It has been over six months since you last edited the Articles for Creation submission or draft page you started, Draft:Manalisco.

In accordance with our policy that Wikipedia is not for the indefinite hosting of material deemed unsuitable for the encyclopedia mainspace, the draft has been deleted. If you wish to retrieve it, you can request its undeletion by following the instructions at this link. An administrator will, in most cases, restore the submission so you can continue to work on it. — JJMC89 (T·C) 22:16, 4 July 2019 (UTC)