User talk:EpochFail/Archive 2

ro.wp research project
Hi Aaron. We've talked on IRC at the end of February about a research project I was planning regarding the diacritics in ro.wp. I've now completed the draft of the page here and I was hoping you could take a look and let me know what you think. I also plan to notify the Romanian community tonight so I can get some feedback over the weekend.

One item that was left somewhat unfinished on IRC was how to test the scripts. I was hoping that the WMF might have some browser simulators (I definitely remember discussing something like this with someone in Mexico City), but since you said there is nothing available, perhaps you can suggest an alternate testing method?--Strainu (talk) 20:36, 10 March 2016 (UTC)
 * The write-up looks great. I'll ping all the people I know who might be interested. One of the WMF product teams that does more web development might be able to help with a browser testing environment. I'll ask about that and direct anyone to your study description. --EpochFail (talk • contribs) 20:48, 10 March 2016 (UTC)

Editor of the Week: nominations needed!
The Editor of the Week initiative has been recognizing editors since 2013 for their hard work and dedication. Editing Wikipedia can be disheartening and tedious at times; the weekly Editor of the Week award lets its recipients know that their positive behaviour and collaborative spirit is appreciated. The response from the honorees has been enthusiastic and thankful.

The list of nominees is running short, and so new nominations are needed for consideration. Have you come across someone in your editing circle who deserves a pat on the back for improving article prose regularly, making it easier to understand? Or perhaps someone has stepped in to mediate a contentious dispute, and did an excellent job. Do you know someone who hasn't received many accolades and is deserving of greater renown? Is there an editor who does lots of little tasks well, such as cleaning up citations?

Please help us thank editors who display sustained patterns of excellence, working tirelessly in the background out of the spotlight, by submitting your nomination for Editor of the Week today!

Sent on behalf of Buster Seven   Talk  for the Editor of the Week initiative by MediaWiki message delivery (talk) 06:18, 22 April 2016 (UTC)


 * Hey Buster7! Thanks for the ping. I think I might be able to help this project quantitatively. E.g. we can find highly productive editors who haven't received a thanks or a barnstar in a long time. See my work building robust measures of value-adding activities in Wikipedia. Sound interesting to you? --EpochFail (talk • contribs) 15:44, 22 April 2016 (UTC)

ORES diffs
Is there any way to obtain the diffs used to train the ORES models? Thanks, Esquivalience  t 01:10, 29 May 2016 (UTC)
 * Sure! It depends on how you are hoping to work with them though. We don't have a dump of diffs you can just download and use. We have several features that do not come directly from the diffs themselves, and we do some word frequency comparisons that do not correspond directly to a diff. So if I get a better sense of what you are looking to do, I'll try to help how I can. --EpochFail (talk • contribs) 10:56, 29 May 2016 (UTC)
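For what it's worth, fetching the scores themselves (rather than a diff dump) looks roughly like the sketch below, which just builds a batch request URL for the ORES v3 scores endpoint. The `features=true` flag (asking ORES to return the underlying feature values alongside the prediction) and the exact response layout should be double-checked against the ORES docs; the diffs themselves would still come from the MediaWiki API.

```python
def ores_scores_url(context, rev_ids, model, include_features=False):
    """Build an ORES v3 scores URL for a batch of revision IDs.

    context: wiki database name, e.g. "enwiki"
    rev_ids: iterable of revision IDs to score
    model:   model name, e.g. "damaging" or "wp10"
    """
    base = "https://ores.wikimedia.org/v3/scores"
    revids = "|".join(str(r) for r in rev_ids)
    url = f"{base}/{context}/?models={model}&revids={revids}"
    if include_features:
        # Ask the service to include the feature values it extracted.
        url += "&features=true"
    return url
```

A client would then GET that URL and read the per-revision probabilities out of the JSON response.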

ORES and article importance feature selection
Article importance is as important as quality and popularity for ranking the priority of WP:BACKLOG work and general article quality improvements. (1) Would you please tell me the process by which ORES wp10 article quality features were derived, and what they all mean, if you know the meanings? (2) Can we use article importance features like fiction vs. non-fiction, historical vs. contemporary topics, pop culture vs. science, applied vs. theoretical science, or quantification of the number of people or productive hours of life involved with the topic, for initial approximations of effective article importance features? (3) Can queries structured on large numbers of category memberships like https://quarry.wmflabs.org/query/1337 seed topic features for article importance? EllenCT (talk) 22:07, 15 June 2016 (UTC)
 * Hi EllenCT. I'll borrow your numbering scheme to break up my response.
 * (1), see http://www-users.cs.umn.edu/~morten/publications/wikisym2013-tellmemore.pdf. This is User:Nettrom's work.  We've expanded on the feature set in minor ways since then, but we've kept the spirit of "actionable" features alive.
 * (2), I think we should break down importance predictions by Wikiproject. User:Harej has done a little bit of looking into this.  It'll be complex to extract link graph features that are specific to a WikiProject, but we can do it if we find that it is important.  E.g. one feature could be "the number of within-wikiproject pages that link to this page".  In this case, we'd ask the model to predict how important a page is to a particular wikiproject.
 * (3) We could definitely do something like this. We probably wouldn't want to use Quarry directly in order to do this though.  We might need an intermediate service that can query the DB directly (or periodically with caching) and provide basic data to ORES or a similar service so that real time results can be gathered with decent performance.
 * --EpochFail (talk • contribs) 22:42, 15 June 2016 (UTC)
 * Thank you! Slide 39 et seq. from may also contain a proto-ontology likely to perform well as article topic features for importance ranking. I can see how WikiProject importance decouples from aggregate global importance, and how the latter may not contain as much information, but if it's just a scalar used to rank, I don't know. I wonder if expert judges outperform volunteers on inter-rater agreement for these kinds of topic features. EllenCT (talk) 14:23, 16 June 2016 (UTC)
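The "number of within-wikiproject pages that link to this page" feature mentioned in (2) can be sketched over a plain link graph. The dictionary and set shapes below are hypothetical stand-ins for whatever a real feature extractor would read from the database, not an actual ORES data structure.

```python
def within_project_inlinks(page, links_to, project_pages):
    """Count inlinks to `page` that come from pages tagged by the
    same WikiProject.

    links_to:      {target_page: set of source pages linking to it}
    project_pages: set of pages tagged by the WikiProject
    """
    sources = links_to.get(page, set())
    return len(sources & project_pages)
```

A per-WikiProject importance model would compute this once per (page, project) pair and feed it in alongside the other link-graph features.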

Improving POPULARLOWQUALITY efficiency
Would you please have a look at this discussion on the talk page for WP:POPULARLOWQUALITY? Is there a way to have some measure of article popularity replaced in the deprecated Quarry database's pageviews field, or a new field? I have a feeling that iterating over a list of most popular looking for the top N wp10 stub predictions and sorting them by their wp10 start class prediction confidence might be easier than downloading 24 hours of full dumps at a time. EllenCT (talk) 15:02, 16 June 2016 (UTC)
 * Hi EllenCT. Have you seen the Pageview API?  It might serve this need nicely. --EpochFail  (talk &bull; contribs) 20:25, 20 June 2016 (UTC)
 * Yes; sorry, it only provides the top 1000 articles, which typically contain only a handful of articles ORES wp10 predicts are stub-class. I need the top 200,000 to get about a hundred stub-class articles, which I intend to sort by aggregating their start-class confidence and pageview-based popularity. I would love to include a global importance measure. I can't use per-WikiProject measures of importance.
 * When I asked for the Pageview API to be extended to provide the top 100,000 I was told that could not be done because it would leak personally identifiable information, which is completely preposterous. Much finer-grained pageviews enabling discovery of the full popularity ranking of all articles is released every hour. Could you please explain this to and reiterate my request? EllenCT (talk) 20:54, 20 June 2016 (UTC)
 * You can file a phabricator task and tag it with "analytics" and the team can look at it. I cannot think of any privacy issues off the top of my head, but adding 99,000 new rows per project per day per access point (mobile-app, desktop, web) is not as trivial as you might think. That's 100,000 × 3 × 800 rows every day, two orders of magnitude more than what we currently have when it comes to storage and loading of data. NRuiz (WMF) (talk) 22:39, 20 June 2016 (UTC)
 * Can you please explain what that means in terms of the number of kilobytes of additional storage and additional CPU time required? I do not wish to figure out how to file a Phabricator request and would ask that you please do that for me. EllenCT (talk) 01:08, 21 June 2016 (UTC)
 * I agree that a top 200,000 end-point in the API is not trivial from a storage perspective. Currently the top end-point represents about 1 GB of data per month for 1,000 articles per project (this is a lower bound). Growing it to 200,000 values per project would incur 200 times the storage, meaning 200 GB per month. While this could be feasible without too many traffic and load problems (the top end-point takes good advantage of caching in Varnish), from a storage perspective it would require us to reassess the API scaling forecast: we have planned our system to grow linearly in storage for at least one year, so we can't really take the 200 GB / month hit currently. -- JAllemandou (WMF) (talk) 09:16, 21 June 2016 (UTC)
 * How would you feel about doing it first, and then deciding whether to store it after a review of whether there is any point in storing the larger list? I only want to store subsets like ORES wp10 stub-class predictions (with their start-class confidence) and membership in WP:BACKLOG categories, and maybe WikiProject importance when available (perhaps with the revision ID and date when the importance was evaluated? Do you think we can create per-WikiProject importance coefficients or other transforms to approximate a global importance useful for ranking cross-topic backlog lists?) I would certainly love to see just that subset stored, and am sure it would be both much smaller and easily well worth it. EllenCT (talk) 12:06, 21 June 2016 (UTC)
 * Unfortunately it's not as simple as "doing it first" :) While computation can easily be done ad hoc and results provided in files for a one-shot trial, having that data flowing through our production system involves storage and serving endpoints, which are problematic. Are you after regular data streams or a one-shot test? -- JAllemandou (WMF) (talk) 09:49, 22 June 2016 (UTC)
 * Something like User talk:EllenCT/Top 594 stub predictions from 20160531-230000, but daily instead of hourly, plus all the articles in at least the WP:BACKLOG categories listed on Community portal/Opentask, with the redirects and disambiguation pages filtered out, and 1000 instead of 594 articles, please? In the future, when we have good importance prediction models, we can scale each WikiProject's importance by some coefficient for a third ranking score as part of the score normalization and standardization process prior to combining. EllenCT (talk) 12:47, 22 June 2016 (UTC)
 * My understanding is that you are looking for a regular pageview-top 1000 of stub-predicted articles (one caveat: we can't currently filter out disambiguation pages or redirects). While I think such a project would be very useful for various reasons, the amount of work required and the current priorities mean it will not be picked up any time soon. This task is in our backlog to make sure we keep the idea alive. -- JAllemandou (WMF) (talk) 08:57, 24 June 2016‎ (UTC) signature added by EpochFail (talk • contribs) 18:07, 24 June 2016 (UTC)
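The iteration EllenCT describes above (scan a popularity list, keep the pages the wp10 model predicts to be stubs, rank by Start-class confidence and popularity) can be sketched in a few lines. The score combination and the dictionary shapes below are illustrative assumptions, not a tested formula or a real API response format.

```python
import math

def rank_popular_stubs(pageviews, wp10_scores, limit=1000):
    """Rank popular stub-predicted pages for backlog work.

    pageviews:   {title: daily views}
    wp10_scores: {title: {class_name: probability}} from a wp10-style model
    """
    candidates = []
    for title, views in pageviews.items():
        probs = wp10_scores.get(title)
        if not probs:
            continue
        # Keep only pages whose most likely predicted class is Stub.
        if max(probs, key=probs.get) != "Stub":
            continue
        # Rank stubs by Start-class confidence scaled by log-popularity.
        score = probs.get("Start", 0.0) * math.log1p(views)
        candidates.append((title, score))
    candidates.sort(key=lambda ts: ts[1], reverse=True)
    return [title for title, _ in candidates[:limit]]
```

With real inputs, `pageviews` would come from the hourly dumps or a top-N endpoint and `wp10_scores` from ORES.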

Bad edit but not Vandalism
Hello EpochFail, I am participating in the ORES project, and I have a question about how to tag edits that are not vandalism and are done in good faith, but should be undone since they do not follow policy or style recommendations. An example would be this edit. The user changed a wikilink into a bare-URL reference; the problem is that Wikipedia itself is not a reliable source for Wikipedia, so the edit should be undone. Should I tag the edit as damaging even though it is not vandalism? What criteria should I apply in other cases? Regards.--Crystallizedcarbon (talk) 18:26, 12 August 2016 (UTC)


 * Hi Crystallizedcarbon. I think you have the idea exactly right. You should tag edits like these as "damaging" and "goodfaith". We're actually intending to use the models to specifically look for these types of edits, because goodfaith new users who would respond positively to training and guidance tend to make these types of mistakes. When you think about "damaging", consider what patrollers would want to review and clean up. There are a lot of edits that are well-meaning (and maybe even not all that bad!) but that still damage the article.


 * Thanks for asking this question. If you wouldn't mind, I'd appreciate it if you'd add these notes (and your example) to the documentation at es:Wikipedia:Etiquetando/Valorar_calidad. --EpochFail (talk • contribs) 18:46, 12 August 2016 (UTC)
 * Thank you for your quick answer. Will do. Regards. --Crystallizedcarbon (talk) 18:50, 12 August 2016 (UTC)
 * Should I add it on the project page itself or on the talk page instead? --Crystallizedcarbon (talk) 18:54, 12 August 2016 (UTC)


 * Crystallizedcarbon I'd add it to the project page and then make a post about it on the talk page. It might be nice to kick off a conversation to check if anyone else has a question. --EpochFail (talk • contribs) 18:55, 12 August 2016 (UTC)
 * Good idea.--Crystallizedcarbon (talk) 18:56, 12 August 2016 (UTC)
 * ✅ --Crystallizedcarbon (talk) 19:38, 12 August 2016 (UTC)

RfC: Protect user pages by default
A request for comment is available on protecting user pages by default from edits by anonymous and new users. I am notifying you because you commented on this proposal when it was either in idea or draft form. Funcrunch (talk) 17:34, 31 August 2016 (UTC)

A cup of coffee for you!
I should just ask - where can I find numbers that are up to date, at the high-concept yearly level, for demographics of Wikimedia project contributors? I need numbers for enwp, for the other Wikipedias collectively, and for Wikimedia projects in total. Also, I am increasingly feeling a need to confirm which sorts of numbers should be recognized as part of our culture of outreach and discussion. I am quoting numbers regularly enough to justify checking in with others about consistent messaging. Does anyone in research define the press kit? Do we have a press kit? Thanks.  Blue Rasberry  (talk)  15:58, 9 November 2016 (UTC)
 * 1) editors
 * 2) active editors
 * 3) highly active editors
 * Fixed it in m:Special:Diff/16049390. It looks like meta only understands the "m" interwiki prefix.  Thanks for calling out the issue :) --EpochFail  (talk &bull; contribs) 16:00, 9 November 2016 (UTC)
 * Bluerasberry, Hmm.. I don't know about a press kit. I don't really work on the day-to-day reporting needs. What's our current recommendation here? I'd guess that it's to use the current wikistats definitions/data. --EpochFail (talk • contribs) 16:14, 9 November 2016 (UTC)
 * I would look as you direct me. In NYC increasingly we are asked for numbers and saying all sorts of things in public. If there were recommended messaging then we would adopt it. Each Wikipedia contributor represents the entirety of the movement to the people with whom they talk, and I would like the people around me to appear at their best and as they wish to be.  Blue Rasberry   (talk)  16:46, 9 November 2016 (UTC)
 * Yes, right now it's a bit hard to get this information. We have wikistats and other dashboards that often use different definitions.  The big work for this year is to standardize this and make a single place where you can go and find these kinds of numbers you can depend on.  Until then, [very [active ]]editors are on https://stats.wikimedia.org/EN/TablesWikipediaEN.htm and similar pages.  And some other numbers like pageviews and uniques can be found in trends on https://analytics.wikimedia.org/dashboards/vital-signs/ and broken down geographically at https://stats.wikimedia.org/wikimedia/squids/SquidReportPageViewsPerCountryOverview.htm .  It is my great passion to make this better, and I promise to work really hard on it :) Milimetric (talk) 17:56, 9 November 2016 (UTC)
 * Bluerasberry: ^ --EpochFail (talk &bull; contribs) 18:13, 9 November 2016 (UTC)
 * Thanks both of you. This answers my question. Also, if it ever happens that you need community support or comment on this kind of reporting, write me, and I will share your request with the Wikipedians in New York City and elsewhere. We get asked all kinds of questions and when you back us up with good information, then that makes us better able to present Wikimedia projects accurately, effectively, and with confidence. People often ask about these things and are interested to know.  Blue Rasberry   (talk)  19:37, 9 November 2016 (UTC)

A "deleted" model for ORES
Now that the ORES beta feature has been adopted by 6,000 editors to assist them in recent changes patrolling, I believe that it would be beneficial to extend ORES to new pages patrolling. While counter-vandalism is important, new pages patrolling is critical, being our first line of defense against cruft, vandalism, and garbage. Unfortunately, valid creations are lumped in with a sea of invalid creations that will inevitably be deleted, and that is where I believe ORES can step in. As a start, Special:NewPages or Special:NewPagesFeed can implement the "damaging" or "reverted" model, which will help catch articles that meet WP:CSD (obvious vandalism and hoaxes).

However, I propose the creation of a new "deleted" model that, given a diff, assigns a probability that it will be deleted. I believe that this will be easy to train, not requiring Labels: there are plenty of deleted articles and plenty of valid articles in mainspace. To go further, the model should also give the probability that it falls under certain CSD criteria, which can help speed up the process. Again, this is easy: administrators leave a summary containing the CSD criterion that an article was deleted under. However, such a model can have significant positive effects on new pages patrolling. Esquivalience (talk) 01:14, 3 November 2016 (UTC)
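As a sketch of how deletion summaries could seed labels for such a model: enwiki deletion log summaries usually cite the criterion (e.g. "[[WP:CSD#A7|A7]]: No credible claim of significance"), so a simple pattern match can bucket deleted pages by CSD criterion. The summary formats and regex below are illustrative examples, not an exhaustive grammar of admin practice.

```python
import re

# CSD criteria look like a letter (A, G, R, F, C, U) followed by a
# one- or two-digit number, e.g. A7, G11, F8.
CSD_RE = re.compile(r"\b([AGRFCU]\d{1,2})\b")

def csd_criterion(summary):
    """Return the first CSD criterion code in a deletion log summary,
    or None if the summary doesn't cite one."""
    match = CSD_RE.search(summary)
    return match.group(1) if match else None
```

Grouping a dump of deletion log entries by this code would give per-criterion training sets essentially for free.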


 * Hi Esquivalience. I think that this is an interesting proposal. It comes up from time to time. I think it's a good idea, but we need something more nuanced than a deletion prediction system. It turns out that there are some questions an AI can answer by processing draft content (is this an attack page? Is it spam? Is it blatant vandalism?) and some that it can't (is this topic notable?). So I've been working on modeling the bits that are inherent in the content of the draft. I'm pretty close to having something that is useful too. See m:Research:Automated classification of draft quality and a related proposal, Fast and slow new article review. What do you think about moving in this direction? --EpochFail (talk • contribs) 15:01, 3 November 2016 (UTC)
 * Any move towards implementing ORES in the new pages patrol process would be great! One note about detecting non-notable articles: although it would not be possible to detect non-notable articles, maybe it would be possible to do the next best thing and detect articles that have no claim of significance (A7), perhaps using Bayesian classifiers on n-grams. Such a tool would help tackle the backlog of A7 articles. Esquivalience (talk) 03:13, 14 November 2016 (UTC)
 * I think that is possible. It's not something I've looked into very much. It seems like we might be able to use a PCFG strategy to identify sentences that state importance. I'm working on building PCFGs for spam, vandalism, and attack pages. (see T148037) We could build a similar model that identifies and highlights a sentence that is most likely to be stating importance (or the lack of such a sentence) in a review interface. I've created a new task (T150777) for looking into this. --EpochFail (talk • contribs) 17:10, 15 November 2016 (UTC)
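A toy sketch of the Bayesian n-gram idea raised above: score sentences by how strongly their word bigrams are associated with "claim of significance" sentences in a labeled sample. Real training data, tokenization, and smoothing choices are left open; everything here is illustrative, not the PCFG approach discussed in the reply.

```python
import math
from collections import Counter

def bigrams(sentence):
    """Word bigrams of a sentence, lowercased."""
    words = sentence.lower().split()
    return list(zip(words, words[1:]))

class SignificanceScorer:
    """Log-odds-style scorer: higher scores mean the sentence looks
    more like a claim of significance, given the training sample."""

    def __init__(self):
        self.pos = Counter()  # bigram counts in claim-of-significance sentences
        self.neg = Counter()  # bigram counts in other sentences

    def train(self, sentence, is_claim):
        (self.pos if is_claim else self.neg).update(bigrams(sentence))

    def score(self, sentence):
        total = 0.0
        for bg in bigrams(sentence):
            # Laplace smoothing keeps unseen bigrams from dominating.
            total += math.log((self.pos[bg] + 1) / (self.neg[bg] + 1))
        return total
```

A review interface could then surface the highest-scoring sentence in a draft, or flag drafts where no sentence scores above zero.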

Inline notes
Hi there

I am trying to use your gadget. It works fine when I try to edit a phrase, but I get nothing when I try to add an inline note.

I hold the ctrl key and press the secondary button on the mouse, but nothing happens (working on Windows 10 with Chrome or Firefox; not Internet Explorer, as expected).

Hope there's something I could do.

Amglez (talk) 09:36, 21 November 2016 (UTC)


 * Hey there, Amglez. Which gadget are you referring to? --EpochFail (talk • contribs) 15:01, 21 November 2016 (UTC)


 * Hi, User:EpochFail. I am a newbie, so be very slowwwww, please. The Wikignome gadget.
 * By the way, I am trying to use it on my own (personal) wiki. I do not quite get what I need to use it, but I understand I need to copy the actual code, as I cannot import your importScript("User:EpochFail/wikignome.js") on a different domain. Would you help me please? — Preceding unsigned comment added by Amglez (talk • contribs) 20:00, 22 November 2016 (UTC)


 * Hi Amglez. Does the wikignome script work OK for you on English Wikipedia? FWIW, most of the code is at User:EpochFail/wg_system.js. You'll need that too if you want Wikignome to work on your local wiki. --EpochFail (talk • contribs) 20:11, 22 November 2016 (UTC)


 * The Wikignome script works fine when editing inline with Firefox and Chrome, and you can edit notes all right once you have made the note manually. It won't show a dialog for creating a new note with ctrl + secondary mouse button, as I had understood it would. As for your advice on making it work on my wiki, I will try it next and be back. Thanx Amglez (talk) 11:04, 26 November 2016 (UTC)

Summer of Research 2011
Hi, You were part of the team that provided the above research. Most of the results of that effort were extremely valuable and were partly what triggered my volunteer collaboration with the WMF to seek and develop solutions for correctly patrolling new pages. I am wondering if it would be possible to get the 2011 research updated, or to let me know if that has already been done. All that would be needed is to update some of the graphs and charts using the same formulae again. I'm contacting you because you are one of the few employees left from that era, and I would assume you would know who to ask, or could at least let me know who I can ask. --Kudpung กุดผึ้ง (talk) 09:23, 27 November 2016 (UTC)


 * Hi Kudpung. I saw that you made a duplicate post at User talk:Jtmorgan, so I'll respond there. --EpochFail (talk • contribs) 14:58, 28 November 2016 (UTC)

Meetup
Hello. Here's an event happening soon. Might you be able to make it? Jonathunder (talk) 14:44, 11 December 2016 (UTC)


 * Hi Jonathunder! I might be able to make it. I'm definitely overdue for a local meetup. I put it on my calendar and will try to make it work. --EpochFail (talk • contribs) 15:47, 12 December 2016 (UTC)

Editor of the Week seeking nominations (and a new facilitator)
The Editor of the Week initiative has been recognizing editors since 2013 for their hard work and dedication. Editing Wikipedia can be disheartening and tedious at times; the weekly Editor of the Week award lets its recipients know that their positive behaviour and collaborative spirit is appreciated. The response from the honorees has been enthusiastic and thankful.

The list of nominees is running short, and so new nominations are needed for consideration. Have you come across someone in your editing circle who deserves a pat on the back for improving article prose regularly, making it easier to understand? Or perhaps someone has stepped in to mediate a contentious dispute, and did an excellent job. Do you know someone who hasn't received many accolades and is deserving of greater renown? Is there an editor who does lots of little tasks well, such as cleaning up citations?

Please help us thank editors who display sustained patterns of excellence, working tirelessly in the background out of the spotlight, by submitting your nomination for Editor of the Week today!

In addition, the WikiProject is seeking a new facilitator/coordinator to handle the logistics of the award. Please contact if you are interested in helping with the logistics of running the award in any capacity. Remove your name from here to unsubscribe from further EotW-related messages. Thanks, Kevin (aka L235 · t · c) via MediaWiki message delivery (talk) 05:19, 30 December 2016 (UTC)

Your DOB
Hi Mr. Halfaker (or Aaron if I can call you that), as you may have noticed, the article about you says you were born on December 27, 1983. However, I couldn't find a source that supports this being your date of birth. Could you confirm that you were born on this date (e.g. maybe by tweeting it, like did)? That way I can cite a source for it in the article. Everymorning (talk) 20:31, 31 January 2017 (UTC)
 * Hi Everymorning. Sure! See https://twitter.com/halfak/status/826529576906059780. --EpochFail (talk • contribs) 20:37, 31 January 2017 (UTC)

Spoofing by machines
Hi Aaron, I'm told that you might be interested in tools to detect edits made by machines that have been programmed to spoof human interaction. I am too; my approach involves using subtle points of language knowledge to tell the difference. What approach were you thinking of using? - Dank (push to talk) 19:06, 31 January 2017 (UTC)
 * Hi Dank! It turns out that I've done some work that might be relevant to your interest in bot detection in Wikipedia. See [1] and [2] for some work I did tracking strong regularities in the temporal rhythms of human activity. It turns out that, when looking at the inter-activity time of an "editor" in Wikipedia or OpenStreetMap, bots stick out in some very obvious ways due to their inhuman temporal rhythms. When I was doing the cited research, I was able to use the disruptions in temporal rhythms to identify the presence of bots in the dataset. I've been thinking about using this insight to build a generalized bot detection system, but I haven't had time to look into that.
 * Geiger, R. S., & Halfaker, A. (2013, February). Using edit sessions to measure participation in wikipedia. In Proceedings of the 2013 conference on Computer supported cooperative work (pp. 861-870). ACM. http://stuartgeiger.com/cscw-sessions.pdf
 * Halfaker, A., Keyes, O., Kluver, D., Thebault-Spieker, J., Nguyen, T., Shores, K., ... & Warncke-Wang, M. (2015, May). User session identification based on strong regularities in inter-activity time. In Proceedings of the 24th International Conference on World Wide Web (pp. 410-418). ACM. http://arxiv.org/pdf/1411.2878
 * --EpochFail (talk • contribs) 19:54, 31 January 2017 (UTC)
 * Great, that's an elegant, and probably effective, approach to the problem. I'll shoot some code over to you when I have some to share. - Dank (push to talk) 21:18, 31 January 2017 (UTC)
 * Dank, sounds great. One more thing to share. I built a library to make session analysis a bit easier. See https://pythonhosted.org/mwsessions/ and mw:mediawiki-utilities. This might be a good opportunity to build in a module for measuring the distance between different distributions of inter-activity timing. :D --EpochFail (talk • contribs) 22:34, 31 January 2017 (UTC)
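A minimal sketch of the inter-activity-time idea from the papers cited above: split an editor's timestamps into sessions with a one-hour cutoff (the cutoff suggested by the session research), then look at how regular the within-session gaps are. Near-constant gaps are one bot-like signal; the variance threshold and minimum gap count below are purely illustrative, and a real detector would compare whole gap distributions (e.g. with the mwsessions library) rather than a single standard deviation.

```python
import statistics

CUTOFF = 3600  # seconds; inter-activity cutoff between sessions

def sessionize(timestamps):
    """Group unix timestamps into sessions separated by gaps > CUTOFF."""
    sessions, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > CUTOFF:
            sessions.append(current)
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions

def looks_bot_like(timestamps, max_stdev=1.0):
    """Flag editors whose within-session gaps are suspiciously regular."""
    gaps = [b - a
            for session in sessionize(timestamps)
            for a, b in zip(session, session[1:])]
    return len(gaps) >= 5 and statistics.stdev(gaps) <= max_stdev
```

Humans show heavy-tailed, irregular gap distributions; an account editing exactly every 60 seconds trips the flag immediately.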

The way forward with ORES vs NPP
, I watched your hour-long video with great interest. Thank you so much for listening equally to my hour-long rant.

Whilst I welcome any form of AI that will help preserve what is now a seriously dwindling public respect for the quality of Wikipedia content, before deploying (or even developing) ORES for Page Curation, we need to establish why the patroller community is largely resistant to adopting the New Pages Feed and its Page Curation Tool as the default process for controlling new intake. The reasons are actually quite clear but on its own admission the Foundation no longer regards addressing them as a priority.

One important way to address this and significantly reduce the fire hose of trash is to educate new users, the instant they register, on what they can and cannot insert into Wikipedia. A proper welcome page has never been developed, and a 2011 attempt by to create one (Article Creation Work Flow) as part of the Page Curation process was thwarted by internal events within the WMF. This was the other half of the Page Curation project, which was begun by the Foundation in answer to the community's overwhelming demand for WP:ACTRIAL, which might now soon become a reality.

AI is not a panacea - it should assist, but not seek to replace, the dysfunctional human aspect of the triage of new pages, or be a palliative for the improvement of the parts of the Curation GUI that the Foundation will not prioritise. New Page Patrolling is the only firewall against unwanted new pages; not only is it now a very serious and critical issue, but it should be the Foundation's single top priority before anything else of any kind. Kudpung กุดผึ้ง (talk) 00:32, 4 February 2017 (UTC)

A cup of coffee for you!

 * Hey Bluerasberry! The best reference that I have handy is m:Research:Surviving new editor. I think that the 3-month "trial" plots should jibe with the 3-month retention measure you describe. Let me know if you have questions. This retention measure is pretty easy to replicate in https://quarry.wmflabs.org --EpochFail (talk • contribs) 01:37, 1 March 2017 (UTC)

Yes this is what I wanted to see. Thanks.  Blue Rasberry  (talk)  15:01, 3 March 2017 (UTC)
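For anyone replicating the measure discussed above, the shape of a "surviving new editor" check is simple: a newcomer survives if they edit at least once in a survival window that follows an initial trial period. The 30/60-day windows below are illustrative placeholders, not the exact definitions used in m:Research:Surviving new editor; on Quarry the same logic runs as a SQL query over registration and revision timestamps.

```python
from datetime import datetime, timedelta

def is_surviving(registration, edit_times,
                 trial=timedelta(days=30),
                 survival=timedelta(days=60)):
    """True if the editor made any edit in the survival window
    [registration + trial, registration + survival)."""
    start = registration + trial
    end = registration + survival
    return any(start <= t < end for t in edit_times)
```

Aggregating this boolean over a registration cohort gives the retention rate for that cohort.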

Are you involved with this?
Wikimedia Strategy 2017

It may be a good place to share your AI vision for MediaWiki/Wikipedia.

I just put my two cents in. See Wikipedia talk:Wikimedia Strategy 2017. The Transhumanist 09:02, 24 March 2017 (UTC)


 * Thanks for the ping and thanks for your post there. I've been somewhat ignoring the strategy process because it's yet another thing to pay attention to, and I've had to juggle a lot of things recently. I'll make sure to go make a statement there too. --EpochFail (talk • contribs) 14:39, 24 March 2017 (UTC)

WikiProject Council/Proposals/Wiki Brain
Something else I just ran across.

What is the feasibility of the development of a Wikipedia-based question answering application? The Transhumanist 08:15, 23 March 2017 (UTC)


 * I wasn't familiar with this. There's something that seems to be unrelated called "wikibrain" too: https://shilad.github.io/wikibrain/ This is coming out of the group that has been looking into challenging the article-as-concept assumption. --EpochFail (talk • contribs) 14:43, 24 March 2017 (UTC)


 * I like that they made their model available as an extension to Wikibrain. And I like that Wikibrain parses Wikipedia data. Though, yet another programming language to learn (Java). *Sigh* The Transhumanist 19:49, 24 March 2017 (UTC)

St. Cloud, April 15, 2017: Wikipedia as Social Activism
Please come and join if you can!

Shaded0 (talk) 22:51, 29 March 2017 (UTC)

Automating topic classification on Wikipedia
I noticed your involvement and desire to integrate AI into Wikipedia, and I was wondering if I could pick your brain.

I'm interested in technologies for building, maintaining, and expanding navigation taxonomies for browsing Wikipedia, including writing scripts, but especially AI (machine learning, natural language processing, and automatic taxonomy construction).

I would like to leapfrog the awkward and content-challenged category system.

I've reached the limit of what I can do manually. See Portal:Contents/Outlines.

My goal is to create 10,000 outlines. I estimate that would provide a fairly robust coverage of Wikipedia's content. But even at a rate of producing 100 outlines per year manually, that would take me 100 years.

I've tried recruiting editors for the task, and have found that this is not feasible or scalable. At one point, it took half my editing time to build and assist a team of 9 editors, and I still did more in the other half of my editing time than all of them combined.

But there are so many outlines now, that it would be more than a full-time job just to maintain and update the ones we have, let alone create more new ones. So, the outlines are becoming dated. And due to Wikipedia's ever-expanding coverage on everything, updating an outline fully takes about as much effort as creating it did in the first place.

Doing the math... We have over 700 outlines, and assuming an editor working on them full-time could service about 100 of them per year, it's over a 7-year job just to update them all once. And under that approach, each outline only gets overhauled once every seven years. Without creating any new ones.

And you see my dilemma.

So, I now turn to automation.

I'm currently learning Perl and JavaScript, and while these can be used to build tools to assist in topic list maintenance, creation, and display, they will not be able to do enough of the work to make the project scalable.

To be adequately scalable, with primarily one person working on the project, it would have to reach 99% automation, with a high quality level. So I've decided to take the plunge into deep learning, without abandoning the development of the more basic tools just mentioned.

But of course, there is another problem...

Outlines are evolving.

They started out as bare topics lists, with content like this:


 * Pawn structure
 * Backward pawn
 * Connected pawns
 * Doubled pawns

But clicking on entries just to see what they are about, or what they mean, is time-consuming. So, to assist in topic selection (like menus at restaurants do), annotations describing them started to be added, like this:


 * Pawn structure – describes features of the positions of the pawns. Pawn structure may be used for tactical or strategic effect, or both.
 * Backward pawn – pawn that is not supported by other pawns and cannot advance.
 * Connected pawns – pawns of the same color on adjacent files so that they can protect each other.
 * Doubled pawns – two pawns of the same color on the same file, so that one blocks the other.

Now the outlines are becoming a hybrid of topic outlines and sentence outlines. (And more involved to build.)

This has caused confusion among some editors unfamiliar with outlines or their purpose, who conclude that they are essentially prose articles that content fork the root article. (Outlines actually go beyond the scope of their corresponding root articles, as they attempt to classify all of the coverage on their respective subjects across all of Wikipedia). This has prompted a number of AfD's of serviceable outlines, of which I've caught all but one so far. Outline of cricket wasn't so lucky.

At the same time, the annotations obscure the topics they are describing, making it harder to browse than the bare lists the outlines started out as. The bare list nature of Outline of the Bible helped save it from its AfD. Some people prefer the bare list format.

Automating the development of outlines shall include automatic annotation of their entries. But this will exacerbate the problem just explained, necessitating a preventive measure (development of a productivity tool) before I can move on to the good stuff...

So, in order to provide the benefits of both bare and annotated lists, I started working on a script to hide/show annotations via toggle. See User:The Transhumanist/anno.js. (Try it out on Outline of chess.) So you could browse a bare list, and when you need it to be a menu, just push the magic button.

It works... sort of.

It hides the annotations, and even brings them back when you hit the hot-key. But the reader is jolted away from what he was reading, because as content is removed or restored, the text moves relative to the viewport! Unless you happen to be at the beginning of the annotated entries.

I find myself stuck on this particular problem, and further overwhelmed at the prospect of creating AIs.

Being that you are a consummate programmer and a Wikipedia progressive, I was hoping you could help me get unstuck. Would you please:


 * 1) Show me how, or point me in the right direction, to fix the viewport problem. (Or refer me to somebody who can).
 * 2) Explain how AI could be applied to building topic taxonomies made up of title links and section links (for those topics that don't have their own article, but do have their own section in one), so I know what to focus on. And provide any tips or observations on how I could approach this project.
 * 3) If you have time, please take a look at automatic taxonomy construction to see if there is anything you can correct or add.

By the way, any comments on any of the above would be most appreciated.

I have been working on the outlines for about 10 years, and plan to continue to do so. Any guidance you can provide to enable me to build state-of-the-art tools for improving outlines will not be in vain.

I hope this wasn't TLDR.

Sincerely, The Transhumanist 04:18, 8 March 2017 (UTC)


 * The Transhumanist, this is a fascinating problem space and your summary is very useful to understand where you are coming from. I've got a lot of other plates spinning right now, but I'll keep reading up on outlines from what you've linked me here and I'll ping you again when I've got a sense for what you're looking for.  I have two notes for you in the meantime:
 * Re. programming stuff: JavaScript is a great investment.  Perl, I'm a little bit more skeptical of.  If you're still just starting out, I'd highly recommend Python as there's a vibrant community of wiki-developers and researchers who use Python.  See mw:mediawiki-utilities for a set of libraries that make doing wiki stuff easier.
 * Re. viewport movement: You might be able to address the anno.js issue by using <tt>visibility: hidden</tt> rather than <tt>display: none</tt> to hide the annotations. I'll need to check out the behavior more carefully to be sure, but thought this might set you on a good path in the meantime.  https://stackoverflow.com/questions/133051/what-is-the-difference-between-visibilityhidden-and-displaynone  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:26, 8 March 2017 (UTC)
 * Thank you for the tips. Replacing annotations with white space would be a good interim solution.  But it would be awkward, as the list items would appear haphazardly spaced. The long-term solution would still be to reset the viewport location.  I'm pretty sure there is a way to do it, and a way to calculate the distance of the viewport from the top of the page.  I just can't find any documentation on how to do it. Any and all solutions would be appreciated.  Thanks!  I look forward to your further input. I've got over a year invested in perl, but I will definitely start studying Python.  I've heard it has some good natural language processing libraries.  The Transhumanist 19:29, 8 March 2017 (UTC)
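 * A minimal sketch of the scroll-compensation idea being discussed (the helper names here are hypothetical, not part of anno.js): record where an anchor element sits in the viewport before the toggle, then adjust the scroll position afterwards so the anchor lands back in the same place.

```javascript
// Pure helper: given the current scroll position and the anchor's
// viewport offset before and after the toggle, compute the corrected
// scroll position that keeps the anchor where it was.
function correctedScrollY(scrollY, anchorTopBefore, anchorTopAfter) {
  return scrollY + (anchorTopAfter - anchorTopBefore);
}

// In the browser it could be wired up roughly like this (assumes some
// element near the top of the viewport has been chosen as the anchor):
function toggleAnnotationsKeepingPlace(toggleFn, anchor) {
  const before = anchor.getBoundingClientRect().top;
  toggleFn(); // hide or show the annotation spans
  const after = anchor.getBoundingClientRect().top;
  window.scrollTo(window.scrollX,
    correctedScrollY(window.scrollY, before, after));
}
```

This avoids the white-space drawback of <tt>visibility: hidden</tt>: annotations can be fully removed from the layout, and the compensation keeps the reader's place.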

Some links and elaboration concerning outlines

 * Outline (list) – I've just revamped this article to provide an overview of the various types of outlines, so you can see how Wikipedia outlines fit in.
 * Outlines – explanation of Wikipedia outlines, and guidance on building them.

I hope these help.

One thing to keep in mind about outlines intended for processing is their parsability. A true outline format is formally and rigidly hierarchical. One problem with Wikipedia's outlines is that they drift from outline format, due to editor error and obliviousness.

Something on my wish list is a conversion program that converts Wikipedia outline format to an outliner program format, and vice versa. Or better yet, an outliner that includes an import/export feature for Wikipedia outlines.

What is the relationship between outlines and item lists? Outlines are structured lists, which is another way of saying that outlines are collections of lists arranged as a tree. The branches of outlines are lists. Lists on Wikipedia (articles with the title "List of") are branches of outlines (except tables, which aren't lists at all, and are mislabeled articles that compete for list titles -- tables are matrices, in which each column is a list -- don't get me started on tables!). It is not necessary to include the entire contents of a list in an outline, as you can summarize it and provide a link to the whole list.

A script I'm writing is User:The Transhumanist/OLUtils.js, which will be a collection of utility functions for working on outlines. Currently, it includes a red link stripper that I am developing to assist in building outlines. So, if you use a template to build an outline of a city, where the template includes every conceivable article name that might have a city's name in it, the stripper will be able to remove the extraneous red links it creates. The red link stripper will enable me to make outlines of a standardized nature (for cities, counties, provinces) much faster. Removing red links by hand is tedious and time-consuming.
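A minimal sketch of that red-link-stripping idea (assumptions: the outline is available as lines of wikitext, and the set of missing titles has been fetched separately; on a live rendered page one would instead look for links with the "new" CSS class):

```javascript
// Keep only the outline lines whose first wikilink target exists.
// `missingTitles` is a Set of page titles known to be red links.
function stripRedLinks(wikitextLines, missingTitles) {
  return wikitextLines.filter(line => {
    const m = line.match(/\[\[([^\]|]+)/); // first [[link]] target on the line
    return !(m && missingTitles.has(m[1]));
  });
}
```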

But, it would be so much nicer to be able to simply say to your computer "build me an outline on Benjamin Franklin", and have a fully developed and referenced outline an instant later, linking to all of the articles (and article sections) on Wikipedia about Ben, with annotations describing the subject of each link. Hence my dream of 10,000 outlines. I'm hoping AI will serve as a wish-granting genie.

That's the top-down approach.

Then there's the bottom-up approach. You come across a link on Wikipedia. You click on the link to send it to a topic-placer script, which then checks to see if the topic is in an outline (or list). If not, it figures out which outline(s) (or lists) it belongs in, where it should go, and then inserts it there. This approach taken to its limit would be a program that takes every article title from Wikipedia, and places each of them in either an outline or an item list (list article), creating relevant pages if they don't yet exist. Note that all the list article titles would be placed in either an outline or a list of lists, and that all lists of lists (just their titles) would be placed in outlines. That would result in a complete navigation system of Wikipedia, in which all articles were included. It would be nice to enhance that to include all article section titles as well.

If you have any questions, please feel free to ask. The Transhumanist 21:48, 9 March 2017 (UTC)

P.S.: MadScientistX11 just did a rewrite of automatic taxonomy construction, vastly improving upon what was there before. The Transhumanist 22:18, 9 March 2017 (UTC)
 * Hey The Transhumanist. I finally made it through your verbose (but awesome) writeup and had a look at a few examples.  Regretfully, I'm not sure how much I can help, but let me start by throwing out some ideas.  So there are some researchers who have been deconstructing the article-as-concept assumption by looking at sub-article references.  Essentially, they are trying to identify which articles represent sub-articles of other articles.  E.g. United States and History of the United States.  I know they are looking for automatic ways to identify these, and there's been some discussion about capturing this structure in Wikidata.  See their paper here: http://brenthecht.com/publications/cscw17_subarticles.pdf  Let me know if you are interested in what they are working on and I'll try to pull them into this discussion.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 13:53, 16 March 2017 (UTC)
 * I'm in the process of digesting the paper you cited, and will have a reply with more substance once I'm through it. In the meantime, here are some questions and thoughts for you to consider... I'm new to machine/deep learning, and so I'm nearly oblivious to the terminology and the structure of the technology's implementation. What types of things can a deep learning program learn? And how must the learning content be framed in order for it to learn it?
 * With respect to outlines, I want a program that builds and maintains outlines automatically, from scanning/analyzing articles in the database to determine which ones are subtopics of the subject being outlined, to arranging the subtopics in the outline according to their relationships to each other, to adding an annotation for each entry.  And I want the program to get better at building outlines over time, via training.  (I don't have a clue how this is accomplished, and so some pointing in the right direction would be a big help. What would be a good place to start?)
 * For example, one of the components would be automatic annotation. For each bare entry (each entry starts out as just a topic name), to annotate it a human editor goes to the article on that topic, excerpts a description from its lead, then edits it to fit the context of the entry, and inserts it into the outline after the topic name separated by an en dash. I'd like a computer program to do all that, and be trained to do it better as it goes along. What would the structure of such a program be? How would the task/problem be specified in the program?  And in what form would the feedback loop be?
 * I'd like to understand the general approach, so that I can put all the reading I have ahead of me into perspective a little easier. Right now, it's like reading Greek! I look forward to your reply. The Transhumanist 19:57, 24 March 2017 (UTC)
 * P.S.: I ran across the term "epoch", as applied in your user name: "During iterative training of a neural network, an Epoch is a single pass through the entire training set, followed by testing of the verification set." -TT
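 * A minimal sketch of the automatic-annotation step described above (the helper is hypothetical and assumes the article's lead is already available as plain text; a real version would need far more robust sentence handling, and this one assumes the topic name contains no regex metacharacters):

```javascript
// Take the first sentence of an article's lead, drop the repeated
// topic name, and format it as an outline entry: "Topic – description".
function annotateEntry(topic, leadText) {
  const firstSentence = leadText.split(/(?<=\.)\s/)[0];
  // Strip a leading "Topic is/are ..." so the annotation reads as a phrase.
  const description = firstSentence
    .replace(new RegExp(`^${topic}\\s+(is|are)\\s+`, 'i'), '')
    .replace(/\.$/, '');
  return `${topic} – ${description}`;
}
```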


 * It's still pretty much Greek, though fascinating. The journal article you recommended describes work done on the WikiBrain framework. So I started looking into that, and read this article: http://www.brenthecht.com/publications/WikiBrain-WikiSym2014.pdf
 * From what I gather so far, it appears to be a sophisticated search engine using semantic relatedness algorithms. In the example provided, they first had WikiBrain return the articles most closely related to the term "jazz", then they did the same for movies, and then did a "most similar" op to come up with a list of jazz movies.
 * But there must be more to it, for it makes this claim:
 * "WikiBrain empowers developers to create rich intelligent applications and enables researchers to focus their efforts on creating and evaluating robust, innovative, reproducible software."
 * What do you make of WikiBrain? The Transhumanist 01:08, 4 April 2017 (UTC)


 * Hey The Transhumanist! I think WikiBrain is a great project.  I really value that the researchers have been working hard to get WikiBrain working on WMF infrastructure so that it will be easier for Wiki developers to work with.  Regretfully, we're currently blocked on getting a decent allocation of disk space.  It turns out that WikiBrain needs a single large hard-drive allocation in order to store and use a big index file to work from.  See T161554.  The task is pretty recent, but I've been trying to work out this hard-drive allocation problem for years for this project and some related ones.


 * I just realized I never responded to your question about my username. :)  I'd come up with the name based on this comic https://xkcd.com/376/.  I do lots of date-math and there's this fun thing called a unix timestamp that is based on an "epoch" on Jan 1st 1970.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 13:44, 4 April 2017 (UTC)

Outline of machine learning
I've made the best start I can, listing everything I could find on Wikipedia pertaining to this subject. There are certainly gaps in coverage that you would be able to spot that my untrained eyes would not. The Transhumanist 22:38, 4 April 2017 (UTC)

May I pick your brain, please?
Everything you throw my way opens up new vistas, which inspire yet more questions. Here's my latest slew of queries for you... The Transhumanist 22:16, 7 April 2017 (UTC)


 * Hey The Transhumanist, I only wish I had more time to chat. You bring a fun discussion.  I'm going to break up all of your questions  so that we can track them a bit easier.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)

What are the most interesting things you have built with machine learning?
What are the most interesting things you have built with machine learning? What and how do those "learn"? The Transhumanist 22:16, 7 April 2017 (UTC)


 * I haven't actually built that *interesting* of an AI before. Really, my work is at the intersection of social practices (like all of the different types of work humans put into making Wikipedia productive and sustainable) and technology.  I think there's a *huge* untapped potential for simple AIs to support our work, so I've invested extensively in simple en:supervised learning with well-known methods.  So I can talk a lot more about evaluation of classifiers than I can about learning/optimization methods.  So, I'm going to interpret your question very broadly.


 * I think the coolest ML thing I have built is the injection cache that we use in ORES. Check this out.
 * Let's score Special:Diff/774447266. https://ores.wikimedia.org/v2/scores/enwiki/damaging/774447266 (92.93% damaging)
 * Cool. But what about this makes it damaging?  Let's have a look at the features that are used when making a prediction.
 * ORES lets you ask for the features used in scoring.
 * Some of these are hard to read, but others are more clear. We can see <tt>"feature.revision.user.is_anon": true,</tt>.  I wonder what ORES would think if this were a registered editor?
 * https://ores.wikimedia.org/v2/scores/enwiki/damaging/774447266?feature.revision.user.is_anon=false&feature.temporal.revision.user.seconds_since_registration=1209600 gives us (85.52% damaging)
 * So we can tell that the fact that the editor is anon is part of why ORES thinks this edit needs review. This is just one of the cool things that you can do with the injection system.  E.g. I'm working with WikiEd to implement edit recommendations using the article quality model and this strategy. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)
 * User:WikiEd hasn't contributed since 2008. Was that a typo? The Transhumanist 22:43, 9 April 2017 (UTC)
 * Sorry, I meant WikiEdu. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 00:23, 11 April 2017 (UTC)
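 * The feature-injection calls above boil down to adding query parameters to the score URL. A small sketch (assuming only the v2 endpoint shape shown in the example links):

```javascript
// Build an ORES v2 score URL, optionally injecting feature values as
// query parameters (endpoint shape as in the example links above).
function oresScoreUrl(wiki, model, revId, injectedFeatures = {}) {
  const base = `https://ores.wikimedia.org/v2/scores/${wiki}/${model}/${revId}`;
  const qs = new URLSearchParams(injectedFeatures).toString();
  return qs ? `${base}?${qs}` : base;
}
```

For example, passing <tt>{'feature.revision.user.is_anon': 'false'}</tt> reproduces the "what if this editor were registered?" query above.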

Is there an overview of resources?
JavaScript the Right Way distills the JavaScript field and subject down to their key elements and resources. What can you recommend that is similar in scope and function for machine learning and natural language processing? The Transhumanist 22:16, 7 April 2017 (UTC)


 * Nothing like that I can think of for python. But http://scikit-learn.org/stable/ is a great reference for doing machine learning in python.  The docs are almost like a textbook. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)


 * Doesn't WikiBrain use Java? Where does Python fit in? The Transhumanist 10:52, 17 April 2017 (UTC)


 * Yup. That's right. WikiBrain is Java.  ORES is python.  I think that Python's machine learning libraries are better than Java's.  They have better documentation and a better community.  But that's just my (limited) opinion.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:50, 17 April 2017 (UTC)

What's going on with WikiBrain?
What is your experience with WikiBrain? The Transhumanist 22:16, 7 April 2017 (UTC)


 * Almost no direct experience, but I've seen a lot of what it can do. E.g. http://spatialization.cs.umn.edu/  I know the WikiBrain folks have some cool new views into the concept space of Wikipedia coming out soon.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)
 * Thank you for the crumb. I'm hungry for more! What else can it do? Can you provide links to the "cool new views", or at least some search terms? I don't think you realize it, but you drop tantalizing hints that make a person drool in anticipation; but that are so vague that google is of no help to explore them. It's like saying "I've got a secret!" Please: do tell. I googled "cool new views into the concept space of Wikipedia", and it came back with nothing relevant. The Transhumanist 23:23, 9 April 2017 (UTC)
 * Hmm. It's not exactly a secret, but I know that the same folks have some new spatial/relatedness visualizations.  I've just seen bits and pieces in meetings.  They'll likely do a public release soon.  I'm focused on getting their indexing system (WikiBrain) running on Wikimedia infrastructure so we can work with it directly.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 00:27, 11 April 2017 (UTC)

Hosting problems at WMF Labs

 * I just wish we could host their stuff on WMF Labs :(. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 18:55, 8 April 2017 (UTC)
 * Were requests made? Were they turned down? The Transhumanist 23:06, 9 April 2017 (UTC)
 * There's a complex set of priorities involved in managing something like Labs. There's just too much to do and not enough people.  I've been working with the labs folks and making sure they are aware of the needs of WikiBrain.  I've worked with them to look into alternatives and there's nothing great -- we really need the disk to be local.  Money and resources for Labs are primarily directed towards keeping the current things online and making them more robust.  As you can imagine, "Please make this special thing for me which will eventually make your job more difficult", doesn't come across as something that ought to be prioritized.  I don't blame them.  So I've been learning about some of the investments Labs folks have already decided against (E.g. wikitech:Labs labs labs/Bare Metal) and which ones they are making now (e.g. Kubernetes), looking for a way that implementing what we need won't require much work or complication.  So far, this isn't on the horizon yet.  BUT just talking to you about your interest in this gives me cite-able evidence that it's not just me -- others want WikiBrain too.  :)  So there's that. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 00:37, 11 April 2017 (UTC)

Wikipedia-based intelligent applications?
WikiBrain: Democratizing computation on Wikipedia states "WikiBrain empowers developers to create rich intelligent applications and enables researchers to focus their efforts on creating and evaluating robust, innovative, reproducible software." What examples of Wikipedia-based intelligent applications do you know of that WikiBrain was used to create? The Transhumanist 22:16, 7 April 2017 (UTC)


 * See above. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 18:55, 8 April 2017 (UTC)

What potential does WikiBrain have?
What potential does WikiBrain have? Above and beyond the current applications created with WikiBrain, what else could it be used to build? The Transhumanist 22:16, 7 April 2017 (UTC)


 * The options are vast. Right now, it seems the team is focusing on semantic relatedness measures.  Just those have great potential for enabling cool new technologies.  But also, WikiBrain can be used to generate all sorts of useful indexes of wiki relationships.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)
 * (Drooling again). Like what? Please elaborate on your tantalizing hints "cool new technologies" and "all sorts of useful indexes of wiki relationships". The Transhumanist 23:35, 9 April 2017 (UTC)
 * Here's your best reference: https://shilad.github.io/wikibrain/ --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 00:39, 11 April 2017 (UTC)

What are the dream applications developers are striving for?
What are the dream applications developers are striving for? The Transhumanist 22:16, 7 April 2017 (UTC)


 * Ping Shilad and Brenthecht :) --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 18:55, 8 April 2017 (UTC)

VM allocation limit
The reference you provided on WikiBrain's memory allocation problem mentioned that its memory-mapped files run in the range of 200GB, but that virtual machines on Wikimedia Labs are granted a max of 120GB. You stated above that "Regretfully, we're currently blocked on getting a decent allocation of disk space." No doubt most developers could spare 200GB at home. How much faster would a WM Labs VM be, and what other benefits would that have over doing it at home?

Considering the rate at which technology is advancing, 200GB wouldn't hold us for long. What would be a reasonable memory allocation limit that developers would not bang their heads on in a mere year or two? The Transhumanist 22:16, 7 April 2017 (UTC)


 * It really depends on the use-case. For memory-mapped files like those that WikiBrain uses, data must be accessed directly and quickly.  That means we want the disk directly available to the VM.  For less time-sensitive or database-oriented applications, labs has shared resources.  I honestly think that we could do a lot with 500GB VMs for a long time. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)


 * But WikiBrain can be installed on your desktop at home, and can work from Wikipedia's data on your own hard drive, right? How much slower would that be? The Transhumanist 07:51, 17 April 2017 (UTC)


 * It would certainly be a bit cumbersome, but you're totally right. You could work with WikiBrain locally, but I'd rather we had a centrally installed, high performance WikiBrain API that people could use without worrying about installing it locally and maintaining the index building processes. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:52, 17 April 2017 (UTC)

Taxonomy scoping
With respect to the "article-as-concept assumption" problem, the researchers you mentioned seem to be making a similar assumption in where they've drawn their cut-off line. For example, coverage of mathematics on Wikipedia exceeds 30,000 articles. That is, mathematics has over 30,000 subtopics represented by article titles, and perhaps 100,000 or more topics represented as subsection headings. I want software to gather up and include all of the subtopics in a multi-webpage Outline of Mathematics, which would grow to include (as a set) many math outlines (Outline of geometry, etc.) and lists (e.g., List of q-analogs) linked together as a navigation network on mathematics. How would you set the degree of scope with WikiBrain? The Transhumanist 22:16, 7 April 2017 (UTC)

(Note that outlines are but a component of Wikipedia's list-based navigation system. Outlines are a type of list. They are trees of lists, and outlines and unbranched lists can fit together to make a complete topic navigation system for Wikipedia, in which all topics included on Wikipedia would be listed, whether those topics each have their own article or are embedded as a section (heading) in another article.)


 * I'm not sure what you're asking but I think I understand what you are trying to do. Honestly, I'm not sure where else to direct you when it comes to ontology-learning.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)
 * How does the algorithm know when to stop building the tree? The researchers' application stopped long before it found 30,000 subtopics. How did it know when to stop looking?  If it was told to find all the subtopics, how would it know if 30,000 subtopics was all there is?  There's a list maker in WP:AWB, with a feature to build lists from Wikipedia's category system, which can apply recursion in gathering links.  If you set the levels too high, it somehow breaks out of the current subject and gathers all the links in the system.  It doesn't know the boundaries of the subject.  How would one set the boundaries of the subject in WikiBrain or any data extraction application? The Transhumanist 23:02, 9 April 2017 (UTC)
 * Step 1: Formalize the problem. E.g. I'm at node [i].  My parent is node [j].  I have children [c1, c2, c3, ...].  I have an internal status (like distance from root).  I must decide the optimal time to stop going down through categories.
 * Step 2: Apply an optimization strategy. E.g. choose a distance threshold, observe the results, adjust until best preferred behavior.
 * Step 3: Consider whether or not there might be better optimization strategies or better formalizations.
 * Step 4: ITERATE!
 * Without studying the problem, I bet you could get a nice stopping criteria based on network centrality. E.g. use Page rank to compute the authority of each category (flattened directionality), and then define a stopping criteria that only continues when authority goes down (or stays roughly the same).  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 00:48, 11 April 2017 (UTC)
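 * One way to sketch that authority-based stopping criterion (the data shapes and the 5% tolerance here are illustrative assumptions, not a worked-out method):

```javascript
// Walk a category tree, continuing only while a node's authority goes
// down or stays roughly the same; a jump upward suggests the walk has
// escaped into a broader, unrelated category.
function collectSubtopics(node, parentAuthority = Infinity, out = []) {
  if (node.authority > parentAuthority * 1.05) return out; // stopping criterion
  out.push(node.name);
  for (const child of node.children || []) {
    collectSubtopics(child, node.authority, out);
  }
  return out;
}
```

With a criterion like this, the walk needs no fixed level limit: it stops wherever the authority signal says it has left the subject.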
 * I see. The researchers define (a somewhat arbitrary) boundary for a subject, such as the number of levels to include in its topic tree, and refrain from adding deeper levels. From the viewpoint of standardizing subject scope for comparison purposes, cutting a subject off at great-grandchildren seems reasonable.  Those would be level-4 headings (H4).  Level-5 headings don't look any different in the Vector Skin, and therefore introduce ambiguity. To build the levels, first you apply algorithms to identify/verify domain members, and then again for parent/child relationships between those topics. The idea is to have subjects of equal scope between language versions of WP, so that any statistics you derive from them match context. Then you can say "English Wikipedia is missing the following geology subtopics, while the Spanish version lacks these other ones", while properly assuming that "geology" means the same thing in terms of scope for both encyclopedias. Meanwhile, you keep in mind that your conception of geology may not reflect the scope of the geology articles of either of those Wikipedias. It is important then to differentiate between "article" and "subject", with the clear understanding that subject scope is being defined by the researchers rather than by the editors of either encyclopedia. The Transhumanist 12:39, 14 April 2017 (UTC)

Where is the best place to look for the basics (terminology, structure, etc.) on machine learning?
You mentioned the possibility of pulling those "article-as-concept" researchers into the conversation. Before I can talk to them and have any hope of understanding what they say, I need to get up to speed with the jargon and basic structure of machine learning applications. Any guidance you can provide, by pointing to learning materials, would be a big help. (Is there anything analogous to JavaScript the Right Way for machine learning/natural language processing?) The Transhumanist 22:16, 7 April 2017 (UTC)

Biggest concern right now = stuck on viewport problem
To push forward development, right now, I need to talk to experts on JavaScript, for guidance on the viewport problem. If you know any, introducing me to them would be a great help. Who do you know? The Transhumanist 22:16, 7 April 2017 (UTC)

Have you come across or remembered any leads that can help me with the viewport resetting problem? I need to define an anchor in the text, and later reset the viewport so that the anchor is in the same location that it was previously within the viewport.

I look forward to your replies.

Sincerely, The Transhumanist 22:16, 7 April 2017 (UTC)


 * I recommend joining and .  There are lots of javascript experts there.  Try to have as clean of an example of the problem as you can ready for their review. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:55, 8 April 2017 (UTC)


 * Thank you. Your suggestions look like they will be very helpful.  I'll keep you posted on my progress. The Transhumanist 21:30, 8 April 2017 (UTC)

XML log processing
Thanks for working on this... yep, I'll definitely be performing testing (I'm searching through the deletion and move logs for a list of about 20,000 pages as part of my extended essay on Wikipedia page histories). Let me know when things are ready for testing. Codeofdusk (talk) 18:51, 2 May 2017 (UTC)

MWXML traceback
My script ran for a while, but then it tracebacked...Codeofdusk (talk) 00:41, 4 May 2017 (UTC)
 * Looks like we addressed this via IRC. I'll be looking into the wayward field later. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:53, 5 May 2017 (UTC)

Tracking down recent editors
I'd like to contact recent editors (say, in the past month) from the users listed at:

Category:User js-4

Many of the users listed here haven't logged in for years.

Any ideas? The Transhumanist 11:43, 5 May 2017 (UTC)


 * Hi The Transhumanist. Quarry is awesome for stuff like this.  Here you go :)   https://quarry.wmflabs.org/query/18396 --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 16:05, 5 May 2017 (UTC)
 * Perfect. I ran it again for the other cats, to populate a list of programmers at WikiProject_JavaScript.  Thank you for the script! The Transhumanist 23:17, 6 May 2017 (UTC)

Recent Research submission
Great overview of the bot wars paper and media coverage. I put a few suggestions here; please take a look if you have time. I'd like to publish this evening if possible; my apologies for the late commentary and the late publication. -Pete Forsyth (talk) 17:36, 15 May 2017 (UTC)


 * Forgot to say, but this was ✅ yesterday. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 16:22, 16 May 2017 (UTC)

A cup of coffee for you!

 * Hey Bluerasberry, sorry to not get back to you sooner. This isn't easy for me, but I'd like to look into it.  No promises, but I'll ping here if I do get a chance.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 16:26, 19 May 2017 (UTC)
 * Copy, thanks. If you can provide the data, then I can commit to working on it with Consumer Reports statisticians, organizing a conversation around it with the WikiProject Medicine community, and submitting an article of the findings to The Signpost. That would be cool, but also, it does not have an obvious high or lasting impact, so use your time wisely.
 * If you can think of any compromise then let me know. One compromise that might be easier is that if we could meet by phone and you explain the problem to me over 15 minutes, then I could take notes of what you said and publish in The Signpost or elsewhere about the state of wiki research and what hopes you have for the future.
 * Of course also you know your own time and availability and sometimes saying no or just letting time pass is the best option. I do not wish to put you out of your way, and even if we do not do this now, I expect that eventually getting this data will be easier so the outcomes will come soon enough anyway. Thanks.  Blue Rasberry   (talk)  16:53, 19 May 2017 (UTC)


 * Sorry for talk page stalking this thread! (Not terribly sorry, though) Here is the Wikidata query to get English labels and links to enwiki (since sometimes the names don't match) for all instances of . There are a couple of thousand of these. You want the number of views per day for all of 2016 for these,  Blue Rasberry  ? Cheers, Nettrom (talk) 17:50, 19 May 2017 (UTC)


 * Yes yes this seems close to solving the problem. That's really interesting - I still have not learned the language of that Wikidata query service but now that you have written this out it is obvious to me how I could change the terms in it to do my own query about any number of things.
 * I just downloaded these results. That gave me 2174 items called "druglabels". This source claims that in the entirety of its existence, the Food and Drug Administration has approved about 1,453 drugs, so the number here seems plausible. While Massviews cannot give me a year's worth of data for all 2000 items (I tried, it keeps crashing) I can break this into sets of 400 which massviews will accept and give me csv data. I think my issue is resolved - this is what I needed to (1) get a list of drugs for which there are English Wikipedia articles and (2) get traffic data for every day in 2016 for each of those 2000 English Wikipedia articles. There are so many ways that I could vary this query. Thanks, both of you, thanks Nettrom. I am very pleased with this. I am discussing this with others right now.  Blue Rasberry   (talk)  18:46, 19 May 2017 (UTC)

Nomination for deletion of Template:SRRFA
Template:SRRFA has been nominated for deletion. You are invited to comment on the discussion at the template's entry on the Templates for discussion page. Jc86035 (talk) <span style="display:inline-block;margin-bottom:-0.3em;vertical-align:-0.4em;line-height:1.2em;font-size:80%;text-align:left">Use &#123;&#123;re&#124;Jc86035&#125;&#125; to reply to me 16:16, 28 May 2017 (UTC)

Snuggle
Hi EpochFail, I recently discovered your Snuggle-tool and since I am a bit active on the dutch wp and wikidata, I wondered if it would be useful to start a Snuggle at Wikidata, and maybe also on the dutch wp. Do you think it would be useful to deploy it on wikidata? QZanden (talk) 23:04, 4 May 2017 (UTC)
 * Hi QZanden! I'm stoked that you discovered Snuggle and are interested in seeing it expanded.  To tell you the truth, getting Snuggle working in other wikis was the original reason I started working on m:ORES.  Regretfully, I'm 100% overworked with just keeping up with ORES.  If you'd be interested in doing some engineering work, I'd be happy to advise you toward setting up Snuggle for Wikidata and/or nlwiki.  Do you have any programming/data mining experience?  I'd be happy to help you gain the experience if you're willing to put in some work. :)  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:56, 5 May 2017 (UTC)
 * Hi EpochFail, I am not familiar with any type of programming, but I would like to learn it! Too bad, I am now in a very busy period, my final exams at high school are soon beginning. But after that I have maybe some time left. QZanden (talk) 20:08, 6 May 2017 (UTC)
 * QZanden, Understood! If only we could invent a cloning machine and do all the things we wanted to!  Well, we'd probably have more problems then.  Anyway, let me know when you do find time and please feel free to pitch the project to others.  I'd be happy to work with whoever wanted to pick it up.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 19:16, 8 May 2017 (UTC)
 * Hi EpochFail, I have nearly finished my exams, Monday is my last exam. After that I have some time to look at this new project. I hope you also have some time! And maybe a faster way of communication, email or IRC? I'll be back on Tuesday! QZanden (talk) 20:16, 18 May 2017 (UTC)
 * Hi EpochFail, I'm back so we can start with the snuggle-wikidata project. QZanden (talk) 23:47, 24 May 2017 (UTC)
 * Hi QZanden! I'd love to get started now, but I'm in the middle of a workshop hackathon (WikiCite'17) so I'm mostly AFK until May 31st. However, in the meantime you can get set up on tool labs.  Can you register an account at https://wikitech.wikimedia.org?  If you run into any trouble, anyone in  can help (when I'm online, I'm "halfak" in that channel).  Once you have that account set up (might take a day or two to do approvals -- not sure if that's still the case though), we'll want to gather a set of templated messages that Snuggle users will want to send to good-faith newcomers.  We'll also need a list of warning message types that we'll expect to see on newcomers' talk pages so we can start telling Snuggle how to tell what kind of message they are.  Hopefully, that will help you make some progress before I get back from my travels :)  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail</span>  (talk &bull; contribs) 12:31, 25 May 2017 (UTC)
 * Hi EpochFail, yes, I got an account at wikitech, but couldn't find out where to look for the code of Snuggle. Is it just at Snuggle?

PS, I also do have an account at github, if that is necessary. QZanden (talk) 00:19, 26 May 2017 (UTC)
 * QZanden, the code lives at https://bitbucket.org/grouplens/snuggle, but I think we should move it from mercurial/bitbucket to git/github because that's what the cool kids are working in these days. I'll see if I can get the history ported between formats today. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 10:39, 27 May 2017 (UTC)
 * Aha! I'd already done that.  So maybe we can find all of the references to the old repo and squash those. https://github.com/halfak/snuggle --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 10:41, 27 May 2017 (UTC)
 * Ha EpochFail, I already forked it to my own repository at https://github.com/QZanden/Snuggle/blob/master/wikidata.mediawiki and opened a pull-request with everything changed from wikipedia to wikidata. QZanden (talk) 15:59, 27 May 2017 (UTC)

QZanden, great progress so far. I want to work through showing you how to set up a Snuggle server and see if we can get one running for Wikidata. Do you do IRC? I'm usually hanging out in. If you ping me there we can talk about how to get set up. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 17:54, 6 June 2017 (UTC)
 * Ha EpochFail, yes I know how to log on to an IRC channel. What times are you mostly active? QZanden (talk) 22:23, 7 June 2017 (UTC)

Offline resources hackathon in August
Hello - I thought you might be interested in a hackathon we're doing right after Wikimania (Montreal) in my home town of Potsdam, New York, about 2.5 hours from Montreal. The goal is to improve our ability to put together collections of Wikipedia articles (and other open content) for offline use. There will be several of us who have worked with quality and importance criteria at the meeting. It runs for four full days (August 13-17) and the details are at http://OFF.NETWORK. If you want to attend, just let me know. Cheers, Walkerma (talk) 02:33, 25 June 2017 (UTC)

Reddit
Hello Aaron,

I just wanted to mention that I read the Reddit conversation that you had at the beginning of June, and came away from it with a better understanding of the valuable work you do to help fight vandalism. I do not have the technical skills to understand it all in depth, but I got the overall concepts and just want to thank you and your colleagues for what you do. <b style="color:#070">Cullen</b><sup style="color:#707">328</sup> Let's discuss it 06:55, 30 June 2017 (UTC)


 * Hi Cullen328. Thank you very much for this message.  I'm really glad that discussion format made the work more approachable.  Ping Ladsgroup, Adamw, He7d3r, WhiteCat, Danilo.mac, Yuvipanda, Aetilley, Tgr, DarTar, and Glorian_WD.  Y'all deserve some of this praise. :)  If you haven't seen it, here's the reddit conversation in question: https://www.reddit.com/r/IAmA/comments/6epiid/im_the_principal_research_scientist_at_the/ --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 14:57, 30 June 2017 (UTC)
 * Ack! Forgot Nettrom. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:03, 30 June 2017 (UTC)

Nomination for deletion of Template:Recruiting participants
Template:Recruiting participants has been nominated for deletion. You are invited to comment on the discussion at the template's entry on the Templates for discussion page. Ten Pound Hammer • (What did I screw up now?) 18:29, 1 October 2017 (UTC)

Meta learning (computer science)
This article states: "the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood."

Is that true? Is the whole field of machine learning just shooting in the dark?

So, you just try a bunch of different algorithms to see which one produces the best results on a data set? The Transhumanist 02:03, 29 October 2017 (UTC)
 * Yeah, actually I'd say that's a good description of practical machine learning. Though I wouldn't say it is "dark".  We know why learning happens.  We know how to test that learning has happened.  But any trained model is pretty difficult to "interpret".  Assessing the effectiveness/fitness of a machine learned model is arguably far more important than assessing the learning strategy.  Most different learning strategies will result in similar effectiveness/fitness with some sensitivities to hyper-parameters (they configure *how* learning happens).  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 14:45, 30 October 2017 (UTC)
 * So, how do you tell it what you want it to learn how to do? Like instead of doing push ups, to kick a ball through a goal? The Transhumanist 05:37, 1 November 2017 (UTC)
 * The Transhumanist, you give the learning algorithm examples of good judgement and it tries to replicate that. E.g. give it a set of edits labeled as "vandalism" and "not vandalism".   --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 14:36, 1 November 2017 (UTC)
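To make the "labeled examples of good judgement" point concrete, here is a deliberately tiny, self-contained Python sketch. The edits, labels, and the single feature are invented for illustration; real systems like ORES extract many features and use proper learning algorithms, but the shape of the problem is the same: the algorithm is told nothing except which examples are vandalism, and it fits a rule that replicates those judgements.

```python
# Hand-labeled training data: one invented numeric feature per edit
# (fraction of the edit text that is upper-case) plus a human judgement.
labeled_edits = [
    (0.90, True),   # e.g. an all-caps rant -> vandalism
    (0.75, True),
    (0.10, False),  # ordinary copyedit -> not vandalism
    (0.05, False),
    (0.20, False),
]

def learn_threshold(examples):
    """Learn the simplest possible model: the feature threshold that
    misclassifies the fewest labeled examples (a decision stump)."""
    candidates = sorted(x for x, _ in examples)
    best, best_errors = candidates[0], len(examples) + 1
    for t in candidates:
        errors = sum((x >= t) != label for x, label in examples)
        if errors < best_errors:
            best, best_errors = t, errors
    return best

threshold = learn_threshold(labeled_edits)

def predict(uppercase_fraction):
    """Classify a new edit using the learned threshold."""
    return uppercase_fraction >= threshold
```

The point of the toy: nobody told the program "caps lock is rude"; it recovered that rule purely from the labeled examples, which is the sense in which the model replicates human judgement.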

Update
Long time, no see.

I thought you might be interested in what I've been up to since the last time we communicated...

These are the user scripts I've created so far for working with outlines, and they actually work (more or less):


 * User:The Transhumanist/OutlineViewAnnotationToggler.js – this one provides a menu item to turn annotations on/off, so you can view lists bare when you want to (without annotations). When done, it will work on (the embedded lists of) all pages, not just outlines. Currently it is limited to outlines only, for development and testing purposes. It supports hotkey activation/deactivation of annotations, but that feature currently lacks an accurate viewport location reset for retaining the location on screen that the user was looking at. The program also needs an indicator that tells the user it is still on. Otherwise, you might wonder why a bare list has annotations in edit mode, when you go in to add some. :) Though it is functional as is. Check it out. After installing it, look at Outline of cell biology, and press . And again.
 * User:The Transhumanist/RedlinksRemover.js – strips out entries in outlines that are nothing but a redlink. It removes them right out of the tree structure. But only end nodes (i.e., not parent nodes, which we need to keep). It delinks redlinks that have non-redlink offspring, or that have or are embedded in an annotation. It does not yet recognize entries that lack a bullet (it treats those as embedded).
 * User:The Transhumanist/StripSearchInWikicode.js – this one strips WP search results down to a bare list of links, and inserts wikilink formatting for ease of insertion of those links into lists. This is useful for gathering links for outlines. I'd like this script to sort its results. So, if you know how, or know someone who knows how, please let me know. A more immediate problem is that the output is interlaced with CR/LFs. I can't figure out how to get rid of them. Stripping them out in WikEd via regex is a tedious extra step. It would be nice to track them down and remove them with the script.

It is my objective to build a set of scripts that fully automate the process of creating outlines. This end goal is a long way off (AI-complete?). In the meantime, I hope to increase editor productivity as much as I can. Fifty percent automation would double an editor's productivity. I think I could reach 80% automation (a five-fold increase in productivity) within a couple years. Comments and suggestions are welcome. The Transhumanist 09:59, 26 October 2017 (UTC)
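The productivity figures above follow from a simple identity: if a fraction `a` of the work is automated, the remaining manual work per outline is (1 - a), so an editor completes 1 / (1 - a) outlines in the time one used to take. A one-line check of the numbers in the message:

```python
def productivity_multiplier(automated_fraction):
    """Throughput multiplier when a fraction of the work is automated:
    manual work shrinks to (1 - a), so output scales by 1 / (1 - a)."""
    return 1 / (1 - automated_fraction)

# 50% automation doubles output; 80% automation gives a five-fold increase,
# matching the figures quoted above.
```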


 * Hi Transhumanist! Thanks for sharing your work.  It looks like you have some nicely developed scripts.  I think the hardest part about getting an AI together to support this work is getting good datasets for study and eventually training AIs to do the work of outline generation.  It seems like you are doing a lot of that work now.  Can you get me a set of high quality outlines that you think could represent a gold standard?  It would be interesting to publish those outlines as a dataset and invite some researchers who are looking at this problem to study them. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 14:28, 27 October 2017 (UTC)


 * Isn't how an outline is constructed as important as providing the end result? Without a procedure, how else would the AI produce the target?


 * Concerning the outlines themselves, there are different kinds. Country outlines, province outlines, state outlines, city outlines, outlines about fields (geology, philosophy, mathematics, etc.), and outlines about material things (sharks, wine, water, etc.).


 * There are very few fully developed outlines, if any. Certainly no gold standards.  I would hate to propagate errors.


 * How many of a particular type would it take to provide a sufficient data set?


 * Could such a data set be used to teach an AI to complete already existing outlines? Or better yet, produce an entirely new improved set?  Most outlines are partially completed, and all outlines are ongoing works in progress, as knowledge continually expands, and outlines must expand with it.


 * Is training based on the way humans build outlines? How could that be done without recording what it is that humans do when they are building the outlines? The Transhumanist 04:00, 28 October 2017 (UTC)


 * Transhumanist. Hey just responding quickly, so I'll give you brief thoughts.  We'd probably want ~100 good examples, but maybe we could work from 10.  I think it would be totally possible that non-perfect examples could then be extended through an indexing strategy.  Regardless, imperfections are OK because most learning strategies are robust to that and we'll be able to re-train models as the quality of the training set improves.  We've done a bit of filling in missing information with WikiProjects -- by seeing what they currently tag, we've been able to figure out some articles they probably want to tag.  Training the AI is less about showing it what humans do and more about showing it the results of human judgement and allowing it to develop an algorithm that replicates the outcome.  See, an AI can scan entire corpora, perform text similarity measures, etc. in ways that humans can't.  So we'd probably want to design the sources of decision-making signal to take advantage of machine potential.  Then we'd use the good examples to train it to make good use of the signal and to test how well it does.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail</span>  (talk &bull; contribs) 17:22, 28 October 2017 (UTC)
 * I have some questions for you:
 * What is an indexing strategy? I'd like to read up on it. Where would be the best places to start for getting up to speed on that and learning strategies?
 * For the WP Projects, does the AI suggest any articles that do not belong? What is the current ratio between its hits and misses?
 * How do you set the context for the AI to produce the desired output? The WP corpus represents a lot of human judgment. How do you tell the AI which human decisions produce outlines? Or do you have to?
 * On one end, you have the WP corpus. On the other you have the training set. The algorithms connect the two. Is it as simple as pointing the algorithms at those 2 things?  Or is there more to it?  How do you go from those to producing new outlines? How do you go from those to improving the outlines you have?
 * Note that outlines are not produced merely from the WP corpus. The further reading sections, references, and external links sections pertain to the Web-at-large and academia-at-large. How would you and an AI tackle those?
 * Where in Outline of machine learning does Training, test, and validation sets go? The Transhumanist 20:11, 28 October 2017 (UTC)
 * Was this too many questions for you? :) The Transhumanist 13:05, 3 November 2017 (UTC)
 * Hey! It is a lot of questions. :)  But really, I'm late to respond because I'm traveling and overloaded.  I'll get back to you after I get back to normal on Nov. 20th.  Sorry for the delay, Transhumanist. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:44, 9 November 2017 (UTC)
 * I look forward to it. By the way, I have a further update: StripSearch.js has been upgraded to operate via menu item to turn the stripping of details from searches on/off. It remembers its status, so that it continues to perform the same function across all searches. I'm now working on a version called SearchSuite.js that will provide this and a number of other functions via menu item, including alphabetical sorting of search results. The Transhumanist 19:59, 9 November 2017 (UTC)
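Earlier in this thread (28 October), the reply mentioned machines performing "text similarity measures" at a scale humans can't. For the curious, here is a minimal, self-contained Python illustration of one such measure, bag-of-words cosine similarity; the text snippets are invented, and real pipelines use much richer representations.

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two texts as bag-of-words term vectors:
    1.0 for identical word distributions, 0.0 for no shared words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Measures like this are cheap enough to run across an entire corpus, which is the kind of signal source the reply suggests designing around.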

Barnstar
You are most kind. I can only quote Samuel Barber: "I just go on doing, as they say, my thing. I believe this takes a certain courage." Thanks very much for the lovely words and sentiment - and happy editing! -- Ser Amantio di Nicolao Che dicono a Signa?Lo dicono a Signa. 06:12, 14 December 2017 (UTC)

"tis the season...."
<div style="border-style:solid; border-color:#FF4646; background-color:#F6F0F7; border-width:2px; text-align:left; padding:0.5em 0.5em 0 0.5em; border-radius: 1em; box-shadow: 0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks">Hello EpochFail: Enjoy the holiday season, and thanks for your work to maintain, improve and expand Wikipedia. Cheers, &#8213; Buster7  &#9742;   22:16, 20 December 2017 (UTC)


 * Spread the WikiLove; use {{subst:Season's Greetings1}} to send this message</div>

Seasons' Greetings
...to you and yours, from the Great White North! FWiW Bzuk (talk) 22:37, 23 December 2017 (UTC)

How would you go about reviving this page?
List of Wikipedians by number of recent edits

I look forward to your reply.

Sincerely,   The Transhumanist 08:24, 12 February 2018 (UTC)

P.S.: please ping me if/when you reply. Thank you.


 * Hi The Transhumanist. It's not something I could invest much time into.  It seems like you might want to find someone to build a bot to run a query and periodically edit the page.  I can help with drafting the query if you like. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 21:55, 13 February 2018 (UTC)
 * That would help a lot. Thank you.   The Transhumanist 04:43, 14 February 2018 (UTC)


 * What would the query look like?   The Transhumanist 17:39, 24 February 2018 (UTC)

Are you a genie? I have a wish...
What would it take to automatically create city outlines of the quality of Outline of Dresden?

I'd like to create 1000 of these at this or better quality level, by the end of the year. (I think I could produce them manually working full time for that long, but screw that! I would like my computer to do it for me.) I look forward to your comments and advice. The Transhumanist 18:55, 24 February 2018 (UTC)

AutoAssessBot proposal
Hi. Looking for your advice. I have made a sort-of specification for this at Bot requests/AutoAssessBot, and tried to cover off all the comments on it. I am awaiting feedback from User talk:MSGJ on the WPBannerMeta parameters, which may need tweaking, but otherwise I think I have taken it as far as I can. It does not seem all that difficult to me. Is there more I should add? How do I make this a formal proposal? Thanks, Aymatth2 (talk) 11:03, 18 April 2018 (UTC)
 * Hi Aymatth2, I'm no expert on making BAG proposals, but your work looks great to me. I'll pitch the work at the upcoming Wikimedia Hackathon, so you might have some collaborators show up. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 23:40, 18 April 2018 (UTC)
 * Thank you very much. I hope it flies, and will do all I can to help. Aymatth2 (talk) 00:34, 19 April 2018 (UTC)
 * Good idea. --Gian (talk) 06:53, 18 May 2018 (UTC)

Help summarizing "appropriating ORES"
Hi Aaron, I'm working on the research report for the upcoming issue of Signpost. Can you explain "three key tools that Wikipedians have developed that make use of ORES"? What are the three tools? I flipped through the slide deck and I think one is the Spanish Wikipedia bot, but I can't tell if it's been turned on with ORES support or not. And I'm really not sure what the other two tools are called. Thx ☆ Bri (talk) 21:04, 28 June 2018 (UTC)
 * Hi Bri! Could you be asking about the presentation I gave at the May Wikimedia Research Showcase? mw:Wikimedia Research/Showcase  In that case, I focus on Wikidata's damage detection models (the tool is wrapped up in filters in Recent Changes), Spanish Wikipedia's PatruBOT issues (using the Spanish Damage detection models we have), and finally User:Ragesoss's reinterpretation of our article quality models for the WikiEdu tools.  The video that is linked there and the slide deck go into more details, but I'd be happy to discuss any specific questions you have.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 21:37, 28 June 2018 (UTC)
 * Yes, that's it. I think I can just use the summaries as you wrote them. Can you tell me where to find the WikiEdu tool(s)? I'm not familiar. ☆ Bri (talk) 21:55, 28 June 2018 (UTC)
 * Bri: Programs & Events Dashboard and Wiki Education Dashboard are the tools that use ORES data in assorted ways. For example, on this course page, click 'article development' to see a visualization of ORES scores over time for a single article, and click 'Change in Structural Completeness' at the top to see a visualization of how the distribution of ORES scores shifted from the start of the course to the end.--ragesoss (talk) 17:48, 3 July 2018 (UTC)

Extended essay
Hello Aaron Halfaker... or is that Fifth-hectare? Back in May 2017, I requested modifications to your Mwxml library so I could programmatically read Wikipedia log dumps. I used your library during the research for my extended essay, for which I received the marks today! A few people on Wikipedia had expressed interest in its findings, so it's here. You were mentioned, thanks again for your help! Codeofdusk (talk) 04:55, 7 July 2018 (UTC)
 * I'm really glad it was helpful! Thank you for sharing your work :) --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 19:08, 9 July 2018 (UTC)

Querying for pages
Hi Aaron, I'd like to query the database using part of the page name and date of creation. Here I look for pages that have 'Obama' in their names.

SELECT * FROM page WHERE page_title LIKE '%Obama%';

I can't find the column responsible for this in the 'page' table. How can I specify the date of creation? Thank you. Sillva1 (talk) 22:31, 2 August 2018 (UTC)
 * I was able to write the query.

SELECT * FROM page, revision
WHERE page.page_id = revision.rev_page
  AND revision.rev_parent_id = 0
  AND page.page_namespace = 0
  AND page.page_is_new = 0
  AND page.page_title LIKE '%Bisciglia'
  AND revision.rev_timestamp = '20080331235128';

I still have a problem specifying, say, only the year and month. I'd like to search using a portion of the timestamp. How can I achieve that? Sillva1 (talk) 14:10, 3 August 2018 (UTC)


 * Hi Sillva1, try something like this: https://quarry.wmflabs.org/query/28742 --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 14:14, 3 August 2018 (UTC)
 * Oh, it was like 'xxxx%' that was missing in the timestamp. Thank you. Sillva1 (talk) 14:17, 3 August 2018 (UTC)

Peace Dove Christmas
Happy Holidays. &#8213; Buster7  &#9742;   20:26, 18 December 2018 (UTC)

ORES and readability formulas
Hi, I left a message with but figured I would say hello here, too. On ORES, it mentions "articlequality model doesn't evaluate the quality of the writing." I wanted to suggest using a readability formula. You can use one just on the lead, or sample throughout the article for speed (for example, just choosing the first sentence in each section).

Seahawk01 (talk) 01:46, 1 December 2018 (UTC)


 * Hi Seahawk01! I'm sorry I missed your message.  I'm just noticing it now.  :|  We've done some experiments with readability measures, but they didn't give us much signal.  It turns out that FAs get some of the worst readability scores on the wiki!  Regretfully, "readability" measures like Flesch-Kincaid are really only measuring sentence complexity.  Good articles tend to have relatively high sentence complexity whereas stubs have very simple sentences.  When we say "readability", I think we are talking about how hard it is to read a text.  I'm not sure that there are any good metrics for measuring that directly.  Thanks for the suggestion!  Are you using ORES for your work?  I'd be very interested to know how.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail</span>  (talk &bull; contribs) 20:37, 18 December 2018 (UTC)
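For readers curious what the measure in question actually computes: the Flesch-Kincaid grade level is roughly 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59, so it rewards short sentences and short words, which is why dense featured-article prose scores "hard". A rough, self-contained Python sketch with a naive vowel-group syllable counter (real implementations use pronunciation dictionaries; this is only an approximation):

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count runs of vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level of a text (approximate)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

As the reply notes, a sentence of long technical words scores far "harder" than a short plain one, regardless of how clearly either is actually written.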

Please participate in the talk pages consultation
Hello

Our team at the Wikimedia Foundation is working on a project to improve the ease-of-use and productivity of wiki talk pages. As a Teahouse host, I can imagine you’ve run into challenges explaining talk pages to first-time participants.

We want all contributors to be able to talk to each other on the wikis – to ask questions, to resolve differences, to organize projects and to make decisions. Communication is essential for the depth and quality of our content, and the health of our communities. We're currently leading a global consultation on how to improve talk pages, and we're looking for people that can report on their experiences using (or helping other people to use) wiki talk pages. We'd like to invite you to participate in the consultation, and invite new users to join too.

We thank you in advance for your participation and your help.

Trizek (WMF), 08:37, 11 March 2019 (UTC)

Please participate in the talk pages consultation - link update
The previous message about the talk pages consultation has a broken link.

The link was misinterpreted by the MassMessage tool. Please use the following link: Wikipedia:Talk pages consultation 2019.

Sorry for the inconvenience, Trizek (WMF), 08:48, 11 March 2019 (UTC)

Have to say ...
FANTASTIC user name. Wish I would have thought of that way back when. That's all, carry on. — Ched (talk) 10:06, 1 October 2019 (UTC)
 * Ched, Cheers! I do a lot of date math in my programming, so https://xkcd.com/376/ really resonated with me.  :)  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 16:36, 1 October 2019 (UTC)
 * lol - love the xkcd stuff, but I hadn't seen that one. Cheers. — Ched (talk) 18:01, 1 October 2019 (UTC)

researcher flag
Hello EpochFail, in 2011 a special flag (researcher) was added to your account for a project. This flag is for accessing certain histories of deleted pages. Are you still working on this project and require this continuing access? Please reply here or on my talk page. Thank you, — xaosflux  Talk 01:02, 5 December 2019 (UTC)
 * Hi Xaosflux! I am still using the flag.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:27, 5 December 2019 (UTC)
 * Thank you - no change for you! — xaosflux  Talk 15:47, 5 December 2019 (UTC)

Good luck
<div style="border-style:solid; border-color:blue; background-color:#fff; border-width:2px; text-align:left; padding:8px; " class="plainlinks"> 豊かな十年へようこそ/WELCOME TO THE D20s Miraclepine wishes you a Merry Christmas, a Happy New Year, and a prosperous decade of change and fortune. このミラPはEpochFailたちのメリークリスマスも新年も変革と幸運の豊かな十年をおめでとうございます！ フレフレ、みんなの未来！/GOOD LUCK WITH YOUR FUTURE! ミラP 04:46, 25 December 2019 (UTC)

You've got mail
Hi Epoch, I just joined your Edit Labeling project and have sent you a confidential email.

I'll be grateful to hear back from you.

Thanks!

Scripts++ Newsletter – Issue 15
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius: 1em; box-shadow: 0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past month (April 2020).

Hello everyone and welcome to the 15th issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here
 * Cscr-featured.svg SD0001's find-archived-section, now a gadget, is this month's featured script


 * Gnome-colors-list-add.svg Enterprisey:
 * link-deleted-revs (On error pages for deleted revisions, show a link to the Special:Undelete page)
 * undelete-backlink (On Special:Undelete pages for diffs and revisions, link back to the main Undelete page with the list of revisions)
 * Epicgenius's sortcategories (sorts categories in an article alphabetically)
 * Awesome Aasim:
 * addmylinks (adds a customizable area for links and/or text to the sidebar)
 * editrequest (tool that allows for processing and answering edit requests)
 * DemonDays64's Dumb quotes (replaces smart curly single and double quotes with "dumb" quotes)
 * Guywan's FileMoverHelper (move files and replace their uses in one action)


 * Gnome-colors-view-refresh.svg Enterprisey's reply-link has been updated to fail less, especially around template transclusions.
 * Twinkle released new features, including a new option to disable individual modules, support for stub template nomination at CfD, and integration with the PageTriage extension used to patrol new pages. (See full list of changes)

Open tasks
 * The  resource loader module was deprecated and is no longer needed; its functionality is now available by default. See mw:ResourceLoader/Migration guide (users) for more. Any dependency on it should be removed.

Pending requests
 * A script to reload feeds automatically
 * A script to mark templates in the TFD holding cell as ready for deletion
 * ...and many more, all available at User scripts/Requests



Miscellaneous
 * User scripts/Most imported scripts now also shows the number of active users for each script. It will now be updated by a bot periodically.
 * Twinkle's Morebits library added a new  class to replace the moment library. It can handle custom formatting and natural language for dates, as well as section header regexes. If you were using   or   with   objects, those have been deprecated and should be updated.
 * User:SD0001/find-archived-section was made a gadget. You can enable it from your gadget preferences, in the Browsing section.

Hope everyone stays safe. --DannyS712 (talk) 20:27, 1 May 2020 (UTC)

About the newsletter

Subscription options

Discuss this issue

WikiLoop Battlefield new name vote
Dear EpochFail,

Thank you for your interest and contributions to WikiLoop Battlefield. We are holding a vote on the proposed new name and would like to invite you to participate. The vote is held at m:WikiProject_WikiLoop/New_name_vote and ends on July 13th 00:00 UTC.

xinbenlv Talk, Remember to "ping" me 05:05, 30 June 2020 (UTC)

Scripts++ Newsletter – Issue 16
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius: 1em; box-shadow: 0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past two months (May and June 2020).

Hello everyone and welcome to the 16th issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here
 * Cscr-featured.svg Ed6767's redwarn is this month's featured script


 * Gnome-colors-list-add.svg Enterprisey:
 * abusefilter-diff-check
 * abusefilter-hide-search
 * live-reload
 * quick-vand-block
 * abusefilter-mass-test
 * simple-notifs
 * User:Enterprisey/rc-thanks
 * DannyS712's RemindMe
 * Majavah's Aligner
 * BrandonXLF:
 * ShowUserGender
 * ReferenceExpander
 * Awesome Aasim:
 * oneclickdelete
 * noeditredlinks.js
 * editrequest
 * TheTVExpert:
 * searchForSection
 * scriptDocumentationTabs
 * Alex 21:
 * script-categoriessort.js
 * script-imageres.js
 * Ed6767's redwarn


 * Gnome-colors-view-refresh.svg Twinkle released new features, including rollback links on history pages and Special:Recentchanges, the ability to rollback-in-place on Recent changes and Contributions, a new option to autoselect the warning level, and an XfD log. (Enable in your preferences; see full list of changes)
 * Headbomb's unreliable source detector script now allows users to add their own custom rules. See the documentation at User:Headbomb/unreliable.

Open tasks
 * Recent changes to the vector skin may have broken some scripts and gadgets. Uses of the  selector should be replaced with . Additionally,   should be replaced with , and  should be replaced with . See T255718 and T254797 for more.

Pending requests
 * A script to simplify the process of nominating articles for good article reassessment (request)
 * A script to help create navboxes (request)
 * ...and many more, all available at User scripts/Requests



If anyone else would like to contribute to future issues, please comment at Wikipedia talk:Scripts++. --DannyS712 (talk) 20:13, 30 June 2020 (UTC)

About the newsletter

Subscription options

Discuss this issue

Wrong predictions for Central Africa
Hello EpochFail,

Thanks a lot (again) for creating ORES. Seeing from the source code that ORES treats Central Africa differently, I thought you might want to know that articles ORES is turning up for Central Africa are ones that have nothing whatsoever to do with Central Africa (or Africa, for that matter). See User:SDZeroBot/NPP sorting/Geography/Regions/Africa/Central Africa and AfC sorting/Geography/Regions/Africa/Central Africa and their page histories (both of these use the drafttopic model). Might I suggest simply removing this topic from ORES? SD0001 (talk) 15:03, 30 May 2020 (UTC)
 * Hey SD0001! Sorry I didn't get back to you earlier.  I think I can put a work-around together to get this category to work better.  I'll give that a try and see if I can wrap it up in our next deployment.  Thanks for calling attention to the issue.  And thanks for your continued work on AfC sorting :)  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 21:39, 3 June 2020 (UTC)
 * SD0001, we've made some changes to the model that should dramatically improve the quality of Central Africa predictions. I'd be interested to learn if that matches people's experience.  --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 15:04, 30 June 2020 (UTC)
 * Thanks, looks great. Though I haven't received any feedback regarding this, I see from the list that most of the articles are now Central Africa related. SD0001 (talk) 13:03, 10 July 2020 (UTC)
 * Declaring this ✅! --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 18:25, 13 July 2020 (UTC)

Announcing WikiLoop DoubleCheck
Dear Wikipedians and contributors, the open source Wikipedia review tool, previously "WikiLoop Battlefield" has completed its name vote and is announcing its new name: WikiLoop DoubleCheck. Read the full story on the program page on Meta-wiki, learn about ways to support this tool, and find out what future developments are coming for this tool.

Thank you to everyone who took part in the vote!

xinbenlv Talk, Remember to "ping" me 18:28, 23 July 2020 (UTC)

Join the RfC to define trust levels for WikiLoop DoubleCheck
Hi , you are receiving this message because you are an active user of WikiLoop DoubleCheck. We are currently holding a Request for Comments to define trust levels for users of this tool. If you can spare a few minutes, please consider leaving your feedback on the RfC page. Thank you in advance for sharing your thoughts. Your opinion matters greatly! María Cruz

MediaWiki message delivery (talk) 02:59, 22 August 2020 (UTC) If you would like to modify your subscription to these messages you can do so here.

Scripts++ Newsletter – Issue 17
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius: 1em; box-shadow: 0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past two months (July and August 2020).

Hello everyone and welcome to the 17th issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here
 * Cscr-featured.svg Yet another Articles for creation helper script ("AFCH"), a gadget used by Articles for creation reviewers, is this month's featured script


 * Gnome-colors-list-add.svg Opencooper's collapseBots declutters talk pages by collapsing comments made by bots
 * SD0001's GAR-helper is a script to easily create a GAR nomination and carry out the necessary related steps
 * SD0001's shortdescs-in-category can be used to view short descriptions of pages in a category
 * WikiMacaroons's Talk to Wiki allows for voice-activated functionality and navigation of Wikipedia
 * WikiMacaroons's AutoShortDesc simplifies the addition of short descriptions to pages
 * IagoQnsi's ipareader adds a button next to IPA pronunciations to have them read out by a text-to-speech program
 * Awesome Aasim's xfdvote makes it easier to participate in XFD discussions
 * BrandonXLF's CitationStyleMarker adds "CS1" or "CS2" beside citations to show their style with an optional toggle
 * GeneralNotability's spihelper is a script to help CheckUsers, SPI clerks, and patrolling admins with sockpuppet investigation case pages
 * Evad37's TimestampDiffs adds a link at the end of every comment to the diff in which it was added


 * Gnome-colors-view-refresh.svg Twinkle has a number of improvements including additional redirect category templates
 * Updates to Enterprisey's reply-link include compatibility fixes with Mdaniels5757's markAdmins script and Kephir's unclutter script
 * Evad37's rater is now at version 2.6.0, which updates including OOUI tweaks and the inclusion of banners for inactive projects

Miscellaneous
 * If anyone would like to help contribute to this newsletter, please leave a note at Wikipedia talk:Scripts++

Pending requests
 * A script to simplify addition of "connected contributor" templates to talk pages
 * A script to simplify removal of the "persondata" template on mlwiki
 * ...and many more, all available at User scripts/Requests



Stay safe --DannyS712 (talk) 19:42, 31 August 2020 (UTC)

About the newsletter

Subscription options

Discuss this issue

.documentation in User:EpochFail/HAPPI.css
I'm prepping to move some changes live in Module:Documentation which will "usurp" the class name .documentation as there are no other uses of the class. You might consider tweaking the name in HAPPI if it does not have the same intent as onwiki documentation. --Izno (talk) 18:32, 16 November 2020 (UTC)

New, simpler RfC to define trust levels for WikiLoop DoubleCheck
Hi , I'm writing to let you know we have simplified the RfC on trust levels for the tool WikiLoop DoubleCheck. Please join and share your thoughts about this feature! We made this change after hearing users' comments that the first RfC was too complicated. I hope you can participate this time around, giving your feedback on this new feature for WikiLoop DoubleCheck users. Thanks and see you around online, María Cruz MediaWiki message delivery (talk) 20:05, 19 November 2020 (UTC) If you would like to update your settings to change the wiki where you receive these messages, please do so here.

AI reviews
Hi EpochFail, I don't think we've interacted before but I saw some chat about your AI reviewer and thought I'd volunteer to help out. I have a couple of FACs running which you're more than welcome to use the algorithm(s) against, and I'd be delighted to be involved in anything related to trying to explain why the algorithm says what it says. Just let me know if I can help. The Rambling Man (Hands! Face! Space!&#33;!&#33;) 20:14, 25 November 2020 (UTC)
 * The Rambling Man fantastic! Sumit.iitp is leading the work here.  I'm mostly running support and advising on this project.  Sumit, how much trouble would it be to run against the FACs that The Rambling Man has nominated? --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 01:13, 28 November 2020 (UTC)
 * Hi The Rambling Man! Thanks for your interest here! User:EpochFail Running predictions against a set of FACs that The Rambling Man is driving is not too difficult. I will generate the predictions for all statements in the articles, and select the statements which the AI is most confident about. I will then post them on the talk pages in a day or two. Since I'll be posting them manually, the posting will be one by one on the articles. Sumit (talk) 07:44, 30 November 2020 (UTC)
 * Let me know when you're done in case I miss any notifications! Cheers. The Rambling Man (Hands! Face! Space!&#33;!&#33;) 13:29, 1 December 2020 (UTC)

Scripts++ Newsletter – Issue 18
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius:1em; box-shadow: 0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past three months (September, October, and November 2020).

Hello everyone and welcome to the 18th issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here
 * Cscr-featured.svg SD0001's W-Ping is this month's featured script


 * Gnome-colors-list-add.svg Sportzpikachu's bookmarks adds a bookmarks section to the navigation toolbar
 * SD0001's deleted-metadata-link.js adds links to the deletedrevisions API output for deleted pages
 * SD0001's GAN-helper makes it easier to create a GAN nomination.
 * 1234qwer1234qwer4's XfDstarted scripts add links to today's XfD log pages to the personal toolbar
 * Awesome Aasim's rcpatrol is a tool that can be used to patrol recent changes, including reverting edits and warning users
 * A new JavaScript snippet, MediaWiki:AFC-add-project-tags.js, can be used to add WikiProject tags to drafts at WikiProject Articles for creation/Add WikiProject tags without actually needing to install a user script.
 * Nardog's PlayAudioNow makes audio links playable while staying on the page, regardless of the format
 * GeneralNotability's ip-ext-info.js adds extended info on IPs in a popup with their range, ASN, and ISP
 * TheTVExpert's submitRMTR adds a button to Special:MovePage to submit a technical request at Requested moves/Technical requests
 * TheTVExpert's rmCloser helps with closing and relisting requested moves.
 * Enterprisey's parsoid-round-trip uses Parsoid to convert wikitext to HTML and back, and then shows the result and the difference between the original wikitext and the post-conversion wikitext.
 * Frietjes's infoboxgap assists in renumbering infobox labels/data/classes, so that a new line can be inserted in the middle of the infobox.

Pending requests
 * A request for a script to help with maintaining the lists of vital articles
 * A request for a script to make it easier to archive specific list entries, rather than whole discussion sections
 * ...and many more, all available at User scripts/Requests




 * Gnome-colors-view-refresh.svg Twinkle has made a number of improvements, including using a change tag to identify actions made with it and automatically filing edit requests for protected XfD nominations.
 * SD0001's shortdescs-in-category is now compatible with scripts modifying category listings, such as Equazcion's CatListMainTalkLinks
 * GhostInTheMachine's TalkHelper2 is a newer version of their prior script, TalkHelper
 * GeneralNotability's spihelper updated to 2.2.10, fixing a number of small bugs, automatically tagging globally locked socks as such in the sockpuppet template, and restoring open cases following an SPI history merge.
 * Enterprisey's script-installer gadget has been updated with more internationalization of messages, as well as addition of a user preference,  to allow controlling where new scripts are to be installed.

Open tasks
 * Enterprisey is looking for beta testers for their new section-watchlist user script. See the announcement.

Miscellaneous
 * The second edition of the m:Coolest Tool Award will be taking place in December.
 * Enterprisey's links-in-logs script has now been implemented as part of MediaWiki core, see 640761 and T207562.

My apologies for the delayed issue. As always, if anyone else would like to contribute, help is appreciated. Stay safe, --DannyS712 (talk) 18:20, 1 December 2020 (UTC)

About the newsletter

Subscription options

Discuss this issue

Scripts++ Newsletter – Issue 19
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius:1em; box-shadow: 0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past month (December 2020).

Hello everyone and welcome to the 19th issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here
 * Cscr-featured.svg Enterprisey's script-installer, a gadget that makes it easier to install scripts, is this month's featured script


 * Gnome-colors-list-add.svg Enterprisey's copy-section-link adds a popup to each section header with an appropriate wikilink and external link to the section.
 * DannyS712's FindBlacklistEntry can be used to figure out which line(s) in either the local or global spamblacklist prevent a particular url from being added.
 * Nardog's CodeEditorAssist allows customizing the code editor, including changing the color theme and enabling live autocompletion.
 * Cobaltcigs's IllWill searches wikidata.org for foreign-language sitelinks to populate ill templates and replace plain red links.


 * Gnome-colors-view-refresh.svg Twinkle has made a number of improvements, most notably full support for expiring watchlist items.
 * Evad37's Signpost Publishing Script is now at version 2.4.2, with changes including a fix for calculating the size of pages with files.
 * Evad37's MoveToDraft is now at version 2.5.1, with changes including:
 * checks for move conflicts and better options if move fails
 * the addition of Drafts moved from mainspace after moves
 * updating the default move log and notification texts to be more in line with WP:DRAFTIFY
 * Enterprisey's cv-revdel now handles failures more cleanly, including reporting failures caused by the spam blacklist.
 * Bradv's Notepad.js script has been updated with new styles and a new launch button.
 * A beta version of XFDcloser is available for testing, and brings a new user interface for dialogs and some preferences for customization.



Open tasks
 * Voting in the m:Community Wishlist Survey 2021 is ongoing until December 21, 2020. There are 15 proposals in the "bots and gadgets" category that may be of interest.

Pending requests
 * A script to help list discussions at WP:ANRFC (request)
 * A script to allow undoing edits despite conflicting intermediary edits (request)
 * ...and many more, all available at User scripts/Requests

Miscellaneous
 * The Watchlist Expiry feature worked on by the Community Tech team has been enabled on Wikipedia. For scripts that include watching or unwatching pages, developers may want to update their code to take advantage of the new functionality. See the documentation on mediawiki.org.
 * As noted in the prior issue, Enterprisey's links-in-logs script has now been implemented as part of MediaWiki core. By my count, this is his third script that was replaced by implementing the code in MediaWiki core or an extension, along with link-section-edits and abusefilter-hide-search. Additionally, his reply-link script is being converted in part to mw:Extension:DiscussionTools. Are there any other scripts that might be worth integrating directly in MediaWiki? Thoughts would be welcome at Wikipedia talk:Scripts++.

As always, if anyone else would like to contribute, help is appreciated. Stay safe, and happy holidays! --DannyS712 (talk) 00:19, 18 December 2020 (UTC)

About the newsletter

Subscription options

Discuss this issue

Minor fixes to userscripts
Hey EpochFail, it looks like you've got some user scripts in use by others that have bare javascript global wg-style variables. These are T72470 deprecated, and while I don't think there's a timeline for their removal, it's been that way for a while. It's usually a straightforward fix: all uses need updating, such as converting   to . There's some more info at ResourceLoader/Migration guide (users). I can take care of cleaning them up for you if you like, or give you a full list if you want to handle it, just let me know! ~ Amory <small style="color:#555"> (u • t • c) 11:56, 29 January 2021 (UTC)
 * I certainly wouldn't mind if you would handle it. If you've got a lot on your plate, I can take care of it if you share a list with me.  Thanks! --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail  (talk &bull; contribs) 19:39, 29 January 2021 (UTC)
 * ✅, let me know if anything looks busted or stopped working! ~ Amory <small style="color:#555"> (u • t • c) 00:54, 30 January 2021 (UTC)
 * Thank you u! --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail (talk &bull; contribs) 19:23, 1 February 2021 (UTC)

Scripts++ Newsletter – Issue 20
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius:1em; box-shadow:0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past month (January 2021).

Hello everyone and welcome to the 20th issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here
 * Gnome-colors-list-add.svg Ajbura's anrfc-lister assists with listing discussions at WP:ANRFC to request that they receive a formal close
 * Bradv's endlesscontribs provides for endless scrolling of contributions pages
 * Evad37's livenotifications displays notification alerts and messages in a little popup box, (almost) live as they happen
 * Terasail's Edit Request Tool allows users to reply to, close or remove protected edit requests
 * ToBeFree's clear-watchlist allows emptying your watchlist even if the watchlist is too full to be edited or cleared by conventional methods
 * Dentonius's GlobalRecentChanges allows you to monitor recent changes across various wikis.
 * Yahya's SNA (Start New Article) allows you to start a new article or draft from the navigation bar.
 * Nardog:
 * CatChangeLinker bluelinks the "(diff | hist)" part of category additions/removals on your watchlist.
 * CatChangesViewer lists recent category additions/removals on a category page.
 * Consecudiff adds links to diffs of consecutive edits by the same user on watchlist, history, etc.
 * DiffFontSwitcher allows you to toggle between fonts for diff by clicking a line number.
 * MoveHistory lists the past moves a page has gone through.
 * Novem Linguae:
 * UserHighlighterSimple is a fork of Chlod's UserHighlighter with a simplified color scheme.
 * VisualEditorEverywhere displays the Visual Editor edit tab and section links on pages that don't normally have them.


 * Gnome-colors-view-refresh.svg User:Evad37/TimestampDiffs – fixed compatibility issues with various scripts/gadgets, including Comments in Local Time. For discussion archives, now looks for revisions on the base page.
 * User:Enterprisey/undo-last-edit.js - no longer tries to load on special pages.
 * User:Ahecht/Scripts/pageswap - version 1.4 fixes reading destination from form field if destination is not in article namespace, and fixes self redirects.
 * XFDcloser - version 4 brings a new user interface for dialogs, some preferences for customising XFDcloser, major behind-the-scenes coding changes, and resolves various issues raised on the talkpage. Also, since version 3.16.6 non-admin soft delete closures have been allowed at TfD.



Open tasks
 * As a reminder, the legacy javascript globals (like accessing  without first assigning it a value or using   instead) are deprecated. If your user scripts make use of the globals, please update them to use   instead. Some global interface editors or local interface administrators may edit your user script to make these changes if you don't. See T72470 for more.
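The before/after shape of that migration uses MediaWiki's documented `mw.config.get()` accessor. A minimal sketch; the tiny `mw` stub and its values exist only so the snippet runs outside a wiki page:

```javascript
// Stub of the mw.config map (a mw.Map on a real wiki page); the values here
// are invented for illustration.
const mw = {
  config: {
    values: { wgPageName: 'User_talk:EpochFail', wgNamespaceNumber: 3 },
    get(key) { return this.values[key]; },
  },
};

// Before (deprecated, per T72470): relying on a bare global.
//   if (wgNamespaceNumber === 3) { ... }

// After: reading the same value through the supported interface.
if (mw.config.get('wgNamespaceNumber') === 3) {
  console.log('On a user talk page: ' + mw.config.get('wgPageName'));
}
```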

Miscellaneous
 * For people interested in creating user scripts or gadgets using TypeScript, a types-mediawiki package (GitHub, NPM) is now available that provides type definitions for the MediaWiki JS interface and the API.
 * A GitHub organization has been created for hosting codebases of gadgets. Users who maintain gadgets using GitHub may choose to move their repos to this organization, to ensure continued maintenance by others even if the original maintainer becomes inactive.

Pending requests
 * A script to ease reviewing Good Article nominations
 * A script to help manage Z number templates
 * ...and many more, all available at User scripts/Requests

As always, if anyone else would like to contribute, including nominating a featured script, help is appreciated. Stay safe, and happy new year! --DannyS712 (talk) 01:17, 3 February 2021 (UTC)

About the newsletter

Subscription options

Discuss this issue

You've got mail!
Shadowrvn728 ❯❯❯  Talk  00:35, 26 February 2021 (UTC)

WikiLoop 2020 Year in Review
Dear editors, developers and friends:

Thank you for supporting Project WikiLoop! The year 2020 was an unprecedented one. It was unusual for almost everyone. In spite of this, Project WikiLoop continued the hard work and made some progress that we are proud to share with you. We also wanted to extend a big thank you for your support, advice, contributions and love that make all this possible.

Head over to our project page on Meta Wikimedia to read a brief 2020 Year in Review for WikiLoop.

Thank you for taking the time to review Wikipedia using WikiLoop DoubleCheck. Your work is important and it matters to everyone. We look forward to continuing our collaboration through 2021!

María Cruz MediaWiki message delivery (talk) 01:35, 25 March 2021 (UTC)

success and warning classes
One or more of your scripts uses the  or   classes. Be aware that the styling for these classes may be removed in the near future. See WP:VPT for a list of scripts. Izno (talk) 18:21, 3 May 2021 (UTC)

Scripts++ Newsletter – Issue 21
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius:1em; box-shadow:0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> News and updates associated with user scripts from the past four months (February through May 2021).

Hello everyone and welcome to the 21st issue of the Wikipedia Scripts++ Newsletter:

Scripts Submit your new/improved script here


 * Gnome-colors-list-add.svg New scripts


 * Alexander Davronov's HistoryHelper helps to copy diff links from the history display of a page
 * Aseleste's Purger makes it easier to purge or null edit a page
 * Awesome Aasim:
 * CatMan adds a category manager to simplify interactions for adding and removing the categories of a page
 * redirectcreator allows for easy creation of multiple redirects
 * Bop34's EasyDesc adds a link to the top navigation bar which allows you to add a short description without opening the edit menu
 * BrandonXLF's NullEdit adds a portlet link to perform a null edit on a page
 * Chlod:
 * FoldArchives collapses archived talk page threads in order to reduce screen space
 * GoToTitle converts the page title into an input field for navigating to other pages
 * UserHighlighter adds highlighting to links to the userpages, talk pages, and contributions of administrators and other user groups as well as tooltips to indicate which groups a user is in
 * GhostInTheMachine:
 * NoEditSummary adds a warning if an edit will be saved without an edit summary
 * SortWhatLinksHere sorts the links shown on Special:WhatLinksHere
 * WatchlistTidy moves the "new updates found" timestamp block from the bottom of the filter block to the top of the filter area on the watchlist
 * GorillaWarfare's curlies simplifies converting curly quotes to straight quotes while in the source editing mode
 * Nardog:
 * AutoSectionLink corrects the  every time you hit "Show changes"
 * AutoTestcases autofills "Preview page with this template/module" with the most relevant /testcases page that exists
 * SortCentralAuthByEditCount sorts the list of local accounts on CentralAuth by edit count
 * Novem Linguae:
 * CiteHighlighter highlights 1800 sources green, yellow, or red depending on reliability.
 * NPPLinks adds WP:BEFORE Google search links, copyvio check, duplicate article check, and other links to the left menu.
 * Ritenerek's goce_nav adds a link to WikiProject Guild of Copy Editors to the navigation menu
 * Suffusion of Yellow:
 * filterDiff: Adds a "Show changes" button to the filter editor.
 * filterNotes: Parses filter notes as wikitext (so links are clickable), and signs and dates new comments for you.
 * filterTest: Adds a "Test changes" button. Opens Special:AbuseFilter/test with what's currently in the edit form, not with what's saved in the database, so you don't have to copy-paste your changes.
 * Terasail's COI Request Tool allows users to reply to, close or remove conflict of interest edit requests


 * Updated scripts


 * Twinkle has a number of improvements: most watchlist defaults now use the new temporary watchlist feature, rollbacks treat consecutive IPv6 editors in the same /64 range as the same user, there is a preview for shared IP tagging and a preference for watching users after CSD notification, and sysops gain the ability to block the /64, a link to WP:RfPP requests, and a new default of indefinite for copyright blocks.
 * Shortdesc helper now v3.4.17, changes include minor fixes and preventing edits that don't change the description.
 * BrandonXLF's CitationStyleMarker now recognizes WP:CSLSA and WP:CSVAN styles, in addition to WP:CS1 and WP:CS2
 * Joeytje50's JWB now version 4.1.0, includes the ability to generate page lists from the search tool, major updates to the handling of regular expressions, the storing of user settings, the addition of upload protection, and an option to skip pages that belong to a specific category, among other changes. See User:Joeytje50/JWB/Changelog for a full list of recent changes.
 * User:SD0001/deleted-metadata-link.js – updated to include a link to snippet of deleted content, instead of just metadata.


 * Miscellaneous


 * User scripts/List has been revamped to make it easier to find scripts suited for your needs. If you know of a cool script that is missing on the list, or a script on the list that is no longer working, please edit the list or let us know on the talk page.


 * Pending requests


 * A request for a script to help admins posting "recent death" items to Template:In the news
 * A request for a script to warn users about syntax and template errors
 * ...and many more, all available at User scripts/Requests



My apologies for this long-overdue issue, and if I missed any scripts. Hopefully going forward we can go back to monthly releases - any help would be appreciated. Thanks, --DannyS712 (talk) 13:04, 2 June 2021 (UTC)

About the newsletter

Subscription options

Discuss this issue

Snuggle
Hi. I hope you are doing well. Just wanted to ask, are you the creator of Snuggle? On the video tutorial Jackson_Peebles is mentioned. —usernamekiran (talk) 21:21, 20 September 2021 (UTC)
 * usernamekiran hi! I am the creator of Snuggle.  Jackson Peebles was very helpful in the design and coordination around the release of Snuggle.  It's been a long time since I did that work with him, but let me know if you have any questions about it and I'll try to help. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail</span>  (talk &bull; contribs) 22:35, 20 September 2021 (UTC)
 * thanks. I was just curious. I have liked the look of snuggle since I joined the WP, even though I had never used it. And I also respected it a lot. I wish I could use it. Thanks again. —usernamekiran (talk) 22:45, 20 September 2021 (UTC)

Nomination for deletion of Template:Random-subst
Template:Random-subst has been nominated for deletion. You are invited to comment on the discussion at the entry on the Templates for discussion page. Did Q28 make a mess today? 04:47, 18 November 2021 (UTC)

Scripts++ Newsletter – Issue 22
<div style="border-style:solid; border-color:#606060; background-color:#f8f8f8; border-width:2px; text-align:left; padding:7px; border-radius:1em; box-shadow:0.1em 0.1em 0.5em rgba(0,0,0,0.75);" class="plainlinks"> Hello everyone, and welcome to the 22nd issue of the Wikipedia Scripts++ Newsletter. This issue will be covering new and updated user scripts from the past seven months (June through December 2021).

Got anything good? Tell us about your new, improved, old, or messed-up script here!

Featured script
 * LuckyRename, by Alexis Jazz, is this month's featured script. LuckyRename makes requesting file moves easier, and automates the many steps in file moving (including automatic replacement of existing usage). Give it a shot!

Updated scripts
 * SD0001: hide-reverted-edits has been updated to take into account changes in reversion tools like Twinkle and RedWarn.
 * Writ Keeper: massRollback.js has added a rollback-some option (missed in last issue).
 * ClaudineChionh: SkinSwitcher (a fork and update of Eizen's script) provides an options menu/toolbox/toolbar allowing users to view a given page in MediaWiki's default skins.

Pending requests
 * Mass-patrolling at Special:NewPages
 * Taking up the mantle of an almost completed peer review javascript bot
 * Restore the Wikipedia Citation Tool for Google Books
 * Autohighlighting specific characters (Greek)
 * ...and many more, all available at User scripts/Requests

Miscellaneous
 * User scripts/Ranking is a sortable table of Wikipedia's thousand-or-so most commonly used scripts; it includes their author, last modification date, installation count, and sometimes a short description.
 * Toolhub is a community managed catalog of software tools used in the Wikimedia movement. Technical volunteers can use Toolhub to document the tools that they create or maintain. All Wikimedians can use Toolhub to search for tools to help with their workflows and to create lists of useful tools to share with others.

New scripts
 * Ahecht:
 * draft-sorter sorts AfC drafts by adding WikiProject banners to their talk pages. It supersedes User:Enterprisey/draft-sorter, adding a few features and fixing some bugs.
 * massmove, a modified User:Plastikspork/massmove.js that adds a link to the left column, allows adding and removing both prefixes and suffixes.
 * watchlistcleaner removes missing pages (redlinks), redirects, pages you haven't edited recently, and/or pages you've never edited from your watchlist.
 * Awesome Aasim:
 * Infiniscroll adds infinite scrolling to user contributions, page histories, and log pages.
 * Quick create allows for the fast creation of red-linked pages with two clicks.
 * Caburum:
 * UTCclock adds a clock displaying the current UTC time.
 * Chlod:
 * CopiedTemplateEditor, mainly for CCI case handlers, allows graphically editing a talk page's copied templates.
 * DaxServer:
 * BooksToSfn adds a portlet link in Visual Editor's source mode editing, in main namespace articles or in the user's Sandbox. When clicked, it converts one inside a   tag block into an.
 * FlightTime:
 * OneClickArchiver is a custom version of User:Technical_13/Scripts/OneClickArchiver which doesn't prepend Clear to the top of each section on the archive page.
 * Jon Harald Søby:
 * diffedit enables editing directly from viewing a diff "when, for instance, you notice a tiny mistake deep into an article, and don't want to edit the entire article and re-find that one line to fix that tiny mistake".
 * warnOnLargeFile warns you if you're about to open a very large file (width/height >10,000px or file size >100 MB) from a file page.
 * JPxG:
 * PressPass adds a collection of tools for Newspapers.com, including configurable automatic citation generation in five different formats.
 * CurrentSwitcher gives you links on the contribs page to hide duplicate entries, current revisions, rollbacks, huggles, twinkles, and redwarns.
 * TrackSum lets you automatically sum the lengths of tracks in templates like track listing and get total runtimes.
 * Nardog:
 * CopySectLink adds a button to copy the unencoded page title or section path next to each heading.
 * IPAInput allows you to type in IPA symbols by directly looking at an IPA key like Help:IPA/English and clicking on the symbols.
 * TemplatePreviewGuard warns when you try to use "Preview page with this template" with a page that doesn't transclude the template.
 * NguoiDungKhongDinhDanh:
 * ContribsTabVector adds "Contributions" and "Statistics" tabs to user and user talk pages on the Vector skin.
 * CopyvioChecker adds a "CopyvioCheck" tab to all pages, except Special (Vector skin only).
 * LiveDiffLink is a version of Equazcion's LiveDiffLink which shows a wikilink instead of a URL.
 * QuickDiff (by OneTwoThreeFall at Fandom) lets you quickly view any diff link on a wiki, whether on Recent Changes, contribs pages, history pages, the diff view itself, or elsewhere. For more information, view its page on Fandom.
 * Novem Linguae:
 * DetectSNG scans a list of 1,600 SNG keywords and displays them at the top of the article.
 * NotSoFast highlights recently created articles in the new pages feed, to discourage patrolling them too quickly.
 * UserRightsDiff concisely displays what perm was added or removed when viewing Special:UserRights.
 * VoteCounter displays a rough count of keeps and deletes at XFDs, RFCs, etc.
 * WatchlistAFD automatically watchlists the AFDs of any pages you AFC accept or NPP patrol, to help you calibrate your reviewing.
 * P.T.Đ:
 * TwinkleMobile enables Twinkle on mobile view (Minerva skin).
 * Qwerfjkl:
 * editRedirect adds a → link after redirects to edit them.
 * RegExTypoFix, a script for fixing typos, is a wrapper for User:Joeytje50/RETF.js.
 * talkback creates links after user talk page links like this: | |  (with the first linking to the user's contributions, and the latter giving the option of sending a talkback notice). It also adds a [] link next to section headers.
 * Rublov:
 * diff-link shows "copy" links on history and contributions pages that copy an internal link to the diff (e.g., Special:Diff/1026402230) to your clipboard when clicked.
 * Rummskartoffel:
 * auto-watchlist-expiry automatically watchlists every page you edit for a user-definable duration (you can still pick a different time using the dropdown, though).
 * generate pings generates the wikitext needed to ping all members of a category, up to 50 editors (the limit defined by MediaWiki).
 * share ExpandTemplates url allows for easy sharing of your inputs to Special:ExpandTemplates. It adds a button that, when clicked, copies a shareable URL to your exact invocation of the page, like this. Other editors do not need to have this script installed in order to access the URL generated.
 * show tag names shows the real names of tags next to their display names in places such as page revision histories or the watchlist.
 * Tol:
 * VisualEditor Citation Needed adds a button (under "Insert") in VisualEditor to add a citation needed tag.
 * Venkat TL:
 * ColourContrib color-codes the user contributions page so that pages you've edited last are sharply distinguished from pages where another editor was the last to edit the page.
 * Vukky:
 * StatusChanger is a fork of Enterprisey's Status Changer that adds a UI to the script (using Morebits, so you'll need to have Twinkle enabled to use it).
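As an aside, the 50-editor cap mentioned above for generate pings is a hard MediaWiki limit on mentions per edit, and the batching it implies is easy to picture. The following sketch is purely illustrative (it is not the actual script): a hypothetical helper that splits a list of usernames into {{ping}} templates of at most 50 users each.

```python
# Illustrative sketch, not the actual "generate pings" script:
# split usernames into {{ping|...}} templates of at most 50 users
# each, matching the per-edit mention limit enforced by MediaWiki.
def ping_wikitext(usernames, limit=50):
    """Return a list of {{ping|...}} strings, each within the limit."""
    batches = [usernames[i:i + limit] for i in range(0, len(usernames), limit)]
    return ["{{ping|" + "|".join(batch) + "}}" for batch in batches]

print(ping_wikitext(["Alice", "Bob"]))  # → ['{{ping|Alice|Bob}}']
```

With 120 names, for example, this yields three templates (50 + 50 + 20), so each saved edit stays under the notification cap.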

All in all, some very neat scripts were written in these last few months. Hoping to see many more in the next issue -- drop us a line on the talk page if you've been writing (or seeing) anything cool and good. Filling in for DannyS712, this has been jp×g. Take care, and merry Christmas! jp×g 07:30, 24 December 2021 (UTC)

About the newsletter

Archive

Subscribe or unsubscribe

Discuss this issue

"Wikipedia:Ikipedia:Snuggle/IRC/2013-07-10/Broadcast message" listed at Redirects for discussion
An editor has identified a potential problem with the redirect Wikipedia:Ikipedia:Snuggle/IRC/2013-07-10/Broadcast message and has thus listed it for discussion. This discussion will occur at Redirects for discussion/Log/2021 December 25 until a consensus is reached, and anyone, including you, is welcome to contribute to the discussion. Q28 (talk) 10:38, 25 December 2021 (UTC)

Notify URL to update
Hi, and thanks a lot for your work and your tools (I use mediawiki-utilities a lot). Anyway, just to notify you: the URL on your page needs to be updated, where you write: ''ORES is a AI prediction as a service. It hosts machine learning models for predicting vandalism, article quality, etc.''

It should now point to: https://www.mediawiki.org/wiki/ORES

Thanks a lot, Gelma — Preceding unsigned comment added by Gelma (talk • contribs) 19:12, 9 January 2022 (UTC)
 * Gelma thanks for the notice! ✅ --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail</span>  (talk &bull; contribs) 18:35, 10 January 2022 (UTC)

Chat-GPT discussion at VP Policy
Hey, long time no see...

It might be helpful if someone with machine learning expertise as applied to Wikipedia took a look at the following discussion:


 * Village pump (policy)

And of course, I naturally thought of you. ;)

Not looking for a particular viewpoint. I'm hoping you can help answer the questions, "Should we be concerned?" and, if so, "What can or should we do to get ready for this?" And... "How much effort will it take?"

I figure that you are probably more attuned to the ramifications of this technology, and how to adapt to it, than the rest of us.

Thank you. Sincerely,  &mdash; The Transhumanist   23:34, 20 January 2023 (UTC)


 * Thanks for the ping. It's an important discussion.  I'll catch up and try to chime in.  It might take me a couple of days. --<span style="display:inline-block;padding:0 .25em;border-bottom:2px solid #008;background: #f6f6f6;" title="aka halfak">EpochFail</span>  (talk &bull; contribs) 05:39, 21 January 2023 (UTC)

Chatbot ramifications

 * 1) Panicked Google to unveil over 20 AI products in 2023
 * 2) ChatGPT reaches 100 million users two months after launch
 * 3) Google has stiff competition now, after Microsoft integrated powerful A.I. technology into its search engine.

I've been wondering...

There are hundreds of companies building apps on top of GPT-3.


 * GPT-3: Building Innovative NLP Products Using Large Language Models - Anna’s Archive

A big problem with GPT-3 apps (such as ChatGPT) at this time is that they are not compatible with Wikipedia's content policies. They produce content without citations or with bogus citations, cite Wikipedia itself, cite unreliable or inappropriate sources, may make up content altogether, produce biased content, etc.

What would be entailed in building such an app devoid of such problems, designed to support Wikipedia development?


 * Note that many such apps access GPT-3 via prompts through the OpenAI API, making prompt engineering a key component of development.
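To make the prompt-engineering point concrete, here is a minimal sketch of how such an app might frame a request to OpenAI's completions endpoint so the model is steered toward sourced, encyclopedic output. The prompt wording, topic, and source list are entirely illustrative; whether any prompt reliably prevents fabricated citations is exactly the open question raised above. (The sketch only builds the request payload; it does not call the API.)

```python
import json

# Hypothetical prompt scaffold for a Wikipedia-oriented GPT-3 app.
# The payload fields match OpenAI's completions API; the prompt text
# itself is an illustrative assumption, not a tested recipe.
PROMPT_TEMPLATE = (
    "Draft an encyclopedic paragraph about {topic}. "
    "Cite only the reliable sources listed below, using <ref> tags, "
    "and do not assert anything the sources do not support.\n"
    "Sources:\n{sources}"
)

def build_request(topic, sources):
    """Build the JSON payload an app would POST to /v1/completions."""
    prompt = PROMPT_TEMPLATE.format(topic=topic, sources="\n".join(sources))
    return {
        "model": "text-davinci-003",  # a GPT-3 model exposed by the API
        "prompt": prompt,
        "max_tokens": 400,
        "temperature": 0.2,  # low temperature to reduce free invention
    }

payload = build_request("the Danube", ["Example, J. (2001). Rivers of Europe."])
print(json.dumps(payload, indent=2))
```

The design work ("prompt engineering") lives almost entirely in that template string, which is why app quality varies so much on top of the same underlying model.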

Please look into the feasibility.

I look forward to your reply. &mdash; The Transhumanist  11:56, 12 February 2023 (UTC)

P.S.: Please ping me when you reply. Thanks. TT

Timeline of ChatGPT news
(in reverse-chronological order)


 * 1) Robots let ChatGPT touch the real world thanks to Microsoft | Ars Technica
 * 2) Oxford and Cambridge ban ChatGPT over plagiarism fears but other universities choose to embrace AI bot
 * 3) AI Is Set to Boom Into a $90 Billion Industry by 2025 Amid ChatGPT Frenzy
 * 4) Investors are going nuts for ChatGPT-ish artificial intelligence
 * 5) ChatGPT is coming to Snapchat. Just don't tell it your secrets | ZDNET
 * 6) ChatGPT: Chinese apps remove chatbot as global AI race heats up | CNN Business
 * 7) Hackers use fake ChatGPT apps to push Windows, Android malware
 * 8) ChatGPT Website Cracks Global Top 50 With 672 Million Visits in January
 * 9) Microsoft is bringing ChatGPT-powered Bing to Windows 11 in latest update
 * 10) Microsoft to demo its new ChatGPT-like AI in Word, PowerPoint, and Outlook soon - The Verge
 * 11) Microsoft Invests $10 Billion in ChatGPT Maker OpenAI
 * 12) What is ChatGPT? Viral AI chatbot at heart of Microsoft-Google fight
 * 13) How ChatGPT Kicked Off an A.I. Arms Race - The New York Times
 * 14) ChatGPT reaches 100 million users two months after launch | Chatbots | The Guardian
 * 15) ChatGPT sets record for fastest-growing user base - analyst note | Reuters
 * 16) Faced with criticism it's a haven for cheaters, ChatGPT adds tool to catch them | CBC News
 * 17) Students using ChatGPT to cheat, professor warns
 * 18) ChatGPT gained 1 million users in under a week. Here’s why the AI chatbot is primed to disrupt search as we know it
 * 19) ChatGPT hit 1 million users in 5 days: Here’s how long it took others to reach that milestone | Technology News,The Indian Express
 * 20) Why We're Obsessed With the Mind-Blowing ChatGPT AI Chatbot - CNET
 * 21) ChatGPT is a new AI chatbot that can answer questions and write essays

No doubt, this is going to impact Wikipedia in a major way. How the Wikipedia community will adapt in response is a big deal. I hope you see this soon. &mdash; The Transhumanist  08:48, 1 March 2023 (UTC)

Nomination for deletion of Template:Cleanup-discussion
Template:Cleanup-discussion has been nominated for deletion. You are invited to comment on the discussion at the entry on the Templates for discussion page. – Jonesey95 (talk) 21:14, 11 September 2023 (UTC)

Nomination for deletion of Template:Cleanup-discussion-reference
Template:Cleanup-discussion-reference has been nominated for deletion. You are invited to comment on the discussion at the entry on the Templates for discussion page. – Jonesey95 (talk) 21:14, 11 September 2023 (UTC)

Request
I've started a revamp of Tools/Optimum tool set.

Please take a look and let me know if there are any essential techniques or must have tools that you think should be included.

Thank you.

Sincerely,  &mdash; The Transhumanist   07:08, 5 November 2023 (UTC)