User:SDZeroBot

SDZeroBot runs on Node.js and uses the mwn bot framework, also developed by SD0001. Most tasks are written in JavaScript, while the newer ones are in TypeScript. The source code is available on GitHub.

One-time / on-demand
Tasks which edit only in the userspace don't require a BRFA.

How do you generate article excerpts?
Good question. The excerpts of articles used on many of SDZeroBot's classification pages are generated using a combination of regex and some slightly more formal parsing methods. The Node.js source code can be seen here; it also relies on mwn's wikitext class. The excerpt generator is also available as a webservice hosted on Toolforge at https://summary-generator.toolforge.org/ – with a horrible bare-minimum UI, but a better API endpoint. See the GitHub README for usage instructions.

I initially considered using the code from popups, but it was too messy and so entangled with other popups code that I couldn't get it to work standalone.

All excerpts are kept short enough that attribution and copyright concerns don't arise.
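For illustration, here is a minimal sketch of what regex-based excerpt generation can look like. This is not SDZeroBot's actual code (which is linked above and also uses mwn's wikitext class); the function name and the exact set of regexes are assumptions.

```javascript
// Hypothetical, simplified excerpt generator – not the actual SDZeroBot code.
function makeExcerpt(wikitext, maxChars = 250) {
  let text = wikitext;
  // Strip templates innermost-first; a single regex pass can't handle nesting.
  let prev;
  do {
    prev = text;
    text = text.replace(/\{\{[^{}]*\}\}/g, '');
  } while (text !== prev);
  text = text
    .replace(/<ref[^>]*\/>/g, '') // self-closing references
    .replace(/<ref[^>]*>[\s\S]*?<\/ref>/g, '') // reference footnotes
    .replace(/\[\[(?:[^\]|]*\|)?([^\]]*)\]\]/g, '$1') // [[Target|label]] → label
    .replace(/'{2,}/g, '') // bold/italic markup
    .replace(/\s+/g, ' ')
    .trim();
  return text.length > maxChars ? text.slice(0, maxChars) + '...' : text;
}

console.log(makeExcerpt(
  "{{Infobox person|name=Ada}}'''Ada Lovelace''' was an [[England|English]] mathematician.<ref>cite</ref>"
));
// → "Ada Lovelace was an English mathematician."
```

A real implementation has to cope with many more constructs (tables, file links, HTML comments, parser functions), which is where the more formal parsing methods come in.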

Source code
All source code that drives SDZeroBot is publicly available via the GitHub repository, as well as in the /data/project/sdzerobot directory on Toolforge. Even the logs (*.out and *.err files) are publicly visible, which is not the case by default on Toolforge. The jobs.yml file used to schedule the tasks can also be viewed there.

To do
If you're interested in helping out with these tasks, please contact me.


 * Split certain sortlists into subtopics.
 * For the sports list (1500+ pages), use machine learning to section this list by sport.
 * Split the biography list by professions - this can be done just by looking at the other topics the bios have been classified with.
 * For the STEM lists, make sections for core articles, STEM biographies, STEM media, STEM companies, ... (discussion)


 * Automatically produce short descriptions for articles and drafts.
 * Drafts mostly don't have short descriptions at all; they'd be very useful in AfC sorting, and also on the AfD, NPP, and PROD lists.
 * Explore the use of machine learning for this, failing which other methods such as Trialpears' bio shortdesc generator.
 * If that also doesn't give the desired level of accuracy (especially for non-bio articles), don't actually add the shortdescs to the articles, but show them on the sortlists.


 * Consider creating a Toolforge-hosted web UI for the AfD sorting list, so that more columns can be added (with their visibility toggled using JavaScript), based on the ideas here.
 * Automate delsort tagging of AfDs using ORES. Works only for select delsort lists which have a corresponding ORES topic, or
 * Automate delsort taggings using a custom machine learning model, trained on the delsort tagging done so far by humans. The difficulty here is that unbiased training of the model requires access to the content of deleted articles as well; simply training it on articles that were kept at AfD would not give good results.
 * Big one: identify promising AFC drafts using ML.
 * Probably using TemplateStyles, improve the appearance of the tables on very small and very wide screens.
 * Create unified lists for PROD and AFD which include both deletion rationale and lead text. ✅ PROD
 * Don't duplicate nomination text on AFD sorting report for multi-article nominations.
 * For User:SDZeroBot/Declined AFCs and G13 soon, figure out ways to better identify bad and good drafts?
 * Integrate unreliable source detection using User:Headbomb/unreliable.js.
 * Create articlesearch tool – shows excerpts of articles from CirrusSearch queries – use ReactJS

Monitoring failures
For each SDZeroBot task, most of the code is in an async function with a catch block that traps errors and formats them into an email sent to the tool account, which lands in my inbox. For good measure, there's also a process-level uncaughtException handler.
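A sketch of that pattern, assuming a hypothetical `sendErrorEmail` helper standing in for the real mail transport:

```javascript
// Sketch only: sendErrorEmail is a stand-in for whatever actually
// delivers mail to the tool account.
const sentMails = [];
async function sendErrorEmail(subject, body) {
  sentMails.push({ subject, body }); // real code would hand off to an MTA
}

// Wrap each task so any error becomes an email instead of a silent failure.
async function runTask(name, task) {
  try {
    await task();
  } catch (err) {
    await sendErrorEmail(`[${name}] failed`, err.stack || String(err));
  }
}

// Process-level safety net for anything thrown outside the wrapper.
process.on('uncaughtException', (err) => {
  sendErrorEmail('uncaughtException', String(err.stack)).finally(() => process.exit(1));
});

// Example: a throwing task gets reported rather than crashing the run.
runTask('demo-task', async () => { throw new Error('boom'); })
  .then(() => console.log(sentMails[0].subject)); // prints "[demo-task] failed"
```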

The only kinds of errors the above wouldn't handle are ones that occur before the JavaScript code even starts executing (such as the file accidentally losing its executable permission), or OOMs – both of which are handled at the level of the Toolforge Jobs framework.

In addition, the report pages are listed above on this user page along with their last-updated timestamps. Together with the expected update frequency of each report, this is fed into a Lua module which prints the timestamp in bold red if the report is delayed.
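The on-wiki check is a Lua module, but the logic amounts to something like the following JavaScript sketch; the 1.5× grace factor before flagging a report as delayed is an illustrative assumption.

```javascript
// JavaScript rendition of the staleness check; the 1.5× grace factor
// is an assumption for illustration.
function isDelayed(lastUpdated, expectedFrequencyHours, now = Date.now()) {
  const ageMs = now - lastUpdated.getTime();
  return ageMs > expectedFrequencyHours * 3600 * 1000 * 1.5;
}

// A report expected daily but last updated 40 hours ago would get
// its timestamp rendered in bold red:
const fortyHoursAgo = new Date(Date.now() - 40 * 3600 * 1000);
console.log(isDelayed(fortyHoursAgo, 24)); // true (40h > 36h threshold)
```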

There's also WP:BAM which although maintained by SDZeroBot, is not used for monitoring itself.

A good combination of failure-monitoring techniques is essential for operating bots that reliably perform a number of tasks, without you having to spend time and energy making sure everything is running.

Handling blacklisted links
If SDZeroBot is unable to save an edit because it would introduce a spam-blacklisted link (which of course isn't the bot's fault, since it likely just picked up the text to be added from another place), it identifies the problematic link from the API response, removes the protocol ("http://" or "https://") from the link, and then attempts to save the page again. This does mean that a link that was supposed to look like [https://google.com Link label] ends up looking like [google.com Link label], but that is the closest to the original that still allows the edit to go through. Besides, the link was blacklisted anyway, so it probably shouldn't be clickable.
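A sketch of this retry, with assumed error-object fields (`code`, `matchedUrl`) standing in for whatever the bot framework actually surfaces from the spamblacklist API response:

```javascript
// Sketch only: the field names on `err` are assumptions, not the real
// API response shape.
function defuseBlacklistedLink(wikitext, blacklistedUrl) {
  // "[https://google.com Link label]" → "[google.com Link label]":
  // no longer an external link, but closest-looking to the original.
  const bare = blacklistedUrl.replace(/^https?:\/\//, '');
  return wikitext.split(blacklistedUrl).join(bare);
}

// `savePage` stands in for the bot framework's save/edit call.
async function saveWithBlacklistRetry(savePage, title, text) {
  try {
    await savePage(title, text);
  } catch (err) {
    if (err.code === 'spamblacklist' && err.matchedUrl) {
      // Retry once with the protocol stripped from the offending link.
      await savePage(title, defuseBlacklistedLink(text, err.matchedUrl));
    } else {
      throw err;
    }
  }
}
```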

Use OAuth
Always use OAuth instead of BotPasswords. It has several advantages:
 * Faster: BotPasswords requires at least 3 API calls just to get the bot off the blocks: one to fetch a login token, another to actually log in, and usually one more to fetch edit tokens. Since OAuth doesn't require any API calls to begin authentication, you need just one API call – to fetch the tokens.
 * Fewer errors: session loss often occurs with cookie-based authentication. Good bot frameworks should handle this automatically by logging in again on getting an assertbotfailed or assertuserfailed API response, but if yours doesn't, you can avoid the problem entirely by using OAuth – OAuth tokens don't expire.
 * No need to cache cookies: If your bot task is too frequent (say every 10 minutes), you're likely to have a high login rate unless you cache the login cookies and use them across bot runs. High login rates are frowned upon by server admins. Again, with OAuth, you don't have to worry about this.
 * More secure: with OAuth, the credentials are used to cryptographically sign each request rather than to maintain a logged-in session, and the account password is never sent over the wire.
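With mwn, switching to OAuth is mostly a matter of passing OAuth 1.0a credentials to `Mwn.init` instead of a username/password pair, roughly as in the mwn README (placeholder values shown; the exact export name varies across mwn versions):

```javascript
const { Mwn } = require('mwn');

// Inside an async context:
const bot = await Mwn.init({
  apiUrl: 'https://en.wikipedia.org/w/api.php',
  // OAuth 1.0a credentials, instead of username/password:
  OAuthCredentials: {
    consumerToken: '...',
    consumerSecret: '...',
    accessToken: '...',
    accessSecret: '...',
  },
  userAgent: 'YourBot/1.0 (contact page URL)', // placeholder
});
```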