User:Dušan Kreheľ/Signpost draft:My idea about wikipage parser

This article is about how I got into bot editing and why it discourages me. It deals with the Wikipedia page parser.

I started learning the Croatian language and decided to use Wikipedia for it. While checking the Croatian wiki pages about Slovakia, I noticed that the town statistics were four years old. As a programmer, I decided to change this state on my own, so I wrote a bot that works through the wiki API using the Wikimate library. While it was working on plwiki, it already happened in places that when the data in an infobox was changed, the definition of a reference remained orphaned in the references section. So I wrote code that checks and updates the state of a reference and its definition after updating the data on the page, but it did not do this quite correctly. In addition, the code was strongly focused on working with references and the current statistical data. So I decided to write new, better-structured code. It would be object-oriented, so that the entire grammar of the wiki page could be implemented later; in the beginning, only the things necessary for my bot's purpose were implemented (i.e. references and templates, but not headings or tables). After two new major versions, while testing the syntax and semantics of Wikimedia wiki pages, I decided to stop all work on that code and not continue its development, nor the development of completely new functions the bot could eventually perform. One thing led me to stop: the very complex, or outright incorrect, processing of syntactically erroneous wiki page input (discovered by testing MediaWiki). Even if a grammar of the wiki page is defined (it exists, but I somehow cannot find the link), there is no RFC document that specifies the entire syntax and semantics of a wiki page, and especially how to deal with bad inputs.
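To show the kind of check the bot needed, here is a minimal sketch, assuming list-defined references inside a <references>…</references> block; the function and its name are my own illustration, not part of Wikimate. Its regular expressions already hint at the core problem: quoting variants, templates and nesting defeat pattern-based processing, which is exactly what pushed me toward writing a real parser.

<syntaxhighlight lang="python">
import re

def orphaned_ref_names(wikitext: str) -> set[str]:
    """Names defined inside <references>...</references> but no longer
    used as <ref name="..."/> anywhere else on the page."""
    block = re.search(r"<references>(.*?)</references>", wikitext, re.S | re.I)
    if not block:
        return set()
    # Definitions living in the references block: <ref name="x">...</ref>
    defined = set(re.findall(r'<ref\s+name\s*=\s*"([^"]+)"\s*>',
                             block.group(1), re.I))
    # Usages in the rest of the page: <ref name="x" />
    body = wikitext[:block.start()] + wikitext[block.end():]
    used = set(re.findall(r'<ref\s+name\s*=\s*"([^"]+)"\s*/>',
                          body, re.I))
    return defined - used
</syntaxhighlight>

Every name in the returned set marks a definition that the bot would have to remove or keep in sync after an infobox update.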

So I think that if we want bot work today to be high-quality and honest, it is not possible with 100% certainty, because two things are missing:
 * §1 an RFC with the complete syntax and semantics definition of a wiki page,
 * §2 a new reference implementation of the parser.

These two points in more detail:
 * RFC language definition
 ** Define the syntax and semantics in detail: it is necessary to know (since both valid and invalid combinations exist) where each element begins and ends when building the document tree, and this matters when creating the subtrees of individual node elements.
 ** The wiki page language is actually a mix of HTML and wiki syntax, so the boundaries between the two must be specified.
 ** It would bring the possibility to use the implementation in non-Wikimedia-movement software as well.
 ** Complete the process of separating metadata from the wiki page content (or specify it clearly).
 ** It would spread MediaWiki syntax outside the Wikimedia movement.
 ** Audit the wiki page language and modernize it sensibly.
 * New wiki parser
 ** Create a library that works with wiki pages through a DOM (see the sketch after this list).
 ** Possibly under a more permissive license (e.g. MIT).
 ** It should work with wiki text only.
 ** Paired or unpaired HTML tags should be recognized by the wiki parser but not interpreted (maybe).
 ** It can enable export to formats other than HTML (plain text or GUI widgets).
 ** It can also be used in other software:
 *** offline apps for document authoring,
 *** a format suitable for user notes,
 *** other editorial systems (WordPress, Drupal, …).
 * For the community
 ** Quality work is required from bot operators, and we give them full responsibility when they run a bot, but in my opinion they currently do not have the best means (to a reasonable extent) for it.
 ** The software environment around MediaWiki will only get bigger and bigger, for example new modules or new elements (e.g. Wikifunctions), so it is best to do this now and properly. In practice, the wiki page is made for humans, but nowadays bots operate on it a lot, so it would be good if it were made for both sides, with a clear syntax.
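As a sketch of what "working through a DOM" could mean, here is a minimal, hypothetical node model; every name in it is my own assumption for illustration, not an existing library. The point is that a bot changes one node and serialization leaves every other node byte-identical, instead of rewriting the whole page with regular expressions.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of a hypothetical wiki page DOM (illustration only)."""
    kind: str                      # "root", "text", "template", "ref", ...
    value: str = ""                # raw wikitext of a leaf node
    name: str = ""                 # template or reference name, if any
    children: list["Node"] = field(default_factory=list)

    def find_all(self, kind: str):
        """Walk the subtree, yielding every node of the given kind."""
        if self.kind == kind:
            yield self
        for child in self.children:
            yield from child.find_all(kind)

    def to_wikitext(self) -> str:
        """Serialize back to wikitext. Nodes the bot did not touch must
        come back byte-identical -- the key property for bot edits."""
        inner = "".join(child.to_wikitext() for child in self.children)
        if self.kind == "template":
            return "{{" + self.name + inner + "}}"
        return self.value + inner
</syntaxhighlight>

A statistics bot would then locate the infobox via find_all("template"), replace one parameter node, and serialize – no pattern matching over raw page text.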

The current situation even has a negative effect on the movement:
 * Even if a bot's task is syntactic, after a while semantic processing becomes necessary too (in practice: under some conditions, do not perform the given operation), and the bot author does not want that (look at a [[:m:Special:Permalink/23662865#The_problem_change|practical case]]; a guard sketch follows this list).
 * Instead of a high-quality parser implementation using existing tools, new things keep being added to the syntax and semantics of the page – for example the published technical RfC on a new syntax for multi-line list items in talk page comments. The parser should process the input document block by block rather than line by line.
 * When I edit a wiki page, the syntax highlighting in the page source editor does not follow exactly the same syntax and semantics as the MediaWiki parser.
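One widely used convention of exactly this "under some conditions, do not perform the operation" kind is bot exclusion compliance via the {{bots}}/{{nobots}} templates. The sketch below checks it before an otherwise purely syntactic edit; it is simplified (real template usage has more variants) and the function name is my own.

<syntaxhighlight lang="python">
import re

def bot_may_edit(wikitext: str, bot_name: str) -> bool:
    """Semantic guard before a syntactic edit: honour the
    {{nobots}} / {{bots|deny=...}} exclusion convention.
    Simplified sketch of the convention, not a complete parser."""
    if re.search(r"\{\{\s*nobots\s*\}\}", wikitext, re.I):
        return False
    deny = re.search(r"\{\{\s*bots\s*\|\s*deny\s*=\s*([^}]*)\}\}",
                     wikitext, re.I)
    if deny:
        names = {n.strip().lower() for n in deny.group(1).split(",")}
        if "all" in names or bot_name.lower() in names:
            return False
    return True
</syntaxhighlight>

Note that even this small guard is again regex-based; with a proper wiki page DOM it would be a simple lookup for a template node.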

So here we have one existing way – Parsoid – but this solution also has aspects I dislike:
 * It is a converter between wiki page and HTML, i.e. it fulfills only one of Wikimedia's requirements.
 * Processing does not produce a single tree (but a list of records).
 * It is, more or less, a MediaWiki-only solution.
 * The wiki -> HTML -> wiki conversion does not give back the same result. (This is important for a bot; adjustments should only be optional, not the default. A round-trip check sketch follows this list.)
 * It is slow.
 * A widely used piece of software should be optimized – also out of care for the planet.
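The round-trip claim is easy to test against the public Wikimedia REST API, which exposes Parsoid's transforms. The endpoint paths and form fields below follow that API's documentation as I understand it; treat them as assumptions and verify against the current docs before relying on them.

<syntaxhighlight lang="python">
import requests

REST = "https://en.wikipedia.org/api/rest_v1"

def round_trips(wikitext: str) -> bool:
    """Send wikitext -> HTML -> wikitext through Parsoid and report
    whether the result is byte-identical to the input."""
    html = requests.post(f"{REST}/transform/wikitext/to/html",
                         data={"wikitext": wikitext}).text
    back = requests.post(f"{REST}/transform/html/to/wikitext",
                         data={"html": html}).text
    return back == wikitext

# Unusual but valid spacing, for example, may come back normalized.
print(round_trips("{{cite web |url=http://example.org |title=T }}"))
</syntaxhighlight>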

Epilogue
I see two needs when designing and implementing a Wikimedia wiki page:
 * convert from/to HTML,
 * have a Wikimedia wiki page DOM – for bot needs.
I don't like that there is no officially defined "Wikimedia wiki page DOM" that gives access to the page as a wiki document rather than an HTML document, and I don't like, for example, the current approach of bots, where reading and editing of wiki pages happens largely via regular expressions. By changing to an abstract tree, the format could become interesting even outside the Wikimedia movement.

I decided to apply the older knowledge in another project, dwiki, with its considerably more interesting speed of converting wiki pages to HTML.