Module talk:UnitTests

Test group order
At the moment, the test groups appear on the page in random order. Is there any way of getting them to appear in the order that they are defined? — Mr. Stradivarius  ♪ talk ♪ 15:30, 3 April 2013 (UTC)
 * As far as I know it's not possible to determine the order in which tests were defined, unless perhaps the test framework reads the module page (Lua source) itself. It's also not clear this is always the desired behaviour. There are a couple of other approaches: always using alphabetical order and naming your tests so that they sort in the order you want (for example with numeric prefixes); or giving you an option to specify the order explicitly. Dcoetzee 17:09, 3 April 2013 (UTC)
 * Putting them in alphabetical order would be a good way around the problem. The test cases on Module talk:Delink/testcases don't appear in alphabetical order, though - it's just random, as far as I can tell. I think it's probably just the order in which pairs happens to iterate the table, although I don't know the code well enough to try and change it. (And I might cause some disruption to other people's testing if I do.) — Mr. Stradivarius  ♪ talk ♪ 19:15, 3 April 2013 (UTC)
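For anyone following along, a minimal sketch of the underlying issue and the alphabetical workaround (illustrative only, not the module's actual code): pairs() walks a Lua table in an unspecified order, so a runner that wants a stable, alphabetical listing has to collect the test names and sort them before running anything.

<syntaxhighlight lang="lua">
-- Illustrative only, not Module:UnitTests' code: collect the test names,
-- sort them, and run the methods in that order.
local function run_in_alphabetical_order(suite)
    local names = {}
    for name, value in pairs(suite) do
        if type(name) == 'string' and type(value) == 'function' and name:find('^test') then
            table.insert(names, name)
        end
    end
    table.sort(names)
    for _, name in ipairs(names) do
        suite[name](suite)  -- call each test method in sorted order
    end
end
</syntaxhighlight>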

Don't treat the result of a call as Wikitext
Hi UnitTesters. I'm failing to find the proper format for the nesting tests at Module:Delink/testcases function test_nesting. How do I do that? What I want is for the expanded result to be compared with the non-expanded expected cases. Martijn Hoekstra (talk) 15:50, 3 April 2013 (UTC)
 * I've fixed the problem using the nowiki option. The option isn't documented yet, so I might go through and add it when I have a second. — Mr. Stradivarius  ♪ talk ♪ 19:10, 3 April 2013 (UTC)
 * Thanks for adding nowiki option. It was very useful at commons:Module_talk:Coordinates/testcases. --Jarekt (talk) 18:19, 13 December 2013 (UTC)
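For later readers, here is a rough sketch of what a testcases page using the option can look like. The method name follows the module's documented preprocess_equals pattern, but check the module documentation for the authoritative signature; the delink example and its expected output are only illustrative.

<syntaxhighlight lang="lua">
-- Hypothetical testcases page illustrating the nowiki option.
local p = require('Module:UnitTests')

function p:test_nesting()
    -- With nowiki set, the expected and actual columns are shown as
    -- unexpanded wikitext rather than being rendered by the parser.
    self:preprocess_equals('{{delink|[[Foo]]}}', 'Foo', {nowiki = true})
end

return p
</syntaxhighlight>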

Alternative implementation
This module has several shortcomings:
 * logic and presentation are mixed together, making it impossible to present the tests in different formats
 * as a consequence, it is not possible to run tests in the debug console (which is quite convenient when you need to change the tests)
 * there is no line number for failed assertions
 * errors thrown by tested code are not caught

I created an alternative test module in hu.wikipedia and would welcome any comments or feature requests. The module is at hu:Modul:Homokozó/Tgr/ScribuntoUnit, a usage example is at hu:Modul:Homokozó/Tgr/Set/tests (the documentation is in Hungarian, but comments and variable names are in English, and the code follows xUnit conventions, so understanding it shouldn't be a problem). It throws exceptions from failed assertions, builds a result table based on which tests throw exception/error, and can then present the results in any way; I believe the separation of actual testing and display code makes it more maintainable and reusable. --Tgr (talk) 15:17, 25 May 2013 (UTC)
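To make the described separation concrete, here is a rough, generic sketch (not ScribuntoUnit's actual code): each test runs under pcall so failed assertions and runtime errors are caught, the outcomes go into a plain results table, and rendering is left to a separate function. All names here are hypothetical.

<syntaxhighlight lang="lua">
-- Generic sketch of separating test execution from presentation.
local function run_suite(suite)
    local results = {}
    for name, fn in pairs(suite) do
        if type(name) == 'string' and name:find('^test') and type(fn) == 'function' then
            local ok, err = pcall(fn, suite)  -- catches assertion failures and errors
            table.insert(results, {name = name, passed = ok, error = err})
        end
    end
    return results
end

-- One possible presentation layer, kept apart from the test logic; the same
-- results table could just as easily be dumped in the debug console.
local function render(results)
    local lines = {}
    for _, r in ipairs(results) do
        table.insert(lines, string.format('%s: %s', r.name,
            r.passed and 'passed' or tostring(r.error)))
    end
    return table.concat(lines, '\n')
end
</syntaxhighlight>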
 * I didn't have time to fully read it; would it be much work to convert the test cases to your new module? —TheDJ (talk • contribs) 20:53, 4 June 2013 (UTC)

 * I created Module:UnitTests/sandbox, which right now only mixes logic and presentation in three places. I created Module talk:Citation/CS1/testcases2 to make sure it still works and for comparison purposes. testcases2 uses 6.91 MB of memory and takes 3.964 seconds to process tests, compared to testcases, which uses 7.23 MB of memory and takes 4.933 seconds. Maybe with full separation of the logic and presentation the memory footprint and processing time can be decreased further. I think another approach to unit testing would be better, but that would require rewriting current tests, which, as in the case of the citation module, could take a bit of effort. -- dark lama  15:11, 5 June 2013 (UTC)

Test a string contains expected text
Please see Module talk:ScribuntoUnit for an enhancement request. --Derbeth talk 21:41, 1 January 2014 (UTC)

Compare template vs. module
When comparing template vs. module, going with "==" seems wrong. The template and module may differ in, e.g., the number and type of HTML whitespace characters, which doesn't matter, or in HTML representations (&nbsp;, etc.). I think this is a good sample: Module:Sandbox/Dts/testcases. As a first stage, I'd suggest making first_difference a member function; if this member function returns nil, the strings are considered identical. This would allow tests to define their own method. A second stage would be to offer some pre-created options. Tsahee (talk) 20:44, 19 January 2014 (UTC)
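To illustrate the idea, an overridable member could look something like the sketch below. The method name first_difference comes from the existing helper mentioned above; the normalisation rules shown are only a hypothetical default, not a proposal for specific behaviour.

<syntaxhighlight lang="lua">
-- Sketch of an overridable comparison method; returning nil means
-- "treat the strings as identical".
local tester = {}

function tester:first_difference(s1, s2)
    local function normalize(s)
        s = s:gsub('&nbsp;', ' ')   -- treat non-breaking spaces as spaces
        s = s:gsub('%s+', ' ')      -- collapse runs of whitespace
        return s
    end
    s1, s2 = normalize(s1), normalize(s2)
    if s1 == s2 then
        return nil
    end
    for i = 1, math.min(#s1, #s2) do
        if s1:sub(i, i) ~= s2:sub(i, i) then
            return i  -- position of the first differing character
        end
    end
    return math.min(#s1, #s2) + 1
end
</syntaxhighlight>

A testcases page could then override tester:first_difference to supply whatever comparison suits its templates.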

Add a value to the nowiki option to show the wikitext only if the actual result does not contain a script error
I suggest making the nowiki option support a string like "if no errors" as a value, which would make nowiki not be applied to the actual result if a script error can be detected in it. If there's a script error, the wikitext is of no use (it will be the same regardless of the error), while the rendered result can be clicked on to show the error message, making it easier to fix. --Mark Otaris (talk) 16:42, 13 October 2015 (UTC)

give different result
Why does  give a different result for a direct #invoke vs. an invoke via the UnitTests module? --Ans (talk) 13:58, 11 October 2017 (UTC)

templatestyles
Module:Citation/CS1 supports some 25 live templates and Module:Citation/CS1/sandbox supports an equal number of sandbox templates. We could have added WP:TemplateStyles markup 25× to the live templates and 25× to the sandboxen, but that just seemed dumb, so each of the modules concatenates template styles to the end of the cs1|2 template rendering using this (where   is the name of the appropriate css page):

and that works great.
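(For readers without the modules open: the call being described is presumably something along the lines of the sketch below. This is a hedged reconstruction, not a copy of the CS1 code; the page name and the stand-in citation string are placeholders. frame:extensionTag() emits the same strip marker that TemplateStyles tag markup would.)

<syntaxhighlight lang="lua">
local p = {}

function p.demo(frame)
    -- Hedged reconstruction: append a TemplateStyles strip marker to the
    -- finished citation rendering so the styling travels with it, instead
    -- of editing every wrapper template.
    local styles = 'Module:Citation/CS1/styles.css'       -- the appropriate css page
    local citation = '<cite class="citation">...</cite>'  -- stand-in for the real rendering
    return citation .. frame:extensionTag('templatestyles', '', {src = styles})
end

return p
</syntaxhighlight>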

Except in Module:Citation/CS1/testcases.

Where every test fails. 318 failures. There are differences between the live and sandbox modules but not that many.

Because of TemplateStyles. Why? Because TemplateStyles inserts a stripmarker at the end of every cs1|2 template rendering and each stripmarker has a unique id number. So, this always fails:

even though the two templates are identically written. Here are two transclusions of identical templates; note the stripmarkers at the ends:

To get round this, I have hacked Module:UnitTests function  (called only by ) to accept a new option. When that option is set to , the code looks at the content of  and extracts the  stripmarker identifier (an 8-digit hex number). It then overwrites the  stripmarker identifier in  so that they both have the same identifier. Only then does  compare  against .
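In rough outline, the normalisation amounts to something like the sketch below. This is illustrative only, not the code that was actually added; the pattern is an approximation of MediaWiki's strip-marker format, and which string the identifier is copied from is immaterial for the comparison.

<syntaxhighlight lang="lua">
-- Illustrative sketch: make the two strings share one templatestyles
-- strip-marker identifier before the == comparison.
local function share_templatestyles_id(actual, expected)
    local id = expected:match('UNIQ%-%-templatestyles%-(%x+)%-QINU')
    if id then
        actual = actual:gsub('(UNIQ%-%-templatestyles%-)%x+(%-QINU)',
            '%1' .. id .. '%2')
    end
    return actual, expected
end
</syntaxhighlight>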

If you are looking to test changes in ~/sandbox/styles.css compared to ~/styles.css, this change won't help you – and Module:UnitTests is probably the wrong tool anyway, because stripmarkers are replaced with their actual content after this module has run.

I suppose that there might be reasons to expand this functionality further, though I'm not sure just what those reasons might be. For example, these possibilities:
 * – remove  stripmarkers from both   and  ; no styling applied to the renderings
 * – replace  stripmarker identifier in   with the   stripmarker identifier from  ; both use ~/sandbox/styles.css for styling

Perhaps there are others.

—Trappist the monk (talk) 19:27, 28 March 2019 (UTC)
 * Good work, life is getting complicated! Johnuniq (talk) 23:14, 28 March 2019 (UTC)

Table of Contents
What about generating a table of contents? Trigenibinion (talk) 14:54, 12 March 2021 (UTC)

Conflicts with 'Module:No globals'
A handful of functions in this module are not marked 'local', but could and (arguably) should be. --86.143.105.15 (talk) 10:36, 27 January 2022 (UTC)
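The conflict being pointed out is the usual one: Module:No globals makes any read or write of an undeclared global raise an error, so functions that aren't declared local can trip it. A generic illustration of the difference (not this module's code; the helper name is made up):

<syntaxhighlight lang="lua">
require('Module:No globals')  -- errors on any read or write of an undeclared global

local p = {}

-- Declared local: fine under Module:No globals.
local function shout(s)
    return s:upper()
end

-- If the "local" keyword above were omitted, defining the function would be
-- a write to a global, and Module:No globals would raise an error.
function p.main(frame)
    return shout(frame.args[1] or '')
end

return p
</syntaxhighlight>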

Tests not failing when they should
I'm creating testcases for Module:GetShortDescription and Module:Annotated link and, due to my own derping, left some copy-paste typos which should have caused a series of tests to fail, but they did not. I fixed the typos but purposely altered one test to fail, and it still sailed through when run. Am I doing something wrong, or is there a problem with this module? 09:42, 27 January 2023 (UTC)
 * "Test methods like test_hello above must begin with 'test'". I knew that.  09:45, 27 January 2023 (UTC)
 * Might I suggest not outputting "All tests passed." when no tests have been run?  11:18, 27 January 2023 (UTC)
 * ✅. I've also included the total number of tests run overall. Aidan9382 (talk) 11:49, 27 January 2023 (UTC)
 * Very nifty; thank you 😊  13:01, 27 January 2023 (UTC)
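Illustrating the naming gotcha above: only methods whose names begin with "test" are picked up by the runner. The {{hello}} example is the one from the documentation; the expected strings here are made up.

<syntaxhighlight lang="lua">
local p = require('Module:UnitTests')

-- Runs: the method name starts with "test".
function p:test_hello()
    self:preprocess_equals('{{hello}}', 'Hello, world!')
end

-- Silently skipped: the name does not start with "test", so a failure here
-- is never reported and "All tests passed." could still be shown.
function p:check_hello()
    self:preprocess_equals('{{hello}}', 'Goodbye')
end

return p
</syntaxhighlight>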

Allow nowiki option to have three states
Currently it appears that nowiki is a boolean option; could we have a third value to display both the nowiki and parsed results? We could of course double all tests where this might be desirable, but 1) they might not end up anywhere near each other, and 2) it'd be inefficient. Suggest:  (semantically nice and easy math) for the third state. 20:36, 27 January 2023 (UTC)
 * For the sake of ease in handling the code, and the fact I'd rather keep those options as just "truthy" checks instead of exact == checks (the only reason it's  in the doc is probably because it's shorter than typing ), does something like a separate  or  option sound better? I'll probably have to standardise the module a little to make adding this not mean pasting the same code in 5 different functions, but it should be doable (I'll just have to think about how to lay it out in the output). Aidan9382 (talk) 09:37, 28 January 2023 (UTC)
 * Sounds good to me! Definitely think of the maintainability of the code ahead of the minor convenience of only having one option to consider. Personally, I think  is better. Thank you for your consideration 😊   09:46, 28 January 2023 (UTC)
 * I've managed to get some initial work done on this (I currently have the main functions  and  running under the new system idea in the sandbox) - does the format I've given seem fine in your testcases? I don't want to start working on the functions that are more complicated to convert unless it's all working fine. You can test these by changing  to  and specifying  instead of . Aidan9382 (talk) 14:41, 28 January 2023 (UTC)
 * Really; that looks great! Very neat table organisation. I hope all your effort is appreciated by more than just me, but rest assured I am impressed. Seeing the raw markup is great for technical analysis, but seeing the result is great for rapid detection of issues. Having them side by side (well over and under but same thing) like that is a quality of life improvement. Thank you 😊  16:07, 28 January 2023 (UTC)

Present failed tests together at the top
Another suggestion: present all failed tests at the top of the results. This might be achieved in multiple ways, and someone with greater familiarity with the code might be best suited to decide exactly what approach is best. As a user, seeing two sections – the uppermost for failed tests and the next for passed tests – would be ideal. Section depth should be unimportant unless the results are substed (but who would do that?); lvl 3 sections should be fine, since the whole lot can be placed under a standard lvl 2 section for posterity. Diverting each result to the appropriate section as its condition becomes known should be trivial (easy for me to say, right? I'm a tad busy right now but will tackle it myself if necessary). 09:24, 28 January 2023 (UTC)
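One way the split could be wired up, sketched very loosely (hypothetical names, not the module's code, and this particular layout was ultimately not adopted - see below): accumulate each rendered row into one of two lists as its result becomes known, then emit the failed list above the passed one.

<syntaxhighlight lang="lua">
-- Loose sketch of a "failures first" report.
local report = { failed = {}, passed = {} }

local function record(ok, row_wikitext)
    table.insert(ok and report.passed or report.failed, row_wikitext)
end

local function render()
    local out = {}
    if #report.failed > 0 then
        table.insert(out, '=== Failed tests ===')
        table.insert(out, table.concat(report.failed, '\n'))
    end
    table.insert(out, '=== Passed tests ===')
    table.insert(out, table.concat(report.passed, '\n'))
    return table.concat(out, '\n')
end
</syntaxhighlight>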
 * I'll try this idea out in the sandbox too. Aidan9382 (talk) 09:37, 28 January 2023 (UTC)
 * I will prepare a WikiLove goat for you! Seriously though, thanks again for your consideration ❤  09:46, 28 January 2023 (UTC)
 * Some wild yellow backgrounds have appeared 👀 😉  16:13, 28 January 2023 (UTC)
 * Ooooh yes, forgot I implemented that. There was a feature only in preprocess_equals_preprocess where failed tests were highlighted orange/yellow so they were easier to spot - I decided to give it a test run in the sandbox to make it apply the highlighting to more functions and completely forgot I did so. I'll probably make it an opt-in argument of the  (maybe , a bit like  ).
 * (changed to an invoke option - Aidan9382 (talk) 16:43, 28 January 2023 (UTC))
 * Oh, and I'm currently working on doing the split of failed and successful results from above, though I'm gonna have to think about how to do multi tests if they fail in the middle (I'm not sure they split too well right now, but we'll see). Aidan9382 (talk) 16:32, 28 January 2023 (UTC)
 * I understand and fully appreciate how complex it is; there's no hurry or even need for this nice-to-have feature request. Don't forget to have a good day while you're working on it.  16:44, 28 January 2023 (UTC)
 * Alright, I've decided not to implement the splitting with headers - screwing with the positional layout, especially with functions that do multiple checks in one run, is a bit more complicated and finicky than I think it's worth. Hopefully the highlight feature helps enough with finding the errors in that regard. As for everything else, I'll be moving that from the sandbox version to the live version at some point soon when I'm free, and I'll also make sure to update the doc page (it's missing both the new stuff and some already existing stuff). Aidan9382 (talk) 18:56, 28 January 2023 (UTC)
 * Understood. I hope it didn't trouble you too much trying, and again, I really appreciate the effort. I made a little helper script, moveFailedModuleTestsToTop.js, that shifts all the failures to the top on load. It's dirt simple and could do with extra qualification; would you mind if the tables included something like  so the script can be more particular? I nearly went ahead and stuck it in there myself, but considered that might be a bit rude. I should have the script pick out sets of tables where results of multiple invocations are present...   01:51, 29 January 2023 (UTC)
 * Don't worry about any trouble I had doing this, coding is a big hobby of mine and I enjoy fixing up stuff like this. I've gone ahead and added the class to the table headers and called it . I don't think it would've been rude to add the class yourself, it's a simple minor improvement and it doesn't screw with the existing layout, so it's completely fine. Aidan9382 (talk) 06:07, 29 January 2023 (UTC)
 * Many thanks again 😊 I know what you mean; I love coding too. I'm hoping to be good at it one day 😉  07:48, 29 January 2023 (UTC)

ID fix for all strip markers
@Pppery Hello, I noticed you reverted my edit. Could you share an example of a test that should be failing but is passing? Thanks, – Brandon XLF  (talk) 16:34, 25 May 2023 (UTC)
 * The case that brought this to my attention was Module talk:YouTubeSubscribers/testcases, but AFAIK any test using preprocess_equals to compare unequal strings should trigger the bug. * Pppery * it has begun... 16:37, 25 May 2023 (UTC)
 * I've reimplemented the stripping for the expected with the bug hopefully fixed (the expected was accidentally replaced with the actual, causing it to just check whether the actual equals itself - see the sketch below). Aidan9382 (talk) 17:04, 25 May 2023 (UTC)
 * Oh I didn't catch that, thanks! – Brandon XLF  (talk) 23:17, 25 May 2023 (UTC)
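For future readers, a loose illustration of the class of bug described above (names and the stripping pattern are hypothetical, not the module's code): both strings must be normalised independently before the comparison.

<syntaxhighlight lang="lua">
-- Replace every strip-marker identifier with a fixed placeholder.
local function strip_ids(s)
    return (s:gsub('(UNIQ%-%-%w+%-)%x+(%-QINU)', '%1XXXXXXXX%2'))
end

local function compare_buggy(actual, expected)
    -- Bug: the expected side is accidentally built from the actual text,
    -- so the test compares the actual output against itself and always passes.
    return strip_ids(actual) == strip_ids(actual)
end

local function compare_fixed(actual, expected)
    return strip_ids(actual) == strip_ids(expected)
end
</syntaxhighlight>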