Talk:Assembly theory

Abiogenesis
ought to consider implications of this for abiogenesis -- Waveguy (talk) 04:59, 1 April 2022 (UTC)

Theory or Hypothesis?
The original article included a quotation of Cronin referring to his work as a hypothesis. The article offers no evidence that Cronin's hypothesis (as he himself characterizes it) enjoys the substantial support and scrutiny needed to characterize the work as a scientific theory. Evolution is a theory that shows all of the characteristics one would expect from a hypothesis promoted to a theory: repeated failure of attempts at falsification, confirmation of elements of the hypothesis, widespread endorsement, explication of consilience with established theory, etc. Damasio's work is called the somatic marker hypothesis, and is far better known and explored than this claim; to accidentally promote this hypothesis to a theory by equivocating between the lay sense of theory (guess, hypothesis) and the rigorous sense of theory (as in gravity, evolution, and Hebbian firing) is problematic. One doesn't "invent a theory" (original language); one proposes a hypothesis which eventually becomes so prominent and prevalent in the literature (and the hearts and minds of scientists) that it attains that status. Is there really a rejoinder to this complaint?

2023-06-09 jtvisona (talk)


 * While the word "theory" is often used for a hypothesis that has been well established and accepted, "theory" is also quite frequently used to refer to an organizing framework or a collection of otherwise established observations. Wikipedia vital articles level-4/Mathematics (https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level/4/Mathematics) lists, under Other, both "Number Theory" and "Category Theory" as established approaches to and areas of knowledge that were collected together under these titles. If there ever was a Number Hypothesis or a Category Hypothesis, it did not refer to the same collection of ideas. I know less of "String Theory", but the same may be true there.
 * A recent issue of Science, 22 Aug 2023, in a News article (doi: 10.1126/science.adk4451) talked of six theories of consciousness being identified for use in evaluating Large Language Model systems, and identified "the Recurrent Processing Theory" and "the Global Neuronal Workspace Theory", as well as "Higher Order Theories". The second already has a Wikipedia page.
 * That complex things can be built out of simpler pieces is well established, and it is well established in statistics that large concentrations of identical or very similar complex things are uncommon in the absence of living systems. Assembly theory is an attempt to identify and organize these facts as a way of identifying the presence of "life". This also has implications for a different way of looking at Time, as in American Scientist, Sept-Oct 2023.
 * Rodion.rathbone (talk) 15:33, 7 September 2023 (UTC)


 * This is not a different way to look at 'time', as per the popular American Scientist article on assembly theory. Time has always been fundamental and regarded in this way in evolutionary theory; for example, in the copies of GC amino acids across species with which they construct the protein blocks for life, as explained by some of the critics, and also in things like gene copy number and so on. This is just basic literature knowledge rehashed by people not familiar with either of the fields they want to contribute to (which can sometimes be positive, but in this case is very negative): complexity science and evolutionary biology. For an introduction, see https://en.wikipedia.org/wiki/GC-content and https://www.genome.gov/genetics-glossary/Copy-Number-Variation DaveFarn (talk) 11:23, 16 April 2024 (UTC)

Serious omission
A serious omission in the current version of this article: exactly when this hypothesis was proposed by chemist Leroy Cronin and developed by the team he leads at the University of Glasgow, and exactly when it was extended in collaboration with a team at Arizona State University led by astrobiologist Sara Imari Walker. Adding these dates (or months/years) will help to make this article more properly encyclopedic. 173.88.246.138 (talk) 17:50, 7 October 2023 (UTC)

Problem with lead in current version of article
There are a number of issues with the current article, but the one that immediately jumps out is the lead, which does not describe assembly theory in the broader terms of reconciling physics and evolution. This is taken from the abstract of the team's latest paper, and summarises the theory and its implications in considerably more accurate (although admittedly scientific) terms:

Scientists have grappled with reconciling biological evolution with the immutable laws of the Universe defined by physics. These laws underpin life’s origin, evolution and the development of human culture and technology, yet they do not predict the emergence of these phenomena. Evolutionary theory explains why some things exist and others do not through the lens of selection. To comprehend how diverse, open-ended forms can emerge from physics without an inherent design blueprint, a new approach to understanding and quantifying selection is necessary. We present assembly theory (AT) as a framework that does not alter the laws of physics, but redefines the concept of an ‘object’ on which these laws act. AT conceptualizes objects not as point particles, but as entities defined by their possible formation histories. This allows objects to show evidence of selection, within well-defined boundaries of individuals or selected units.

Also, the update to the theory as covered in the new paper needs to be included in the article. There are popular articles on the updated theory on both phys.org and Nature.

Cadar (talk) 17:50, 12 October 2023 (UTC)

Critical views section
Recently, a critical views section was added, and then deleted without any specific reasons. I added it again because I believe it provides a wider and more complete view on the subject. If anyone removes it again, they should give good and strong reasons for doing so. — Preceding unsigned comment added by JulioISalazarG (talk • contribs) 03:54, 14 October 2023 (UTC)
 * As the person who originally created this article, I agree it should be included. At the time I created it, assembly theory was still a new thing and criticism hadn't been published. ~Anachronist (talk) 07:04, 14 October 2023 (UTC)
 * @Anachronist and @JulioISalazarG I reverted the changes because it was added by a paid editing farm, specifically by the account who originally added the Critical views section, which is also the reason I added the UPE tag to the article. Anachronist, given you created the article and still keep an eye on it, I will leave it in your expert hands. S0091 (talk) 19:59, 15 October 2023 (UTC)
 * I'm not being paid by anyone, and I added it again because (1) I believe Wikipedia should offer a wide and complete view on any subject and (2) there's no reason to delete it regarding the content and arguments on the subject. JulioISalazarG (talk) 20:10, 15 October 2023 (UTC)
 * Correction - it was originally added by, another blocked sock by the same paid editing farm. S0091 (talk) 20:10, 15 October 2023 (UTC)
 * It appears that addition by Trueo was the first edit done by that account. Looking at the contribution history, it looks like part of an attempt to achieve autoconfirmed status quickly, although it's an unusually lengthy edit for such a purpose. No sources were cited, and I would have reverted that addition also on that basis. However, what JulioISalazarG added included citations to a couple of decent sources. Paid editing or not, it was a good addition that summarizes those sources succinctly. ~Anachronist (talk) 04:55, 16 October 2023 (UTC)
 * @Anachronist good enough for me. I had no plans/reason to remove the material again anyway, but thought I should explain my actions. I also see that after my revert @JulioISalazarG was reverted two other times: once by MaterialScientist (although Julio was self-reverting, they got an unexplained removal notice on their talk page), then again by an IP (odd), which was promptly reverted by another editor. Messy. JulioISalazarG, I was not accusing you of being paid; I was only explaining why I reverted, and you did the right thing by raising your concerns here. S0091 (talk) 14:48, 16 October 2023 (UTC)
 * Contributions should be evaluated on their own merit. The hypotheses about paid editors are strange. Just as Encyclopedia Britannica should not be dismissed as a credible source because its editors get paid, writers and editors who contribute good content should not be deemed not credible on the basis of anything other than their contributions.
 * As an enthusiast about Wikipedia articles as a way to provide complete and accessible information about scientific theories, I agree Wikipedia should include critical views and many wider perspectives on science. In particular, it is entirely acceptable to have critical points of view on [Assembly Theory], as, to the best of my knowledge, it has important gaps that should be discussed, clarified and verified Dr. Chinguetas (talk) 20:24, 24 October 2023 (UTC)
 * They are not paid by Wikipedia for writing good articles, they are paid by third parties for making those third parties look good. Wikipedia has rules about that: WP:PE.
 * And it is not a scientific theory, it is "abject wankwaffle", as one critic put it. --Hob Gadling (talk) 15:46, 13 November 2023 (UTC)
 * Speaking for me, I repeat that I was not being paid by anyone (in fact, I could accuse you of the same thing). The section was deleted again without any specific or good reason. So I will add it once more and, if necessary, I'll keep doing it until the real paid ones stop deleting it. JulioISalazarG (talk) 04:58, 19 February 2024 (UTC)
 * The critical section that keeps being added is totally without merit. It is an act of vandalism by Hector Zenil and his cronies. The inclusion of the Benner blog is baffling. It does not say anything.
 * The citation of the Zenil blog, which is one of the most unprofessionally constructed rants I've seen on the internet, does a disservice to Wikipedia.
 * It appears the criticism section has to be removed in its entirety. Henizcolter (talk) 23:49, 23 March 2024 (UTC)


 * I agree. At the moment, the Critical views section is based on some:
 * * blogs,
 * * a preprint full of errors, groundless accusations, and omissions, as has been shown below,
 * * a vlog, and
 * * an equation-less commentary, citing this preprint, vlog, and one of these blogs.


 * The only credible publication in this section is Hazen et al. However, this research does not criticize assembly theory as such, but questions MA≥15 as a threshold for unambiguous biosignatures, and thus should be moved to the Background section.


 * Guswen (talk) 01:02, 24 March 2024 (UTC)
 * False. The whole of assembly theory started with their 2017 paper claiming all over the media that they could even find life in the universe based on that hard cutoff. It is trivial to show that this is not the case, and the answer given by the assembly theory authors to the authors of this paper is ludicrous: saying that they have to pre-filter molecules and only consider organic ones makes the assembly index and cutoff totally redundant. If you remove that cutoff, assembly theory has zero empirical evidence for anything they claim they can do. They have been building a theory on top of something that does not work and that is proven to be equivalent to decades-old algorithms that they could have used, credited, and moved on from; instead they insist, ill-informed, that they have nothing to do with them, and provide examples showing that they do not understand even the most basic information theory, such as the Hamming distance example given at the beginning of this Wikipedia article, which is totally wrong. It is like the authors of assembly theory have found a new toy that they think they can use in their favor, but without really understanding its meaning, thereby displaying their ignorance of all those topics. DaveFarn (talk) 01:27, 29 March 2024 (UTC)


 * Please refrain from word salads and stick to the merit. AT is not proven to be equivalent to decades-old algorithms. It's an entirely novel approach and the correct description of nature. Do you have any peer-reviewed scientific publication to support your claim (you obviously cannot have, as this claim is false)? If not, please read this talk page carefully; here you'll find a lot of arguments falsifying your claim. (Hamming distance is not the same as Hamming weight. You know that, right?) Guswen (talk) 11:19, 29 March 2024 (UTC)


 * The first line of the Wikipedia page about it reads: "The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length." You want to make a big deal as if they were different and I didn't know the difference... Your nitpicking is a bit silly. You have a proven conflict of interest in trying to make assembly theory look credible, because you wrote a paper about it. So you have lost all your credibility, and we have proven our point that you were acting in bad faith, preventing this page from showing some neutrality with criticisms that you cannot stand, made by academics who are not crackpots. Precisely: stick to the science and tell us, please, on the merits of the science, where the math error is in the proof that assembly theory is equivalent to Shannon entropy; and no, it is not your silly argument of indeterminism, already debunked above and below. Also, stop adding comments after your signature. I have to keep fixing your bad edits on the Talk page because you don't know how to use the Wiki and add text after you finish, as if it were footnotes. Lastly, somewhere you tried to correct the author of the Royal Society Interface journal paper, saying that the assembly index is not a scalar but an integer. You know an integer is also a scalar, right? The natural numbers and integers are all contained in the set of real numbers R. DaveFarn (talk) 01:10, 30 March 2024 (UTC)


 * You're mistaken: the Hamming weight is a property of one string, while the Hamming distance is a function of two strings. For example, the Hamming weight of the string $$C=[01010101]$$ is 4, as this string has four "ones". On the other hand, the Hamming distance between the strings $$C=[01010101]$$ and $$D=[00010111]$$ is 2, as these strings differ in the 2nd and 7th positions only. That's obviously not the same.
 * Every integer is a scalar, but not every scalar is an integer. A scalar can be real or complex (cf. the scalar probability amplitudes of quantum mathematics). The assembly index is a natural number. You cannot assemble an object in 2.5, $$\sqrt{2}$$, $$\pi$$, or $$2i$$ steps.
 * With due respect, the rest of what you say is ad hominem word salad. It contains some repeated arguments, but I have addressed them so many times already... Please read this talk page carefully again. Guswen (talk) 10:37, 30 March 2024 (UTC)
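(Illustrative aside, not part of the original exchange: the two notions argued over above can be checked mechanically. The following minimal Python sketch uses the example strings from this thread and shows that C and D have the same Hamming weight while their Hamming distance to each other is non-zero, and that the weight coincides with the distance to the all-zero string.)

```python
# Minimal sketch: Hamming weight (a property of one string) vs.
# Hamming distance (a function of two strings), for binary strings.

def hamming_weight(s: str) -> int:
    """Number of symbols differing from the zero-symbol '0'."""
    return sum(ch != '0' for ch in s)

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    assert len(a) == len(b), "strings must have equal length"
    return sum(x != y for x, y in zip(a, b))

C = "01010101"
D = "00010111"

print(hamming_weight(C))                  # 4
print(hamming_weight(D))                  # 4 (same weight as C)
print(hamming_distance(C, D))             # 2 (C and D differ at positions 2 and 7)
print(hamming_distance(C, "0" * len(C)))  # 4 (weight = distance to the all-zero string)
```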
 * Nobody is discussing the Hamming distance... And nobody said that the assembly index was not an integer. Please read your own words with the hope of making sense and connecting two sentences logically. It is the other way around: an integer is a particular case of a scalar, but you seem to pick on silly red-herring stuff, unable to have a logical conversation. Of course the Hamming weight is a property of one string, but it is deeply related to the Hamming distance, as the Hamming distance to the zero string. Nobody even knows what your point is or what you are discussing here. Your examples are like high-school examples that you think are teaching a deep lesson. Could you, for example, tell me where I said that a scalar was an integer? You tried to correct the Royal Society Interface paper, giving the impression they were saying nonsense because they mentioned 'scalar', which does contain the integers; yet you have turned this into a thing and even reversed the claim, suggesting that I or someone else said that a scalar is an integer and that you cannot assemble an object in 2.5 steps... Could you please focus and make some sense when writing? How can you expect to be able to maintain this technical article when you make so many basic argumentative mistakes? It all came up because you thought you were making a profound point nobody asked for, saying "(Hamming distance is not the same as Hamming weight. You know that, right?)". You create straw-man arguments, are unable to follow a simple logical argument, are opinionated on topics out of your depth, and overestimate your understanding and credentials, trying to rub your 30+-year-old PhD in people's faces on this Talk page. DaveFarn (talk) 04:53, 31 March 2024 (UTC)


 * Didn't I say that? "Not every scalar is an integer" <=> "an integer is a particular case of a scalar". It is not "the other way around".
 * "'scalar' which does contain the integers"... I really don't know what you are talking about... Guswen (talk) 20:02, 2 April 2024 (UTC)

Besides the unhelpful ad hominem debate regarding the critical views section, I do find more reasons to keep it than to skip it: 1. Many important journals such as The Royal Society Interface and the Journal of Molecular Evolution are citing the leading critics and implying they are right in their main concerns. 2. As it happens to any other theory and idea, a critical view about anything makes it more comprehensive and complete. Furthermore, for both the general public and the theory itself, it is better to know critical views than not. 3. The person who has been deleting the critical views section, Heinzcolter, has cited YouTube videos as the main source (contradicting his reasons to, in principle, delete the critical views section), directly claiming to be right without any evidence. 4. The person who originally wrote the article agreed to have a critical views section. Furthermore, I’m practically copying what the published authors have said about the criticism of Assembly Theory. For all those reasons, I’m adding the critical views section again and I suggest any further changes are discussed here first as has been done before when it was agreed to keep the criticism section. — Preceding unsigned comment added by FernandaHernan (talk • contribs) 03:19, 18 March 2024 (UTC)


 * The Interface article is weird. It does not make any criticism of the theory except for the false assertion that assembly theory somehow says 15 is the 'written in stone' threshold. It does not. The Interface article cites the Zenil preprint. The criticisms are all circular. The JME paper is a COMMENT which is itself a copy of a BLOG. None of this is peer reviewed. Henizcolter (talk) 23:51, 23 March 2024 (UTC)
 * They are peer reviewed sources. Are you disputing the peer reviewing process? GodelFan (talk) 01:01, 25 March 2024 (UTC)
 * It is called a Commentary because it has gone through peer review. Henizcolter evidently has some connection to the Glasgow group, and it is in their interest to remove any valid criticism, going to any extreme. DaveFarn (talk) 01:28, 29 March 2024 (UTC)

As the person who originally opened the discussion on the talk page about the critical views section, I agree with the new addition of the section: everything has been properly cited and sourced. Previous reasons given to delete the critical views are no longer valid. Any further changes should be discussed here on the talk page, and anyone who modifies the content should provide strong evidence to do so. — Preceding unsigned comment added by 187.189.145.147 (talk) 01:04, 19 March 2024 (UTC)

I've been following Assembly Theory and other new, hyped-up scientific theories for a while. Arguments to delete a critical views section damage Wikipedia's credibility and the image of science in general. I find solid evidence in the critical views. I believe that any idea deserves to be presented with a balance of angles so that everyone can check and decide. I agree that an entry on Assembly Theory (or any other theory) should contain a critical views section unless there is a good and valid reason to delete or modify it. — Preceding unsigned comment added by César E. Valera (talk • contribs) 01:39, 19 March 2024 (UTC)


 * Here you have an example of these "critical views", User talk:César E. Valera, User:FernandaHernan, User talk:Hob Gadling, User:Dr. Chinguetas, and User:JulioISalazarG.


 * An unpublished, non-peer-reviewed preprint that will most likely never be peer-reviewed and published, yet which allegedly claims to "have proven that Assembly Theory is an approximation to Kolmogorov complexity and that the assembly index and assembly number are equivalent to Lempel–Ziv–Welch compression", as stated in the Assembly_theory#Critical_views section.


 * What do the authors of this preprint have to support this claim?


 * Code and data in a link provided in the paper? You could also perform the experiments. In fact, the authors of AT could, in order to show that Dr. Zenil is wrong! DaveFarn


 * You're mistaken: this preprint does not contain any link to any supplementary codes and data. The previous one does. Guswen (talk) 13:40, 22 March 2024 (UTC)
 * I am not mistaken. Page 21 of https://arxiv.org/pdf/2210.00901.pdf has a GitHub link, https://github.com/Abicumaran/MSComplexity/, with all the data and code. In fact, the original paper of Cronin's has missing data; have you even opened it? This is proof that you act in bad faith, trying to fool everyone with fake information. DaveFarn (talk) 00:23, 30 March 2024 (UTC)
 * And that's exactly what I said. Please read carefully before posting some nonsense:
 * https://arxiv.org/pdf/2210.00901.pdf is a (previous) preprint that contains code (p. 21);
 * https://arxiv.org/pdf/2403.06629.pdf is a preprint that does not contain code.
 * Guswen (talk) 10:37, 30 March 2024 (UTC)
 * Please read yourself before you say nonsense. What code do you want in a mathematical-proof paper showing once and for all that assembly theory is Shannon entropy? You are totally out of your depth. What is your point, that the second paper does not contain code? It is hard to establish a rational conversation like this... DaveFarn (talk) 04:14, 31 March 2024 (UTC)


 * For example, (page 9) they claim that:
 * an "object with a low assembly index (...) necessarily displays low entropy", and
 * "an object with a high assembly index will (...) necessarily display high entropy"
 * But this is plain, obvious nonsense! For example, two binary strings $$C=[01010101]$$ and $$D=[00010111]$$ have the same length $$N=8$$ bits, the same Hamming weight (i.e. the same number of "ones") $$N_1=N/2=4$$, and therefore the same Shannon entropy:
 * $$H(C)=H(D)=-(4/8)\log_2(4/8)-(4/8)\log_2(4/8)=\log_2(2)=1$$ bit.
 * However, their assembly indices are different: $$a(C)=3$$ and $$a(D)=6$$. Therefore, this claim of the preprint is wrong.
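(Illustrative aside: the symbol-frequency entropy computation above can be reproduced with a few lines of Python. This is only a sketch of the standard 1-gram frequency formula, not code from the preprint under discussion; it confirms that both example strings have 1 bit per symbol of entropy.)

```python
# Minimal sketch: symbol-frequency (1-gram) Shannon entropy, in bits per symbol.
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """H(s) = sum over symbols of p * log2(1/p), with p the symbol frequency."""
    n = len(s)
    return sum((c / n) * log2(n / c) for c in Counter(s).values())

C = "01010101"
D = "00010111"
print(shannon_entropy(C))  # 1.0
print(shannon_entropy(D))  # 1.0 -- equal, although the strings clearly differ
```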


 * You seem not to have read Shannon's first paper. There he explains how Shannon entropy applies to blocks; he calls them n-grams and n-orders. Dr. Zenil explains it in his blog. Notice how he says this is a 1948 paper that the Assembly Theory authors seem not to have read, and you also seem not to be familiar with it. I recommend you give it a good read: https://hectorzenil.medium.com/the-8-fallacies-of-assembly-theory-ba54428b0b45 You may learn how to correctly apply Shannon entropy to those strings. DaveFarn


 * I am familiar with digrams, trigrams, and n-grams in general, where the probabilities depend on the preceding symbols in a sequence (a succession of events), as defined by Shannon in his 1948 paper, and where the entropy rate is used instead of the Shannon entropy.
 * But the entropy rate does not apply to the method of determining the assembly index, since here n varies. Therefore, the claim of your link that "They did not take the Entropy rate of the object which would have found the pair of blocks that minimise the Entropy that provides the same information that their example from Assembly Theory offers" is false. In determining the assembly index, "an exhaustive traversal over the [whole] assembly space of a molecular graph" is performed, as explained e.g. here (code snippets are provided).
 * Guswen (talk) 13:40, 22 March 2024 (UTC)
 * If you think the Shannon entropy rate is different from Shannon entropy, I think that tells everything and there is nothing else to discuss. What you are saying is that you take a particular application of Shannon entropy to 1-grams, which would obviously provide the wrong answer, and refuse to apply the correct use of Shannon entropy with its full power, because then Assembly Theory collapses. Please refer to Zenil's latest paper by Abrahao et al. 2024 (Sup. Inf.), where the proof of convergence is provided. Published or not, I am sure you can follow it, or do not want to. Regarding your comment that n varies, see below. DaveFarn
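(Illustrative aside: the block-entropy idea being argued over here can be sketched in a few lines. This is my own minimal illustration of Shannon entropy taken over non-overlapping n-symbol blocks, not code from Zenil's blog or from any of the papers cited; with 2-symbol blocks it does distinguish the example strings C and D, which have equal 1-gram entropy.)

```python
# Minimal sketch: Shannon entropy of the empirical distribution of
# non-overlapping n-symbol blocks, in bits per block. Unlike 1-gram
# entropy, this distinguishes the example strings C and D for n = 2.
from collections import Counter
from math import log2

def block_entropy(s: str, n: int) -> float:
    """Entropy of the frequencies of non-overlapping n-blocks of s."""
    blocks = [s[i:i + n] for i in range(0, len(s) - n + 1, n)]
    total = len(blocks)
    return sum((c / total) * log2(total / c) for c in Counter(blocks).values())

C = "01010101"
D = "00010111"
print(block_entropy(C, 2))  # 0.0 -- C is the single block "01" repeated
print(block_entropy(D, 2))  # 1.5 -- D's 2-blocks are 00, 01, 01, 11
```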


 * We know that the LZ family of compression algorithms converges to the entropy in the limit.
 * What Abrahao et al. 2024 claim to have proved is that the assembly index for strings, in the optimal case of stationary ergodic processes, can only converge towards LZ compression, and thus is bounded by Shannon entropy via the (noiseless) Source Coding Theorem.
 * Since this is not (and cannot be, see below) a general proof, Assembly Theory remains intact.
 * Guswen (talk) 20:30, 22 March 2024 (UTC)
 * What Abrahao et al. 2024 proved is that the assembly index is equivalent to LZ complexity. The assembly space graph is equivalent to a CNF grammar, which in turn is equivalent to the LZ encoding of the object. Either show the proof is wrong or accept it. GodelFan (talk) 00:24, 25 March 2024 (UTC)


 * No, they did not. It is simply impossible since LZ encoding is deterministic (causal), while AT captures more than causality. It captures the "degree of causation" (cf. Nature 2023 AT paper). Abrahao et al. only claim to have proven that (Corollary 8) "With the stated corollary we have proven that the assembly index for strings is equivalent to the compression length of a compressing grammar and, in the optimal case it can only converge towards LZ compression [30, 59, 60] and thus is bounded by Shannon Entropy by the (noiseless) Source Coding Theorem".
 * For binary strings, this optimal case corresponds to strings of length $$N=2^s$$, $$s \in \mathbb{N}$$.
 * Guswen (talk) 08:33, 25 March 2024 (UTC)
 * Your deterministic argument is just a red herring. In their "Formalising the Pathways to Life Using Assembly Spaces" paper, the authors prove that the assembly index is computable. A string has several different equivalent LZ encodings too, if that is what you refer to as "deterministic". And no, that is not the optimal case; the optimal case is the LZ encoding with the smallest number of factors, which maps 1-to-1 with the shortest assembly path. Or, if any other assembly space is proposed, then Theorem 7 shows an equivalent compressing grammar and Corollary 8 shows an equivalent LZ encoding. GodelFan (talk) 09:38, 25 March 2024 (UTC)
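(Illustrative aside, for readers unfamiliar with what an LZ "factor" is: below is a minimal greedy LZ78-style factorization sketch of my own, only to illustrate the notion of factors referred to above; it is not the construction used by Abrahao et al. Each phrase is the shortest prefix of the remaining input not seen as a phrase before, and the concatenation of the factors reconstructs the string.)

```python
# Minimal sketch: greedy LZ78-style factorization. Each phrase is the
# shortest prefix of the remaining input that has not appeared as a
# phrase before (the final phrase may repeat an earlier one).

def lz78_factors(s: str) -> list:
    phrases, seen = [], set()
    i = 0
    while i < len(s):
        j = i + 1
        # extend the candidate phrase while it duplicates an earlier phrase
        while j < len(s) and s[i:j] in seen:
            j += 1
        phrase = s[i:j]
        seen.add(phrase)
        phrases.append(phrase)
        i = j
    return phrases

print(lz78_factors("01010101"))  # ['0', '1', '01', '010', '1'] -> 5 factors
```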


 * True: the assembly index is computable.
 * False: If you set the predefined LZ encoding parameters (Sliding Window Size, Lookahead Buffer Size, etc.), you'll get deterministic encoding/decoding, and the same string will be encoded/decoded in the same way, whatever the "factors" of this encoding/decoding. This does not hold for assembly theory, where the same string can be assembled along different assembly paths, featuring the same assembly index but leaving different subunits in their assembly pools along the way.
 * Indeed, there are special cases where assembly theory corresponds to deterministic algorithms; these are the cases where an object can be assembled along only one assembly path. For binary strings, these are the strings having length $$N=2^s=2,4,8,16,\dots$$ in the form $$[00\dots]$$, $$[0101\dots]$$, $$[1010\dots]$$, or $$[11\dots]$$. There is only one way to assemble them, while maintaining the shortest path (the assembly index).
 * Guswen (talk) 11:54, 25 March 2024 (UTC)
 * You don't seem to understand what deterministic means. If the assembly index is computable, then it is deterministic. Now, the assembly index is intractable, since it is NP-complete; therefore, a non-optimal solution must be used.


 * And what don't I understand about determinism? Determinism is related to causality. DaveFarn once raised (Revision as of 14:52, 22 March 2024) that "the whole idea of assembly pathways is to capture causality", which is not true, as the gist of assembly theory is "capturing the degree of causation", not causality.
 * The assembly index is computable, but it is not deterministic, as many different assembly pathways have the same assembly index. Knowing the object and its assembly index, you can't say which pathway was used to assemble the object.
 * Why don't you understand?
 * What do you mean by "non-optimal"? Who defines the optimality criteria? You? Assembly theory is a brand-new framework that explains and quantifies selection and evolution, with its own axioms and criteria for "optimality".
 * Guswen (talk) 11:14, 26 March 2024 (UTC)
 * Please, really read a basic book on computer science and information theory. You need to learn the very basics of reversibility, determinism, computability and causality. Many different computer programs have the same size (most of them), and many strings (most of them) have the same Shannon entropy; that does not make Kolmogorov complexity or Shannon entropy indeterministic. What part don't you understand? Optimality has a very specific definition in information theory and is defined formally in mathematics for different purposes; cf. the basic papers of Shannon and others. DaveFarn (talk) 00:46, 29 March 2024 (UTC)
 * Are you a creationist?!
 * Do you believe that life originated with some supernatural act of optimal divine creation?!
 * Please go and post your comments on some religion portal. Here, we're discussing science, not your "optimality beliefs". (I got the PhD in computer science and information theory, my anonymous friend. Do you?) Guswen (talk) 11:19, 29 March 2024 (UTC)
 * Sorry, but you have been a patent lawyer for 20 years according to your website (https://patent.pl/index.php?tm=1&srvid=0&attid=szymon_lukaszyk&lg=en) (though digging a bit more, it seems to be over 31 years; that was like two centuries ago), with no affiliations or up-to-date credentials, publishing crackpot papers online on topics from imaginary dimensions to quantum mathematics and mysticism. I do not expect you to understand the developments of science of the last 30+ years DaveFarn (talk) 00:25, 30 March 2024 (UTC)
 * Well, you are entitled to your own opinions. Everyone is. But as someone wise (I'm not sure who) said, "Either you have knowledge or you have opinions." Guswen (talk) 10:37, 30 March 2024 (UTC)
 * Indeed, just to clarify, your opinion is that of a patent lawyer of more than 30 years out of his depth. You should be editing law articles; I am sure it would be more useful there, sorry. You never explained why you think the assembly index is special because it can generate the same number from different pathways, when Kolmogorov complexity and Shannon entropy do exactly the same. They give the size of a program in bits and the number of bits needed to pick an element from a distribution, and in both cases the bits are a number that can arise in multiple ways. Even the same program output can be generated by a countably infinite number of programs. So your special 'non-deterministic' property that supposedly makes the assembly index different is not special. And that was the only property you could mention as being different from Shannon entropy... Period. DaveFarn (talk) 04:19, 31 March 2024 (UTC)
 * Yeah, right. Nature, Nature Communications, Philosophical Transactions of the Royal Society, and Science Advances, etc. all got it wrong; they are all mistaken about assembly theory, and you are right: assembly theory "is Shannon entropy in disguise" and the authors of assembly theory "found a new toy and now display the ignorance they have in all those topics". Guswen (talk) 20:02, 2 April 2024 (UTC)
 * Exactly. You said it. DaveFarn (talk) 17:16, 3 April 2024 (UTC)
 * No, I did not. It's your baseless ad personam speculation. Perhaps you don't know that "assembly (theory) differs from entropy because of its explicit dependence on the contingency in construction paths intrinsic to complex objects (...) AT leads to a unified language for describing selection and the generation of novelty, and thereby produces a framework to unify descriptions of selection across physics and biology.". Guswen (talk) 17:52, 3 April 2024 (UTC)
 * Sorry, but you are just repeating what their papers say, which is precisely what is in dispute. You are not a scholar, and it clearly shows when you are unable to make an argument not based on copy-pasting theirs (which, by the way, make no sense: "contingency", "intrinsic", etc.; they are like ChatGPT, sounding as if they make sense when they do not). You have to make up your mind: on the one hand, you claim assembly theory has been published in top journals when it is in their favor, but on the other hand, you delete references to papers published in those same journals because you don't agree with them. Everybody here agrees you have a COI, so please stop editing this article. DaveFarn (talk) 09:23, 10 April 2024 (UTC)


 * Another claim of this preprint (p. 7, Fig. 1) is that “A Turing machine is a computer program just as the Assembly Index is.”
 * But that’s ridiculous! Assembly index is a natural number, not a program. And it is at least as hard as NP-complete to calculate it (cf., Supplementary Materials, Section S3). A program takes information describing an object as an input and returns its assembly index as an output.


 * Of course a natural number is not a program; what is a program is the Assembly index algorithm that produces the number. Is that very hard to see? You seem not to know this very basic fact/result, but of course the algorithm of the assembly index and the assembly number are Turing machines. A 'Turing machine' is synonymous with 'algorithm'. It is not the Turing machine that is equal to a number/index; the Turing machine that implements the assembly index can produce that index, and the other way around. Arguing that the assembly index and all assembly algorithms are not Turing machines, in order to distance the theory from algorithmic complexity, makes absolutely no sense. They are, period. DaveFarn


 * What? Do you claim that the assembly index obtained as a result of the operation of an exhaustive assembly space search algorithm can produce that algorithm ("the other way around")? And in turn the input of this algorithm (an object)?


 * Of course you can. You do not have to do it every time you calculate, but if you did and traced back every operation taken, search or not, heuristic or not, you would produce the generating algorithm. By the other way around, I meant that an algorithm is a Turing machine, and a Turing machine is an algorithm. Not sure what is controversial about it but this is Computer Science 101. DaveFarn


 * What algorithm is 5? What is 8? What is 33? :) Guswen (talk) 20:30, 22 March 2024 (UTC)


 * This is ridiculous: a natural number cannot be equal to an algorithm. Even if this natural number were encoded as a binary string (it is; all information boils down to binary strings), we would have to know the type of encoding (Gray, binary lsb, binary msb, etc.) of this number and the way the set of predefined instructions of this algorithm is encoded in this string.
 * Most important, however, is that different objects have the same assembly index (cf. Supplementary Information, p. 7, 20-22, 24, 25, 30, 31, 45). An argument I have raised many times here.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)


 * No matter how many times you raise it, it is wrong that many times. Nobody has said that a number is equivalent to an algorithm, but rather to the algorithm that produces that number, just as Shannon Entropy produces a scalar in bits or a compressed file a size in bytes. Are you claiming that the assembly measures are not computable? The authors of Assembly Theory provided an (unnecessary) proof of computability. Maybe you claim it is not reversible, but introducing reversibility by tracking every step is trivial, and, in fact, the whole idea of assembly pathways is to capture causality. So this is typical of defenders of assembly theory, who switch to different versions when it is convenient.


 * The point is that objects can be combined in various ways to produce other objects (that is why Marshall et al. use quivers).
 * Therefore, the whole idea of assembly pathways is not about capturing causality but about capturing “degree of causation”. It is obviously not the same.
 * Any object has one assembly index, but (for a>1) this object can be constructed along various pathways yielding this same index.


 * LZ77, Huffman coding, etc. capture causality, indeed. The Huffman tree, for example, is in a bijective relation with the string it encodes (under some basic assumptions concerning the tree construction).
 * (I’ll call this comment The Comment to avoid repetitions)
 * Guswen (talk) 20:30, 22 March 2024 (UTC)
 * Correction for the case of strings: Any string has one assembly index, but (for a>1) this string can be constructed along various pathways providing this same assembly index iff this string does not belong to the optimal case of stationary ergodic processes, to which Abrahao et al. 2024 Corollary 8 refers. For example, the string $$C=[01010101]$$ has an assembly index of 3 but for this index value it can be constructed along only one pathway. Guswen (talk) 21:01, 22 March 2024 (UTC)
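The string examples above can be checked mechanically. Below is a minimal brute-force sketch (my own toy code, not any published implementation; the function name and pruning are assumptions of this sketch) that computes the assembly index of a short binary string: the fewest join operations needed to build it from its unit symbols, where anything already assembled may be reused.

```python
from itertools import product

def assembly_index(target):
    # Minimal number of joins (concatenations) needed to build `target`
    # from its unit symbols, reusing any previously assembled piece.
    units = frozenset(target)
    # On a minimal pathway every intermediate is a substring of the target.
    subs = {target[i:j] for i in range(len(target))
            for j in range(i + 1, len(target) + 1)}

    def dfs(pool, depth):
        # Each join at most doubles the longest available piece,
        # so prune branches that can no longer reach the target length.
        if max(len(p) for p in pool) * (2 ** depth) < len(target):
            return False
        for x, y in product(pool, repeat=2):
            s = x + y
            if s == target:
                return True
            if depth > 1 and s in subs and s not in pool:
                if dfs(pool | {s}, depth - 1):
                    return True
        return False

    depth = 1
    while not dfs(units, depth):  # iterative deepening finds the minimum
        depth += 1
    return depth

print(assembly_index("01010101"))  # 0+1, 01+01, 0101+0101 -> 3
print(assembly_index("000000"))    # e.g. 0+0, 00+0, 000+000 -> 3
```

The exhaustive search makes the point under discussion concrete: the function returns a single natural number, while the pathways it explores to reach that number can be many.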


 * Why is it relevant that different objects have the same assembly index? That can also happen with any of the other indexes, from Shannon Entropy to any compression algorithm. I'm not sure what your argument is regarding binary strings. Nothing has to be binary strings. Shannon Entropy and compression can be applied to any computable object as long as it has any representation. All the objects in assembly theory have a representation unless you claim magic. Shannon also proves that as long as you have a discrete representation of your data, you can convert it to binary without losing any information. That applies to any data you store in a computer, even if it appears on the surface to be non-discrete. The implementation of the assembly index and every algorithm in assembly theory is also a computer program ultimately able to be coded in binary. Elementary theory of information. DaveFarn


 * True. The rest of your discourse is elementary information theory. Two different strings can have the same entropy, the same assembly index, etc. This argument pertained to your previous claim that algorithm = natural number. But now, when you say that “Nobody has said that a number is equivalent to an algorithm”, it becomes irrelevant.
 * Guswen (talk) 20:30, 22 March 2024 (UTC)
 * A natural number is indeed a program for a universal Turing machine, and can therefore be seen as an algorithm. Any natural number in its binary (or any other) representation can be used as an input of a universal Turing machine. GodelFan (talk) 00:36, 25 March 2024 (UTC)
 * True. A natural number can be used as an input of a universal Turing machine. But otherwise it must be interpreted to work. This is the gist of the "halting problem".
 * Do "42" and show me the results :)
 * Guswen (talk) 08:37, 25 March 2024 (UTC)
 * Irrelevant. I already explained to you that we are talking about the algorithm that produces the number, which is computable and therefore, despite your disdain and that of the authors of assembly theory, equivalent to a Turing machine. Two different strings can have the same entropy, the same assembly index, the same Kolmogorov complexity, etc. Nothing special about assembly theory, and it has nothing to do with the fact that a number can be seen as an algorithm too. DaveFarn (talk) 09:25, 10 April 2024 (UTC)


 * Yet another claim (page 20): “the assembly index is a parameter that can be employed to encode an object”.
 * That’s not true. If an assembly index could be employed to encode a binary string, the same index could be employed to decode this string. Among the $$2^N$$ strings of length $$N$$, at least 4 will share the same assembly index for each $$N>1$$. For example, among the $$2^6=64$$ strings of length $$N=6$$, ten have $$a=3$$, forty-four have $$a=4$$, and ten have $$a=5$$, which makes the assembly index redundant as an encoding/decoding parameter.
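The tallies just quoted can be checked by exhaustive search. The sketch below (my own brute force, not code from any of the papers under discussion) computes the assembly index of every binary string of length 6 and counts how often each value occurs; whatever the exact split, many strings necessarily share an index, so the index alone cannot identify a string.

```python
from collections import Counter
from itertools import product

def assembly_index(target):
    # Fewest joins needed to build `target` from its unit symbols,
    # reusing previously assembled pieces (brute-force search).
    units = frozenset(target)
    subs = {target[i:j] for i in range(len(target))
            for j in range(i + 1, len(target) + 1)}

    def dfs(pool, depth):
        if max(map(len, pool)) * 2 ** depth < len(target):
            return False  # cannot reach the target length any more
        for x, y in product(pool, repeat=2):
            s = x + y
            if s == target:
                return True
            if depth > 1 and s in subs and s not in pool \
                    and dfs(pool | {s}, depth - 1):
                return True
        return False

    depth = 1
    while not dfs(units, depth):
        depth += 1
    return depth

# Tally the index over all 2^6 binary strings of length 6.
tally = Counter(assembly_index("".join(bits))
                for bits in product("01", repeat=6))
print(sorted(tally.items()))  # pigeonhole: 64 strings, only a few index values
```

Since every length-6 string needs at least 3 joins (each join at most doubles the length) and at most 5 (one symbol at a time), the index takes at most three distinct values over 64 strings.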


 * Again, it is the algorithm with its index, not the index alone. You could use that flawed argument against Shannon Entropy and Kolmogorov complexity, saying you cannot reconstruct or encode/decode from the number of bits something was compressed into, statistically or algorithmically. It is the result and the program (e.g. the assembly pathway). Compression algorithms like Huffman and LZW are lossless, meaning you can reconstruct the original object in full, because you have access not only to the resulting size of the object in bits but also to the program, which you can reverse. You can do the same with the assembly index; you have the index and the assembly pathway. But it is irrelevant if you think it is not reversible; reversibility is not needed in any argument. DaveFarn


 * But if you have the assembly pathway, you also see the object, generated by this pathway and its assembly index!
 * (cf. Fig. 1)
 * There is no such thing as "compressed/encoded/shortened/etc. object" in assembly theory.


 * Of course, there is the shortest pathway. DaveFarn


 * No, there is not. There are many shortest pathways. Cf. The Comment. Guswen (talk) 20:30, 22 March 2024 (UTC)


 * You do not "reconstruct the original object in full". If you have an object you can determine its assembly index by an exhaustive traversal over the assembly space. And this process is at least as hard as NP-complete.


 * The NP-complete comment is irrelevant. Kolmogorov complexity is even beyond NP-complete, as a non-computable index, yet approximations to it produce an algorithm. Not sure what your point is. Either you think that the assembly index algorithm arrives at the index value by magic that cannot be captured by a computer, even when it is executed by a digital computer, or you simply refuse to understand the basics of information theory and computer science. It does not matter if you do not reconstruct the original object in full; you could if you wanted, just by tracking every operation, even if it is a heuristic. But this is irrelevant anyway: the algorithm of the assembly index is equivalent to Shannon Entropy. DaveFarn


 * No, the algorithm of the assembly index is not equivalent to Shannon Entropy. Cf. The Comment. Guswen (talk) 20:30, 22 March 2024 (UTC)
 * OK, so because you say it is not equivalent, therefore it is not equivalent? "The Comment" makes no sense and cannot be true only because you say so. If an algorithm captures causality, it can also capture degree of causality. It is much stronger to capture causality than degree. This is actually the definition of algorithmic complexity: you divide the components in two, what you can explain mechanistically and can be 'compressed', and what you cannot. The balance of both parts gives you the degree of causality, that is, how much you can explain about the object by mechanistic means and how much you cannot, if, for example, there are many stochastic ways to produce the object but no single path. As long as all this is computable, it can be captured by algorithmic complexity (K). However, because the assembly index algorithm is computable and statistical, it is proven to converge to Shannon Entropy, not K, and empirically it behaves exactly like Shannon Entropy. I will call this The Better Comment to avoid repetition. DaveFarn (talk) 22:41, 22 March 2024 (UTC)


 * The Comment makes sense, not because I say so, but because it reflects the axioms of assembly theory.
 * True: If an algorithm captures causality, it also captures the degree of causality. However, this "degree of causality"=100% in this case. It is indeed the strongest degree of causality.
 * But the other way around is false: If an algorithm captures a degree of causality, it doesn't have to capture causality. An example of such an algorithm is an exhaustive search of an assembly space to determine the assembly index of an object, not the pathway to achieve it, as there are many (in general).
 * Are you now talking about "stochastic ways to produce the object"? Welcome aboard the assembly theory! (I thought that we were discussing lossless data compression algorithms to which "the assembly index and assembly number are [allegedly] equivalent").
 * And are you trying to say that a non-compressible string with maximum Shannon entropy has zero "degree of causality"? Why?
 * Guswen (talk) 00:36, 23 March 2024 (UTC)
 * Not worth discussing with a patent attorney. You are out of your depth, sorry. DaveFarn (talk) 00:29, 30 March 2024 (UTC)


 * On the other hand, if you have an index, you cannot say anything about the object that provided this index. Even if the index is zero, you won't be able to determine which basic object it represents.


 * This is the same case with Shannon Entropy, compression, or any approximation to Kolmogorov complexity too. If you only look at the size of the compressed file, the number of bits encoded from the Entropy calculation, or the size of the shortest computer program, you cannot reconstruct the original program. Your arguments are misguided and contradict elementary knowledge from computer science. DaveFarn


 * You took it out of a broader context. So let me skip it. Guswen (talk) 20:30, 22 March 2024 (UTC)
 * What broader context? You did not know what to answer, because this alone throws away your main argument that the assembly index is a number that could be produced in many ways. Just as when you see the size of the compressed file, the number of bits encoded from the Entropy calculation, or the size of the shortest computer program, you cannot reconstruct the original program, and many programs or descriptions of mass probability distributions lead to the same index! You need to take a basic computer science 101 course urgently, even if you perhaps took one 30 years ago. DaveFarn (talk) 00:32, 30 March 2024 (UTC)


 * No, the assembly index is a natural number that can be produced in only one way: by counting the steps on a shortest path leading to an object, and there are many shortest paths featuring the same assembly index.
 * Please refrain from ad hominem attacks. Guswen (talk) 10:37, 30 March 2024 (UTC)
 * First you say that the most important property of the assembly index is its nondeterministic nature, and then you say there is only one way the assembly index natural number can be produced. You have to pick one. It is not an ad hominem attack to establish that you think you know computer science based on your studies more than 30 years ago, with no academic or scholarly associations. You are a patent lawyer who may give the impression of knowing about computer science, but the fact is that you are out of your depth. Sorry. For example, all these red herrings about Hamming distance and the like, which nobody asked about but which you explain as if they were something deep or related, are very telling. DaveFarn (talk) 00:37, 17 April 2024 (UTC)


 * Furthermore, you wouldn't be able to reconstruct an object, even if you knew its assembly index and the pool used to generate it. For example, the assembly index $$a=3$$ and the pool $$P=\{0,1,01\}$$ are valid for strings $$C_1=[000000]$$, $$C_2=[111111]$$, $$C_3=[010101]$$, $$C_4=[010010]$$, $$C_5=[101101]$$, $$C_6=[010010]$$, $$C_7=[001001]$$, $$C_8=[00000000]$$, $$C_9=[11111111]$$, and $$C_{10}=[01010101]$$ (I hope I didn't miss any).
 * Guswen (talk) 13:40, 22 March 2024 (UTC)
 * I did miss twelve binary 4-bit strings and some out of eighteen 5-bit strings with the assembly index of three.
 * Guswen (talk) 14:21, 22 March 2024 (UTC)


 * Same answer as above. No difference with Shannon Entropy, compression or K. DaveFarn


 * No, there is substantial difference with Shannon Entropy, compression or K. Cf. The Comment. Guswen (talk) 20:30, 22 March 2024 (UTC)


 * There is no substantial difference. If anything, it is Shannon Entropy with extra steps, but perhaps with some pedagogical features for the wider public. DaveFarn


 * This time, no comments. Guswen (talk) 00:36, 23 March 2024 (UTC)
 * Irrelevant that there are 'many shortest paths'. In algorithmic complexity there can be many shortest programs. Why is that relevant? DaveFarn (talk) 00:54, 29 March 2024 (UTC)
 * We're not discussing "shortest programs". Each lossless data compression algorithm is a program (It doesn't matter if it's shorter/longer). But (please read this talk page carefully again) we're discussing the assembly index (in particular), which is not "equivalent (to LZ77/LZ78, cf. legend under Figure. 5)" to any of those programs (it cannot be; it's a natural number). Guswen (talk) 11:19, 29 March 2024 (UTC)


 * Guswen (talk) 19:31, 21 March 2024 (UTC)


 * Also pinging @Anachronist who might be interested in this discussion. S0091 (talk) 19:36, 21 March 2024 (UTC)


 * In my view, we shouldn't be citing non-peer-reviewed preprints, because they are essentially self-published sources. Until it gets peer-reviewed and published by a journal considered reliable, it shouldn't be cited here. On the other hand, Wikipedia occasionally cites self-published blog articles written by a person with relevant expertise, attributing the views to that person, and we could treat this source in a similar way. I'd prefer to remove it for now and wait to see if it gets published. And I have just removed it. ~Anachronist (talk) 20:18, 21 March 2024 (UTC)


 * I agree - self-published and unpublished sources should not be cited until they are peer-reviewed and published by a reliable journal.


 * Here's another example of a "critical views". An unpublished, non-peer-reviewed preprint that most likely will never be peer-reviewed and published (three authors are the same as in the previous preprint ).


 * As stated in the Assembly_theory#Critical_views section, this time, the authors of this preprint claim "to have reproduced the results of Assembly Theory with traditional statistical algorithms".


 * What do they have to support this claim?


 * The authors provide a link to code and data. Anyone can reproduce the results. The authors of Assembly Theory can too, not too difficult. DaveFarn


 * True. In this case, the links to code and data are provided, and anyone can reproduce the results.
 * But these results have nothing to do with the assembly theory.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)


 * They do. They show that all those other indexes reproduce the same results (Abicumaran et al., 2023). So, if anything, the value of assembly theory is pedagogical, especially in light of the second paper by Abrahao et al. (Zenil's group too), 2024, showing not only empirically that the same results were to be expected, which they showed, but also that the measures are theoretically the same (see Sup. Info). DaveFarn


 * No, all those other indexes do not reproduce the same results. Cf. The Comment. Guswen (talk) 20:30, 22 March 2024 (UTC)


 * Yes, please Cf. The Better Comment above. DaveFarn


 * p. 5 "Lower entropy indicates simpler patterns"
 * p. 14 "The proposed MA [assembly index] falls into the category of entropy encoding measures"


 * That’s not true. For the reasons I have already given above.
 * Shannon entropy is based on frequencies (the frequency of a symbol is the ratio of the number of its occurrences to the length of the string), which can be interpreted as probabilities if a string is transmitted through a communication channel (this is not the case in assembly theory, where a whole object, e.g., a chemical compound, is analyzed to determine its assembly index numerically and/or experimentally). Thus, binary strings with the same Hamming weight will necessarily have the same Shannon entropy. Shannon entropy is simply insensitive to patterns in the strings.
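The order-insensitivity is easy to demonstrate. Here is a short sketch of the standard per-symbol (n=1) frequency entropy (the function name is mine; this is textbook Shannon entropy, not code from either side of the dispute):

```python
import math
from collections import Counter

def shannon_entropy(s):
    # Per-symbol (n = 1) Shannon entropy, in bits, from symbol frequencies.
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Same Hamming weight (four 1s in eight bits) => same entropy,
# regardless of how structured the pattern is.
print(shannon_entropy("01010101"))  # 1.0
print(shannon_entropy("00101101"))  # 1.0
```

Both strings score exactly 1 bit per symbol even though one is a perfectly regular pattern, which is the insensitivity being described.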


 * Wrong. See my explanations below and above. Wrong way to apply something as basic as Shannon Entropy. Please read his 1948 paper carefully. DaveFarn


 * Again, the entropy rate has nothing to do with assembly theory, as:
 * 1. the number of preceding symbols in a string varies.
 * 2. the mere concept of a "preceding symbol" or a "preceding state" is misleading in the assembly theory, as you can join already assembled subunits from the assembly pool both to the beginning (say, "from the left") and to the end (say, "to the right") of the assembled string.
 * Google assembly theory "entropy rate" or ask your ChatGPT to confirm that.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)


 * You just described Shannon Entropy for me while trying to explain Assembly Theory, thanks. Refer to the Supplementary Information at https://arxiv.org/abs/2403.06629. You just described the concept of a dictionary-based compression algorithm, which makes the point of the paper cited and of the criticism by Dr. Zenil's group, which in turn approximates the Shannon Entropy rate, and which you claim is not Shannon Entropy. You are also describing the strategy of the authors of assembly theory. They argued theoretically (because they do not dare to do it empirically, as they would get the same results!) that Shannon Entropy was different from Assembly. However, by Shannon Entropy they mean n-gram Shannon Entropy with n restricted to be 1, i.e. n=1, and claim that if n is not equal to 1 and traverses the object (just as assembly theory does) it is not Shannon Entropy. This would officially be called monkey business in the conduct of science. Furthermore, the contribution of dictionary-based algorithms was precisely to build a dictionary to reuse blocks so as to converge faster to the entropic value, and to define a variable n, called a variable window, which also allows LZ78 (as compared to LZ77) to traverse the string only once (another property that assembly theory falsely claims as its own). But, then, guess what? All dictionary-based compression algorithms are defined exactly in the same way as you described assembly theory, and they all converge to, surprise, surprise, Shannon Entropy! (proven by Huffman for his coding scheme, and then by Lempel and Ziv precisely for the LZ algorithms). DaveFarn
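For reference, the dictionary-building behaviour being described can be sketched in a few lines. This is a plain LZ78 parser and decoder (my own toy version, written for this page, not code from any paper under discussion): it traverses the string once, grows a prefix-closed dictionary of reused blocks, and is fully reversible.

```python
def lz78_parse(s):
    # Single pass: each phrase = longest already-seen phrase + one new symbol.
    dictionary = {"": 0}
    phrases, w = [], ""
    for ch in s:
        if w + ch in dictionary:
            w += ch            # keep extending a known phrase
        else:
            phrases.append((dictionary[w], ch))
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:                      # leftover: emit via its prefix (always in dict)
        phrases.append((dictionary[w[:-1]], w[-1]))
    return phrases

def lz78_decode(phrases):
    # Rebuild the dictionary from the phrase list; lossless by construction.
    entries, out = [""], []
    for i, ch in phrases:
        entries.append(entries[i] + ch)
        out.append(entries[-1])
    return "".join(out)

s = "0101010101"
codes = lz78_parse(s)
assert lz78_decode(codes) == s  # round-trips exactly
print(codes)
```

The phrase list both records the reuse structure (which earlier block each phrase extends) and suffices to reconstruct the input, which is the reversibility point made above.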


 * Indeed, I didn't precisely describe the strategy of the authors of assembly theory in my previous reply.
 * Cf. The Comment for the description of the strategy of the authors of assembly theory. Guswen (talk) 20:30, 22 March 2024 (UTC)
 * Cf. The Better Comment above. DaveFarn


 * p. 5 "MA is most correlated to 1D-Huffman coding"
 * p. 14 "The assembly method derived from the 'Assembly Theory' proposed by the original authors [10] consists roughly in finding a pattern-matching generative grammar behind a string by traversing and counting the number of copies needed to generate its modular redundancies, decomposing it into the statistically smallest collection of components that reproduce it without loss of information by finding repetitions that reproduce the object from its compressed form."


 * That’s not true.
 * Huffman coding is a lossless data compression algorithm. It takes a string containing symbols from an alphabet of at least two symbols and calculates the frequencies of occurrence of these symbols within the string. Then, it sorts these frequencies and builds a tree, starting from the least frequent symbols and encoding particular branches so that the more frequent a symbol is, the shorter its code. Knowing the tree and the compressed string encoded by its branches, one can decode (uncompress) it.
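As a concrete illustration of that construction, here is a compact sketch of standard Huffman coding (a toy implementation written for this page; variable names are my own):

```python
import heapq
from collections import Counter

def huffman_code(s):
    # Build a Huffman code: repeatedly merge the two least frequent
    # subtrees; more frequent symbols end up with shorter codes.
    freq = Counter(s)
    if len(freq) == 1:                     # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                   # keeps heap tuples comparable
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {sym: "0" + code for sym, code in left.items()}
        merged.update({sym: "1" + code for sym, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_code("abracadabra")        # 'a' is most frequent
encoded = "".join(codes[ch] for ch in "abracadabra")
# Knowing the code table (the tree), the bitstring decodes uniquely.
```

Note that the most frequent symbol gets the shortest code, but not necessarily a single bit (with four equally frequent symbols, every code is two bits), which is why the description above is phrased in terms of relative code lengths.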


 * You cannot say it is not true of empirical results. Maybe you don't like it, and it is true that Huffman has a different implementation, but because it converges to Shannon Entropy, it does produce the same results as assembly theory. However, I do accept that it is LZ that most resembles assembly theory, because it fully instantiates and is exactly equivalent to the assembly index and assembly number (the algorithms that generate them, since you seem to insist they are suggesting that a number is a program). DaveFarn


 * No, Huffman does not produce the same results as assembly theory. Huffman algorithm is deterministic, while in assembly theory an object can be constructed along various pathways providing the same assembly index (and also various pathways with the number of steps larger than the assembly index). Cf. The Comment. Guswen (talk) 20:30, 22 March 2024 (UTC)


 * A binary string can be constructed by up to 2^n shorter programs producing the same string, bounded by the length n of the bitstring. You call this a non-deterministic program. I call it a red herring. The same with Entropy and compression. Several compression algorithms can reconstruct an object through different paths. The main point is that the number of steps will remain about the same, and exactly the same in the limit, converging to Shannon Entropy. The assembly index does the same; every case of a high assembly value will have exactly high Shannon Entropy and low compressibility in exactly equivalent proportions. No difference, just extra steps and unnecessary gimmicks like the quiver graph. Next time, you will claim that the assembly index is a quantum algorithm that no classical computer can approximate, even when its basic definition is to count identical copies... Please Cf. The Better Comment above. DaveFarn


 * True. Different compression algorithms can reconstruct (decode) an object (from an encoded object) through different (deterministic) paths.
 * I will not claim that the assembly index is a quantum algorithm, as the assembly index is a natural number. You did a lot to persuade me that a natural number is equivalent to an algorithm, and you almost succeeded. But you won't persuade me that a natural number corresponds to a quantum algorithm. Guswen (talk) 00:36, 23 March 2024 (UTC)
 * Impossible to discuss with you. You came all the way back to claim that a number is not an algorithm? The algorithm that produces the index is an algorithm! And nobody said I wanted to convince you that a number was a quantum algorithm; what I said is that your last resort seems to be to suggest that the assembly index comes from some sort of quantum algorithm, because you have been cornered. If you say "True. Different compression algorithms can reconstruct (decode) an object (from an encoded object) through different (deterministic) paths.", what follows? That what you said made the assembly index special does not make it special, and there is nothing else to assembly theory or the assembly index. As per all the discussion above, therefore, it is Shannon entropy in disguise that only highly misguided people can think is doing something different. DaveFarn (talk) 04:23, 31 March 2024 (UTC)
 * That's simply unbelievable: assembly theory "is Shannon entropy in disguise that only highly misguided people can think is doing something different"; some anonymous DaveFarn is of the opinion that Nature (journal), Nature Communications, Philosophical Transactions of the Royal Society, Science Advances, etc., all these renowned academic venues, are a bunch of "highly misguided people". Guswen (talk) 20:02, 2 April 2024 (UTC)
 * It is you who thinks that papers in the Royal Society and Nature (Springer Nature), aka the Journal of Molecular Evolution, etc., which have had only a couple of years to start finding the many errors of assembly theory, are wrong, and that your set of chosen papers positive to assembly theory are right. The authors of assembly theory have had a head start of 10 years to plan or indulge their own ignorance, but you are just falling into confirmation bias, spreading those papers that sugar-talk assembly theory and rejecting those that do not. This is not about journals; this is about arguments. Your argument that the assembly index is indeterministic has been debunked: you thought that a single number would correspond to a single Shannon stochastic process, or to a single (shortest) computer program in the case of Kolmogorov complexity producing a size-in-bits number, in what you thought was a one-to-one mapping, but I have exposed your limited knowledge of the topic above and below, showing that this is not true and that your 'indeterministic' argument is not only not unique to assembly theory but also irrelevant. It is very simple: Abrahao et al. have shown that anything with high Shannon entropy will have a high assembly index, and anything with low Shannon entropy will have a low assembly index. This was also confirmed empirically by Abicumaran. Not a single example will be found breaking that relationship, and the relationship is directly proportional; therefore, both are the same. Assembly theory renamed Shannon entropy and proposed a compression algorithm that they refuse to call compression but which is based upon exactly the same principles as those implemented by the simplest statistical compression algorithms. Ergo, people, including reviewers behind flashy journals, can be fooled by cunning manipulators. DaveFarn (talk) 11:34, 16 April 2024 (UTC)


 * Assembly theory, on the other hand is about finding the assembly index of an object: the smallest number of steps required to assemble it from some basic subunits.


 * One can see Shannon or Kolmogorov exactly that way, as finding the encoding or program that minimizes the size in steps of the object from its most basic units... There is no difference; it's exactly the same. DaveFarn


 * Again. There is no such thing as a "compressed/encoded/shortened/minimized/etc. object" in assembly theory.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)


 * There is; they call it the assembly shortest path.


 * Who "they"? They call it the assembly index. Cf. The Comment. Guswen (talk) 20:30, 22 March 2024 (UTC)


 * From this Wikipedia article itself, quoting the original article, reads "The 'assembly index' is defined as the shortest path to put the object back together". Cf. The Better Comment above. DaveFarn


 * Thanks for pointing that out. This definition was imprecise. I've just updated it on the basis of the Nature 2023 article. Guswen (talk) 00:36, 23 March 2024 (UTC)
 * From the Nature 2023 Article:
 * "For each object, the most important feature is the assembly index, which corresponds to the shortest number of steps required to generate the object from basic building blocks." GodelFan (talk) 00:40, 25 March 2024 (UTC)
 * True: the assembly index corresponds. But it "is [defined in Nature 2023 Article as] the number of steps on a minimal path producing the object".
 * Guswen (talk) 08:44, 25 March 2024 (UTC)
 * And that is different because? ... GodelFan (talk) 09:40, 25 March 2024 (UTC)


 * It isn't different. The latter definition is simply more precise. In general, there are many different minimal paths leading to the same object; paths featuring the same assembly index but leaving different subunits along the way.
 * The first definition might be wrongly interpreted by an inattentive reader as an "IKEA assembly instruction", i.e., as implying that there is only one valid way to assemble an object.
 * There is only one valid way to encode/decode a string using Huffman, LZ, etc. lossless data compression algorithms.
 * But this is simply untrue in assembly theory. In assembly theory, there are many different ways to assemble an object. That's the gist of this theory.
 * Guswen (talk) 11:30, 25 March 2024 (UTC)
 * That does not make any sense. Even assuming that what you say is true and that this is the 'gist' of the theory, that is not what the authors claim. But it is not true. As I have said to you time and again, all the other methods do the same: you can arrive at the same output through many different means, and that does not make Shannon Entropy or algorithmic complexity non-deterministic or special in any way. And yet they are all reversible, not from the output but from the algorithm. In any case, even non-deterministic does not mean uncomputable, and so the paper by Abrahao et al. still holds and proves once and for all that assembly theory IS Shannon entropy implemented in one of many ways, in particular LZ. DaveFarn (talk) 00:57, 29 March 2024 (UTC)

Here you're right for the first time (though what you say is obvious): known lossless compression methods (such as LZ) are reversible (who would want a "compression" method that does not allow decompression of the compressed information?). But AT is different, as it is irreversible (please read this section carefully again to understand that crucial aspect of AT). Guswen (talk) 11:19, 29 March 2024 (UTC)
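The reversibility point can be made concrete with a minimal textbook LZ78 sketch (my own illustrative code, written for this discussion and not taken from any paper cited here): the compressed stream of (dictionary index, next character) pairs can always be decoded back to the exact input, whereas an assembly index is a single number from which the object cannot be recovered.

```python
def lz78_compress(s):
    """Minimal LZ78: emit (dictionary_index, next_char) pairs."""
    dictionary = {"": 0}
    phrase, out = "", []
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch                  # extend the current known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                            # input ended mid-phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    """Exact inverse of lz78_compress: rebuild the dictionary while decoding."""
    entries = [""]
    out = []
    for idx, ch in pairs:
        entry = entries[idx] + ch
        entries.append(entry)
        out.append(entry)
    return "".join(out)
```

A round trip such as `lz78_decompress(lz78_compress("ABRACADABRA"))` returns the original string, which is the sense in which dictionary coders are reversible.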


 * The assembly index is a natural number. There is no way to "decode" an object knowing its assembly index. Different objects (e.g. chemical compounds) have the same assembly indices (cf. e.g., Supplementary Information, p. 7, 20-22, 24, 25, 30, 31, 45).


 * You are comparing two different things. The assembly index is equivalent to the size of the compressed object in your analogy, which means you cannot decode the original from it. The assembly pathway, from which the index is calculated, is the equivalent of the compressing program: it is equivalent to a traditional statistical compression algorithm, Huffman or LZW. They all converge to Shannon, as shown by Abrahao, Hernandez, Kiani, Tegner and Zenil, 2024, and shown empirically in their previous paper by Abucimaran, producing indices that are deeply correlated because they basically define the same algorithm. DaveFarn


 * Again. If you have the assembly pathway of an object, you also see the object generated by this pathway. You don't even need an assembly index of this object as "a parameter that can be employed to encode an object". You simply count the number of double-branches that led to this object; this is its assembly index. Figure 5A of this preprint, for example, does not explain how "ABRA" was reused. It cannot. This figure would explain it.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)
 * What do you mean? This is the typical behavior of any dictionary-based coding algorithm. There is nothing special, and ABRA can be reused in any way you want, especially in the way assembly theory implements it. DaveFarn (talk) 11:35, 16 April 2024 (UTC)


 * Guswen (talk) 21:34, 21 March 2024 (UTC)
 * I don't agree with @Anachronist or @Guswen. Peer-reviewed papers published by Springer Nature and the Royal Society consider that the cited blogs from two highly cited academics at top institutions (which were deleted today) and their preprints can be cited and are reliable sources, yet Wikipedia would hold higher standards and would not allow citing those sources even via citing those papers? That makes no sense to me, and there are plenty of examples on Wikipedia of citations to arXiv and preprint papers when these are from known authors and academics, or are preprints cited by regular papers while they await publication, if it ever comes. One of the authors, Dr. Zenil, has even written the only other book on applications of algorithmic complexity ever published. We are talking about an author who has published books for Cambridge University Press, and one of his books has a preface by Sir Roger Penrose (Nobel Prize in Physics 2020). If Dr. Zenil, a professor at King's College, is a reliable author for Sir Roger Penrose, I do not know how he would not be for anyone here (https://www.amazon.com/COMPUTABLE-UNIVERSE-UNDERSTANDING-EXPLORING-COMPUTATION/dp/9814374296/). Some preprints are never even published, including G. Perelman's paper toward the Poincaré conjecture ("Ricci flow with surgery on three-manifolds", arXiv:math/0303109), which never appeared in any journal. There are hundreds of cases like this on Wikipedia, across all sorts of subjects, that provide the reader with context and balance. Publishing negative results takes longer, but that should not be an excuse to conceal criticism. The reader can decide what side to take, but as editors, there is a responsibility. Now the whole section has been removed again, even though several authors contributed to it and the consensus was that it provided a more balanced view. Also, this is not the technical space to discuss academics' papers.
However, the arguments by Guswen, if they are his, are wrong. The application of Shannon entropy is wrong (read Shannon's original paper on how entropy is properly applied, not as you do), and the idea that the single number of the assembly index is not comparable with the single number of Shannon entropy or Kolmogorov complexity makes no sense: all three retrieve numbers, and the cited paper has a 20-page appendix proving their relationship mathematically. Also, I'm not sure what you mean by asking what they have to support their claim. Their first paper has a GitHub repository with all the code and data; you can run it and see the evidence that statistical compression reproduces the results of Assembly Theory. DaveFarn (talk) 02:57, 22 March 2024 (UTC)


 * We're talking about facts here. Not about the unquestionable authority and prestige of Dr. Zenil within the scientific community.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)
 * Another fact is that Wikipedia allows citations to arXiv and preprints from recognized scholars. Wikipedia is full of these citations. For example: https://en.wikipedia.org/wiki/Latent_space DaveFarn (talk) 16:56, 12 April 2024 (UTC)


 * Correct. And the fact here is that this reputable scientist has posted a learned opinion, and his group has posted papers clearly marked as under revision (so nothing misleading) that provide balance to this article. DaveFarn

Besides the fact that everything @DaveFarn wrote is correct and accurate, here is a list of relevant Wikipedia articles that rely on non-peer-reviewed preprints for relevant parts of the entry: - https://en.wikipedia.org/wiki/String_theory - https://en.wikipedia.org/wiki/Riemann_hypothesis - https://en.wikipedia.org/wiki/Entropy_(information_theory) - https://en.wikipedia.org/wiki/Generative_adversarial_network - https://en.wikipedia.org/wiki/Machine_learning - https://en.wikipedia.org/wiki/Modal_logic

If we apply the logic @Anachronist and @Guswen are following, Wikipedia wouldn't have many of its most relevant articles, or they would be incomplete. Furthermore, @Henizcolter, the user who has been deleting most of the critical views section, has relied mainly on ad hominem fallacies, false claims about Dr. Zenil's persona and credentials, and grandiloquent, unprofessional and unsupported comments on both Benner's and Zenil's critical views. Most of his comments are sentences like: "Benner's blog makes no sense", "Zenil's arguments were debunked and destroyed by Lee Cronin in a video with a YouTube influencer", "The critics are trolls with no verified affiliation (a claim that has been proven false) and their preprints will never be published". You are all free to go and check the history of that vandalism and unprofessionalism. How can we rely on a source like this user? Besides, his arguments for deleting the critical views section are basically the same that have been given here and can be summarized as: "they don't like the critique" and "the critique has not been published". The first is self-evidently wrong, even absurd. The second is partially false, as two of the most prestigious scientific journals have cited the original critics, suggesting they are fundamentally right; and even without those two citations, the reasons you have for not adding a critical views section to this Wikipedia entry are not sufficient, since some of the most fundamental Wikipedia entries rely on non-peer-reviewed preprints. — Preceding unsigned comment added by JulioISalazarG (talk • contribs) 06:21, 22 March 2024 (UTC) JulioISalazarG (talk) 06:24, 22 March 2024 (UTC)


 * ibid.
 * Guswen (talk) 13:40, 22 March 2024 (UTC)


 * P.S. The article you refer to, published in the Journal of Molecular Evolution, that cites Dr. Zenil et al. [preprint], claims that one
 * can use assembly theory to check whether something unexpected is going on in a very broad range of computational model worlds or universes.
 * Therefore, this article reveals the author's preferred interpretation of the quantum measurement problem, which is the many-worlds interpretation. But this interpretation is unverifiable, unfalsifiable, and thus unscientific.
 * More importantly, this article reveals the author's creationist inclinations. Since assembly theory is complete and experimentally confirmed, by "something unexpected" he must mean something supernatural.
 * Guswen (talk) 14:48, 22 March 2024 (UTC)


 * Assembly Theory is not experimentally confirmed in any way or form. Confirmed against what? Everything that is disputed or challenged? Every prediction has been wrong or already reported before. DaveFarn


 * Assembly Theory is experimentally confirmed at least by tandem mass spectrometry, nuclear magnetic resonance, and infrared spectroscopy. See the article for details. Guswen (talk) 00:36, 23 March 2024 (UTC)
 * False. Mass spectrometry was just debunked by the latest paper. All other applications can be resolved with Shannon entropy, and you are getting ahead of yourself, citing unpublished papers when you like them but then complaining when you don't. DaveFarn (talk) 01:13, 30 March 2024 (UTC)
 * Would you kindly advise which scientific, peer-reviewed paper allegedly "debunked" experimental confirmation of assembly theory by mass spectrometry?
 * Such a groundbreaking result would have to be immediately introduced to the critical section of the assembly theory article! :)
 * Guswen (talk) 10:37, 30 March 2024 (UTC)
 * You like experimental data so much, so why don't you just follow the arXiv paper you despise and reproduce the results in 15 minutes with the code provided? So easy. Debunked. DaveFarn (talk) 04:25, 31 March 2024 (UTC)
 * Experimental data (obtained from chemical samples via tandem mass spectrometry, nuclear magnetic resonance, infrared spectroscopy, etc.) is not the same as the "Hello world!" output of some code. You know that, right? Guswen (talk) 20:02, 2 April 2024 (UTC)
 * You see who is the condescending one? There are many ways to make ad hominem attacks like the ones you accuse me of. I attack your knowledge of the topic based also on the fact that you have had no contact with professional computer science (or science in general) in more than 30 years, and that your profession is patent lawyer, not scientist, while trying so hard to publish papers, in this case on assembly theory, that you want to be right. But your way of offending is adding stuff and asking "you know the difference, right?", most of it about random stuff and irrelevant 'factoids'. DaveFarn (talk) 16:59, 12 April 2024 (UTC)


 * No, that does not reveal the author's preferred interpretation of quantum mechanics, nor does it suggest any creationist sympathies. XOR'easter (talk) 01:43, 23 March 2024 (UTC)


 * "No, because no."
 * Perhaps that's the only thing that assembly theory critics can say.
 * I can't call it an "argument" because there is no reasoning behind it.
 * Guswen (talk) 17:02, 23 March 2024 (UTC)
 * You made an assertion based, as far as I can tell, solely upon a failure of reading comprehension. XOR'easter (talk) 20:11, 23 March 2024 (UTC)


 * It may well be so. I'm from Poland; I'm not a native speaker. But if so, would you kindly enlighten me, XOR'easter, as to what part of the sentence
 * "one can use assembly theory to check whether something unexpected is going on in a very broad range of computational model worlds or universes"
 * I misunderstood? "something unexpected", "broad range of (..) universes", or both?
 * Guswen (talk) 01:12, 24 March 2024 (UTC)
 * "Unexpected" is not synonymous with "supernatural", and "universes" was not being used in the sense of the "worlds" in the MWI. That's just not what the source was saying. I don't know how that could be made more obvious. XOR'easter (talk) 18:43, 26 March 2024 (UTC)


 * Assembly theory does not need any additional "ingredients" to explain and quantify selection and evolution. Therefore, yes, "unexpected" is synonymous with "supernatural" in this context.
 * There is only one computational universe. Our universe. If one speaks about "a very broad range of computational universes", one speaks about unscientific MWI. This is also obvious in this context.
 * That's what that source implicitly says. I really don't know how you can't see it. Guswen (talk) 21:25, 26 March 2024 (UTC)
 * Because it's not there, any more than the MWI is implicit in an everyday saying like "the best of all possible worlds". You're inventing a meaning that is at odds with how the English language works. XOR'easter (talk) 22:45, 26 March 2024 (UTC)


 * Please do not mix poetry with the acknowledged (albeit unscientific) interpretation of the quantum measurement problem. This commentary is nicely written. But still, it's not a poem. At least it wasn't intended to be one. Guswen (talk) 01:11, 27 March 2024 (UTC)
 * I'm not talking about poetry. The source you point at has nothing to do with the interpretation of quantum mechanics. It doesn't even have the word quantum in it. I'm sorry I don't have a more polite way of saying this, but you are reading things into it that are not there. You are bringing your own preconceptions about assembly theory, quantum measurement, information theory and more, and you are synthesizing them into something incomprehensible. XOR'easter (talk) 15:54, 27 March 2024 (UTC)


 * I'm neither bringing nor synthesizing anything. I am simply protecting an article about the experimentally confirmed theory that has been developed for almost 10 years by a number of renowned universities (including the University of Glasgow and Arizona State University), and published in renowned journals (including Nature (journal), Nature Communications, Philosophical Transactions of the Royal Society, and Science Advances), against vandalism. I only cite the research results that can be found in these publications. Guswen (talk) 08:57, 28 March 2024 (UTC)
 * You are not protecting an article. You are trying to bias an article in favor of assembly theory by trying to delete and downplay valid criticisms from valid sources that peer-reviewed journal papers are already citing, but which you claim are not up to your standards, while taking at face value the results of what you think are top scientists or top journals. They may well be, but so are the ones criticising the theory, and the criticisms are not always journal papers. Readers can judge and decide. The authors you want to remove have published their work in the same class of journals and are experts in the area in which the authors of assembly theory have only recently become involved. DaveFarn (talk) 01:03, 29 March 2024 (UTC)


 * Is Wikipedia supposed to rely on some Chinese whispers? Someone cited someone who cited someone who cited... If there is any merit in these never-to-be-published preprints, let their authors publish them in some scientific journal (they cannot, as the claims in these preprints are false), and, as I already declared, I'll be the first to cite these findings in this article. Guswen (talk) 11:19, 29 March 2024 (UTC)
 * These are not whispers. It means that if a source is cited by a reliable source, then that gives credibility to the first source. Check the Wikipedia guidelines. It is you who spreads misinformation; could you please tell us what crystal ball you use to know that experts in the field with over 100 published articles will never manage to publish those current preprints in a journal? Is it maybe because you have a lot of personal experience with rejections, having been a patent lawyer for more than 30 years trying to do science? DaveFarn (talk) 01:21, 30 March 2024 (UTC)


 * No, it does not. For example, a publication describing the atrocities of fascism and citing "Mein Kampf" as a foundation of fascism does not give credibility to "Mein Kampf" and/or fascism.
 * Please refrain from word salads and ad hominem excursions; stick to the facts published in renowned scientific journals over the past decade. Guswen (talk) 17:25, 30 March 2024 (UTC)
 * Of course in your example the citation would give credibility to Mein Kampf as a publication (not to its contents). What you are doing is deleting any reference to that source that other sources cite. Nobody is claiming to be endorsing the contents. Wikipedia does not vouch for people or sources, it only presents all the information to the readers. Please, read what you write before posting, so that it makes sense when others read it. DaveFarn (talk) 04:28, 31 March 2024 (UTC)

@Anachronist, @S0091: I think that after this discussion, where all interested parties could present their arguments, we should remove this unpublished preprint that falsely claims, among other things, that: Guswen (talk) 19:52, 23 March 2024 (UTC)
 * AT "is a formulation that mirrors the working of previous coding algorithms" (p. 3) - it isn't;
 * "LZ77/LZ78 is at the core of AT up to a constant" (p. 3) - LZ77/LZ78 is not at the core of AT;
 * "What AT amounts to is a re-purposing of some elementary algorithms in computer science" (p. 4) - AT does not re-purpose any elementary algorithms;
 * "AT is a very special narrow and weak case [of Huffman coding algorithm]" (p. 15) - AT is not any narrow case of Huffman coding;
 * "AT intended but failed to implement [...] namely, to count the minimum of expected steps" (p. 16) - AT does count the minimum of steps (not "expected" but factual) on a minimal path producing the object; this number is in AT called the 'assembly index' (MA);
 * "based on the principle of ‘counting repetitions’ as AT intended" (p. 17) - AT is not (or not only) about counting repetitions (copies), but about counting steps on a minimal path producing the object;
 * "the claims regarding the capabilities of AT to characterise life, redefine time, find extraterrestrial life and unify biology and physics, are shown to be unfounded or exaggerated" (p. 18) - they were not, as Nature 2023 publication shows;
 * AT uses "mathematical and computational methods based on decades-old coding schemes" (p. 19) - that's not true;
 * "MA is most correlated to 1D-Huffman coding" (p. 5) - MA is unrelated to Huffman coding;
 * "The proposed MA falls into the category of entropy encoding measures and is indistinguishable from an implementation motivated by algorithmic complexity using methods such as LZW and cognates, except for perhaps meaningless variations" (p. 14) - this is not true (cf. the discussion above);
 * "Our results also show that MA, and its generalisation in AT, is prone to false positives and fails both in theory and in practice to capture the notion of high-level complexity" (p. 17) - there is no "generalisation" of MA in AT; there is only one definition of MA, the same since the onset of AT in 2017, and it is not prone to "false positives";
 * etc.


 * Pinging @XOR'easter as well as they have expressed an opinion. I have no interest in this article so no need to ping me in the future.  S0091 (talk) 20:00, 23 March 2024 (UTC)
 * I have very little interest in this article, but the above list of bullet points looks like a Gish gallop aiming to delegitimize a paper by assertion: essentially, claiming that it is wrong because it is wrong. In most cases, I would advise against citing an arXiv preprint, but this is one of the rare examples where such a citation could be warranted, seeing that the preprint in question has been cited in the peer-reviewed literature. XOR'easter (talk) 20:19, 23 March 2024 (UTC)
 * It's not a Gish gallop, but an incomplete list of the errors and omissions in this preprint, which was not cited in peer-reviewed research but in some equation-less commentary. "A Flurry of Hype"? What is this? A conjecture? A lemma?
 * Guswen (talk) 01:22, 24 March 2024 (UTC)
 * The arXiv paper and blog that you hate so much have been cited by the Journal of Molecular Evolution, whose publisher is Springer Nature (https://link.springer.com/article/10.1007/s00239-024-10163-2), the same as the Nature journal that you seem to hold so high as to take everything the authors of assembly theory say as gospel. The same arXiv paper and blog have also been cited by the Journal of the Royal Society Interface (https://royalsocietypublishing.org/doi/10.1098/rsif.2023.0632), which, again, you seem to hate when it criticizes assembly theory but praise when it publishes something about assembly theory, per your comment above listing the journal of the Royal Society. You cannot have it both ways. The truth is you are clearly not an expert in complexity science or information theory, but even if you were, Wikipedia is not the place to make judgement calls based on your own scientific beliefs. Those belong to the experts, and the authors of both the arXiv paper under revision and the blog can be traced back as experts in those areas, probably even more so than the authors of assembly theory, judging by the number of papers they have published in peer-reviewed journals in those areas alone. I grant that the Glasgow group may know their stuff when it comes to chemistry, hopefully, but you have to grant the same about complexity and information for the Cambridge/King's groups that you seem to want to silence because you don't like them. Finally, it is false that blogs or arXiv papers cannot be cited on Wikipedia; you have been given the guidelines before, and they precisely cover the case of academics: https://en.wikipedia.org/wiki/Wikipedia:Blogs_as_sources DaveFarn (talk) 01:16, 29 March 2024 (UTC)


 * I do not "hate" these preprints and blogs. Most claims they make are simply false. If you insist, the few statements in these blogs that are plausible could be cited per WP:RSOPINION
 * And please refrain from word salads and ad hominem excursions; stick to the facts published in renowned scientific journals over the past decade.
 * (As for the rest cf. WP:NOCHINESEWHISPERS). Guswen (talk) 11:19, 29 March 2024 (UTC)
 * And they are false because? Options:
 * 1. You are an expert in the field
 * 2. You have evidence they are false
 * 3. You cite the original author, who has a COI, saying in a video that they are false.
 * We know it is not 1 or 2; it is 3, which makes you fall into a contradiction with your Chinese whispers argument. DaveFarn (talk) 04:32, 31 March 2024 (UTC)


 * To cut the story short, these statements are false because they were not published in any self-respecting (aka renowned) journal. And they will not be published in any self-respecting (aka renowned) journal because they are false. Guswen (talk) 20:02, 2 April 2024 (UTC)
 * Now you have come to the point of absurdity. That is a circular fallacy, and if you are to prove they are false, then show it without falsely appealing to prestigious journals (which, by the way, have conflicts of interest). JulioISalazarG (talk) 20:28, 2 April 2024 (UTC)


 * It is not a "circular fallacy". It is (almost) an equivalence.
 * (Almost, since certain true statements nevertheless remain unpublished in renowned journals. But this is (at least partially) the aftermath of Gödel's incompleteness theorems; any consistent axiomatic system contains true statements that are unprovable.)
 * (At least partially, since certain true and proven statements nevertheless remain unpublished in renowned journals until they gain sufficient public attention.)
 * What do you mean by "journals with conflict of interest"? That any scientific journal cannot publish scientific advances because it has a conflict of interest in publishing them? Would you please elaborate on this "argument"? Guswen (talk) 08:55, 3 April 2024 (UTC)
 * Another fact is that Wikipedia allows citations to arXiv and preprints from recognized scholars. Wikipedia is full of these citations. For example: https://en.wikipedia.org/wiki/Latent_space DaveFarn (talk) 17:00, 12 April 2024 (UTC)


 * Are you serious? After all the reasons many people have presented to you? You're deleting it only because you don't like it and you want Assembly Theory to remain unchallenged? There are more people here who agreed to have the critical views section than those who don't; and, mostly, the ones who don't want the section are most likely people somehow related to the authors of Assembly Theory.
 * I want only objectivity. This is completely unhelpful: (1) you held fallacious arguments, like the one stating the paper is not published (Wikipedia relies on much unpublished material), or the ones questioning the critics' credentials; and (2) your technical arguments for not having the critical views section are insufficient and show a lack of understanding of the most fundamental concepts, like Shannon entropy and Huffman coding. JulioISalazarG (talk) 20:50, 23 March 2024 (UTC)
 * If everything sourced to a blog should be removed, then so should everything sourced to a press release or to the inventors' own website. Reliance on press releases is particularly bad, since they are both light on detail and heavy on hype. A blog post by a subject-matter expert can at least provide expertise, but a press release provides only hot air. Likewise, if criticisms presented (so far) in preprints must be removed, then so should anything supported only by Jirasek et al. (2023), which only exists in preprint form. XOR'easter (talk) 18:49, 26 March 2024 (UTC)


 * I agree.
 * Therefore, if you insist, let us remove Jirasek et al. (2023), which supports the nuclear magnetic resonance and infrared spectroscopy experimental confirmation of assembly theory in the Background section, as it only exists in preprint form at this moment. However, as far as I know, this research is close to acceptance in ACS Central Science.
 * Mostly, a blog post by a subject-matter expert provides only this expert's hidden agenda, not expertise. If this expert has something scientific to say, let them publish it.
 * Guswen (talk) 21:14, 26 March 2024 (UTC)
 * That's not how science communication has worked for years. And the idea that a blog post must serve a "hidden agenda" is pure speculation. One might as well say that a published paper serves a "hidden agenda" whenever it disagrees with some other published paper. XOR'easter (talk) 22:43, 26 March 2024 (UTC)


 * "Hidden agenda", which is substantiated with peer-reviewed reasoning that anyone can verify for themselves, is no longer a hidden agenda.
 * Some blogs are blatantly false. Since "people work at the level of conclusions, not at the level of arguments" (J. Tischner), people do not verify the false arguments of these blogs but jump straight to (false) conclusions, particularly if there is some "authority" behind these false conclusions. Guswen (talk) 01:07, 27 March 2024 (UTC)
 * I mean that anyone with a sufficiently conspiratorial mindset can "find" a hidden agenda in a published paper and then claim that it was so well hidden that peer review did not stop it. In other words, it's always possible to speculate about a hidden agenda and then assert that because such an agenda would invalidate the results, the paper can't be trusted. Making up a "hidden agenda" to devalue a blog post is the same thing, perhaps even sillier, since blog posts are an author's own words presented without oversight, whereas the formal style demanded of journal articles imposes a kind of fake politeness. XOR'easter (talk) 15:46, 27 March 2024 (UTC)
 * I really don't know what you are talking about. I'm far from having a "conspiratorial mindset", and this discussion contains plenty of arguments showing that the novelty of the experimentally confirmed assembly theory stems from the fact that the same object can be constructed along various pathways yielding the same assembly index, so that AT captures a "degree of causation", not causality as LZ77, Huffman coding, etc. do. Guswen (talk) 09:01, 28 March 2024 (UTC)
 * I agree with XOR'easter. Also, the argument you keep repeating like a parrot has been debunked. I already took the time to explain it to you: a computer program or object representation of a given size in bits can be produced by many different programs; the number of bits given by Shannon entropy for an object is not unique to one encoding. What you think is unique to the assembly index is not. If you like equations, please check Abrahao et al.'s recent paper with its 10-page mathematical proof that assembly theory is just Shannon entropy in disguise. Refute that and publish a paper; I promise we will support adding it to this Wikipedia article, even if it is not published, if it looks like you have any credentials and can show that you can disprove a mathematical proof. DaveFarn (talk) 01:21, 29 March 2024 (UTC)
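 * For reference, since Shannon entropy keeps being invoked on both sides of this thread: the empirical per-symbol Shannon entropy of a string is just H = -Σ pᵢ log₂ pᵢ over its observed symbol frequencies. The toy computation below is my own illustrative code, not the methodology of any paper discussed here:

```python
from collections import Counter
from math import log2

def empirical_entropy_bits(s):
    """Per-symbol empirical Shannon entropy of a string:
    H = -sum(p_i * log2(p_i)) over observed character frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

 * Note that "AABB" and "ABAB" both evaluate to 1.0 bit/symbol: a single summary number never identifies the object it summarizes, which is the sense in which neither an entropy value nor an assembly index can be "decoded" back into an object.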


 * Please read this talk page carefully again.
 * I have already explained that what Abrahao et al. 2024 claim to have proved is that the assembly index for strings "in the optimal case for ergodic and stationary stochastic processes" can only converge towards LZ compression and thus is bounded by Shannon Entropy by the (noiseless) Source Coding Theorem (cf. p. 24, 3rd paragraph).
 * There cannot be a general proof that assembly theory is equivalent to LZ compression. It follows from the mere construction and axioms of this EXPERIMENTALLY CONFIRMED theory that has been developed for almost 10 years by a number of renowned universities and published in renowned journals. Guswen (talk) 11:19, 29 March 2024 (UTC)
 * The only 'EXPERIMENTAL EVIDENCE' (using your caps) was that organic versus non-organic cutoff, which was just destroyed by the Royal Society Interface paper (https://royalsocietypublishing.org/doi/full/10.1098/rsif.2023.0632?af=R) that you keep deleting because you don't like it. No other 'experimental evidence', only speculations. DaveFarn (talk) 00:35, 30 March 2024 (UTC)
 * Oh, no. The experimental evidence for assembly theory is solid. You dare to question the credibility of Nature (journal) (with an ascribed impact factor of 64.8, making it one of the world's most-read and most prestigious academic journals) based on some blogs, vlogs, ad hominem attacks, never-to-be-published preprints, press releases, etc. "For molecules, the assembly index can be determined experimentally".
 * The Royal Society Interface paper only questions the "15 threshold". But this issue has already been addressed so many times here... Guswen (talk) 10:37, 30 March 2024 (UTC)
 * No matter how many times the '15 threshold' is 'addressed' by you here (and by no one else... which shows how you like to mislead and try to fool people). You do not have authority over this article, and you are a patent lawyer out of your depth to judge the work of current academics finding assembly theory in error (their words) in a journal of the Royal Society that you keep deleting. So it is you who likes certain journals when they are in favor of assembly theory but not other journals when they are not. The Nature paper has zero experimental evidence, but it is not me deleting the Nature paper; it is you deleting the other sources, including peer-reviewed publications. So, please, stop the nonsense and stop vandalizing this page. Do not try to reverse things and gaslight people. I am not deleting the Nature paper; it is you who is deleting and silencing critical views, with a huge COI because, by your own admission, you submitted a paper on assembly theory to the journal Mathematics, which has not accepted your paper, and you don't want reviewers to find the opinions of divergent voices not aligned with your views on assembly theory. Therefore, because of your COI, STOP. DaveFarn (talk) 04:38, 31 March 2024 (UTC)
 * I do not have any WP:COI. I am only "protecting an article about the experimentally confirmed theory that has been developed for almost 10 years by a number of renowned Universities (including University of Glasgow and Arizona State University), and published in renowned journals (including Nature (journal), Nature Communications, Philosophical Transactions of the Royal Society, and Science Advances) against vandalism. I only cite the research results that can be found in these renowned publications." Guswen (talk) 20:02, 2 April 2024 (UTC)
 * You do have a COI, given you have a paper on assembly theory submitted to the journal Mathematics (which is unlikely to be accepted, as you have a very long history of rejected papers), and you have been found with sock puppets all over Wikipedia, always trying to insert your views and research for your personal gain. You are most likely also the sock puppets vandalizing this page, removing the critical views altogether, on top of shaping the critical views to your limited understanding as a patent lawyer rather than that of the scholars expert in the field who are writing multiple opinions and papers against assembly theory, which Wikipedia does say it can cover given their status. DaveFarn (talk) 00:09, 11 April 2024 (UTC)


 * The continued attempt to disparage and remove the JME paper as a "copy of a blog post" is way off the mark. It's an edited version of something that was first posted at a blog. It was published as a commentary, meaning that it underwent peer review. XOR'easter (talk) 01:26, 28 March 2024 (UTC)
 * Agreed. Some people here want the article to read as if there are no criticisms. They are even removing or belittling peer-reviewed publications, or publications under review that are cited by peer-reviewed publications. Even the criticism section they want to read positively! DaveFarn (talk) 01:22, 29 March 2024 (UTC)

I have gone through the critical views section again and reduced it to what is published, and kept the radically unprofessional blog from Zenil, which will serve to remind us that the vandalism of this wiki page is an attempt at misinformation by a determined set of anti-assembly-theory people who have unscientific agendas. — Preceding unsigned comment added by 185.31.155.153 (talk) 05:04, 15 April 2024 (UTC)

COI and multiple sock puppets vandalizing this article
Just to report that Guswen is Szymon Łukaszyk, who has been found sockpuppeting or meatpuppeting on Wikipedia before in order to promote his work. Guswen's previous socks are detailed in WP:Sockpuppet investigations/Guswen/Archive. He is also the co-author of a preprint on assembly theory. He is a patent attorney whose 'scholar' profile at https://scholar.google.com/citations?user=FH3ExXUAAAAJ&hl=en lists papers he is not an author of; those that are his range over topics from 'negative and imaginary dimensions' to assembly theory to 'quantum law'... Apparently, someone whose only good judgment is to put assembly theory at the same level as the crackpot theories he explores, as described on his patent attorney website https://patent.pl/index.php?tm=1&srvid=0&attid=szymon_lukaszyk&lg=en DaveFarn — Preceding undated comment added 00:16, 30 March 2024 (UTC)

Extended-confirmed-protected edit request on 21 May 2024
There’s a “citation needed” request in the fourth paragraph of the Background section of this article after “…can be taken from the assembly pool.” Kindly provide the missing citation:

to improve the condition of this article. 2A00:F41:2821:C000:F594:F944:65B6:2DC9 (talk) 10:12, 21 May 2024 (UTC)
 * ✅ [[User:CanonNi]] (talk • contribs) 04:10, 1 June 2024 (UTC)
 * @CanonNi, this request was likely made by a sock/meat puppet seeking to promote their article. See above talk. --Classicwiki (talk) If you reply here, please ping me. 08:53, 1 June 2024 (UTC)
 * @Classicwiki Thanks for pointing that out, I've self-reverted. [[User:CanonNi]] (talk • contribs) 09:01, 1 June 2024 (UTC)