Wikipedia:Articles for deletion/Causal Neural Paradox (Thought Curvature)


 * The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review).  No further edits should be made to this page.

The result was delete. Very clear and actionable consensus to delete. This could have been closed earlier under the CSD criteria, and certainly the lengthy comments with excessive white space didn't do anything to help facilitate any sort of debate to keep the article. KaisaL (talk) 00:07, 5 July 2016 (UTC)

Causal Neural Paradox (Thought Curvature)

 * – ( View AfD View log  Stats )

This draft has multiple end references but no in-line references and appears to have two problems. First, it appears to be original research. Second, it is incomprehensible. (It isn't entirely syntactic English, but parts of it that are syntactically valid have no recognizable semantic content.) Robert McClenon (talk) 23:18, 27 June 2016 (UTC)
 * Note: This debate has been included in the list of Science-related deletion discussions. Shawn in Montreal (talk) 00:06, 28 June 2016 (UTC)


 * Delete. Unsourced gobbledegook. Xxanthippe (talk) 00:10, 28 June 2016 (UTC).
 * Delete. This is a cut and paste by the original editor from his draft at Draft:Causal Neural Paradox (Thought Curvature), AfC template and all. This copy should be deleted and the article reviewed there. StarryGrandma (talk) 00:25, 28 June 2016 (UTC)
 * Delete This article is buzzword compliant, but doesn't make any sense from the mathematical, computer science or neuroscience angles. It reminds me of a typical result of a stochastic grammar computer science paper generator; local phrases are OK, but there is no global meaning. A search revealed no secondary independent sources for this topic. Hence it fails notability guidelines and should be deleted. --Mark viking (talk) 00:31, 28 June 2016 (UTC)
 * Speedy delete per G1. Utter gibberish that's lowered my intelligence just reading it. Clarityfiend (talk) 03:02, 28 June 2016 (UTC)
 * If delete, then also delete Draft:Causal Neural Paradox (Thought Curvature). Anthony Appleyard (talk) 04:30, 28 June 2016 (UTC)
 * Agreed. Xxanthippe (talk) 06:07, 28 June 2016 (UTC).


 * Comment I have also tagged the draft for deletion via MFD. Robert McClenon (talk) 13:15, 28 June 2016 (UTC)
 * Delete. Utter gibberish that at Draft appears to has but the original reading. Note: This are semantic content sources but doesn't make article reference it fails no global editor; local phrases artic English, but that are syntactic Englist of a stochastic condary independent sources for this is no sense from there is article reviewed there. This copy should be delete. (Yes, I did put this AfD through a Markov chain text generator. Which I assume is also where the article text came from.) Opabinia regalis (talk) 04:41, 28 June 2016 (UTC)
 * Speedy delete G1/G3. WP:HOAX that reads as if it was automatically generated by a random text generation algorithm such as mathgen.   Sławomir Biały  (talk) 11:49, 28 June 2016 (UTC)
 * Promote to featured article, put on the main page, and see if anyone notices Delete. Newyorkbrad (talk) 16:09, 28 June 2016 (UTC)
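(Editorial aside: the Markov chain text generator Opabinia regalis mentions takes remarkably little machinery. A minimal word-level sketch, with illustrative function names not drawn from any tool named in this discussion:)

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    word = rng.choice(list(chain.keys()))
    output = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            # dead end: restart from a random word
            word = rng.choice(list(chain.keys()))
        else:
            word = rng.choice(successors)
        output.append(word)
    return " ".join(output)
```

Each adjacent pair of output words occurred somewhere in the source, so local phrases look fine, but the walk has no plan, which is exactly the "local phrases are OK, but there is no global meaning" symptom Mark viking describes above.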

 * Please consider

I claim not, of omniscience. However, this piece's construct consists strictly of verifiable parts, that confluence to engender naive 'novel' lemma. Please delete draft. I hadn't realized of draft + non draft space submissibleness. JordanMicahBennett


 * Your comment contains as much sense as the article itself. Xxanthippe (talk) 02:35, 29 June 2016 (UTC).

@Xxanthippe What is your area of expertise? JordanMicahBennett 09:54, 28 June 2016 (EST).

 * Please absorb

@Xxanthippe In reducing thought curvature, I had absorbed https://www.academia.edu/13808654/Trigonometric_rule_collapser_set. Please absorb. JordanMicahBennett 10:14, 28 June 2016 (EST).
 * Delete. It is original research and fails to meet our notability guideline. I love how the editor demonstrates his non-use of computer aids for generating content in this AfD. DeVerm (talk) 04:53, 29 June 2016 (UTC)


 * Speedy delete WP:G1 as per #2 of WP:PN - agree with others that it reads like computer-generated stuff, though it may just be that the creator is very very bad at English as the comment above attests. G3 does not really apply: an article void of extractable information cannot misinform readers. Tigraan Click here to contact me 11:06, 29 June 2016 (UTC)
 * I disagree that G3 does not apply. Using a machine to generate a fake article definitely qualifies as a hoax, even if it is nonsensical.   Sławomir Biały  (talk) 11:23, 29 June 2016 (UTC)
 * G3: pages that are blatant and obvious misinformation, blatant hoaxes (including images intended to misinform) (...) - "misinformation" is "false or inaccurate information, especially that which is deliberately intended to deceive". I believe an absence of readable information is neither "false" nor "inaccurate" - it is not even wrong.
 * Actually I think G1 and G3 are mutually incompatible. (But I guess none cares. This article will get down the drain under either.) Tigraan Click here to contact me 15:29, 29 June 2016 (UTC)
 * At least one famous hoax was meaninglessness masquerading as meaning. Not every hoax is a jackalope.  Sławomir Biały  (talk) 15:58, 29 June 2016 (UTC)


 * Comment - I think that we should assume good faith when the author says it has meaning. If he thinks that it has meaning, it isn't a hoax, but it is patent nonsense.  Since the author thinks it has meaning, let's let the AFD run its course rather than have an administrator unilaterally decide on WP:G1 let alone WP:G3.  Robert McClenon (talk) 15:43, 29 June 2016 (UTC)
 * Has the author said it has meaning? He mentions "submissibleness", which is not a word.   Sławomir Biały  (talk) 16:01, 29 June 2016 (UTC)
 * I will assume good faith on user conduct, unclear messages and the like. But if someone says the Earth is flat, I will not pretend to take their opinion seriously on WP:AGF grounds; I will only pretend they are actually thinking so, and are not a disruptive troll. Similarly, I cannot "assume good faith" that the article is not patent nonsense - maybe the creator thinks it is well-written and meaningful, but by any objective measure it is not. I see no reason not to proceed with G1. Tigraan Click here to contact me 16:57, 29 June 2016 (UTC)
 * That is all that I meant by assume good faith, that the author takes it seriously. I do not consider the author to be a troll, which doesn't mean that the author should be taken seriously.  (If I thought it should be taken seriously, I wouldn't have nominated it.)  Robert McClenon (talk) 01:04, 1 July 2016 (UTC)
 * Incompetency so advanced as to be indistinguishable from trolling... does not really matter if we AGF or not. Was Time cube a troll?  Really, it does not matter.  AGF is meant to mitigate against good faith disagreements, not over-the-moon nonsense.  This deletion should have been speedied, and the author blocked for trolling.  If he is able to contribute positively to the project, presumably he is also able to formulate a coherent unblock request.  I think we seriously need to do some soul-searching if we think it's worth wasting time on contributors like this.  WP:DFTT.  Block 'em and move on.   Sławomir Biały  (talk) 01:23, 1 July 2016 (UTC)


 * see Competence is required. Xxanthippe (talk) 03:20, 1 July 2016 (UTC).
 * Do we still seriously believe that this guy is not trolling?? Sławomir Biały (talk) 02:54, 2 July 2016 (UTC)
 * I fear that the guy is not trolling but is sincere. That is why I invoked Competence is required. Xxanthippe (talk) 03:34, 2 July 2016 (UTC).

 * Quaint query

What is the depth of machine learning study amongst executive editors? For whom this applies: http://machinelearningmastery.com/4-steps-to-get-started-in-machine-learning/

Post scriptum: I profoundly apologize par usage of term 'submissible-ness', in this deletion discussion, prior. Satirical post scriptum: I steeply apologize par usage of apparently crude language. Albeit, profound language assimilation engenders of non narrow comprehension potential. 1:15, 29 June 2016 (EST). User:JordanMicahBennett
 * Comment to the author - Do not disrupt this AFD by redirecting the AFD template on the article. The redirection has been reverted to allow discussion to continue.  Robert McClenon (talk) 01:04, 1 July 2016 (UTC)

 * Template redirect?

@Robert McClenon I haven't a concept for how template redirecting transpires. Albeit, I apologize. 10:08, 30 June 2016 (EST). User:JordanMicahBennett — Preceding unsigned comment added by 184.170.15.229 (talk)

 * Thought curvature theory qualms

This theorem is really simple.

**Break it down** It proposes a naive mathematical fabric for the creation of causal reinforcement learning. We know reinforcement learning is viable from early research, where dopamine signals code for error. Furthermore, we derive that some causal basis must persist, upon which perceptual learning abstractions form as child matures. Combining these results mathematically, proposes the causal basis, to ensue as learnt distinct basis units (manifolds of non-quantum properties). That basis then produces coordinates in a higher manifold of interactions between these. A supermanifold, then utilizes almost standard temporal difference learning routine, to choose basis units, based on the interaction paradigm.

**An empirical causal reinforcement learner** There is no empirical proof (a running sample, though the constituent components are verifiable in polynomial time :)) of this 'novel' fabric, simply as I am a software engineering fellow, who does body building on the side. (not much financing thus far)   The most powerful machine I could purchase (an i7 6700, 32 gig ram, 2 terabyte hdd, gtx 960), cannot reduce this fabric's computational requirement.    As such, please allow this theorem to persist, hereafter.

**Perhaps wrong** I am human, and so this theorem may contain certain errors. Albeit I garner from logical reasoning, and from mathematically tabulating certain work done by many actual PHD's, that this theorem is of sound description/rigour. — Preceding unsigned comment added by 184.170.15.229 (talk) 03:36, 1 July 2016 (UTC)

10:39, 30 June 2016 (EST). User:JordanMicahBennett
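(Editorial aside: whatever the surrounding prose, the temporal-difference learning and dopamine-as-error-signal ideas invoked above are themselves well defined. A minimal tabular TD(0) value update, with illustrative variable names:)

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) step: nudge V[state] toward the bootstrapped target.

    The TD error (reward + gamma * V[next_state] - V[state]) is the
    quantity that dopamine neurons are thought to encode as a
    reward-prediction error.
    """
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return td_error
```

Repeated updates shrink the TD error, moving the value estimate toward the expected return; this much of the article's vocabulary corresponds to standard reinforcement learning.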

 * Thus far, I thank you

Thanks to all for support thus far, whether positive or negative. (Thanks user:Sławomir Biały and User:Robert_McClenon) I admit, the script does seem computer generated. Ironically, the universe might just be simulatory (james gates' ) and so, we may all well be machines, sputtering strings. 10:48, 30 June 2016 (EST). User:JordanMicahBennett

SNOWBALL DELETE Staszek Lem (talk) 20:28, 1 July 2016 (UTC)

 * Please don't delete

The math is justifiable. Super manifolds, manifolds, and temporal difference learning paradigm combine to form a schematic for a causal reinforcement learner. If any high schooler pays attention in deep q learning class, he/she may observe that these learners don't typically use pooling. If said high schooler likewise observes state of the art in causal physics learning, (uetorch) he/she sees that it uses pooling. If the high schooler isn't stoned, he/she will soon recognize that these things appear to be opposites at first glance; pooling vs non pooling frameworks, though it may be sensible to combine these, because: 1)humans learn by reinforcement (dopamine signals encode error) 2)humans have some in built intuition of causal laws of physics at childhood (mama I know where that tower block will fall), particularly as new abstractions develop in the brain as child matures.

However, if the high schooler is stoned, he/she may instead utter delete the lesson sir or delete the lesson miss, not realizing that the lesson, embedded super manifolds as some higher order fabric, that mutated in sub manifold terms, particularly based on temporal difference plane.

However, the tutor can only attempt to review the lesson (though the lesson is not perfect, but begins to approach the combination of cause and temporal difference learning horizon) only after the high schooler has destoned him/her self.

JordanMicahBennett July 1, 2016, 9:55 EST
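(Editorial aside: the one recoverable technical claim in the comment above, that DQN-style game agents avoid pooling while physics-prediction models such as uetorch use it, at least names a real operation. A minimal 2×2 max-pool in plain Python, no framework assumed, shows why pooling discards position:)

```python
def max_pool_2x2(grid):
    """Downsample a 2D grid by taking the max over each 2x2 block."""
    rows, cols = len(grid), len(grid[0])
    return [
        [
            max(grid[r][c], grid[r][c + 1],
                grid[r + 1][c], grid[r + 1][c + 1])
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]
```

Within each block the max is the same wherever the large activation sits, so exact position is lost. An agent that must act on precise screen coordinates tends to keep full spatial resolution, while a model asking a translation-invariant question ("will the tower fall?") can afford the pooling.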

 * Crude language

Firstly, this 'new' fabric is only mathematically traceable (as far as I have logically derived) in manifolds, super manifolds, and deep q learning (where dqn is listed separately, though it has been shown to involve manifolds), all of which have been linked appropriately. As I (and or any non drunk individual) enhance the theorem, probably, more links shall intromit.

 *  'Elegance' 

I admit, the language use is 'elegant' (apparently non welcoming, though 'strangely' of sound grammar...hmmm) However, such is simply how my mind chooses to reduce information. (Though I have managed to condense my thought cycles to more typical expression styles in this deletion discussion segment) If I had encountered this article absent somewhat deep research, I would probably holler bs. Here is a short intro to dqn (to whom such applies): https://www.quora.com/Artificial-Intelligence-What-is-an-intuitive-explanation-of-how-deep-Q-networks-DQN-work/answer/Jordan-Bennett-9 I admit, the dqn overview above is my own, written a few months prior. I have found very nice dqn guides, but none is as clear as the above. The causal physics learner stuff is included in thought curvature article, so is official state of the art dqn stuff.

JordanMicahBennett July 1, 2016, 10:20 EST
 * Speedy delete as nonsense/hoax. FreeKnowledgeCreator (talk) 10:11, 3 July 2016 (UTC)

 * Worry not

@FreeKnowledgeCreator Worry not, as the deciding has been requested to be cast upon slightly more competent administrators, via user Xxanthippe Ps: Please research dqn and uetorch, then read via Stahl et al. Easy up on the "ignorade"

JordanMicahBennett July 3, 2016, 6:58 EST
 * In addition to deleting the article, there would also be a strong case for blocking its creator for deliberate trolling. FreeKnowledgeCreator (talk) 20:43, 3 July 2016 (UTC)
 * or on the basis of WP:Competence is required. Xxanthippe (talk) 22:37, 3 July 2016 (UTC).

 * Disappointment in deletion suggestions (Arguments for Keeping)

I am quite disappointed. While few commenters display some Math/Physics know-how, none (except myself) exhibits machine learning experience. This is strongly ironic, displeasing, and incredibly worrying.

JordanMicahBennett July 3, 2016, 6:05 EST

 * Keep (Sited issues are inexistent)

This article is an orphan, as no other articles link to it. Please introduce links to this page from related articles; try the Find link tool for suggestions. (June 2016)

This work is of novel nature. The lemma described is first introduced herein.

This article needs more links to other articles to help integrate it into the encyclopedia. (July 2016)

A reiteration: Such fabric is solely mathematically traceable (as far as I have logically derived) in manifolds, super manifolds, and deep q learning (where the latter, is listed separately, though deep learning has been shown to encode manifolds), all of which have been linked appropriately. As I (and or any other) enhance the theorem, more links shall perhaps intromit. JordanMicahBennett July 3, 2016, 6:11 EST

 * Keep (Why Thought Curvature Theory is Notable)

I don't wish to come off as a know it all, as I am far far far (perhaps infinitely so) from omniscience. Indeed, thought curvature theorem is sub-perfect. The math described, is of-course, right, but extended parameters are considerable. Thought curvature theory is of note because this lemma introduces a naive hypothetical structure for causal reinforcement learning. Please help me in disseminating this naive, but robust, abstract geometrical concept.

I have been devoting quite a significant amount of energies into thought curvature's description/logic. I've went as far as neglecting software job in the past, just to pour time into this work. If one observe's my, one observes that a significant portion of my for example, concerns machine learning. The article's content is of sound rigour. To compose such, I had to grasp many topics, including quantum mechanics

We all (unless brain damaged) have billions of usable neurons. We must all attempt attempt to activate a great majority of them whilst making decisions. Albeit, you guys may have recognized by now that I am one of you. JordanMicahBennett July 3, 2016, 6:25 EST
 * Filling up the Afd page with nonsense comments, which seem intended to be as long as possible, is disruptive, and may be grounds for a block if you persist in such behavior. FreeKnowledgeCreator (talk) 04:06, 4 July 2016 (UTC)


 * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.