Talk:Regulatory feedback network

Problems
This article by User:Achler about a topic sourced only to recent primary sources by author Achler is not likely to pass WP:NOTE. At best, a brief mention of RFNs in another article may be in order, but even there the decision should be left to an editor not in a WP:COI situation. Dicklyon (talk) 08:23, 14 December 2009 (UTC)


 * I wrote this because the Wikipedia format is a good way to explain a complicated method that is difficult to explain in a paper. I understand the concern about self-citing and I think that it should be clearly indicated.  However, deleting seems counterproductive and harsh.  Similar articles can be found that don’t cite at all (e.g. Normalization_model).


 * Hopefully this problem is now adequately addressed. I couldn't figure out how to put back the milder warning that indicated this article self-cites.  --Achler (talk) 19:55, 21 December 2009 (UTC)


 * The key problem is notability (see WP:NOTE). If you can show us sources (not your primary sources) about the concept, we can consider keeping an article about it.  Given such sources, other editors can do much of the writing, or at least check what you're doing.  Dicklyon (talk) 06:45, 22 December 2009 (UTC)


 * There are two secondary sources cited. Do you want their pdfs? --Achler (talk) 17:18, 22 December 2009 (UTC)


 * That might be useful. They're not secondary with respect to your more recent primary sources, so I'd have to see what they have to say on the topic. The first one, here, doesn't seem to have any words similar to "regulatory feedback network."  The other I can't find for free online, but I see that you reference it offhandedly for the stability of some equations, and a search suggests that it also doesn't talk about "regulatory feedback networks."  So I don't see any secondary sources about the topic. Dicklyon (talk) 08:28, 23 December 2009 (UTC)


 * The secondary references refer to “regulatory feedback networks” with connection weights as “Virtual Lateral Inhibition”. The name Virtual Lateral Inhibition is not ideal because: 1) not all configurations of “Virtual Lateral Inhibition” will appear as Lateral Inhibition; 2) Virtual Lateral Inhibition does not describe the underlying mechanism.  The name “regulatory feedback” does a better job explaining the mechanism.  I also included "virtual lateral inhibition" in the text of the article.  I can also make a redirect link for virtual lateral inhibition to this page.  --Achler (talk) 18:57, 24 December 2009 (UTC)


 * Yes, I see that the concept of "virtual lateral inhibition" is widely referenced and discussed, so is notable enough for an article. In spite of your good reasons to want to rename it, that would not be appropriate here.  We should move the article and write it from the broader range of sources found via that name.  Dicklyon (talk) 21:02, 24 December 2009 (UTC)


 * I do not see why one page can’t reference the other. So I am understanding from your feedback that even though I wrote this article, worked on this for over 15 years, provide refereed citations, have a doctorate degree in this, gave detailed feedback and support on this subject,  I am a piece of garbage and my work or contribution does not matter. --Achler (talk) 23:24, 24 December 2009 (UTC)


 * Not at all. What I'm saying is that you should take the time to learn about Wikipedia policies and try to contribute within the way this thing works.  See WP:PILLARS, WP:V, WP:RS, WP:NPOV and such.  A neutral and verifiable approach to this topic should respect what the topic is mainly called in the literature.  You can add some material from your own recent primary sources, but they can't dominate the topic; see WP:COI.  Then see WP:NOTE; if "regulatory feedback network" is a notable topic separate from "virtual lateral inhibition", show us the secondary sources with significant coverage of the topic, and we can have an article on it; if you can't show those, we can't.  It's not about you, but about your contributions (see WP:NPA).  Dicklyon (talk) 04:31, 25 December 2009 (UTC)

Lead section
The lead section is clearly missing some context. It doesn't address what area of study the article is about, nor what the word "network" means in this context (neural networks?), making it a very confusing read to anyone unfamiliar with the subject. Does the model apply to real-world psychology or just AI? -- intgr [talk] 11:17, 22 December 2009 (UTC)


 * During recognition, feedback dynamically determines each input’s relevance – its salience. This contributes to the model’s inherent human-like search performance.  Human-like phenomena include: a) target-distractor difficulty effects based on similarity;  b) pop-out, faster processing for targets with novel features; and c) numerosity (Achler, Vural, & Amir, 2009), an ability to estimate the number of patterns in a scene without individually counting each one.
 * For example, you can see from the salience $$s_i$$ equation that if an input is unique (not utilized by many outputs), then $$\sum_{r\in{FB_i}}y_r(t)$$ will be small and the input is more likely to have larger salience and 'pop-out'.
 * Some of these findings are not adequately published yet so I did not include a discussion of them in the page. --Achler (talk) 17:13, 22 December 2009 (UTC)
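 * The salience idea sketched above can be illustrated with a toy example. This is a hypothetical sketch, not the published model: it assumes salience $$s_i$$ is an input's activity $$x_i$$ divided by the summed activity $$\sum_{r\in{FB_i}}y_r(t)$$ of the output cells that use (feed back to) that input, so a unique input gets a larger salience and 'pops out'. The function name `salience` and the variable names are illustrative only.

```python
def salience(x, y, feedback_sets):
    """Toy salience computation (illustrative assumption, not the
    published equations).

    x:             list of input activities x_i
    y:             list of output activities y_r(t)
    feedback_sets: feedback_sets[i] lists the indices of the outputs
                   that use input i (the set FB_i)
    """
    s = []
    for i, xi in enumerate(x):
        # Summed feedback from all outputs that use this input.
        fb = sum(y[r] for r in feedback_sets[i])
        # Guard against division by zero for unused inputs.
        s.append(xi / max(fb, 1e-9))
    return s

# Input 0 is unique (used by one output); input 1 is shared by three
# outputs. With equal input and output activities, the unique input's
# feedback sum is smaller, so its salience is larger: it 'pops out'.
x = [1.0, 1.0]
y = [0.5, 0.5, 0.5]
fb = [[0], [0, 1, 2]]
s = salience(x, y, fb)
```

Here `s[0]` is 1.0/0.5 = 2.0 while `s[1]` is 1.0/1.5 ≈ 0.67, matching the verbal account: the less an input is utilized by many outputs, the larger its salience.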