Talk:Types of artificial neural networks

One-shot associative memories don't require parallel processing
It is stated in the article that the one-shot associative memory type of network "however requires parallel processing". This cannot be correct, since any computation performed in parallel can also be serialized, unless some part of the computation must run at a speed that could not be attained without parallelization. That, however, is dubious, and I don't see what the sentence is actually trying to say. To me, the fact that the algorithm can be parallelized only seems like an advantage. —Kri (talk) 14:21, 4 October 2014 (UTC)


 * I see now that "it" was supposed to refer to real-time pattern recognition and high scalability, while my initial interpretation was that it referred to the one-shot associative memory itself. I have changed "it" to "this" and hope that resolves the ambiguity. —Kri (talk) 10:08, 14 October 2014 (UTC)
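The point above, that any computation performed in parallel can also be run serially with identical results, can be sketched with a toy example (this is purely illustrative; the `activation` function and inputs below are made up and are not from the article):

```python
from concurrent.futures import ThreadPoolExecutor

def activation(x):
    # Hypothetical per-"neuron" computation (a simple ReLU-like function)
    return max(0.0, 2 * x - 1)

inputs = [0.2, 0.6, 0.9, 0.1]

# Evaluate all units concurrently, as a parallel implementation would
with ThreadPoolExecutor() as pool:
    parallel_out = list(pool.map(activation, inputs))

# The same computation, serialized one unit at a time
serial_out = [activation(x) for x in inputs]

# The outputs are identical; only the wall-clock time can differ
assert parallel_out == serial_out
```

So parallelism here affects speed, not what can be computed, which is why "requires parallel processing" reads as odd unless a real-time constraint is meant.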

Section How RBF networks work copied from external page
In this edit, content was added that seems to have been copied from this external page. Did the author of that page give permission for the text to be copied into Wikipedia? Otherwise, this is a potential copyright violation. Do we need to handle this in any way? —Kri (talk) 10:48, 14 October 2014 (UTC)

Reconciliation with ANN
Artificial Neural Networks uses a different outline than this page. Neither article is properly sourced. I am removing the details from ANN so that we have a single source of truth. Would appreciate an expert's eye on the results. Generally, I am using this article's outline, while adding elements that are not in this article, but are in ANN. Lfstevens (talk) 16:51, 18 June 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Types of artificial neural networks. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20101218121158/http://herselfsai.com/2007/03/probabilistic-neural-networks.html to http://herselfsai.com/2007/03/probabilistic-neural-networks.html
 * Added archive https://web.archive.org/web/20120131053940/http://www.psi.toronto.edu/~vincent/research/presentations/PNN.pdf to http://www.psi.toronto.edu/~vincent/research/presentations/PNN.pdf

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 18:44, 14 September 2017 (UTC)

Closed-form continuous-time neural networks ?
I don't understand this well enough to add to the article, but just read about the concept here:

 * MIT solved a century-old differential equation to break 'liquid' AI's computational bottleneck - Engadget, Nov. 15, 2022
 * Closed-form continuous-time neural networks - Nature (Machine Intelligence), Nov. 15, 2022

Cheers! 98.155.8.5 (talk) 03:58, 19 November 2022 (UTC)