Draft: The functional unit of neural circuits

Content

The functional unit of neural circuits is a hierarchical neural structure that represents a specific object, which is typically identified by a particular word. This structure can be distinguished within the human nervous system; its higher levels are located in the cerebral cortex. Within this structure, the processes of perceiving the image of a given object, memorizing the pattern of this object, and retrieving its representation from memory occur.

This structure can be stimulated both by the receptors of the image and by neurons located in speech areas, which represent the word or linguistic phrase denoting the given object. Understanding the functional unit of neural circuits is key to comprehending more complex mental operations, such as imagining a desired situation and searching for a solution.

In addition to investigating the structure of this functional unit within the field of neuroscience, one may consider its counterpart composed of artificial model neurons. Familiarity with the concept of the functional unit of neural circuits is important for understanding the current progress of artificial intelligence as well as further advancements in this field, particularly in the pursuit of creating self-aware systems.

Neural substrate for perceptions and imagery
The visual perceptions and the imagery of visually perceived objects are realized on the basis of the same neural substrates. Joel Pearson and his coworkers state “that visual mental imagery is a depictive internal representation that functions like a weak form of perception”. Nadine Dijkstra and coworkers develop this idea, writing: “For decades, the extent to which visual imagery relies on the same neural mechanisms as visual perception has been a topic of debate”. These researchers review recent neuroimaging studies and conclude that “there is a large overlap in neural processing during perception and imagery”, that “neural representations of imagined and perceived stimuli are similar in the visual, parietal, and frontal cortex”, and that “perception and imagery seem to rely on similar top-down connectivity”. Rebecca Keogh and coworkers emphasize that “Visual imagery—the ability to ‘see with the mind’s eye’—is ubiquitous in daily life for many people; however, the strength and vividness with which people are able to imagine varies substantially from one individual to another”.

The graphic scheme of the functional unit of neuronal circuits


The structures essential for the realization of imagery are illustrated in Figure 1. This scheme presents the theoretical model of neural circuits that realize perceptions and enable the memorization of images and their recall from memory in the form of imagery. It also explains the difference between the perception of familiar and novel objects. The simplest example of recalling a mental image is stimulation coming from the speech area, which stores the verbal denotations (names) of familiar objects.

Cortical neurons have recurrent axons, which activate interneurons going to the lower levels. When excitation reaches the object neurons at subsequent moments, neurons from lower levels are stimulated secondarily. Thus, there are conditions for the circulation of impulses between neurons of higher and lower levels of afferent pathways. It has been experimentally confirmed that such oscillations occur.

Thus, if object neurons are excited from the speech area, impulses circulate between neurons of the higher and lower layers of the same hierarchical structure that was excited during perception of this object. These oscillations are the physical substrate of imagination. During complex mental processes like searching for solutions, the working memory structures instantiate the necessary, useful mental images. This maintenance of imaginal activity is accomplished by the cortico-hippocampal indexing loops.
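The circulation of impulses between a higher and a lower level can be illustrated with a toy discrete-time model. This is a hypothetical sketch, not a biophysical model: the unit names, decay factor, and step count are invented here purely for illustration of the alternating, oscillatory activity described above.

```python
# Toy illustration (not a biophysical model): two reciprocally connected
# units, a "higher-level" object neuron and a "lower-level" neuron, pass
# a pulse back and forth, yielding circulating (oscillatory) activity.
# All names and parameters are hypothetical.

def circulate(steps, initial_pulse=1.0, decay=0.9):
    """Return the activity of the (higher, lower) units at each discrete step."""
    higher, lower = initial_pulse, 0.0
    trace = []
    for _ in range(steps):
        # Each step, the pulse crosses to the other level, slightly attenuated.
        higher, lower = lower * decay, higher * decay
        trace.append((round(higher, 3), round(lower, 3)))
    return trace

trace = circulate(4)
# Activity alternates between the two levels: a simple oscillation.
```

The attenuation factor stands in for synaptic losses; in the model of Figure 1, sustained imagery would correspond to this loop being re-excited by working memory rather than decaying away.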

When we repeatedly perceive an image, the synaptic weights of the hierarchical structure active during such perceptions are altered, and a learning process takes place as we memorize the pattern of the learned object. If a familiar object is perceived, there is secondary activity of the cortico-hippocampal indexing loops; in this way, the structure of the known, recognized object is stimulated from two directions.

The functional unit of neuronal circuits has 'backpropagation' connections.
The development of artificial intelligence can be traced back to the 1940s, with significant advances occurring in the subsequent decades. In 1958, Frank Rosenblatt introduced the perceptron, a pioneering model for binary classification tasks. The perceptron was designed as a single-layer artificial neural network with adjustable weights that were updated through an iterative learning process. In the 1970s and 1980s, researchers developed multilayer perceptrons, which were capable of solving non-linear problems. A multilayer perceptron consists of multiple layers of interconnected neurons, with each layer performing a specific transformation on the input data. The backpropagation algorithm, which is fundamental to the training of artificial neural networks, was developed independently by multiple researchers in the following years. Among the key contributors to the backpropagation principle are David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams.
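The iterative learning process of Rosenblatt's perceptron can be sketched in a few lines. The following is an illustrative example only: the AND task, learning rate, and epoch count are arbitrary choices made here, not taken from the original work.

```python
# Minimal sketch of the perceptron learning rule (after Rosenblatt, 1958).
# Illustrative only: a single-layer unit trained on the linearly
# separable AND function; learning rate and epochs are arbitrary.

def perceptron_train(samples, epochs=20, lr=0.1):
    """Train weights [w1, w2] and bias b with the perceptron update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire (1) if the weighted sum crosses threshold.
            output = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            error = target - output
            # Shift each weight in the direction that reduces the error.
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Logical AND is linearly separable, so the rule converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
predictions = [1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0 for x, _ in data]
```

A single layer of this kind cannot solve non-linear problems such as XOR, which is precisely the limitation that motivated the multilayer perceptrons mentioned above.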

This principle is now widely used. Backpropagation is one of the key algorithms in machine learning, particularly in artificial neural networks. It consists of propagating the network's output errors backwards through the network in order to determine and correct the weights of the connections between neurons.

The learning process of a neural network begins with assigning random values to the weights of the connections between neurons. The network is then trained on the training set, and its output is compared with the expected result. This comparison yields the error value, i.e., the difference between the result obtained by the network and the expected result.

The principle of backpropagation then consists of propagating the error values of the network output backwards through each layer of the network, from the output to the input, in order to determine and correct the weights of the connections between neurons. The connection weights are modified based on the error value so that the next training iteration produces a result closer to the expected one. This process is repeated many times until satisfactory accuracy is achieved. Thanks to the principle of backpropagation, neural networks are able to learn from data samples, and the resulting modifications of connection weights allow increasingly accurate predictions for new data.
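The training cycle just described (random initialization, forward pass, error computation, backward propagation of deltas, weight correction, repetition) can be sketched as follows. This is a minimal illustration only, assuming a hypothetical 2-2-1 sigmoid network trained on the XOR task; the architecture, learning rate, and epoch count are illustrative choices, not taken from the text.

```python
# Sketch of the backpropagation cycle described above, on a tiny
# 2-2-1 network with sigmoid units. Task and hyperparameters are
# illustrative assumptions, not part of the source text.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# Step 1: random initial weights (each hidden row: 2 inputs + bias).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]  # 2 hidden + bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def total_error():
    # Step 2: compare network output with the expected result.
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

initial_error = total_error()
for _ in range(5000):  # Step 5: repeat many times.
    for x, target in data:
        h, y = forward(x)
        # Step 3: output error, scaled by the sigmoid derivative.
        delta_out = (target - y) * y * (1 - y)
        # Step 4: propagate deltas backwards, output layer to input layer.
        deltas_h = [delta_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        w_out = [w_out[0] + lr * delta_out * h[0],
                 w_out[1] + lr * delta_out * h[1],
                 w_out[2] + lr * delta_out]
        for i in range(2):
            w_hidden[i][0] += lr * deltas_h[i] * x[0]
            w_hidden[i][1] += lr * deltas_h[i] * x[1]
            w_hidden[i][2] += lr * deltas_h[i]
final_error = total_error()
```

After training, the total error is lower than at initialization, which is the sense in which the repeated weight corrections bring successive iterations closer to the expected result.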

This brief description of the functioning of backward connections, typical of the considerations of artificial intelligence system creators, does not, however, take into account the significance of resonant oscillations during the realization of mental imagery. The maintenance of active mental images by working memory is essential, for instance, in the search for problem-solving strategies. Oscillations, that is, the circulation of impulses along the backward pathways indicated in Figure 1, are crucial when considering the importance of the brain's electromagnetic field.

The feedback connections in natural human neural circuits are probably important for the emergence of self-awareness.
It is likely that the feedback connections in natural human neural circuits serve an additional function. During the realization of imagination, there is a recurring circulation of impulses, marked in Figure 1 with a dashed line. These recurring excitations evoke a magnetic field. It has been demonstrated that an endogenous magnetic field is created within the brain; it is recorded by routine magnetoencephalography. Many neuroscientists are convinced that the existence of the endogenous magnetic field is essential for the emergence of consciousness. These authors assume that the basic process evoking this phenomenon consists not only of actions occurring over time but also of the appearance of a certain spatial wholeness. Johnjoe McFadden points out that when looking for a physical medium supporting something that has the feature of a spatial structure, one must appeal to physical fields. He remarks that, taking into account the pathways running backwards to the lower levels, the propagation of neuronal potential changes (firing, action potentials) induces the formation of magnetic fields which overlap and combine to generate “the brain’s global EM field”. He notes that the human brain can be conceived as an assembly of “around 100 billion EMF transmitters”. The author emphasizes, though, that consciousness occurs when there is massive synchronization of neuronal activity and when repetitive oscillations in neuronal circuits occur. He stresses that conscious neuronal processing should be associated with “re-entrant circuits, essentially closed loops of neuronal activity whereby neuronal outputs are fed back into input neurons”.

Authors of groundbreaking works on contemporary, highly advanced artificial intelligence systems, known as "Deep Learning", "Reinforcement Learning", "Transfer Learning", "Generative Adversarial Networks", "Convolutional Neural Networks", and "Recurrent Neural Networks", emphasize that their operation is based on backpropagation algorithms. However, it is worth noting that diagrams illustrating their structure generally do not show these connections. One of the few exceptions is the diagram included in the paper by Mary Webb et al.

The presented theoretical model of neural circuits, which realizes perceptions and enables the memorization of images and their recall from memory, facilitates the mental integration and understanding of the possible role of these circuits and the aroused magnetic field in the emergence of consciousness.

Moving towards conscious artificial intelligence systems
Some researchers posit that consciousness may spontaneously arise in artificial intelligence systems as a consequence of increasing complexity. Nonetheless, it appears that the crux of self-awareness lies in the capacity to generate a self-image. Creating an "image of oneself" requires the existence of recursive feedback loops across multiple strata of the hierarchical structure. Consequently, comprehensive knowledge of the structural organization and functional constituents of neuronal circuits is a critical prerequisite for considering artificial intelligence systems that could possess self-awareness. It also facilitates discourse on whether the phenomenon of consciousness will spontaneously emerge in artificial intelligence systems.