Semantic memory

Semantic memory refers to general world knowledge that humans accumulate throughout their lives. This general knowledge (word meanings, concepts, facts, and ideas) is intertwined with experience and dependent on culture. New concepts are learned by building on knowledge acquired in the past.

Semantic memory is distinct from episodic memory—the memory of experiences and specific events that occur in one's life that can be recreated at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of stroking a particular cat.

Semantic memory and episodic memory are both types of explicit memory (or declarative memory), or memory of facts or events that can be consciously recalled and "declared". The counterpart to declarative or explicit memory is implicit memory (also known as nondeclarative memory).

History
The idea of semantic memory was first introduced following a 1972 collaboration between Endel Tulving and W. Donaldson on the role of organization in human memory. Tulving constructed a proposal to distinguish between episodic memory and what he termed semantic memory. He was mainly influenced by the ideas of Reiff and Scheers, who in 1959 distinguished two primary forms of memory: remembrances and memoria. The remembrance concept covered memories that contained experiences with an autobiographic index, whereas the memoria concept covered memories without reference to such experiences.

Semantic memory reflects knowledge of the world, and the term general knowledge is often used. It holds generic information that is most likely acquired across various contexts and is used across different situations. According to Madigan in his book Memory, semantic memory is the sum of all knowledge one has obtained: vocabulary, understanding of mathematics, and all the facts one knows. In his book Episodic and Semantic Memory, Tulving adopted the term semantic from linguists to refer to a system of memory for "words and verbal symbols, their meanings and referents, the relations between them, and the rules, formulas, or algorithms for influencing them". Semantic memory differs from episodic memory in its use: semantic memory refers to general facts and meanings one shares with others, while episodic memory refers to unique and concrete personal experiences. Tulving's proposal of this distinction was widely accepted, primarily because it allowed the separate conceptualization of world knowledge. Tulving discusses conceptions of episodic and semantic memory in his Précis of Elements of Episodic Memory, in which he states that several factors differentiate episodic memory from semantic memory in ways that include


 * 1) the characteristics of their operations,
 * 2) the kind of information they process, and
 * 3) their application to the real world as well as the memory laboratory.

In 2022, researchers Felipe De Brigard, Sharda Umanath, and Muireann Irish argued that Tulving conceptualized semantic memory to be different from episodic memory in that "episodic memories were viewed as supported via spatiotemporal relations while information in semantic memory was mediated through conceptual, meaning-based associations".

Recent research has focused on the idea that when people access a word's meaning, the sensorimotor information used to perceive and act on the concrete object the word denotes is automatically activated. In the theory of grounded cognition, the meaning of a particular word is grounded in the sensorimotor systems. For example, when one thinks of a pear, knowledge of grasping, chewing, sights, sounds, and tastes used to encode episodic experiences of a pear is recalled through sensorimotor simulation. A grounded simulation approach refers to context-specific re-activations that integrate the important features of episodic experience into a current depiction. Such research has challenged previously held amodal views, under which semantic memory representations were seen not as representations in modality-specific systems but as redescriptions of modality-specific states: the brain integrates multiple inputs, such as words and pictures, into a larger amodal conceptual representation (related to amodal perception). Some amodal accounts of category-specific semantic deficits remain, even though researchers are beginning to find support for theories in which knowledge is tied to modality-specific brain regions. The view that semantic representations are grounded across modality-specific brain regions is supported by the observation that episodic and semantic memory appear to function in distinct yet mutually dependent ways. The distinction between semantic and episodic memory has also entered broader scientific discourse; for example, researchers have speculated that semantic memory captures the stable aspects of personality, while episodes of illness may have a more episodic character.

Jacoby and Dallas (1981)
This study was not designed solely to provide evidence for the distinction between semantic and episodic memory stores. However, the authors used the experimental dissociation method, which provides evidence for Tulving's hypothesis.

In the first part, subjects were presented with a total of 60 words (one at a time) and were asked different questions.


 * Some questions directed the subject's attention to the word's visual appearance: Is the word typed in bold letters?
 * Some questions directed attention to the sound of the word: Does the word rhyme with ball?
 * Some questions directed attention to the meaning of the word: Does the word refer to a form of communication?
 * Half of the questions required a "yes" answer and the other half a "no".

In the second phase of the experiment, 60 "old words" seen in stage one and 20 "new words" not shown in stage one were presented to the subjects one at a time.

The subjects were given one of two tasks:

 * Perceptual identification task: each word was flashed on a video screen for 35 milliseconds and the subjects were required to say what the word was.
 * Episodic recognition task: subjects were presented with each word and had to decide whether they had seen the word in the previous stage of the experiment.

Results showed that the percentage of correct answers in the semantic task (perceptual identification) did not change with the encoding conditions of appearance, sound, or meaning. The percentage of correct answers for the episodic task increased from the appearance condition (.50), to the sound condition (.63), to the meaning condition (.86). The effect was also greater for the "yes" encoding words than the "no" encoding words. This suggested a strong dissociation between performance on episodic and semantic tasks, supporting Tulving's hypothesis.

Models
Semantic memory's contents are not tied to any particular instance of experience, as in episodic memory. Instead, what is stored in semantic memory is the "gist" of experience: an abstract structure that applies to a wide variety of experiential objects and delineates categorical and functional relationships between such objects. Numerous sub-theories of semantic memory have developed since Tulving initially posited his argument on the differences between semantic and episodic memory; one example is the idea of hierarchies of semantic memory, in which pieces of learned information are associated at specific levels of related knowledge. According to this theory, the brain can associate specific information with other, disparate ideas despite having no unique memories of when that knowledge was first stored. This theory of hierarchies has also been applied to episodic memory, as in William Brewer's work on the concept of autobiographical memory.

Network models
Networks of various sorts play an integral part in many theories of semantic memory. Generally speaking, a network is composed of a set of nodes connected by links. The nodes may represent concepts, words, perceptual features, or nothing at all. The links may be weighted such that some are stronger than others or, equivalently, have a length such that some links take longer to traverse than others. All these features of networks have been employed in models of semantic memory.

Teachable language comprehender
One of the first examples of a network model of semantic memory is the teachable language comprehender (TLC). In this model, each node is a word, representing a concept (like bird). Within each node is stored a set of properties (like "can fly" or "has wings") as well as links to other nodes (like chicken). A node is directly linked to those nodes of which it is either a subclass or superclass (i.e., bird would be connected to both chicken and animal). Properties are stored at the highest category level to which they apply; for example, "is yellow" would be stored with canary, "has wings" would be stored with bird (one level up), and "can move" would be stored with animal (another level up). Nodes may also store negations of the properties of their superordinate nodes (i.e., "NOT-can fly" would be stored with "penguin").

Processing in TLC is a form of spreading activation. When a node becomes active, that activation spreads to other nodes via the links between them. In that case, the time to answer the question "Is a chicken a bird?" is a function of how far the activation between the nodes for chicken and bird must spread, or the number of links between those nodes.
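The hierarchical property storage and link-counting described above can be sketched as follows. This is a minimal illustration, not Collins and Quillian's implementation; the class names and the step count standing in for response time are assumptions.

```python
# TLC-style hierarchy sketch: properties are stored at the highest level to
# which they apply, and verification walks up the superclass links.
class Node:
    def __init__(self, name, superclass=None, properties=()):
        self.name = name
        self.superclass = superclass        # link to the superordinate node
        self.properties = set(properties)   # properties stored at this level only

def has_property(node, prop):
    """Walk up the superclass chain; the number of links crossed stands in
    for how far activation must spread (a proxy for response time)."""
    steps = 0
    while node is not None:
        if prop in node.properties:
            return True, steps
        node = node.superclass
        steps += 1
    return False, steps

animal = Node("animal", properties={"can move"})
bird   = Node("bird", animal, {"has wings", "can fly"})
canary = Node("canary", bird, {"is yellow"})
```

Here `has_property(canary, "can move")` must traverse two links up to animal, mirroring TLC's prediction that such verifications take longer than retrieving "is yellow" directly at the canary node.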

The original version of TLC did not put weights on the links between nodes. This version performed comparably to humans in many tasks, but failed to predict that people respond faster to questions about typical category instances than about less typical ones. Collins and Quillian later updated TLC to include weighted connections, which allowed it to account for both the familiarity effect and the typicality effect. TLC's biggest advantage is that it clearly explains priming: information from memory is more likely to be retrieved if related information (the "prime") has been presented a short time before. There remain a number of memory phenomena for which TLC has no account, including why people are able to respond quickly to obviously false questions (like "is a chicken a meteor?") when the relevant nodes are very far apart in the network.

Semantic networks
TLC is an instance of a more general class of models known as semantic networks. In a semantic network, each node is to be interpreted as representing a specific concept, word, or feature; each node is a symbol. Semantic networks generally do not employ distributed representations for concepts, as may be found in a neural network. The defining feature of a semantic network is that its links are almost always directed (that is, they only point in one direction, from a base to a target) and the links come in many different types, each one standing for a particular relationship that can hold between any two nodes.

Semantic networks see the most use in models of discourse and logical comprehension, as well as in artificial intelligence. In these models, the nodes correspond to words or word stems and the links represent syntactic relations between them.

Feature models
Feature models view semantic categories as being composed of relatively unstructured sets of features. The semantic feature-comparison model describes memory as being composed of feature lists for different concepts. According to this view, the relations between categories would not be directly retrieved, and would be indirectly computed instead. For example, subjects might verify a sentence by comparing the feature sets that represent its subject and predicate concepts. Such computational feature-comparison models include the ones proposed by Meyer (1970), Rips (1975), and Smith et al. (1974).
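The feature-comparison idea can be sketched as a similarity computation over feature sets. This is only an illustration of the general principle; the feature lists are invented, and real models such as Smith et al.'s use a more elaborate two-stage comparison.

```python
# Feature-comparison sketch: category membership is computed indirectly by
# comparing the feature sets of subject and predicate concepts.
def feature_overlap(a, b):
    """Global similarity as the proportion of shared features (Jaccard index)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

robin = {"has wings", "flies", "sings", "small"}
bird  = {"has wings", "flies", "lays eggs"}
chair = {"has legs", "furniture"}

# "A robin is a bird" is verified more readily than "A chair is a bird"
# because the subject and predicate feature sets overlap more.
```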

Early work in perceptual and conceptual categorization assumed that categories had critical features and that category membership could be determined by logical rules for the combination of features. More recent theories have accepted that categories may have an ill-defined or "fuzzy" structure and have proposed probabilistic or global similarity models for the verification of category membership.

Associative models
The set of associations among a collection of items in memory is equivalent to the links between nodes in a network, where each node corresponds to a unique item in memory. Indeed, neural networks and semantic networks may be characterized as associative models of cognition. However, associations are often more clearly represented as an N×N matrix, where N is the number of items in memory; each cell of the matrix corresponds to the strength of the association between the row item and the column item.

Learning of associations is generally believed to be a Hebbian process, where whenever two items in memory are simultaneously active, the association between them grows stronger, and the more likely either item is to activate the other. See below for specific operationalizations of associative models.
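The Hebbian rule described above can be sketched with an N×N association matrix. The increment size (learning rate) is an assumption for illustration; the mechanism of strengthening co-active pairs is the point.

```python
import numpy as np

# Hebbian learning sketch: whenever two items are simultaneously active, the
# cell linking them in the N x N association matrix is incremented.
def hebbian_update(M, active, rate=1.0):
    for i in active:
        for j in active:
            if i != j:
                M[i, j] += rate
    return M

N = 4
M = np.zeros((N, N))
hebbian_update(M, active=[0, 2])   # items 0 and 2 co-active
hebbian_update(M, active=[0, 2])   # ... again
hebbian_update(M, active=[1, 3])   # items 1 and 3 co-active once
# M[0, 2] now exceeds M[1, 3]: more frequent co-activation, stronger association.
```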

Search of associative memory
A standard model of memory that employs association in this manner is the search of associative memory (SAM) model. Though SAM was originally designed to model episodic memory, its mechanisms are sufficient to support some semantic memory representations. The model contains a short-term store (STS) and a long-term store (LTS), where the STS is a briefly activated subset of the information in the LTS. The STS has limited capacity and affects the retrieval process by limiting the amount of information that can be sampled and the time the sampled subset remains in an active mode. Retrieval from the LTS is cue-dependent and probabilistic: a cue initiates the retrieval process, and which item is selected from memory is random, with the probability of being sampled dependent on the strength of association between the cue and the item; stronger associations are more likely to be sampled first. The buffer size, r, is not a fixed number, and as items are rehearsed in the buffer their associative strengths grow linearly as a function of total time inside the buffer. In SAM, when any two items simultaneously occupy the working memory buffer, the strength of their association is incremented; items that co-occur more often are more strongly associated. Items in SAM are also associated with a specific context, where the strength of that association is determined by how long each item is present in a given context. In SAM, memories thus consist of a set of associations between items in memory and between items and contexts. The presence of a set of items and/or a context makes it more likely that some subset of the items in memory is evoked. The degree to which items evoke one another, either by virtue of their shared context or their co-occurrence, is an indication of the items' semantic relatedness.
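SAM's cue-dependent probabilistic sampling can be sketched with a standard ratio rule: the probability of sampling an item is its cue-item strength divided by the total strength across candidates. The specific strength values below are invented for illustration.

```python
# SAM-style sampling sketch: stronger cue-item associations yield higher
# sampling probabilities (ratio rule; strengths here are invented).
def sampling_probs(strengths):
    total = sum(strengths.values())
    return {item: s / total for item, s in strengths.items()}

cue_strengths = {"dog": 6.0, "cat": 3.0, "pencil": 1.0}
probs = sampling_probs(cue_strengths)
# "dog" is most likely to be sampled first because its association
# with the cue is strongest.
```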

In an updated version of SAM, pre-existing semantic associations are accounted for using a semantic matrix. During an experiment, the semantic associations remain fixed, reflecting the assumption that semantic associations are not significantly affected by the episodic experience of a single experiment. The two measures used to quantify semantic relatedness in this model are latent semantic analysis (LSA) and word association spaces (WAS). The LSA approach holds that similarity between words is reflected in their co-occurrence in local contexts. WAS was developed by analyzing a database of free-association norms, in which "words that have similar associative structures are placed in similar regions of space".

ACT-R: a production system model
The adaptive control of thought (ACT) theory of cognition (and its successor, ACT-R, Adaptive Control of Thought-Rational) represents declarative memory (of which semantic memory is a part) as "chunks", which consist of a label, a set of defined relationships to other chunks (e.g., "this is a _" or "this has a _"), and any number of chunk-specific properties. Chunks can therefore be mapped as a semantic network, in which each node is a chunk with its unique properties and each link is a relationship to another chunk. In ACT, a chunk's activation decreases as a function of the time since the chunk was created, and increases with the number of times the chunk has been retrieved from memory. Chunks can also receive activation from Gaussian noise and from their similarity to other chunks; for example, if chicken is used as a retrieval cue, canary will receive activation by virtue of its similarity to the cue. When retrieving items from memory, ACT looks at the most active chunk in memory; if it is above threshold, it is retrieved, otherwise an "error of omission" has occurred and the item has been forgotten. Retrieval latency varies inversely with the amount by which the activation of the retrieved chunk exceeds the retrieval threshold, and is used to compare the response time of the ACT model with human performance.
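The decay-and-practice behavior of chunk activation can be sketched with the standard base-level learning equation used in the ACT-R literature: activation is the log of summed past retrievals, each discounted by its age. The decay parameter d = 0.5 is the conventional default; the retrieval times and threshold below are invented.

```python
import math

# ACT-R-style base-level activation sketch: activation rises with the number
# of past retrievals and decays with their age (d = 0.5 is the usual default).
def base_level_activation(retrieval_ages, d=0.5):
    return math.log(sum(t ** (-d) for t in retrieval_ages))

def retrieve(activation, threshold=0.0):
    """Below threshold, retrieval fails: an 'error of omission'."""
    return activation >= threshold

recent_and_frequent = base_level_activation([1.0, 2.0, 3.0])  # three recent uses
old_and_rare        = base_level_activation([100.0])          # one old use
```

A chunk retrieved often and recently stays above threshold, while a chunk with a single old retrieval falls below it and is "forgotten".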

Statistical models
Some models characterize the acquisition of semantic information as a form of statistical inference from a set of discrete experiences, distributed across a number of contexts. Though these models differ in specifics, they generally employ an (Item × Context) matrix where each cell represents the number of times an item in memory has occurred in a given context. Semantic information is gleaned by performing a statistical analysis of this matrix.

Many of these models bear similarity to the algorithms used in search engines, though it is not yet clear whether they really use the same computational mechanisms.

Latent semantic analysis
One of the more popular models is latent semantic analysis (LSA). In LSA, a T × D matrix is constructed from a text corpus, where T is the number of terms in the corpus and D is the number of documents (here "context" is interpreted as "document" and only words—or word phrases—are considered as items in memory). Each cell in the matrix is then transformed according to the equation:

$$\mathbf{M}_{t,d}'=\frac{\ln{(1 + \mathbf{M}_{t,d})}}{-\sum_{i=0}^D P(i|t) \ln{P(i|t)}}$$

where $$P(i|t)$$ is the probability that context $$i$$ is active, given that item $$t$$ has occurred (this is obtained simply by dividing the raw frequency, $$\mathbf{M}_{t,d}$$, by the total of the item vector, $$\sum_{i=0}^D \mathbf{M}_{t,i}$$).
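The cell transform above can be sketched directly in code: log-scale each raw count, then divide by the entropy of the term's distribution over documents. The toy matrix is invented for illustration.

```python
import numpy as np

# LSA cell-transform sketch: ln(1 + count) divided by the entropy of the
# term's distribution over documents (terms in few documents are boosted).
def lsa_transform(M):
    P = M / M.sum(axis=1, keepdims=True)              # P(i|t) for each document i
    logP = np.log(np.where(P > 0, P, 1.0))            # log(1) = 0 skips empty cells
    entropy = -(P * logP).sum(axis=1, keepdims=True)  # entropy of each term's row
    return np.log1p(M) / entropy                      # ln(1 + M) / entropy

M = np.array([[9.0, 1.0, 0.0],    # term concentrated in one document (low entropy)
              [3.0, 3.0, 4.0]])   # term spread across documents (high entropy)
M_prime = lsa_transform(M)
```

Terms concentrated in few documents have low row entropy, so their transformed weights are boosted relative to terms that occur everywhere, matching the intuition that widely distributed words carry little semantic information.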

Hyperspace Analogue to Language (HAL)
The Hyperspace Analogue to Language (HAL) model considers context only as the words that immediately surround a given word. HAL computes an N×N matrix, where N is the number of words in its lexicon, using a 10-word reading frame that moves incrementally through a corpus of text. As in SAM, any time two words are simultaneously in the frame, the association between them is increased; that is, the corresponding cell in the N×N matrix is incremented. The bigger the distance between the two words, the smaller the amount by which the association is incremented (specifically, $$\Delta=11-d$$, where $$d$$ is the distance between the two words in the frame).
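HAL's moving-frame counting can be sketched as follows. The increment rule is generalized to window + 1 - d, which reduces to the 11 - d of the text for the standard 10-word frame; the tiny corpus and the dictionary-of-pairs representation of the matrix are illustrative choices.

```python
from collections import defaultdict

# HAL-style co-occurrence sketch: a frame slides over the text, and each
# ordered word pair inside it is incremented by (window + 1 - distance).
def hal_matrix(tokens, window=10):
    M = defaultdict(float)     # sparse stand-in for the N x N matrix
    for i, w in enumerate(tokens):
        for d in range(1, window + 1):
            if i + d < len(tokens):
                M[(w, tokens[i + d])] += window + 1 - d
    return M

tokens = "the cat sat on the mat".split()
M = hal_matrix(tokens, window=2)
# ("the", "cat") at distance 1 gains 2; ("the", "sat") at distance 2 gains 1.
```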

Location of semantic memory in the brain
The cognitive neuroscience of semantic memory is a controversial issue with two dominant views.

Many researchers and clinicians believe that semantic memory is stored by the same brain systems involved in episodic memory, that is, the medial temporal lobes, including the hippocampal formation. In this system, the hippocampal formation "encodes" memories, or makes it possible for memories to form at all, and the neocortex stores memories after the initial encoding process is completed. Recently, new evidence has been presented in support of a more precise interpretation of this hypothesis. The hippocampal formation includes, among other structures: the hippocampus itself, the entorhinal cortex, and the perirhinal cortex. These latter two make up the parahippocampal cortices. Amnesiacs with damage to the hippocampus but some spared parahippocampal cortex were able to demonstrate some degree of intact semantic memory despite a total loss of episodic memory, which strongly suggests that information encoding leading to semantic memory does not have its physiological basis in the hippocampus.

Other researchers believe the hippocampus is only involved in episodic memory and spatial cognition, which raises the question of where semantic memory may be located. Some believe semantic memory lives in the temporal cortex, while others believe that it is widely distributed across all brain areas.

Neural correlates and biological workings
The hippocampal areas are important to semantic memory's involvement in declarative memory. The left inferior prefrontal cortex and the left posterior temporal areas are other regions involved in semantic memory use. Temporal lobe damage affecting the lateral and medial cortices has been related to semantic impairments. Damage to different areas of the brain affects semantic memory differently.

Neuroimaging evidence suggests that left hippocampal areas show increased activity during semantic memory tasks. During semantic retrieval, two regions in the right middle frontal gyrus and an area of the right inferior temporal gyrus similarly show increased activity. Damage to areas involved in semantic memory results in various deficits, depending on the area and type of damage. For instance, Lambon Ralph, Lowe, & Rogers (2007) found that category-specific impairments can occur in which patients show different knowledge deficits for one semantic category over another, depending on the location and type of damage. Category-specific impairments might indicate that knowledge relies differentially upon sensory and motor properties encoded in separate areas (Farah and McClelland, 1991).

Category-specific impairments can involve cortical regions where living and nonliving things are represented and where feature and conceptual relationships are represented. Depending on the damage to the semantic system, one type might be favored over the other; in many cases one domain is preserved relative to the other (such as the representation of living and nonliving things over feature and conceptual relationships, or vice versa).

Different diseases and disorders can affect the biological workings of semantic memory, and a variety of studies have attempted to determine their effects on its various aspects. For example, Lambon Ralph, Lowe, & Rogers studied the different effects that semantic dementia and herpes simplex virus encephalitis have on semantic memory. They found that semantic dementia produces a more generalized semantic impairment, while deficits resulting from herpes simplex virus encephalitis tend to be more category-specific. Other disorders that affect semantic memory, such as Alzheimer's disease, have been observed clinically as errors in naming, recognizing, or describing objects; researchers have attributed such impairment to degradation of semantic knowledge.

Various neural imaging and research points to semantic memory and episodic memory resulting from distinct areas in the brain. Other research suggests that both semantic memory and episodic memory are part of a singular declarative memory system, yet represent different sectors and parts within the greater whole. Different areas within the brain are activated depending on whether semantic or episodic memory is accessed.

Category-specific semantic impairments
Category-specific semantic impairments are a neuropsychological phenomenon in which an individual's ability to identify certain categories of objects is selectively impaired while other categories remain intact. The condition can result from brain damage that is widespread, patchy, or localized. Research suggests that the temporal lobe, more specifically the structural description system, might be responsible for category-specific impairments in semantic memory disorders.

Theories on category-specific semantic deficits tend to fall into two different groups based on their underlying principles. Theories based on the correlated structure principle, which states that conceptual knowledge organization in the brain is a reflection of how often an object's properties occur, assume that the brain reflects the statistical relation of object properties and how they relate to each other. Theories based on the neural structure principle, which states that the conceptual knowledge organization in the brain is controlled by representational limits imposed by the brain itself, assume that organization is internal. These theories assume that natural selective pressures have caused neural circuits specific to certain domains to be formed, and that these are dedicated to problem-solving and survival. Animals, plants, and tools are all examples of specific circuits that would be formed based on this theory.

Impairment categories
Category-specific semantic deficits tend to fall into two categories, either of which can be spared or emphasized depending on the individual's specific deficit. The first category consists of animate objects, with animals being the most common deficit. The second category consists of inanimate objects, with two subcategories, fruits and vegetables (biological inanimate objects) and artifacts, being the most common deficits. The type of deficit does not indicate a lack of conceptual knowledge associated with that category, as the visual system used to identify and describe the structure of objects functions independently of an individual's conceptual knowledge base.

Most of the time, these two categories are consistent with case-study data, but there are exceptions. Categories such as food, body parts, and musical instruments have been shown to defy the animate/inanimate or biological/non-biological division. In some cases, musical instruments have been shown to be impaired in patients with damage to the living-things category, despite the fact that musical instruments fall into the non-biological/inanimate category; there are also cases of biological impairment in which performance with musical instruments is normal. Similarly, food has been shown to be impaired in those with biological-category impairments. The category of food in particular can present irregularities because it can be natural but also highly processed, as in a case study of an individual who had impairments for vegetables and animals while the category of food remained intact.

The role of modality
Modality refers to a semantic category of meaning that has to do with necessity and probability expressed through language. In linguistics, certain expressions are said to have modal meanings. A few examples of this include conditionals, auxiliary verbs, adverbs, and nouns. When looking at category-specific semantic deficits, there is another kind of modality that looks at word relationships which is much more relevant to these disorders and impairments.

For category-specific impairments, there are modality-specific theories that are based on a few general predictions. These theories state that damage to the visual modality will result in a deficit of biological objects, while damage to the functional modality will result in a deficit of non-biological objects (artifacts). Modality-based theories assume that if there is damage to modality-specific knowledge, then all the categories that fall under it will be damaged. In this case, damage to the visual modality would result in a deficit for all biological objects with no deficits restricted to the more specific categories. For example, there would be no category specific semantic deficits for just "animals" or just "fruits and vegetables".

Semantic dementia
Semantic dementia is a semantic memory disorder that causes patients to lose the ability to match words or images to their meanings. It is fairly rare for patients with semantic dementia to develop category-specific impairments, though there have been documented cases. Typically, a more generalized semantic impairment results from degraded semantic representations in the brain.

Alzheimer's disease is a distinct disorder that can cause similar symptoms. The main difference between the two is that Alzheimer's is characterized by atrophy on both sides of the brain, while semantic dementia is characterized by loss of brain tissue in the front portion of the left temporal lobe. In Alzheimer's disease in particular, interactions with semantic memory produce different patterns of deficit between patients and categories over time, caused by distorted representations in the brain. For example, at the initial onset of Alzheimer's disease, patients have mild difficulty with the artifacts category. As the disease progresses, category-specific semantic deficits progress as well, and patients show a more concrete deficit with natural categories: the deficit tends to be worse for living things than for non-living things.

Herpes simplex virus encephalitis
Herpes simplex virus encephalitis (HSVE) is a neurological disorder that causes inflammation of the brain. Early symptoms include headache, fever, and drowsiness, but over time symptoms such as diminished ability to speak, memory loss, and aphasia develop. HSVE can also cause category-specific semantic deficits. When this happens, patients typically have temporal lobe damage affecting the medial and lateral cortex as well as the frontal lobe. Studies have shown that patients with HSVE have a much higher incidence of category-specific semantic deficits than those with semantic dementia, though both cause a disruption of flow through the temporal lobe.

Brain lesions
A brain lesion is any abnormal tissue in or on the brain, most often caused by trauma or infection. In one case study, a patient underwent surgery to remove an aneurysm, and the surgeon had to clip the anterior communicating artery, resulting in basal forebrain and fornix lesions. Before surgery, the patient was completely independent and had no semantic memory issues; after the operation and the development of the lesions, the patient reported difficulty with naming and identifying objects, recognition tasks, and comprehension. The patient had much more trouble with objects in the living category, which could be seen in the drawings of animals the patient was asked to produce and in data from the matching and identification tasks. Every lesion is different, but in this case study researchers suggested that the semantic deficits resulted from disconnection of the temporal lobe. They concluded that any type of lesion in the temporal lobe, depending on its severity and location, has the potential to cause semantic deficits.

Semantic differences in gender
The following table summarizes conclusions from the Journal of Clinical and Experimental Neuropsychology. These results give a baseline for differences in semantic knowledge across gender in healthy subjects. Experimental data show that males with category-specific semantic deficits are mainly impaired with fruits and vegetables, while females with category-specific semantic deficits are mainly impaired with animals and artifacts. It has been concluded that there are significant gender differences in category-specific semantic deficits, and that patients tend to be impaired in categories for which they had less existing knowledge to begin with.

Modality-specific impairments
Semantic memory is also discussed in reference to modality. Different components represent information from different sensorimotor channels. Modality specific impairments are divided into separate subsystems on the basis of input modality. Examples of different input modalities include visual, auditory, and tactile input. Modality-specific impairments are also divided into subsystems based on the type of information. Visual vs. verbal and perceptual vs. functional information are examples of information types.

Semantic memory disorders
Semantic memory disorders fall into two groups. Semantic refractory access disorders are contrasted with semantic storage disorders according to four factors: temporal factors, response consistency, frequency, and semantic relatedness.

A key feature of semantic refractory access disorders is temporal distortion, where decreases in response time to certain stimuli are noted when compared to natural response times. In access disorders there are inconsistencies in comprehending and responding to stimuli that have been presented many times; temporal factors impact response consistency. In storage disorders, an inconsistent response to specific items is not observed. Stimulus frequency determines performance at all stages of cognition: extreme word-frequency effects are common in semantic storage disorders, while in semantic refractory access disorders word-frequency effects are minimal.

Semantic relatedness is tested by comparing close and distant groups. Close groupings contain words that are related because they are drawn from the same category, such as a list of clothing types; distant groupings contain words with broad categorical differences, such as unrelated words. Comparing close and distant groups shows that in access disorders semantic relatedness has a negative effect that is not observed in semantic storage disorders. Category-specific and modality-specific impairments are important components of both access and storage disorders of semantic memory.

Present and future research
Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) allow cognitive neuroscientists to explore different hypotheses concerning the neural network organization of semantic memory. By using these neuroimaging techniques researchers can observe the brain activity of participants while they perform cognitive tasks. These tasks can include, but are not limited to, naming objects, deciding if two stimuli belong in the same object category, or matching pictures to their written or spoken names.

A developing theory is that semantic memory, like perception, can be subdivided into types of visual information—color, size, form, and motion. Thompson-Schill (2003) found that the left or bilateral ventral temporal cortex appears to be involved in retrieval of knowledge of color and form, the left lateral temporal cortex in knowledge of motion, and the parietal cortex in knowledge of size.

Neuroimaging studies suggest a large, distributed network of semantic representations that are organized minimally by attribute, and perhaps additionally by category. These networks include "extensive regions of ventral (form and color knowledge) and lateral (motion knowledge) temporal cortex, parietal cortex (size knowledge), and premotor cortex (manipulation knowledge). Other areas, such as more anterior regions of temporal cortex, may be involved in the representation of nonperceptual (e.g. verbal) conceptual knowledge, perhaps in some categorically-organized fashion."