Mental lexicon

The mental lexicon is a component of the human language faculty that contains information about the composition of words, such as their meanings, pronunciations, and syntactic characteristics. In linguistics and psycholinguistics, the term refers to an individual speaker's lexical, or word, representations. However, there is some disagreement as to the utility of the mental lexicon as a scientific construct.

The mental lexicon differs from a lexicon in the general sense in that it is not just a collection of words; it also concerns how those words are activated, stored, processed, and retrieved by each speaker/hearer. Furthermore, entries in the mental lexicon are interconnected with each other on various levels. An individual's mental lexicon changes and grows as new words are learned, and several competing theories seek to explain exactly how this occurs, including spectrum theory, dual-coding theory, Chomsky's nativist theory, and semantic network theory. Neurologists and neurolinguists also study the areas of the brain involved in lexical representations. The following article addresses some of the physiological, social, and linguistic aspects of the mental lexicon.

Recent studies have also suggested that the mental lexicon can shrink as an individual ages, limiting the number of words they can remember and learn. The development of a second-language (L2) mental lexicon in bilingual speakers has also emerged as a topic of interest, suggesting that a speaker's languages are not stored together but as separate entities that are actively chosen between in each linguistic situation.

Methods of inquiry
Although the mental lexicon is often called a mental "dictionary", in actuality, research suggests that it differs greatly from a dictionary. For example, the mental lexicon is not organized alphabetically like a dictionary; rather, it seems to be organized by links between phonologically and semantically related lexical items. This is suggested by evidence of phenomena such as slips of the tongue, for instance replacing anecdote with antidote.

While a dictionary contains a fixed number of words and becomes outdated as language changes, the mental lexicon continually updates itself with new words and word meanings while discarding old, unused ones. This active nature makes the dictionary comparison unhelpful. Research continues to identify exactly how words are linked and accessed. A common method of probing these connections is the lexical decision task, in which participants must respond as quickly and accurately as possible to a string of letters presented on a screen, indicating whether the string is a real word or a non-word.
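The logic of such a task can be sketched in a few lines of Python. This is a minimal illustrative sketch, not an implementation from any particular study: the stimulus lists and the `respond` callback are invented here, and a real experiment would display stimuli on screen, record actual keypresses, and control items for length and frequency.

```python
import random
import time

# Hypothetical stimuli (invented for illustration); real studies control
# items for length, frequency, and other variables.
WORDS = ["table", "music", "ocean"]
NONWORDS = ["florp", "tevel", "snade"]

def run_trial(stimulus, respond):
    """Time one word/non-word decision for a single letter string.

    `respond` is any callable returning "word" or "nonword"; in a real
    experiment it would present the string and wait for a keypress.
    """
    start = time.perf_counter()
    answer = respond(stimulus)
    reaction_time = time.perf_counter() - start
    correct = (answer == "word") == (stimulus in WORDS)
    return {"stimulus": stimulus, "rt": reaction_time, "correct": correct}

def run_block(respond, n_trials=6):
    """Run a block of trials over a shuffled mix of words and non-words."""
    stimuli = random.sample(WORDS + NONWORDS, n_trials)
    return [run_trial(stimulus, respond) for stimulus in stimuli]
```

Analyses of such data typically compare mean reaction times across conditions (e.g., primed versus unprimed words), alongside accuracy.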

Model of the mental lexicon
In the sample model of the mental lexicon pictured to the right, the mental lexicon is split into three parts under a hierarchical structure: the concept network (semantics), which is ranked above the lemma network (morphosyntax), which in turn is ranked above the phonological network. Working in tandem with the mental lexicon, in particular with the phonological network, is the mental syllabary, which is responsible for activating articulatory gestures in response to the phonological network. According to the theory which this diagram illustrates, different components both within and outside of the mental lexicon are linked together by neural activations called S-pointers, which form pathways together with large clusters of neurons called buffers (e.g. “concept production” and “word audio” in the diagram).

Theories and perspectives
One theory about the mental lexicon states that it organizes our knowledge about words "in some sort of dictionary." Another states that the mental lexicon is "a collection of highly complex neural circuits". The latter, semantic network theory, proposes spreading activation: a hypothetical mental process in which activating one node in the semantic network partially activates the nodes connected to it. Evidence for spreading activation comes from three phenomena that have been studied in depth over the years: priming effects, neighborhood effects, and frequency effects.
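A minimal sketch of how spreading activation might operate over a toy semantic network follows. The node names, link weights, decay factor, and threshold below are all invented for illustration; actual models in the literature differ in their details.

```python
# Toy semantic network: each node maps to its associates with link strengths.
# All values here are illustrative, not empirically derived.
NETWORK = {
    "bread":  {"butter": 0.8, "flour": 0.6},
    "butter": {"bread": 0.8, "knife": 0.4},
    "flour":  {"bread": 0.6},
    "knife":  {"butter": 0.4, "fork": 0.7},
    "fork":   {"knife": 0.7},
}

def spread_activation(source, decay=0.5, threshold=0.05):
    """Spread activation outward from `source`, attenuating at each link.

    Activation weakens with every hop (link strength times decay) and
    stops spreading once it falls below `threshold`.
    """
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in NETWORK.get(node, {}).items():
            new_level = activation[node] * weight * decay
            if new_level > activation.get(neighbor, 0.0) and new_level > threshold:
                activation[neighbor] = new_level
                frontier.append(neighbor)
    return activation
```

Activating "bread" in this toy network leaves "butter" more strongly activated than more distant nodes, mirroring the idea that closely linked words are easier to retrieve after exposure to a related word.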


 * Priming is a term used in lexical decision tasks that accounts for decreased reaction times for related words. Often used interchangeably with "activation", priming refers to the way exposure to one word speeds the retrieval of related words. For example, seeing the word bread "primes" butter, allowing it to be retrieved faster.
 * Neighborhood effects refer to the activation of all similar "neighbors" of a target word. Neighbors are defined as items that are highly confusable with the target word because their forms overlap. For example, the word "game" has the neighbors "came, dame, fame, lame, name, same, tame, gale, gape, gate, and gave," giving it a neighborhood size of 11, because 11 real words can be formed by changing a single letter of "game". The neighborhood effect predicts that words with larger neighborhoods will have quicker reaction times in a lexical decision task, because neighbors facilitate each other's activation.
 * Frequency effects suggest that words that are frequent in an individual's language are recognized faster than words that are infrequent. Forster and Chambers (1973) found that high-frequency words were named faster than low-frequency ones, and Whaley (1978) found that high-frequency words were responded to faster than low-frequency ones in a lexical decision task.
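The neighborhood-size calculation described above is easy to make concrete. The sketch below is illustrative only: the toy lexicon is invented (real studies compute neighborhoods over large corpora), and it counts one-letter-substitution neighbors, reproducing the neighborhood size of 11 for "game".

```python
def orthographic_neighbors(word, lexicon):
    """Return words in `lexicon` that differ from `word` by exactly one letter."""
    neighbors = []
    for candidate in lexicon:
        if len(candidate) == len(word) and candidate != word:
            # Count mismatched letter positions.
            if sum(a != b for a, b in zip(word, candidate)) == 1:
                neighbors.append(candidate)
    return neighbors

# Toy lexicon: the 11 neighbors of "game" listed above, plus items that
# should be excluded (wrong length, more than one change, the word itself).
TOY_LEXICON = ["came", "dame", "fame", "lame", "name", "same", "tame",
               "gale", "gape", "gate", "gave", "game", "games", "dome", "dog"]

print(len(orthographic_neighbors("game", TOY_LEXICON)))  # → 11
```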

In the spectrum theory, at one end "each phonological form is connected to one complex semantic representation", while at the opposite end, homonyms and polysemes each have their "own semantic representation[s]". The middle of the spectrum contains theories that "suggest that related senses share a general or core semantic representation". Dual coding theory (DCT) contrasts with multiple and common coding theories, positing "an internalized nonverbal system that directly represents the perceptual properties and affordances of nonverbal objects and events, and an internalized verbal system that deals directly with linguistic stimuli and responses". Others build on Chomsky's theory that "all syntactic and semantic features are included directly in the abstract mental representation of a lexical word".

Alternative theories
Not all linguists and psychologists support the mental lexicon's existence and there is much controversy over the concept. In a 2009 article, Jeffrey Elman proposes that the mental lexicon does not exist at all. Elman suggests that because context, both linguistic and nonlinguistic, is fundamentally inseparable from language, the human mind should be viewed more holistically when discussing the storage of lexical information. In Elman's view, this is a more realistic approach than assuming that the mental lexicon stores every minute contextual detail about every single lexical item. Elman states that words are not observed "as elements in a data structure" that are "retrieved from memory, but rather as stimuli that alter mental states".

First language development
One aspect of research on the development of the mental lexicon has focused on vocabulary growth. Converging research suggests that English-speaking children, at least, learn several words a day throughout development. The figure on the right illustrates the growth curve of a typical English-speaking child's vocabulary size.

The words acquired in the early stages of language development tend to be nouns or nounlike, and there are some similarities in first words across children (e.g., mama, daddy, dog). Fast mapping is the idea that children may be able to gain at least partial information about the meaning of a word from how it is used in a sentence, what words it is contrasted with, as well as other factors. This allows the child to quickly hypothesize about the meaning of a word.

Research suggests that, despite the fast mapping hypothesis, words are not simply learned as soon as we are exposed to them; each word needs some type of activation and/or acknowledgement before it is permanently and effectively stored. Young children may store a word accurately in their mental lexicon, and can recognize when an adult produces an incorrect version of it, yet still be unable to produce the word accurately themselves.

As a child acquires their vocabulary, two separate aspects of the mental lexicon develop, named the lexeme and the lemma. The lexeme is defined as the part of the mental lexicon that stores morphological and formal information about a word, such as the different versions of spelling and pronunciation of the word. The lemma is defined as the structure within the mental lexicon that stores semantic and syntactic information about a word, such as part of speech and the meaning of the word. Research has shown that the lemma develops first when a word is acquired into a child's vocabulary, and then with repeated exposure the lexeme develops.



Bilingual development
The development of the mental lexicon in bilingual children has received increasing research attention in recent years, revealing many complexities, including the notion that bilingual speakers maintain additional, separate mental lexicons for their other languages. Selecting between two or more lexicons has been shown to have benefits extending beyond language processes: bilinguals significantly outperform their monolingual counterparts on executive control tasks. Researchers suggest that this enhanced cognitive ability comes from continually choosing between the L1 and L2 mental lexicons. Bilinguals have also shown resilience against the onset of Alzheimer's disease: in one study, monolinguals were an average of 71.4 years old and bilinguals 75.5 years old when symptoms of dementia were detected, a difference of 4.1 years.

Neurological considerations
Studies have shown that the temporal and parietal lobes in the left hemisphere are particularly relevant for the processing of lexical items.

The following are some hypotheses pertaining to semantic comprehension in the brain:


 * Organized Unitary Content Hypothesis (OUCH): this hypothesis posits that lexical items that co-occur with high frequency are stored in the same area of the brain.
 * Domain-Specific Hypothesis: this hypothesis draws on evolutionary theory to posit that certain categories with an evolutionary advantage over others (such as useful items like tools) have specialized, functionally dissociated neural circuits in the brain.
 * Sensory/Functional Hypothesis: this hypothesis posits that the ability to identify (i.e., recognize and name) living things depends on visual information, while the ability to identify non-living things depends on functional information. It thus suggests that modality-specific subsystems compose an overarching semantic network of lexical items.

Impaired access
Anomic aphasia, expressive and receptive aphasia, and Alzheimer's disease can all affect the recall or retrieval of words. Anomia impairs a person's ability to name familiar objects, places, and people; sufferers have difficulty recalling words. Anomia is a lesser level of dysfunction, resembling a severe form of the "tip-of-the-tongue" phenomenon in which the brain cannot recall the desired word. Stroke, head trauma, and brain tumors can cause anomia.

Expressive and receptive aphasia are neurological language disorders. Expressive aphasia limits the ability to convey thoughts through speech, language, or writing. Receptive aphasia affects a person's ability to comprehend spoken words and causes disordered sentences that have little or no meaning, which can include the addition of nonce words.

Harry Whitaker states that Alzheimer's disease patients are forgetful of proper names. Patients have difficulty generating names, especially in phonological tasks such as producing words that start with a certain letter. They also have word-retrieval difficulties in spontaneous speech, though naming of presented stimuli remains relatively preserved. Later, naming of low-frequency lexical items is lost. Eventually, the loss of the ability both to comprehend and to name the same lexical item indicates semantic loss of that item.

Syntactic considerations
A 2006 study published in PNAS concludes, based on fMRI data showing activation of different parts of the brain for nouns and verbs, that different syntactic categories are stored separately in the mental lexicon. The study found that both nouns and verbs are primarily processed in the left hemisphere, with nouns more strongly activating the fusiform gyrus and verbs more strongly activating the prefrontal cortex, superior temporal gyrus, and superior parietal lobule. The notion of segregated syntactic categories within the mental lexicon is more recently supported by a 2020 article in Cognition, which measured speech onset latency when forty-eight speakers (whose speech-disorder status was not specified) were distracted from a target verb or noun by a related verb or a related noun. That study found that speech onset latency was approximately 30 ms greater when both the target and distractor were verbs than when the target was a verb and the distractor was a phonologically and semantically minimally different noun; a similar difference of approximately 40 ms was observed when noun targets were paired with noun distractors rather than with minimally different verb distractors.

However, a 2011 paper in Neuroscience & Biobehavioral Reviews opposes the idea that nouns and verbs are stored separately, instead supporting the point of view that the understanding of nouns and verbs as separate categories arises from semantic and pragmatic notions of objects and actions, respectively, as well as from the learned syntactic environments of the two categories. This perspective is described within the article as "emergentist", from the notion that syntactic classes emerge from other non-syntactic lexical knowledge.

The declarative/procedural model
Not all researchers of the mental lexicon agree that syntax forms a component thereof: Michael T. Ullman proposes in his declarative/procedural model of language that the mental grammar is a distinct entity from the mental lexicon, and that it is the mental grammar, rather than any part of the lexicon, which encodes and processes syntactic (as well as some morphological) information. In this theory, the mental grammar forms the part of the language faculty that utilizes procedural memory, which is tied to computational tasks and fine motor skills and which is stored in the frontal lobe and basal ganglia. The lexicon, in turn, is the part that uses declarative memory, which is more strongly oriented towards rote memorization and which is stored in the temporal lobe. Ullman's argument for such a separation hinges on the claim that the association between meaning and form is arbitrary, and therefore the acquisition of such associations must occur through memorization; grammatical rules, meanwhile, can be intuitively derived from knowledge that has already been learned. Furthermore, Ullman posits that whereas phonology, orthography, semantics, and syntax are largely confined to their respective memory systems, morphology overlaps both declarative and procedural memory: in regular affixation, the morphological component would be procedural, but for irregular conjugations of verbs (e.g. teach/taught) it is the declarative memory that would be accessed.

Since Ullman's initial 2001 proposal, several other researchers have sought to apply the declarative/procedural model to L2 acquisition of syntax. For example, a 2015 study published in Studies in Second Language Acquisition observed that native English speakers placed in an immersive environment (a strategy-based game) for the purpose of acquiring an artificial L2 deliberately dissimilar to English tended to rely heavily on declarative memory initially, even when making syntactic judgements (i.e. completing grammaticality judgement tasks, abbreviated GJTs). This study also found that after a slightly longer period of exposure to the artificial L2, some learners would begin to engage their procedural memory for syntactic judgements as they do in English, whereas others would make use of extralinguistic neural circuits for this purpose. Another 2015 study, which sought to ensure an implicit acquisition environment by framing the experiment to participants as being about scrambled sentences rather than L2 acquisition, also observed declarative memory being used in the earliest stages of syntax acquisition, and found that testing participants' initial acquisition of an artificial language after a delay of at least one week resulted in greater use of procedural memory than testing immediately after the initial acquisition tasks.

Studies pushing back against a declarative/procedural split relating to lexicon and grammar also exist. For example, a 2010 study on L1 acquisition of Finnish verbal morphology, which asked monolingual children aged 4–6 to conjugate both real and constructed verbs in the past tense, concluded that the correlation between declarative memory (in the form of vocabulary development) and proficiency at conjugating past-tense verbs was too strong for the declarative/procedural model to be tenable with respect to this level of morphosyntax, given the continued existence of more appropriate models which posit a stronger relationship between lexicon and grammar. Nonetheless, this study explicitly does not rule out procedural memory still playing a larger role in sentence formation.

Storage of acronyms
As research on the mental lexicon expands to cover abbreviations, researchers have begun to ask whether it has the capacity to store acronyms as well as words. Using a lexical decision task with acronyms as priming words, researchers from Ghent University found in 2009 that acronyms could in fact prime other related information, suggesting that acronyms are stored alongside their related information in the mental lexicon just as a word would be. The same research also demonstrated that acronyms primed related information despite inaccurate capitalization (e.g., bbc had the same priming effects as BBC). A 2006 study from the University of Massachusetts Amherst concludes that, at least phonologically, acronyms are stored as sequences of the names of their constituent letters. From a semantic standpoint, there is no clear consensus: a 2008 article in the journal Lexis suggests that acronyms are their own semantic units, and that their ability to inflect supports this, while a 2010 article published by the American Speech–Language–Hearing Association contends that acronyms are semantically stored as the words from which they are formed.

Shrinking
The majority of current research focuses on the acquisition and functioning of the mental lexicon, with little focus on what happens to it over time. There are current debates surrounding the possibility of mental lexicon shrinkage; some suggest that as individuals age, they become less capable of storing and remembering words, indicating that their "mental dictionary" is in fact shrinking. It is still unclear how much of this potential shrinkage is due to age-related decline, or whether the reported shrinkage is an artifact of factors such as the outdated models of learning used in various methodologies.

One study showed that the kanji mental lexicon of a healthy Japanese woman (referred to as AA) shrank at an average rate of approximately 1% per year between the ages of 83 and 93. This was tested through a simple naming task over 612 kanji nouns, administered once when the subject was 83 (1998) and again when she was 93 (2008). The study also discussed related findings in the literature (as of 2010), identifying AA's rate of lexical decline as a midpoint in the reported range of 0.2–1.4% per year. These discussions suggested that age 70 is a critical age, during which decline rates remain stable, with no great or negative acceleration taking place. While no mental examination was conducted on AA while the naming experiments were performed, she was tested with a Japanese version of the mini–mental state examination in June 2009. Her scores indicated mild to moderate dementia; however, the scores relating to language indicated that her language functions were not impaired.

In contrast, another study argues that the recorded decline in cognitive performance and in the mental lexicon is instead an outcome of overestimating the evidence for cognitive decline in healthy aging. The authors found that, when properly evaluated, the empirical record often indicated the opposite, claiming that the models of learning currently assumed in aging research are incapable of capturing paired-associate learning on an empirical basis. They argue that, rather than cognition declining in healthy aging, the way we learn and process information changes as we age. When the effects of learning upon performance are controlled for, very little variance remains that can be interpreted as cognitive decline, and these changes in performance are better accounted for by learning models. With a more accurate model of learning, the accuracy of older adults' lexical processing appears to improve continuously over the lifespan, becoming more attuned to the information structure of the lexicon. The authors note that if investigators attend only to speed in lexical decision tasks, evidence of decline will inevitably be found; however, if measurements of accuracy are integrated into the analysis, a negative relationship emerges between recorded speed and lexical accuracy.