
History of Speech Production Research
Until the late 1960s, research on speech focused on comprehension. As greater volumes of speech error data amassed, researchers began to investigate the psychological processes responsible for the production of speech sounds and to contemplate the procedures through which people are able to speak fluently. Findings from speech error research were soon incorporated into models of speech production, and this evidence allowed linguists to establish several facts about the speech production process:

1. Speech is planned in advance.

2. The lexicon is organized both semantically and phonologically.

3. Morphologically complex words are assembled.

4. Affixes and functors behave differently from content words in slips of the tongue.

5. Speech errors reflect rule knowledge.

Aspects of Speech Production Models
Models of speech production must contain specific elements to be considered viable and accurate. These elements, listed below, are the components from which speech is composed, and must therefore be explained by any model attempting to describe the process of speech production. The accepted models of speech production discussed in more detail below all incorporate these stages either explicitly or implicitly, and the models that are now outdated or disputed have been criticized for overlooking one or more of them.

The attributes of accepted speech production models are:

a) a conceptual stage, where the speaker abstractly identifies what they wish to express.

b) a syntactic stage, where a frame is chosen into which words will be placed; this frame is usually a sentence structure.

c) a lexical stage, where a search for a word occurs based on meaning. Once the word is retrieved, information about it, including its phonology and morphology, becomes available to the speaker.

d) a phonological stage, where the abstract information is converted into a speech-like form.

e) a phonetic stage, where features and muscle instructions are prepared to be sent to the muscles of articulation.

In addition, models must allow for forward-planning mechanisms, a buffer, and a monitoring mechanism.

The following are a few of the influential models of speech production that attempt to account for or incorporate all of the previously mentioned stages, and that include information discovered through speech error studies and other disfluency data (such as tip-of-the-tongue research).

The Utterance Generator Model of Speech Production (1971)
The Utterance Generator Model was proposed by Fromkin (1971). It is composed of six stages and was an attempt to account for the earlier findings of speech error research. The stages of the model were based on possible changes in representations of a particular utterance. In the first stage, a person generates the meaning they wish to convey. In the second stage, the message is translated onto a syntactic structure; here, the message is given an outline. In the third stage, the message gains different stresses and intonations based on its meaning. The fourth stage is concerned with the selection of words from the lexicon. After the words have been selected in Stage 4, the message undergoes phonological specification: the fifth stage applies rules of pronunciation and produces the syllables that are to be output. The sixth and final stage of Fromkin's Utterance Generator Model is the coordination of the motor commands necessary for speech. Here, the phonetic features of the message are sent to the relevant muscles of the vocal tract so that the intended message can be produced. Despite the ingenuity of Fromkin's model, researchers have criticized this interpretation of speech production: although the Utterance Generator Model accounts for many nuances and data found in speech error studies, it was judged to leave room for improvement.

The Garrett Model (1975)
A more recent attempt than Fromkin's to explain speech production was published by Garrett in 1975. Garrett also built his model by compiling speech error data, and there are many overlaps between it and the Fromkin model on which it was based, but he added elements that filled some of the gaps pointed out by other researchers. The Garrett and Fromkin models both distinguish between three levels: a conceptual level, a sentence level, and a motor level. These three levels are common to contemporary understandings of speech production. Another innovative aspect of the Garrett Model is its distinction between the functional and positional levels.

Places of Articulation
The physical structure of the human nose, throat, and vocal folds allows for the production of many distinct sounds, and these areas can be further broken down into places of articulation. Different sounds are produced in different areas, with different muscles and breathing techniques, and our ability to use these areas to create the various sounds needed to communicate effectively is essential to speech production. Difficulties in manner of articulation can contribute to speech difficulties and impediments. It has been suggested that infants are capable of producing the entire spectrum of possible vowel and consonant sounds; however, as we grow accustomed to a particular language, we lose not only the ability to produce certain speech sounds but also the ability to distinguish between them. The International Phonetic Alphabet (IPA) provides a system for understanding and categorizing all possible speech sounds, including information about the way in which a sound is produced and where it is produced. This is extremely useful in the study of speech production because speech can be transcribed based on sounds rather than spelling, which may be misleading depending on the language being spoken.

Articulation
Articulation, often associated with speech production, is the term used to describe how people physically produce speech sounds. For people who speak fluently, articulation is automatic and allows roughly 15 speech sounds to be produced per second.