Language acquisition by deaf children

Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth, and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes. However, profoundly deaf children who receive cochlear implants and auditory habilitation early in life often achieve expressive and receptive language skills within the norms of their hearing peers; earlier implantation is strongly associated with better speech recognition ability. Early access to language, whether through signed language or technology, has been shown to prepare children who are deaf to achieve fluency in literacy skills.

History
Sign language has been cited in texts as early as the time of Plato and Socrates, within Plato's Cratylus. Socrates states, "Suppose that we had no voice or tongue, and wanted to communicate with one another, should we not, like the dumb, make signs with the hands and head and the rest of the body?" Western societies started adopting signed languages around the 17th century, using gestures, hand signs, mime, and fingerspelling to represent words. Before this time, people with hearing loss were categorized as "suffering" from the disease of deafness, not under any socio-cultural categories like they are today. Many believed that the deaf were inferior and should learn to speak and become "normal." Children who were born deaf prior to the 18th century were unable to acquire language in a typical way. They were labeled "dumb" or "mute" and often completely separated from any other deaf person. Since these children were cut off from communication and knowledge, they were often forced into isolation or into work at a young age, as this was the only contribution to society they were allowed to make. Society treated them as if they were intellectually incapable of anything.

The problem was a lack of education and resources. Charles-Michel de l'Épée was a French educator and priest in the 18th century who first questioned why young deaf children were essentially forced into exile, declaring that deaf people should have the right to education. He met two young deaf children and decided to research more on "deaf mutes", attempting to find signs that were being used in Paris so that he might teach these young children to communicate. His success resonated throughout France, and he later began teaching classes for this new language he had pulled together. He soon sought out government funding and founded the first school for the deaf, the National Institute for Deaf Children of Paris, in the 1750s and 1760s. After his death in 1789, the school continued to operate under his student, Abbé Roch-Ambroise Cucurron Sicard.

Sicard asked of the deaf, "does he not have everything he needs for having sensations, acquiring ideas, and combining them to do everything we do?" He explained in his writings that the deaf simply have no way of expressing and combining their thoughts. This gap in communication created a huge misunderstanding. With this new development of a system of sign, communication began to spread throughout the next century among deaf communities around the world. Sicard had the opportunity to invite Thomas H. Gallaudet to Paris to study the methods of the school. There, Gallaudet met with other faculty members, Laurent Clerc and Jean Massieu. Through their collaboration, Gallaudet was inspired to establish deaf education in the United States, co-founding the American School for the Deaf in 1817; Gallaudet University, named in his honor, was founded in 1864.

Sign languages have been studied by linguists in recent years, and it is clear that, even with all the variations in culture and style, they are rich in expression, grammar, and syntax. They follow Noam Chomsky's theory of Universal Grammar, which proposes that children brought up under normal conditions will be able to acquire language and distinguish between nouns, verbs, and other functional words. Before the development of schools for the deaf around the world, deaf children were not exposed to "normal conditions." The similarities between signed language acquisition and spoken language acquisition, including process, stages, structure, and brain activity, were explored in several studies by L.A. Petitto et al. For example, babbling is a stage of language acquisition also seen in signed language. Additionally, B.F. Skinner's behaviorist account holds that lexical units, which are used in both spoken and signed languages, affect the learning of all languages. Skinner theorized that when children made connections in word-meaning association, their language developed through this environment and positive reinforcement.

Background
Human languages can be spoken or signed. Typically developing infants can easily acquire any language in their environment if it is accessible to them, regardless of whether the language uses the vocal mode (spoken language) or the gestural mode (signed language).

Because hearing loss exists along a spectrum and because families address the development of their children in a variety of ways, children who are deaf or hard of hearing proceed along a variety of paths toward acquiring their first language. Deaf children who are exposed to an established sign language from birth learn that language in the same manner as a hearing child acquiring a spoken language. Acquisition of a signed language like American Sign Language (ASL) from birth is rare from a language acquisition perspective as only 5-10% of deaf children are born to deaf, signing parents in the United States. The remaining 90-95% of deaf children are born to hearing, non-signing parents/families who usually lack knowledge of signed languages. A small percentage of these families learn sign language to varying levels and their children will have access to a visual language at varying levels of fluency. Many others choose to pursue an oral mode of communication with their children with use of technology (such as hearing aids or cochlear implants) and speech therapy.

These circumstances cause unique features of sign language acquisition not usually observed in spoken language acquisition. Due to the visual/manual modality, these differences can assist in distinguishing among universal aspects of language acquisition and aspects that may be affected by early language experience.

Acquisition of signed language
Children need language from birth. Deaf infants should have access to sign language from birth or as young as possible, with research showing that the critical period of language acquisition applies to sign language too. Sign languages are fully accessible to deaf children as they are visual, rather than aural, languages. Sign languages are natural languages with the same linguistic status as spoken languages. Like other languages, sign languages are much harder to learn when the child is past the critical age of development for language acquisition. Studies have found that children who learned sign language from birth understand much more than children who start learning sign language at an older age. Also, studies indicate that the younger a child is when learning sign language, the better their language outcomes are. There is a wide range of ages at which deaf children are exposed to a sign language and begin their acquisition process. Approximately 5% of deaf children acquire a sign language from birth from their deaf parents. Deaf children with hearing parents often have a delayed process of sign language acquisition, beginning at the time when the parents start learning a sign language or when the child attends a signing program.

Sign languages have natural prosodic patterns, and infants are sensitive to these prosodic boundaries even if they have no specific experience with sign languages. Six-month-old hearing infants with no sign experience also preferentially attend to sign language stimuli over complex gesture, indicating that they are perceiving sign language as meaningful linguistic input. Since infants attend to spoken and signed language in a similar manner, several researchers have concluded that much of language acquisition is universal, not tied to the modality of the language, and that sign languages are acquired and processed very similarly to spoken languages, given adequate exposure. Deaf babies of deaf, signing parents acquire sign language from birth, and their language acquisition progresses through predictable developmental milestones. Babies acquiring a sign language produce manual babbling (akin to vocal babbling), produce their first sign, and produce their first two-word sentences on the same timeline as hearing children acquiring spoken language. At the same time, researchers point out that there are many unknowns in terms of how a visual language might be processed differently than a spoken language, particularly given the unusual path of language transmission for most deaf infants.

Language acquisition strategies for deaf children acquiring a sign language are different than those appropriate for hearing children, or for deaf children who use spoken language with hearing aids and/or cochlear implants. Because sign languages are visual languages, eye gaze and eye contact are critical for language acquisition and communication. Studies of deaf parents who sign with their deaf children have shed light on paralinguistic features that are important for sign language acquisition. Deaf parents are adept at ensuring that the infant is visually engaged prior to signing, and use specific modifications to their signing, referred to as child-directed sign, to gain and maintain their children's attention. To attract and direct a deaf child's attention, caregivers can break the child's line of gaze using hand and body movements, touch, and pointing to allow language input. Just as in child-directed speech (CDS), child-directed signing is characterized by slower production, exaggerated prosody, and repetition. Due to the unique demands of a visual language, child-directed signing also includes tactile strategies and relocation of language into the child's line of vision. Another important feature of language acquisition that affects eye gaze is joint attention. In spoken languages, joint attention involves the caregiver speaking about the object that the child is looking at. Deaf signing parents capitalize on moments of joint attention to provide language input. Deaf signing children learn to adjust their eye gaze to look back and forth between the object and the caregiver's signing. To reduce the child's need for divided attention between an object and the caregiver's signing, a caregiver can position themselves and objects within the child's visual field so that language and the object can be seen at the same time.

Sign languages appear naturally among deaf groups even if no formal sign language has been taught. Natural sign languages are much like spoken languages such as English and Spanish in that they are true languages, and children learn them in similar ways. They also follow the same social expectations of language systems. Some studies indicate that if a deaf child learns sign language, they will be less likely to learn spoken language because they will lose motivation. However, Humphries et al. found that there is no evidence for this. One of Humphries' arguments is that many hearing children learn multiple languages and do not lose the motivation to do so. Other studies have shown that sign language actually aids spoken language development. Understanding and using sign language provides the platform that is needed to develop other language skills. It can also provide the foundation for learning the meaning of written words. There are many different sign languages used around the world. Some of the main sign languages include American Sign Language, British Sign Language and French Sign Language.

American Sign Language (ASL)
ASL is mostly used in North America, though derivative forms are used in various places around the world, including most of Canada. ASL is not simply a translation of English words; this is demonstrated by the fact that words that have dual meanings in English have different signs for each individual meaning in ASL.

British Sign Language (BSL)
BSL is mainly used in Great Britain, with derivatives being used in Australia and New Zealand. British Sign Language has its own syntax and grammar rules and is distinct from spoken English. Although hearing people in the United States and the United Kingdom share a language, ASL and BSL are different, meaning that deaf children in English-speaking countries do not have a shared language.

French Sign Language (LSF)
LSF is used in France as well as many other countries in Europe. The influence of French Sign Language is apparent in other signed languages, including American Sign Language.

Nicaraguan Sign Language (NSL or ISN)
NSL, or Idioma de Señas de Nicaragua (ISN), has been studied by linguists in recent years as an emergent sign language. After the Nicaraguan Government launched their literacy campaign in the 1980s, they were able to support more students in special education, which included deaf students who previously received little-to-no assistance in learning. New schools were built in the capital city of Managua, and hundreds of deaf students met for the first time. This is the environment where NSL was created.

Judy Shepard-Kegl, an American linguist, had the opportunity to further explore NSL and how it affected what we know about language acquisition. She was invited to observe students in Nicaraguan schools and interact with them because the educators didn't know what their gestures meant. The biggest discovery of her observations was a linguistic phenomenon called "reverse fluency".

Reverse fluency
Reverse fluency was discovered by studying Nicaraguan Sign Language and the history of its acquisition. In Shepard-Kegl's observations at a Nicaraguan elementary school, she noted that "the younger the kids, the more fluent they were" in the developing language in this city. This phenomenon was recognized as "reverse fluency": the younger children had higher linguistic ability than the older children. These students had developed a full language together. They were still in the "critical period" of language acquisition, around ages 4–6, in which they were extremely receptive to new language. This reinforces the theory that the capacity for language acquisition is innate in humans.

Acquisition of spoken language
Because 90-95% of deaf children are born to hearing parents, many deaf children are encouraged to acquire a spoken language. Deaf children acquiring spoken language use assistive technology such as hearing aids or cochlear implants and work closely with speech-language pathologists. Due to hearing loss, the spoken language acquisition process is delayed until such technologies and therapies are used. The outcome of spoken language acquisition is highly variable in deaf children with hearing aids and cochlear implants. One study of infants and toddlers with cochlear implants showed that their spoken language delays were still persistent three years post implantation. Another study showed that children with cochlear implants demonstrated persistent delays into elementary school, with almost 75% of children's spoken language skills falling below the average for hearing norms (see Figure 1). For children using hearing aids, spoken language outcomes are correlated with the amount of residual hearing the child has. For children with cochlear implants, spoken language outcomes are correlated with the amount of residual hearing the child had before implantation, age of implantation, and other factors that have yet to be identified.

For newborns, the earliest linguistic tasks are perceptual. Babies need to determine what basic linguistic elements are used in their native language to create words (their phonetic inventory). They also need to determine how to segment the continuous stream of language input into phrases, and eventually, words. From birth, they have an attraction to patterned linguistic input, which is evident whether the input is spoken or signed. They use their sensitive perceptual skills to acquire information about the structure of their native language, particularly prosodic and phonological features.

For deaf children who receive cochlear implants early in life, are born to parents who use spoken language in the home, and pursue spoken language proficiency, research demonstrates that L1 language and reading skills are consistently higher for those children who have not been exposed to a signed language as an L1 or L2 and have instead focused exclusively on listening and spoken language development. In fact, "Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years." Children who focused primarily on spoken language also demonstrated greater social well-being when they did not use manual communication as a supplement.

For a detailed description of spoken language acquisition in hearing children see: Language acquisition.

Cochlear implants
A cochlear implant is placed surgically inside the cochlea, the part of the inner ear that converts sound to neural signals. There is much debate regarding the linguistic conditions under which deaf children acquire spoken language via cochlear implantation. A single study, not yet replicated, concluded that long-term use of sign language impedes the development of spoken language and reading ability in deaf and hard of hearing children, that using sign language is not advantageous, and that it can be detrimental to language development. However, other studies have found that sign language exposure actually facilitates the development of spoken language: deaf children of deaf parents, who had exposure to sign language from birth, outperformed their deaf peers born to hearing parents following cochlear implantation.

New parents with a deaf infant are faced with a range of options for how to interact with their newborn and may try several methods, including different amounts of sign language, oral/auditory language training, and communicative codes invented to facilitate acquisition of spoken language. In addition, parents may decide to use cochlear implants or hearing aids with their infants. According to one US-based study from 2008, approximately 55% of eligible deaf infants received cochlear implants. A study in Switzerland found that 80% of deaf infants were given cochlear implants as of 2006, and the numbers have been steadily increasing. While cochlear implants provide auditory stimulation, not all children who receive them fully acquire spoken language. Many children with implants continue to struggle in a spoken-language-only environment, even when support is given. Children who received cochlear implants before twelve months of age were significantly more likely to perform at age-level standards for spoken language than children who received implants later. However, research shows that the information given to parents is often incorrect or incomplete; this can lead them to make decisions that might not be the best for their child. Parents need time to make informed decisions. As most deaf children are born to hearing parents, they have to make decisions on topics they have never considered.

Research shows that deaf children with cochlear implants who listen and speak to communicate, but do not use sign language, have better communication outcomes and social well-being than deaf children who use sign language. However, sign language is often only recommended as a last resort, with parents being told not to use sign language with their child. Though implants offer many benefits for children, including potential gains in hearing and academic achievement, they do not cure deafness. A child who is born deaf will always be deaf and will likely still face many challenges that a hearing child will not. There is also research showing that early deprivation of language, including sign language, before an implant is fitted can affect the ability to learn language. There is no definitive research on whether cochlear implants with spoken language, or sign language, produces the best outcomes. Ultimately, the decision comes down to the parents to make the choices that are best for their child.

Bilingual language acquisition
Some deaf children acquire both a sign language and a spoken language. This is called bimodal bilingual language acquisition. Bimodal bilingualism is common in hearing children of deaf adults (CODAs). One group of deaf children who experience bimodal bilingual language acquisition are deaf children with cochlear implants who have deaf parents. These children acquire sign language from birth and spoken language after implantation. Other deaf children who experience bimodal bilingual language acquisition are deaf children of hearing parents who have decided to pursue both spoken language and sign language. Some parents make the decision to pursue sign language while pursuing spoken language so as not to delay exposure to a fully accessible language, thereby starting the language acquisition process as early as possible. While some caution that sign language might interfere with spoken language, other research has shown that early sign language acquisition does not hinder and may in fact support spoken language acquisition.

Alternative communication approaches
For a review of educational methods including signing and spoken language approaches, see: Deaf education

Manually coded English
Manually coded English is any one of a number of different representations of the English language that uses manual signs to encode English words visually. Although MCE uses signs, it is not a language like ASL; it is an encoding of English that uses hand gestures to make English visible in a visual mode. Most types of MCE use signs borrowed or adapted from American Sign Language, but use English sentence order and grammatical construction. However, it is not possible to fully encode a spoken language in the manual modality.

Numerous systems of manually encoded English have been proposed and used with greater or lesser success. Methods such as Signed English, Signing Exact English, Linguistics of Visual English, and others use signs borrowed from ASL along with various grammatical marker signs, to indicate whole words, or meaning-bearing morphemes like -ed or -ing.

Though there is limited evidence for its efficacy, some people have suggested using MCE or other visual representations of English as a way to support English language acquisition for deaf children. Because MCE systems are encodings of English which follow English word order and sentence structure, it is possible to sign MCE and speak English at the same time. This is a technique that is used in order to teach deaf children the structure of the English language not only through the sound and lip-reading patterns of spoken English, but also through manual patterns of signed English. Because MCE uses English word order, it is hypothesized that it is easier for hearing people to learn MCE than ASL. Since MCE is not a natural language, children have difficulty learning it.

Cued speech
Cued speech is a hybrid, oral/manual system of communication used by some deaf or hard-of-hearing people. It is a technique that uses handshapes near the mouth ("cues") to represent phonemes that can be challenging for some deaf or hard-of-hearing people to distinguish from one another through speechreading ("lipreading") alone. It is designed to help receptive communicators to observe and fully understand the speaker.

Cued speech is not a signed language, and it does not have any signs in common with established signed languages such as ASL or BSL. It is a kind of augmented speechreading, making speechreading much more accurate and accessible to deaf people. The handshapes by themselves have no meaning; they only have meaning as a cue in combination with a mouth shape, so that the mouth shape 'two lips together' plus one handshape might mean an 'M' sound, the same shape with a different cue might represent a 'B' sound, and with a third cue might represent a 'P' sound.

Some research shows a link between lack of phonological awareness and reading disorders, and indicates that teaching cued speech may be an aid to phonological awareness and literacy.

Fingerspelling
Another manual encoding system, used by deaf people for more than two centuries, is fingerspelling. Fingerspelling encodes letters rather than words or morphemes; it is an encoding of the alphabet, used to spell out words one letter at a time with 26 different handshapes. In the United States and many other countries, the letters are indicated on one hand, a system that goes back to the deaf school of the Abbé de l'Épée in Paris. Since fingerspelling is connected to the alphabet and not to entire words, it can be used to spell out words in any language that uses the same alphabet. It is not tied to any one language in particular, and to that extent, it is analogous to other letter-encodings, such as Morse code or semaphore. The Rochester Method relies heavily on fingerspelling, but it is slow and has mostly fallen out of favor.

Hybrid methods
Hybrid methods use a mixture of aural/oral methods as well as some visible indicators such as hand shapes in order to communicate in the standard spoken language by making parts of it visible to those with hearing loss.

One example of this is sign supported English (SSE) which is used in the United Kingdom. It is a type of sign language that is used mainly with children who are hard of hearing in schools. It is used alongside English and the signs are used in the same order as spoken English.

Another hybrid method is called Total Communication. This method of communication allows and encourages the user to use all methods of communication. These can include spoken language, signed language and lip reading. Like sign-supported English, signs are used in spoken English order. The use of hearing aids or implants is highly recommended for this form of communication. It is only recently that ASL has become an accepted form of communication to be used in the total communication method.

Language deprivation
Language deprivation may occur when a child is not sufficiently exposed to language during the critical period of language acquisition. The majority of children with some form of hearing loss cannot easily and naturally acquire spoken language without the use of hearing aids or cochlear implants. This puts deaf children at risk for serious developmental consequences such as neurological changes, gaps in socio-emotional development, delays in academic achievement, limited employment outcomes, and poor mental and physical health.

Ethics and language acquisition
Cochlear implants have been the subject of a heated debate between those who believe deaf children should receive the implants and those who do not. Certain deaf activists believe this is an important ethical problem, claiming that sign language is their first or native language, just as any other spoken language is for a hearing person. They do not see deafness as a deficiency in any way, but rather a normal human trait amongst a variety of different ones. One issue on the ethical perspective of implantation is the possible side effects that may present after surgery. However, complications from cochlear implant surgery are rare, with some centers showing less than a three percent failure rate.

Cognitive development
Early exposure and access to language facilitates healthy language acquisition, regardless of whether that language is native or non-native. In turn, strong language skills support the development of the child's cognitive skills, including executive functioning. Studies have shown that executive functioning skills are extremely important, as these are the skills that guide learning and behavior. Executive functioning skills are responsible for self-regulation, inhibition, emotional control, working memory, and planning and organization, which contribute to overall social, emotional, and academic development for children. Early access to a language from birth, whether signed or spoken, supports the development of these cognitive skills and abilities in deaf and hard of hearing children.

However, late exposure to language and delayed language acquisition can inhibit or significantly delay the cognitive development of deaf and hard of hearing children and impair these skills. Late exposure to language can result in language deprivation (see Language deprivation in deaf and hard of hearing children): a lack of exposure to natural human language, whether spoken or signed, during the critical language period. According to the World Health Organization, approximately 90% of deaf children are born to hearing parents, who more often than not, and through no fault of their own, are not prepared to provide an accessible language to their children; therefore, some degree of language deprivation occurs in these children. Language deprivation has been found to impair deaf children's cognitive development, specifically their executive functioning and working memory skills. It is not deafness that causes these deficits, but delayed language acquisition that influences the cognitive development and abilities of deaf people.

Social-emotional development
Having an acquired language means an individual has had full access to at least one spoken or signed language. Typically, if a person has had this full access to language and has been able to acquire it, the foundation for their social-emotional development is present. Being able to communicate is critical for those still developing their social skills. There is also evidence to suggest that language acquisition can play a critical role in developing theory of mind. For children who have not had this access or have not yet fully acquired a language, social development can be stunted or hindered, which in turn can affect their emotional development as well.

The lack of socialization can significantly impact a child's emotional well-being. A child's first experience with social communication typically begins at home, but deaf and hard of hearing children who are born to hearing parents tend to struggle with this interaction because they are a "minority in their own family". Since parents who have a deaf child typically do not know a signed language, the logistical problem becomes how to give that child exposure to language that the child can access. Without a method of communication between the child and parents, facilitating the child's social skill development at home is more difficult. By the time these children enter school, they can be behind in this area of development. All of this can lead to struggles with age-appropriate emotional development; it is hard for a child who was not given a language early in life to express their emotions appropriately. The problem is not caused by deafness but by the lack of communication that occurs when there is a lack of language access from birth. There is evidence to suggest that language acquisition is a predictor of a child's ability to develop theory of mind, which can be an indicator of social and cognitive development. Without language acquisition, deaf children can fall behind in theory of mind and the skills that coincide with it, which can lead to further social and emotional delays.

Literacy development
Second language learning is highly affected by early first language acquisition during the critical period. Research supports a correlation between proficiency in a natural signed language and proficiency in literacy skills. Deaf students who are more proficient in a signed language and who use it as their primary language tend to have higher reading and writing scores. Development of a second language also improves proficiency in the student's first language. Likewise, students who receive early access to a spoken language through technology such as cochlear implants have been found to develop more fluent literacy skills than deaf students without cochlear implants. These studies do not clearly state whether one approach (use of sign language versus cochlear implantation) has a higher success rate; they primarily focus on early language access through either a signed language or a technologically assisted auditory route.

Whereas a first language is acquired through socialization and access to the language modality used in the home (spoken, visual, or tactile language), literacy skills must be taught. Most models describing reading skills are based on studies of children with typical hearing. One such widely applied model, the simple view of reading, identifies decoding (matching text to speech sounds) and fluency in a first language (its vocabulary and syntax) as foundational for fluent reading. However, phonetic decoding has been shown not to be required for skilled reading in deaf adults. Because individuals experience deafness along a spectrum of hearing abilities, their ability to hear the phonetic components of spoken language will likewise vary. Similarly, deaf children's language skills vary depending on how and when they acquired a first language (early vs. late, visual vs. spoken, from fluent users or new users of the language). This mix of access to phonetic and linguistic information shapes the path a deaf child takes to literacy.

Studies have compared eye and brain activity in equally skilled readers who are deaf and who have typical hearing. These studies show that the brain and physical processes involved in reading are nearly identical for both groups. Reading begins with the eyes scanning text, during which readers who are deaf take in information from about 18 letters following the word the eyes are fixated on, versus about 14 to 15 letters for hearing readers. The textual information received by the eyes then travels by electrical impulses to the occipital lobe, where the brain recognizes text in the visual word form area. This information is then sent to the parietal lobe, which helps in reading words in the correct order horizontally and vertically, a region more heavily relied upon by skilled readers who are deaf than by those with typical hearing. Signals then pass to the language center of the temporal lobe. Like fluent readers of the logographic writing system of Chinese, skilled readers who are deaf make use of an area in the temporal lobe in front of the visual word form area when making decisions about the meanings of words, which may reflect a similarity in the visual, as opposed to phonological, processing of text. Whereas typically hearing readers rely on Broca's area and the left frontal cortex to process phonetic information during reading, skilled readers who are deaf rely almost solely on the inferior frontal gyrus to decode meaning rather than on the sounds that words would make if read aloud. Researchers conclude that these neurological and physiological differences observed in skilled readers who are deaf are likely the result of increased dependence on visual information and are not differences that detract from reading abilities.

Techniques for teaching reading skills to deaf students
Several techniques, such as sandwiching and chaining, are used to help bridge the gap between signed and spoken language, sometimes called the "translation process". Some sign languages, including ASL, make use of fingerspelling in everyday signing. Children who have acquired this type of sign language associate fingerspelling of words from the local spoken language with reading and writing in that same language. Methods such as sandwiching and chaining have been shown to assist students in mapping signs and fingerspelled words onto written words. Sandwiching consists of alternating between fingerspelling a word and signing it. Chaining consists of fingerspelling a word, pointing to its written form in the spoken language, and using pictorial support. Chaining thus builds a connection between the written spelling of a word and its fingerspelled and signed forms.