User:Terra86/sandbox

Differences in processing of logographic and phonological languages
Because much research on language processing has centered on English and other alphabetic languages, many theories of language processing have stressed the role of phonology (see for instance WEAVER++) in producing speech. Contrasting logographic languages, in which a single character represents a unit of meaning as well as a pronunciation, with alphabetic languages has yielded insights into how different languages rely on different processing mechanisms. Studies on the processing of logographic languages have, among other things, looked at neurobiological differences in processing, with one area of particular interest being hemispheric lateralization. Since logographic languages are more closely associated with images than alphabetic languages, several researchers have hypothesized that right-hemisphere activation should be more prominent in logographic languages. Although some studies have yielded results consistent with this hypothesis, there are too many conflicting results to draw firm conclusions about the role of hemispheric lateralization in logographic versus alphabetic languages.

Another topic that has received some attention is differences in the processing of homophones. Verdonschot et al. examined differences in the time it took to read a homophone aloud when a picture that was either related or unrelated to a homophonic character was presented before the character. Both Japanese and Chinese homophones were examined. Whereas word production in alphabetic languages (such as English) has shown relatively robust immunity to the effect of context stimuli, Verdonschot et al. found that Japanese homophones seem particularly sensitive to these types of effects. Specifically, reaction times were shorter when participants were presented with a phonologically related picture before being asked to read a target character aloud. An example of a phonologically related stimulus from the study would be when participants were presented with a picture of an elephant, which is pronounced 'zou' in Japanese, before being presented with the character 造, which is also read 'zou'. No effect of phonologically related context pictures was found on reaction times for reading Chinese words. A comparison of the logographic languages Japanese and Chinese is interesting because, whereas more than 60% of Japanese characters are homographic heterophones (characters that can be read in two or more different ways), most Chinese characters have only one reading. Because both languages are logographic, the difference in latency between reading Japanese and Chinese aloud under context effects cannot be ascribed to the logographic nature of the scripts. Instead, the authors hypothesize that the difference in latencies is due to additional processing costs in Japanese, where the reader cannot rely solely on a direct orthography-to-phonology route; information at a lexical-syntactical level must also be accessed in order to choose the correct pronunciation. This hypothesis is corroborated by studies finding that Japanese Alzheimer's patients whose comprehension of characters had deteriorated could still read the words aloud with no particular difficulty.

Studies contrasting the processing of English and Chinese homophones in lexical decision tasks have found an advantage for homophone processing in Chinese, and a disadvantage for processing homophones in English (see Hino for a brief review of the literature). The processing disadvantage in English is usually explained in terms of the relative scarcity of homophones in the English language. When a homophonic word is encountered, the phonological representation of that word is activated first. However, since this is an ambiguous stimulus, a match at the orthographic/lexical ("mental dictionary") level is necessary before the stimulus can be disambiguated and the correct pronunciation chosen. In contrast, in a language such as Chinese, where many characters with the same reading exist, it is hypothesized that the reader will be more familiar with homophones, and that this familiarity will aid the processing of the character and the subsequent selection of the correct pronunciation, leading to shorter reaction times when attending to the stimulus. In an attempt to better understand homophony effects on processing, Hino et al. conducted a series of experiments using Japanese as their target language. While controlling for familiarity, they found a processing advantage for homophones over non-homophones in Japanese, similar to what had previously been found in Chinese. The researchers also tested whether orthographically similar homophones would yield a processing disadvantage, as has been the case with English homophones (Ferrand & Grainger 2003; Haigh & Jared 2004), but found no evidence for this.
It is evident that homophones are processed differently in logographic and alphabetic languages, but whether the advantage for homophone processing in the logographic languages Japanese and Chinese is due to the logographic nature of the scripts, or merely reflects an advantage for languages with more homophones regardless of script type, remains to be seen.