
Phonology is the branch of linguistics that studies the systematic organization of the units of language that do not have any meaning in and of themselves. For spoken languages, such units are phones, tones, features, or larger units such as syllables and other prosodic domains. For sign languages, phonology investigates the constituent parts of signs. These are specifications for movement, location, and handshape.

What phonologists study and the position phonology occupies within linguistics

Etymology and definition
The word phonology comes from Ancient Greek φωνή, phōnḗ, 'voice, sound', and the suffix -logy (which is from Greek λόγος, lógos, 'word, speech, subject of discussion'). It refers to one of the fundamental systems that a language is considered to comprise, like its syntax, its morphology and its lexicon. The term can also refer to the sound or sign system of a particular language variety, e.g. "the phonology of English".

Phonology is usually distinguished from phonetics, which concerns the physical production, acoustic transmission and perception of language. In general, the object of study in phonetics is something that can be measured such as tongue posture, frequencies within an acoustic signal, or the auditory/visual processing time of a language stimulus. Phonology, on the other hand, typically describes the categorical properties that linguistic units such as speech sounds have. For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, but establishing the phonological system of a language is necessarily an application of theoretical principles to analysis of phonetic evidence in some theories. The distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, which has resulted in specific areas like articulatory phonology or laboratory phonology.

Definitions of the field of phonology vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Ferdinand de Saussure's distinction between langue and parole). More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, and in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items." According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying that use. In these definitions of phonology, the apparent exclusion of sign languages and suprasegmental units (e.g. tone) reflects the relatively small amount of attention that has been given to data other than speech sounds. Despite this attentional bias, all modern linguists consider sign languages and tone to be within the domain of phonological study.

History
The earliest evidence for a systematic study of the sounds in a language appears in the 4th century BCE Ashtadhyayi, a Sanskrit grammar composed by Pāṇini. In particular, the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what may be considered a list of the phonemes of Sanskrit, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics. Another notable scholar in pre-modern times was Ibn Jinni of Mosul. He was a pioneer in phonology and wrote prolifically in the 10th century on Arabic morphology and phonology in works such as Kitāb Al-Munṣif and Kitāb Al-Muḥtasab.

The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay, who (together with his students Mikołaj Kruszewski and Lev Shcherba in the Kazan School) shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the 24 May meeting of the Société de Linguistique de Paris, Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut. Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology) and may have had an influence on the work of Saussure, according to E. F. K. Koerner. An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology), published posthumously in 1939, is among the most important works in the field from that period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, but the concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, one of the most prominent linguists of the 20th century. Louis Hjelmslev's glossematics also contributed with a focus on linguistic structure independent of phonetic realization or semantics.

In 1968, Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In that view, phonology is part of Universal Grammar and essentially an ordered list of phonological rules that transform an underlying form into a surface form. The underlying representations are sequences of segments that have an internal structure consisting of distinctive features. The features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. Each feature in SPE encodes an aspect of articulation or perception, such as vowel height or nasality, and its presence or absence is expressed with the binary values + or -, e.g. [+nasal]. Like all major principles of phonology, features in SPE were assumed to be universal and innate. The ordered rules, then, made reference to the feature values. For example, a voicing assimilation rule targeting voiceless sounds would transform underlying [-voice] into surface [+voice]. In this way, the grammar "generated" the appropriate surface form by means of the rule set. Generative phonology thus led phonologists to focus on capturing phonological processes and derivation with features and rules, and to order these rules. Furthermore, the generativists folded morphophonology into phonology by allowing rules to be applicable only in certain contexts, which simplified the overall architecture but also raised new questions about the abstractness of underlying forms. As such, the generative approach downplayed the importance of the syllable as a unit and emphasised the segment.
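An SPE-style derivation of this kind can be sketched as a toy illustration (the feature bundles and the rule below are hypothetical and simplified, not taken from SPE itself):

```python
# Toy sketch of an SPE-style rewrite rule: segments are bundles of binary
# features, and a hypothetical voicing-assimilation rule turns a [-voice]
# segment into [+voice] when the following segment is [+voice].

def assimilate_voicing(segments):
    """Apply the rule left-to-right to a copy of the underlying form."""
    output = [dict(seg) for seg in segments]
    for i in range(len(output) - 1):
        if output[i]["voice"] == "-" and output[i + 1]["voice"] == "+":
            output[i]["voice"] = "+"
    return output

# Underlying form: a [-voice] segment followed by a [+voice] segment
underlying = [{"nasal": "-", "voice": "-"}, {"nasal": "-", "voice": "+"}]
surface = assimilate_voicing(underlying)
print(surface[0]["voice"])  # the first segment surfaces as [+voice]
```

The underlying form is left untouched; the rule "generates" a distinct surface form, mirroring the input/output asymmetry of a generative derivation.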

In 1976, John Goldsmith introduced autosegmental phonology. Based on the phonological behaviour of tones, Goldsmith proposed that tones are not bound to the internal structure of segments, but rather that they are autonomous from segments (hence the theory's name) and exist on a separate tier: the tonal tier. In the representation, tones are then connected to segments via association lines. In this way, it is easy to represent one-to-many and many-to-one associations between tones on the tonal tier and segments on the segmental tier. This is useful for representing, among other things, tonal spreading and (derived) contour tones. The concept of autonomous tiers was subsequently also applied to features. Until then, sequences of segments had been conceptualised as existing on a single linear string, but with autosegmental phonology, features could be represented on multiple tiers that are separate from the positional slots with which they can associate. A significant consequence is that certain processes could now straightforwardly be represented as local. An example is vowel harmony: from a strictly linear perspective, the vowels in a CVCV sequence are not adjacent, but they are adjacent in a representation where the vowel features exist on a tier that is separate from the consonant features. Eventually, autosegmental phonology led to feature geometries, in which features are organised in geometrical structures with major nodes such as "Place of Articulation" and "Laryngeal" that each contain several features in their branches. Grouping features together into nodes crucially allows nodes to spread in their entirety rather than feature-by-feature. Additionally, nodes can dominate each other, allowing for complex geometries that make specific predictions about which phonological processes are possible.
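The tier-and-association-line representation can be illustrated with a small sketch (the tones, vowels, and associations below are toy data): association lines are simply pairs of indices across two tiers, so one tone can link to several vowels (spreading) and one vowel to several tones (a contour tone).

```python
# Toy autosegmental representation: tones and vowels live on separate
# tiers, linked by association lines (index pairs across the tiers).

tonal_tier = ["H", "L"]
segmental_tier = ["a", "e", "o"]

# (tone index, vowel index): here H has spread over the first two vowels
associations = [(0, 0), (0, 1), (1, 2)]

# Read off which tone(s) each vowel surfaces with
surface_tones = {v: [] for v in range(len(segmental_tier))}
for t_i, v_i in associations:
    surface_tones[v_i].append(tonal_tier[t_i])

print(surface_tones)  # {0: ['H'], 1: ['H'], 2: ['L']}
```

Adding a second line from one tone (as with H above) models spreading, while two lines converging on one vowel would model a derived contour tone, without changing anything segment-internally.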

During the 1980s, two frameworks of phonology developed independently that have much in common: Dependency Phonology and Government Phonology. The impetus behind Dependency Phonology (DP), mostly developed by John Anderson, was the fundamental idea that a relation between linguistic units is asymmetrical with a dominating component (the head) and a dominated component (the dependent). Government Phonology (GP), of which prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris, developed out of research into the internal structure of segments in the autosegmental era, and took inspiration from Government and Binding syntax. Both DP and GP attempt to bridge the gap between syntax and phonology. For DP, this stems from Anderson's Structural Analogy Assumption, which proposes that it is desirable as a null hypothesis that the same mechanics operate in different parts of the grammar. As such, DP uses notions such as complements and adjuncts. GP, on the other hand, being directly based on syntax, naturally employs several mechanisms that are analogous to syntactic operations, such as Proper Government and the Minimality Condition. A second similarity is that both frameworks reject syllabic constituents, DP through Head-Dependency Relations and rejecting contentless nodes, and GP through assuming lateral relations between segments in a string. The result is that both frameworks reject e.g. an Onset node that can branch, so that analyses of syllable structure are alike. A third important similarity between DP and GP is the type of phonological primes that they use as subsegmental building blocks, pioneered by DP. These are most commonly known as "elements" (though they are called "components" in DP). 
The use of elements to represent subsegmental structure, as opposed to the more widely used distinctive features, is technically also possible outside of DP and GP and, in fact, Element Theory has developed into a self-contained theory. Elements are different from features in a number of ways. First, they are monovalent/unary, i.e. either present or fully absent in a representation, such that phonological processes cannot make reference to the lack of an element. Second, elements have multiple phonetic realisations that depend on their headedness status (which is similar but not identical between DP and GP). Third, formal definitions of elements are based on acoustics rather than on articulation/perception. Fourth, consonants and vowels are always represented by the same set of elements. Despite the common ground in terms of structural analogies between syntax and phonology, syllable structure, and subsegmental structure, there is one major difference that sets DP and GP apart: the importance of phonetic grounding. DP is substance-based, meaning that any phonological entity must bear some relation to its phonetic realisation and cannot be "empty". In stark opposition to this, GP assumes that only phonological behaviour can be used as evidence to support hypotheses about phonological structure. Phonology and phonetics are seen as separate modules, which entails that phonological structure needs to be transduced or "translated" into phonetic structure. As such, the relationship between phonology and phonetics can, in principle, be as arbitrary and language-specific as the relationship between a phonological form and its lexical meaning. This means that phonological units such as elements have a very liberal phonetic implementation.

In 1987, a small conference was held at the Ohio State University that would result in the launch of a new approach to doing phonological research: Laboratory Phonology. Laboratory Phonology is essentially the enterprise of addressing phonological questions through experimental work. Throughout most of the 20th century, phonetics and phonology diverged as branches of linguistics. Phonetic mechanisms were assumed to be universal and gradient, whereas phonology was assumed to be language-specific (hence acquired) and categorical. Furthermore, phonology within generative linguistics was also predominantly substance-free. However, increasingly advanced technology facilitated phonetic research, and linguists came to understand that phonetics is also language-specific and that there is at least some gradience within phonology (e.g. incomplete neutralisation). Laboratory Phonology was thus intended to bring phonetics and phonology under one roof again. The fundamental questions it poses are how cognitive representations are mapped onto physical motoric functions, what the division of labour is between phonetics and phonology, and which methods are appropriate to study them. An example of research within Laboratory Phonology would be using electroencephalography in a perception experiment to make inferences about the featural specification or lack thereof in segments.

Concurrent with Laboratory Phonology as an approach to phonology came the inception of Articulatory Phonology, which developed in the same historical context. But whereas Laboratory Phonology is a theory-neutral approach, Articulatory Phonology, developed by Catherine Browman and Louis Goldstein, was a novel theory about the internal structure of segments. Instead of segments having primes (features or elements), there are only articulatory "gestures" such as "closed velum" or "protruded lips". Gestures are coordinated in a certain way so that they overlap or are crucially sequential, thereby creating the illusion of segments, which have no status in Articulatory Phonology. In visualisations called "gestural scores", the phonological specification of gestures is shown throughout time as bars along a horizontal axis. Importantly, a gestural specification denotes an abstract articulatory goal, and is not itself a motoric event, nor does the goal need to be attained at all times. A gestural representation of speech leads to unique analyses in which e.g. assimilation can be directly modelled as gestural overlap, and can straightforwardly explain certain alternations that make no sense from the perspective of segment-internal phonological primes. This reduces the complexity of the (morpho)phonology. Furthermore, given the importance accorded to gestures, Articulatory Phonology has focussed extensively on the temporal organisation and coordination between the movements of the articulators. Results in this line of research have interesting implications for syllable structure in particular.
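A gestural score can be sketched as a set of timed intervals (the gesture names and timings below are made up for illustration): the degree of overlap between two gestures is read directly off their intervals, which is how gestural overlap can model e.g. assimilation.

```python
# Toy gestural score: each gesture is an abstract articulatory goal active
# over an interval of normalised time, drawn as a bar along the time axis.

gestures = [
    ("lip closure",      0.0, 0.4),
    ("velum lowered",    0.2, 0.6),  # overlaps the lip closure: a nasalised stop
    ("tongue body wide", 0.3, 1.0),
]

def overlap(g1, g2):
    """Duration for which two gestures are simultaneously active."""
    _, start1, end1 = g1
    _, start2, end2 = g2
    return max(0.0, min(end1, end2) - max(start1, start2))

print(round(overlap(gestures[0], gestures[1]), 2))  # 0.2
```

Shifting the start and end points changes how much the gestures overlap, so timing alone, without any segment-internal primes, determines whether the percept is e.g. a plain or a nasalised closure.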

In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory (OT), an architecture for the computation of phonology that is couched within a more general theory of the relationship between brain and mind. In stark contrast to traditional generative phonology and its ordered rules, which was the dominant view of phonology up until the 1990s, OT proposed that phonology changes an underlying form (the "input") by selecting an optimal pronunciation for it (the "output"). Which pronunciation is optimal is determined by evaluating how badly each of the theoretically infinite possible pronunciations (the "candidates") violates a set of constraints. Crucially, each constraint is in principle violable, but any given constraint is more important than the combination of all lower-ranked constraints, so that the optimal candidate is the one that best satisfies the highest-ranked constraint on which the candidates differ, regardless of its violations of lower-ranked constraints. Classic OT executes this evaluation process in parallel, meaning that the computation cannot evaluate intermediate steps in the derivation. This is diametrically opposed to the step-by-step derivations of generative phonology, where the output of one rule can be the input of a rule that is ordered after it. However, there exist versions of OT that involve serial processing, i.e. multiple consecutive evaluations. Constraints were originally asserted to be universal, so that the only difference between languages was the language-specific ranking of the constraints. This was compatible with universal grammar. The OT approach was soon extended to morphology by John McCarthy and Alan Prince and has become a dominant trend in phonology.
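The evaluation procedure can be sketched with a toy example (the constraints below are simplified stand-ins modelling a final-devoicing pattern, not an analysis from the OT literature): comparing violation profiles lexicographically encodes strict domination, so one violation of a higher-ranked constraint outweighs any number of lower-ranked violations.

```python
# Toy OT evaluation: the winner is the candidate whose tuple of violation
# counts, ordered by constraint ranking, is lexicographically smallest.

def evaluate(candidates, ranked_constraints):
    def profile(candidate):
        return tuple(c(candidate) for c in ranked_constraints)
    return min(candidates, key=profile)

# Hypothetical constraints for a final-devoicing pattern, input /bed/
def no_final_voiced(cand):             # markedness: no voiced final obstruent
    return 1 if cand.endswith(("b", "d", "g")) else 0

def faithful(cand, underlying="bed"):  # faithfulness: penalise changed segments
    return sum(a != b for a, b in zip(cand, underlying))

winner = evaluate(["bed", "bet", "pet"], [no_final_voiced, faithful])
print(winner)  # "bet": final obstruent devoiced, otherwise faithful
```

Reversing the ranking (faithfulness over markedness) would instead select the fully faithful "bed", which is how re-ranking the same universal constraint set is meant to derive cross-linguistic variation.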

Computational Phonology

In recent years, Evolutionary Phonology has initiated an integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns.

Phonemes
One of the core tasks of the phonologist is to create an analysis of the phonemic inventory of a language. This is sometimes the first step in a phonological analysis, because phonemes are the building blocks of syllables, have features as their own building blocks, undergo phonological processes, and are the carriers of all suprasegmental properties of speech such as stress. A simple diagnostic for proving phonemehood is to find words that differ in meaning and phonetically differ in only one speech sound, i.e. minimal pairs. When a minimal pair is found, it is proof that the different speech sounds belong to different phonemes. Speech sounds for which no minimal pairs can be found may be allophones of the same phoneme. In this case, one might observe that the allophones are in complementary distribution, meaning that one can only appear in phonological contexts where the other cannot, and vice versa. The allophone that appears in the most diverse environments is then taken to be the unconditioned allophone.
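The minimal-pair diagnostic can be sketched as a simple search over a transcribed word list (a toy lexicon with one symbol per segment; real transcriptions would use multi-character IPA segments):

```python
# Toy minimal-pair finder: two words of equal length form a minimal pair
# if they differ in exactly one segment.

def minimal_pairs(lexicon):
    pairs = []
    for i, w1 in enumerate(lexicon):
        for w2 in lexicon[i + 1:]:
            if len(w1) == len(w2):
                diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
                if len(diffs) == 1:
                    pairs.append((w1, w2, diffs[0]))
    return pairs

words = ["pit", "bit", "pat"]  # each character stands for one segment
for w1, w2, (s1, s2) in minimal_pairs(words):
    print(f"{w1} ~ {w2}: /{s1}/ contrasts with /{s2}/")
```

Here "pit" and "bit" establish a /p/–/b/ contrast, and "pit" and "pat" an /i/–/a/ contrast; sounds for which the search returns nothing would then be examined for complementary distribution.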

To find minimal pairs or establish the lack thereof, a phonologist needs a data set of accurately transcribed words, or they could attempt to elicit minimal pairs from a native speaker. How straightforward it is to establish minimal pairs will depend on several language-specific factors. If a language has many phonological processes, relationships between the underlying form and surface form will be obscured, so that a thorough examination of these processes needs to precede a definitive phonemic analysis. Otherwise, what appears to be a minimal pair on the phonological surface can be mistaken for an underlying contrast.

There are several additional analytical complications that may arise. First, the absence of a minimal pair does not always prove that two speech sounds belong to the same phoneme. If speech sounds are in complementary distribution and thus have no minimal pairs, they are usually still considered different phonemes if they are phonetically very different. This is the case, for example, for /h/ and /ŋ/ in German. But there are no established criteria for how phonetically distinct two sounds must be in order to count as different phonemes, so there are controversial cases such as Standard Mandarin /i/, which is in complementary distribution with sounds that could be transcribed as [ɹ̺] and [ɻ]. It can also happen that two sounds are not in complementary distribution, do not have any minimal pairs to distinguish them, and yet are not interchangeable. This is the case for Dutch /ɣ/ (for speakers who have this sound). It does not contrast with its voiceless counterpart /x/, but both sounds occur word-initially and intervocalically without being predictable.[SOURCE] A second problem for phonemic analysis is that it is not always clear which allophone is conditioned and which one must be taken as underlying. [EXAMPLE]. Thirdly, loanwords may introduce new speech sounds or new phonotactic structures to the language, and there are no truly objective means to decide when loans must be accepted as being part of the sound system.

Despite the crucial role that phonemes have played since their conceptualisation, there is no complete consensus on whether phonemes are merely a convenient descriptive tool for linguists or actual cognitive units that should have a place in formal theory. A framework in which phonemes are considered epiphenomena is Articulatory Phonology, where phonemes/segments are created through the implementation of gestures that are not contained within or associated with phonemes in the way that features are. It has also been claimed that, when taking the logic of Autosegmental Phonology to its logical endpoint, segments are only anchoring points for features on a timing tier, so that there are no phonemes in the classical sense.[SOURCE] Neurocognitive research has likewise produced mixed results, with some studies[SOURCE] supporting the existence of phonemes whereas others find no evidence.[SOURCE]

Phonology in sign languages
The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not speech-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sublexical units are not instantiated as speech sounds.

Theoretical frameworks in phonology

 * Autosegmental Phonology
 * Element Theory: an approach to subsegmental phonology that assumes that the building blocks of speech sounds and tones are acoustic elements.
 * Exemplar Theory
 * Generative Phonology