Language complexity

Language complexity is a topic in linguistics that can be divided into several sub-topics, such as phonological, morphological, syntactic, and semantic complexity. The subject is also important for the study of language evolution.

Language complexity has been studied less than many other traditional fields of linguistics. While the consensus is shifting towards recognizing complexity as a suitable research area, a central focus has been on methodological choices. Some languages, particularly pidgins and creoles, are considered simpler than most other languages, but there is no direct ranking and no universal method of measurement, although several possibilities have been proposed within different schools of analysis.

History
Throughout the 19th century, differential complexity was taken for granted. The classical languages Latin and Greek, as well as Sanskrit, were considered to possess qualities which could be achieved by the rising European national languages only through an elaboration that would give them the necessary structural and lexical complexity to meet the requirements of an advanced civilization. At the same time, languages described as 'primitive' were naturally considered to reflect the simplicity of their speakers. On the other hand, Friedrich Schlegel noted that some nations "which appear to be at the very lowest grade of intellectual culture", such as the speakers of Basque, Sámi and some Native American languages, possess a striking degree of elaborateness.

Equal complexity hypothesis
During the 20th century, linguists and anthropologists adopted a standpoint that would reject any nationalist ideas about the superiority of established languages. The first known quote that puts forward the idea that all languages are equally complex comes from Rulon S. Wells III, 1954, who attributes it to Charles F. Hockett. While laymen never ceased to consider certain languages as simple and others as complex, such a view was erased from official contexts. For instance, the 1971 edition of the Guinness Book of World Records featured Saramaccan, a creole language, as "the world's least complex language". According to linguists, this claim was "not founded on any serious evidence", and it was removed from later editions. Apparent complexity differences in certain areas were explained by a balancing force whereby simplicity in one area would be compensated by complexity in another; e.g. David Crystal, 1987: "All languages have a complex grammar: there may be relative simplicity in one respect (e.g., no word-endings), but there seems always to be relative complexity in another (e.g., word-position)."

In 2001 creolist John McWhorter argued against the compensation hypothesis. McWhorter contended that it would be absurd if, as languages change, each had a mechanism that calibrated it according to the complexity of all the other 6,000 or so languages around the world. He underscored that linguistics has no knowledge of any such mechanism. Revisiting the idea of differential complexity, McWhorter argued that it is indeed creole languages, such as Saramaccan, that are structurally "much simpler than all but very few older languages". In McWhorter's view this is not problematic for the equality of creole languages, because simpler structures convey logical meanings in the most straightforward manner, while increased language complexity is largely a question of features that may not add much to the functionality or usefulness of the language. Examples of such features are inalienable possessive marking, switch-reference marking, syntactic asymmetries between matrix and subordinate clauses, grammatical gender, and other secondary features which are most typically absent in creoles. McWhorter's notion that "unnatural" language contact in pidgins, creoles and other contact varieties inevitably destroys "natural" accretions in complexity perhaps represents a recapitulation of 19th-century ideas about the relationship between language contact and complexity.

During the years following McWhorter's article, several books and dozens of articles were published on the topic. To date, there have been research projects on language complexity, and several workshops for researchers have been organised by various universities. Among linguists who study the topic, there is still no consensus on the issue.

Complexity metrics
At a general level, language complexity can be characterized as the number and variety of elements, and the elaborateness of their interrelational structure. This general characterization can be broken down into sub-areas:
 * Syntagmatic complexity: number of parts, such as word length in terms of phonemes, syllables etc.
 * Paradigmatic complexity: variety of parts, such as phoneme inventory size, or the number of distinctions in a grammatical category, e.g. aspect.
 * Organizational complexity: e.g. ways of arranging components, phonotactic restrictions, variety of word orders.
 * Hierarchic complexity: e.g. recursion, lexical–semantic hierarchies.
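The first two sub-areas lend themselves to simple counting. As a rough illustration (a sketch only: the word list, its phonemic segmentation, and the two measures are hypothetical toy examples, not an established methodology), syntagmatic and paradigmatic complexity can be approximated from a phonemically transcribed word list:

```python
# Toy word list: each word is a list of phoneme symbols.
# (Hypothetical sample data for illustration.)
words = [
    ["k", "a", "t"],
    ["s", "t", "r", "i", "ŋ"],
    ["o", "k", "o"],
]

# Syntagmatic complexity: number of parts, here mean word length in phonemes.
mean_word_length = sum(len(w) for w in words) / len(words)

# Paradigmatic complexity: variety of parts, here the phoneme inventory size.
inventory = {segment for w in words for segment in w}

print(round(mean_word_length, 2))  # 3.67
print(len(inventory))              # 8
```

Real comparisons would of course require full lexicons and agreed-upon segmentation criteria; the point of the sketch is only that these two dimensions are, in principle, countable.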

Measuring complexity is considered difficult, and comparing whole natural languages is a daunting task. On a more detailed level, it is possible to demonstrate that some structures are more complex than others. Phonology and morphology are areas where such comparisons have traditionally been made. For instance, linguistics has tools for the assessment of the phonological system of any given language. As for the study of syntactic complexity, grammatical rules have been proposed as a basis, but generative frameworks, such as the minimalist program and the Simpler Syntax framework, have been less successful in defining complexity and its predictions than non-formal ways of description.

Many researchers suggest that several different concepts may be needed when approaching complexity: entropy, size, description length, effective complexity, information, connectivity, irreducibility, low probability, syntactic depth etc. Research suggests that while methodological choices affect the results, even rather crude analytic tools may provide a feasible starting point for measuring grammatical complexity.
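One of the concepts listed above, entropy, can be made concrete with a minimal sketch. The function below computes the Shannon entropy of a unigram symbol distribution, a crude information-theoretic proxy for complexity; actual studies would typically operate on phoneme, morpheme, or word distributions drawn from large corpora rather than on raw characters as here:

```python
import math
from collections import Counter

def unigram_entropy(symbols):
    """Shannon entropy in bits per symbol of a sequence's
    unigram distribution — a crude complexity proxy."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values())

# A repetitive sequence carries no information per symbol (0 bits),
# while a uniform six-symbol sequence yields log2(6) ≈ 2.58 bits.
print(unigram_entropy("aaaaaa"))
print(unigram_entropy("abcdef"))
```

This illustrates the "even rather crude analytic tools" point: such a measure is easy to compute and compare across samples, even though it captures only one narrow facet of grammatical complexity.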

Computational tools

 * Coh-Metrix
 * L2 Syntactic Complexity Analyzer