Monosemy

Monosemy ('one meaning') is a methodology primarily for lexical semantic analysis, though it has widespread applicability throughout the various strata of language.

Originator
Despite several precursors, monosemy as a theoretical model was developed most prominently by the transformational-generative linguist Charles Ruhl.

Principles
Monosemy as a methodology for analysis is based on the recognition that almost all cases of polysemy (where a word is understood to have multiple meanings) require context to differentiate these supposed meanings.

Since context is an indispensable part of any polysemous meaning, Ruhl argues that it is better to locate the variation in meaning where it actually resides: in the context, not in the word itself. Wallis Reid has demonstrated that a polysemous definition adds no information that is not already present in the context, so that a polysemous definition is exactly as informative as a monosemous definition once the effects of context are "controlled for" (i.e. systematically factored out of the definition).
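Reid's point can be sketched informally in code. The following toy example (not drawn from Ruhl or Reid; the contexts and glosses for the verb "run" are invented for illustration) contrasts a polysemous lexicon, which lists one sense per context, with a monosemous one, which stores a single abstract value and lets the context supply the differentiating information. When the same context-to-reading mapping is factored out, the two analyses yield identical readings, which is the sense in which the polysemous definition adds nothing:

```python
# Toy sketch: hypothetical contexts and glosses for the verb "run".
CONTEXTS = ["person subject", "machine subject", "liquid subject"]

# Polysemous analysis: each contextual reading is stored in the lexicon itself.
polysemous_senses = {
    "person subject": "move quickly on foot",
    "machine subject": "operate / function",
    "liquid subject": "flow",
}

# Monosemous analysis: the lexicon stores one abstract value; the varied
# readings come from the context, not from the word.
ABSTRACT_VALUE = "continuous self-propelled motion or operation"

def contextual_modulation(context):
    # The information that differentiates readings. Under a monosemous
    # analysis this mapping belongs to the context, not to the lexical entry.
    return {
        "person subject": "move quickly on foot",
        "machine subject": "operate / function",
        "liquid subject": "flow",
    }[context]

def monosemous_reading(context):
    # One lexical value + contextual modulation yields the specific reading.
    return contextual_modulation(context)

# With context controlled for, both analyses are equally informative:
for c in CONTEXTS:
    assert polysemous_senses[c] == monosemous_reading(c)
```

The circularity is deliberate: because every "sense" in the polysemous dictionary is recoverable from the context alone, listing those senses in the lexical entry duplicates information the context already carries.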

'''A monosemous analysis assumes that any sign in a sign system signals one value within its paradigm, with a substance that arises out of its diachronic history.'''

There are some cases where a word genuinely has two meanings that cannot be brought under a single, more abstract sense, but these are better understood as instances of homonymy.

Recent Applications
Monosemy has been used in work by the Columbia School of Linguistics, in cognitive linguistics, and in linguistic research into Ancient Greek.

Other Understandings of Monosemy
Monosemy can also be understood as an attribute of a language (though this is not precisely what Charles Ruhl's theory articulates), namely the absence of semantic ambiguity in language. The artificial language Lojban and its predecessor Loglan represent attempts at creating monosemous languages. Monosemy is important for translation and semantic computing.