Cross-serial dependencies



In linguistics, cross-serial dependencies (also called crossing dependencies by some authors) occur when the lines representing the dependency relations between two series of words cross over each other. They are of particular interest to linguists who wish to determine the syntactic structure of natural language; languages containing an arbitrary number of them are non-context-free. On the basis of this fact, Dutch and Swiss-German have been proven to be non-context-free.

Example
As Swiss-German allows verbs and their arguments to be ordered cross-serially, we have the following example, taken from Shieber:

...mer em Hans es huus hälfed aastriiche.

That is, "we help Hans paint the house."

Notice that the sequential noun phrases em Hans (Hans) and es huus (the house), and the sequential verbs hälfed (help) and aastriiche (paint), form two separate series of constituents. Notice also that the dative verb hälfed and the accusative verb aastriiche take the dative object em Hans and the accusative object es huus as their arguments, respectively.

Non-context-freeness
Let $$L_{SG}$$ be the set of all Swiss-German sentences. We will prove mathematically that $$L_{SG}$$ is not context-free.

In Swiss-German sentences, the number of verbs of a grammatical case (dative or accusative) must match the number of objects of that case. Additionally, a sentence containing an arbitrary number of such objects is admissible (in principle). Hence, we can define the following formal language, a subset of $$L_{SG}$$:$$ L = \{ \text{De Jan} \text{ s}\ddot{\mathrm{a}}\text{it} \text{ das} \text{ mer} \text{ (d'chind)}^m \text{ (em} \text{ Hans)}^n \text{ es} \text{ huus} \text{ h}\ddot{\mathrm{a}}\text{nd} \text{ wele} \text{ (laa)}^m \text{ (h}\ddot{\mathrm{a}}\text{lfe)}^n \text{ aastriiche.} \mid m, n \geq 1 \} $$Thus, we have $$L = L_{SG} \cap L_r$$, where $$L_r$$ is the regular language defined by $$ L_r = \text{De Jan} \text{ s}\ddot{\mathrm{a}}\text{it} \text{ das} \text{ mer} \text{ (d'chind)}^+ \text{ (em} \text{ Hans)}^+ \text{ es} \text{ huus} \text{ h}\ddot{\mathrm{a}}\text{nd} \text{ wele} \text{ (laa)}^+ \text{ (h}\ddot{\mathrm{a}}\text{lfe)}^+ \text{ aastriiche.} $$where the superscript plus symbol means "one or more copies". Since the context-free languages are closed under intersection with regular languages, we need only prove that $$L$$ is not context-free (pp. 130–135).
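To make the role of the intersection concrete, here is a small Python sketch (the exact spellings and the `in_L` helper are illustrative assumptions, not a grammar of Swiss-German): a regular expression recognizes $$L_r$$, and additional count checks enforce the case-matching constraint that membership in $$L_{SG}$$ imposes.

```python
import re

# Sketch of the regular language L_r: one or more copies of each object
# phrase, followed by the matching verb groups.
L_R = re.compile(
    r"De Jan säit das mer (d'chind )+(em Hans )+es huus händ wele "
    r"(laa )+(hälfe )+aastriiche\.$"
)

def in_L(sentence: str) -> bool:
    """Membership in L = L_SG ∩ L_r: besides matching L_r, the counts of
    objects and verbs must agree case by case (d'chind with laa,
    em Hans with hälfe)."""
    if not L_R.match(sentence):
        return False
    return (sentence.count("d'chind") == sentence.count("laa")
            and sentence.count("em Hans") == sentence.count("hälfe"))
```

A sentence with two accusative objects but only one causative verb would match $$L_r$$ yet fail the count check, mirroring the fact that $$L$$ is a proper subset of $$L_r$$.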

After a word substitution, $$L$$ is of the form $$\{x a^m b^n y c^m d^n z \mid m, n \geq 1\}$$. $$L$$ can be mapped to $$L'$$ by the map $$x, y, z \mapsto \epsilon; a \mapsto a; b \mapsto b; c \mapsto c; d \mapsto d$$, and since the context-free languages are closed under mappings from terminal symbols to terminal strings (that is, homomorphisms) (pp. 130–135), we need only prove that $$L'$$ is not context-free.
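The substitution and the erasing homomorphism can be sketched in Python (the word-to-symbol assignment is an illustrative assumption):

```python
def h(sentence: str) -> str:
    """Apply the word substitution and then the erasing homomorphism:
    objects and verbs become the single symbols a, b, c, d, and every
    other word (the fixed material x, y, z) is mapped to the empty string."""
    s = (sentence.replace("em Hans", "b")
                 .replace("d'chind", "a")
                 .replace("laa", "c")
                 .replace("hälfe", "d"))
    # Erase everything that is not one of the four symbols.
    return "".join(w for w in s.split() if w in {"a", "b", "c", "d"})
```

Applying `h` to a sentence of $$L$$ with two accusative and one dative object yields the string `aabccd`, an element of $$L' = \{a^m b^n c^m d^n \mid m, n \geq 1\}$$.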

$$L' = \{ a^m b^n c^m d^n \mid m, n \geq 1\} $$ is a standard example of a non-context-free language (p. 128). This can be shown by Ogden's lemma: suppose the language is generated by a context-free grammar, let $$p$$ be the constant required by Ogden's lemma, and consider the word $$a^p b^p c^p d^p$$ in the language, with the letters $$b^p c^p$$ marked. Then the three conditions implied by Ogden's lemma cannot all be satisfied.

All known spoken languages that contain cross-serial dependencies can be similarly proved to be non-context-free. This led to the abandonment of Generalized Phrase Structure Grammar once cross-serial dependencies were identified in natural languages in the 1980s.
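Although $$L'$$ is not context-free, it is trivially decidable. A minimal Python recognizer makes the pairwise matching explicit; the obstacle for a context-free grammar is that a single stack cannot match the $$a$$'s with the $$c$$'s and, at the same time, the $$b$$'s with the $$d$$'s:

```python
import re

def in_L_prime(w: str) -> bool:
    """Recognize L' = { a^m b^n c^m d^n | m, n >= 1 }.
    The shape a+ b+ c+ d+ is regular; the crossing constraints
    |a-block| = |c-block| and |b-block| = |d-block| are what no
    context-free grammar can enforce simultaneously."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", w)
    return bool(m) and (len(m.group(1)) == len(m.group(3))
                        and len(m.group(2)) == len(m.group(4)))
```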

Treatment
Research in mildly context-sensitive languages has attempted to identify a narrower and more computationally tractable subclass of the context-sensitive languages that can capture the context sensitivity found in natural languages. For instance, cross-serial dependencies can be expressed in linear context-free rewriting systems (LCFRS); one can write an LCFRS grammar for $$\{a^n b^n c^n d^n \mid n \geq 1\}$$.
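As a sketch of how an LCFRS handles this, the standard fan-out-2 grammar for $$\{a^n b^n c^n d^n \mid n \geq 1\}$$, with rules $$N(ab, cd)$$ and $$N(axb, cyd) \leftarrow N(x, y)$$ and start rule $$S(xy) \leftarrow N(x, y)$$, can be simulated directly in Python (the function names are illustrative):

```python
def derive_N(n: int) -> tuple[str, str]:
    """The fan-out-2 nonterminal N derives string *pairs* (a^n b^n, c^n d^n):
    base rule N(ab, cd), recursive rule N(a x b, c y d) <- N(x, y)."""
    if n == 1:
        return ("ab", "cd")
    x, y = derive_N(n - 1)
    return ("a" + x + "b", "c" + y + "d")

def derive_S(n: int) -> str:
    """Start rule S(x y) <- N(x, y): concatenate the two components."""
    x, y = derive_N(n)
    return x + y
```

Because $$N$$ carries the two halves of the string as separate components, a single derivation step grows all four blocks in lockstep, which is exactly the mechanism a context-free grammar lacks.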