
Europarl Corpus
The Europarl Corpus is a corpus consisting of the proceedings of the European Parliament from 1996 onwards. In its first release in 2001 it covered eleven official languages of the European Union (Danish, Dutch, English, Finnish, French, German, Greek, Italian, Portuguese, Spanish and Swedish).[2] With the political expansion of the EU, the official languages of the ten new member states have been added to the corpus data.[2] The latest release (2012)[1] comprised up to 50 million words per language, with the newly added languages being slightly underrepresented, as data for them is only available from 2007 onwards.[2]

The data that makes up the corpus was extracted from the website of the European Parliament and then prepared for linguistic research.[2] After sentence splitting and tokenization, the sentences were aligned across languages with the help of an algorithm developed by Gale & Church (1993).[2]
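Gale & Church's algorithm aligns sentences by dynamic programming over sentence lengths, allowing 1-1 matches as well as insertions, deletions, merges and splits. The following is a simplified sketch of that idea: the cost function here is a plain character-length mismatch with ad-hoc penalties, not Gale & Church's probabilistic model.

```python
def align(src, tgt):
    """Length-based sentence alignment, a simplified sketch of the
    dynamic-programming idea in Gale & Church (1993). The cost is a
    plain length difference plus ad-hoc bead penalties (assumptions),
    not their probabilistic model."""
    INF = float("inf")
    m, n = len(src), len(tgt)
    # beads: (source sentences consumed, target sentences consumed, penalty)
    beads = [(1, 1, 0), (1, 0, 10), (0, 1, 10), (2, 1, 5), (1, 2, 5)]
    D = [[INF] * (n + 1) for _ in range(m + 1)]
    back = [[None] * (n + 1) for _ in range(m + 1)]
    D[0][0] = 0
    for i in range(m + 1):
        for j in range(n + 1):
            if D[i][j] == INF:
                continue
            for di, dj, pen in beads:
                if i + di <= m and j + dj <= n:
                    mismatch = abs(sum(len(s) for s in src[i:i + di]) -
                                   sum(len(t) for t in tgt[j:j + dj]))
                    c = D[i][j] + pen + mismatch
                    if c < D[i + di][j + dj]:
                        D[i + di][j + dj] = c
                        back[i + di][j + dj] = (i, j, di, dj)
    # trace back the cheapest bead sequence
    pairs, i, j = [], m, n
    while (i, j) != (0, 0):
        pi, pj, di, dj = back[i][j]
        pairs.append((src[pi:pi + di], tgt[pj:pj + dj]))
        i, j = pi, pj
    return pairs[::-1]

src = ["Das Haus ist klein.", "Es regnet."]
tgt = ["The house is small.", "It is raining."]
align(src, tgt)  # → two 1-1 aligned sentence pairs
```

Because translated sentences tend to have proportional lengths, even this crude cost recovers the correct 1-1 pairing here; the full algorithm models the length ratio statistically and estimates bead probabilities from data.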

The corpus has been compiled and expanded by a group of researchers led by Philipp Koehn at the University of Edinburgh. Initially it was designed for research purposes in statistical machine translation (SMT). However, since its first release it has been used for multiple other research purposes, including word sense disambiguation.

In his paper “Europarl: A Parallel Corpus for Statistical Machine Translation” (2005), Koehn assesses how useful the Europarl corpus is for research in SMT. He uses the corpus to develop SMT systems translating each language into each of the other ten languages of the corpus, yielding 110 systems. This enables Koehn to establish SMT systems for uncommon language pairs that had not previously been considered by SMT developers, such as Finnish–Italian.
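The count of 110 systems follows from treating each ordered pair of the eleven languages as a separate translation direction (11 × 10 = 110), which can be sketched as follows (the two-letter codes are illustrative ISO 639-1 abbreviations, not identifiers from the paper):

```python
from itertools import permutations

# The eleven languages of the first Europarl release (ISO 639-1 codes).
languages = ["da", "de", "el", "en", "es", "fi", "fr", "it", "nl", "pt", "sv"]

# Each ordered (source, target) pair is one SMT system.
pairs = list(permutations(languages, 2))
len(pairs)  # → 110
```

Unordered pairs would give only 55 combinations; the count doubles because translating Finnish into Italian and Italian into Finnish are separate systems.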

However, the Europarl corpus may be used not only for developing SMT systems but also for their assessment. By measuring the output of the systems against the original corpus data for the target language, the adequacy of the translation can be assessed. Koehn uses the BLEU metric of Papineni et al. (2002) for this, which counts the matches between the two versions to be compared – SMT output and corpus data – and calculates a score on this basis.[2] The more similar the two versions are, the higher the score, and therefore the quality of the translation.[2] The results reflect that some SMT systems perform better than others, e.g. Spanish–French (40.2) in comparison to Dutch–Finnish (10.3).[2] Koehn assumes that the reason for this is that related languages are easier to translate into each other than unrelated ones.[2]
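BLEU combines modified n-gram precisions (how many of the system's n-grams also occur in the reference) with a brevity penalty for outputs shorter than the reference. The following is a simplified sentence-level sketch of that calculation; real evaluations such as Koehn's are corpus-level and handle zero n-gram counts more carefully.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU (after Papineni et al. 2002):
    geometric mean of modified n-gram precisions times a brevity
    penalty. A sketch only; corpus-level BLEU sums counts over all
    sentences and uses smoothing for zero matches."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # clipped matches: a candidate n-gram counts at most as often
        # as it appears in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean

bleu("the house is small", "the house is small")  # → 1.0
```

A perfect match scores 1.0 (reported as 100 on the percentage scale Koehn uses, so Spanish–French's 40.2 corresponds to 0.402), and the score drops as fewer n-grams coincide with the reference.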

Furthermore, Koehn uses the SMT systems and the Europarl corpus data to investigate whether back translation is an adequate method for the evaluation of machine translation systems. For each language except English he compares the BLEU scores for translating that language from and into English (e.g. English > Spanish, Spanish > English) with those that can be achieved by measuring the original English data against the output obtained by translation from English into each language and back translation into English (e.g. English > Spanish > English).[2] The results indicate that the scores for back translation are far higher than those for monodirectional translation and, more importantly, they do not correlate at all with the monodirectional scores. For example, the monodirectional scores for English<>Greek (27.2 and 23.2) are lower than those for English<>Portuguese (30.1 and 27.2), yet the back translation score of 56.5 for Greek is higher than the one for Portuguese, which gets 53.6.[2] Koehn explains this with the fact that errors committed in the translation process may simply be reversed by back translation, resulting in high similarity between input and output.[2] This, however, does not allow any conclusions about the quality of the text in the actual target language.[2] Therefore Koehn does not consider back translation an adequate method for the assessment of machine translation systems.
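The error-cancellation effect Koehn describes can be illustrated with a deliberately artificial toy example (the word-for-word dictionaries below are hypothetical, not Koehn's systems): a forward system that consistently mistranslates a word still round-trips perfectly when the backward system reverses the same mistake.

```python
# Hypothetical toy dictionaries: the forward system wrongly renders
# "small" as German "eng" ("narrow") instead of "klein".
fwd = {"the": "das", "house": "haus", "is": "ist", "small": "eng"}
bwd = {v: k for k, v in fwd.items()}  # backward system reverses the error

def translate(sentence, table):
    """Word-for-word substitution, standing in for a full MT system."""
    return " ".join(table[w] for w in sentence.split())

src = "the house is small"
forward = translate(src, fwd)         # → "das haus ist eng" (mistranslation)
round_trip = translate(forward, bwd)  # → "the house is small"

round_trip == src  # → True: the round trip looks flawless
```

Comparing the round-trip output against the English original would score perfectly, even though the intermediate German text is wrong, which is exactly why high back-translation scores say nothing reliable about target-language quality.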