User:Tresoldi/WMT



The purpose of the Wiki(pedia) Machine Translation Project is to develop ideas, methods and tools that can help translate Wikipedia articles (and Wikimedia pages) from one language to another, particularly out of English and into languages with small numbers of fluent speakers.

Remember to read the current talk page, and particularly what is stated on the Wikipedia Translation page:

Wikipedia is a multilingual project. Articles on the same subject in different languages can be edited independently; they do not have to be translations of one another or correspond closely in form, style or content. Still, translation is often useful to spread information between articles in different languages.

Translation takes work. Machine translation, especially between unrelated languages (e.g. English and Japanese), produces very low quality results. Wikipedia consensus is that an unedited machine translation, left as a Wikipedia article, is worse than nothing. The translation templates have links to machine translations built in automatically, so all readers should be able to access machine translations easily.

Motivation
Wikipedias in small languages can't produce articles as fast as Wikimedia projects in languages such as English, Japanese, German or Spanish, because the number of wikipedians is too low and some prefer to contribute to bigger projects. One potential solution to this problem, in discussion since 2002, is the translation of Wikimedia projects. As some languages will not have enough translators, machine translation can improve the productivity of the community. This sort of automatic translation would be a first step, with manual translations added and corrected later, while the local communities develop.

A second, but very important, motivation is the development of free tools for Computational Linguistics and Natural Language Processing. These fields are very important, but resources for small languages are usually nonexistent, low-quality, expensive and/or restricted in their usage. Even for "big" languages such as English, free resources are still scarce. We could develop...

Interlingua approach
A different, but related approach would be translating articles by hand into a machine translation interlingua like UNL, then writing software modules to translate automatically from that interlingua into each target language. This only saves work with respect to direct translation if there are several target languages whose modules are well enough developed, but those modules are much easier to write, and expected to be more accurate, than full real-language to real-language automatic translation systems.

Translating between closely related languages
I would imagine it would be an easier task to translate between similar languages than dissimilar ones. For example, we have Wikipedias in Catalan and Spanish, Macedonian and Bulgarian, and perhaps even Dutch and Afrikaans (some more studies would have to be done to evaluate which would be most appropriate). There is some free software being produced in Spain called en:Apertium that might be useful here. - FrancisTyers 21:44, 2 March 2006 (UTC)

Suggested statistical approach

 * In a first step, a series of language corpora and statistical models are generated from the various Wikipedias. The results are particularly interesting because, besides the extraction of text needed for the project, they also allow us to make public under a permissive license a kind of data generally unavailable for smaller languages, or at most only available after expensive purchases. Many of these have already been released and are hosted at SourceForge:

(for more information on these data, see my talk page at the English Wikipedia. tresoldi (talk) 15:30, 13 March 2010 (UTC))
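As a minimal sketch of this first step (not the actual extraction pipeline), n-gram counts of all orders up to five can be collected from the extracted Wikipedia text; plain whitespace tokenization is assumed here purely for illustration:

```python
from collections import Counter

def ngram_counts(sentences, max_order=5):
    """Count n-grams of orders 1..max_order over whitespace-tokenized sentences."""
    counts = {n: Counter() for n in range(1, max_order + 1)}
    for sent in sentences:
        tokens = sent.split()
        for n in range(1, max_order + 1):
            for i in range(len(tokens) - n + 1):
                counts[n][tuple(tokens[i:i + n])] += 1
    return counts

corpus = [
    "the city is located in the north",
    "the city has a population of about ten thousand",
]
counts = ngram_counts(corpus)
print(counts[1][("the",)])         # 3
print(counts[2][("the", "city")])  # 2
```

Counts like these are the raw material both for language models and for the coverage statistics used in the selection steps below.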


 * As most of the minor Wikipedias would likely be populated, at least at first, with articles translated from the English one, the first pairs to be developed would be the English/foreign-language ones. Thus, for each language an initial list of random sentences is drawn from the English corpus (as above) and, if such a system is available, translated with some existing machine translation software (such as Google Translate or Apertium). Hopefully, wikipedians will start revising the translations collaboratively, just like normal Wikipedia articles.
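Drawing such an initial list can be sketched as a reproducible random sample (the fixed seed makes reruns give the same batch; the corpus here is a stand-in):

```python
import random

def draw_sentences(corpus, k, seed=42):
    """Draw a reproducible random sample of k sentences to seed translation."""
    rng = random.Random(seed)
    return rng.sample(corpus, k)

# Stand-in for the English sentence corpus extracted above.
corpus = ["sentence %d" % i for i in range(1000)]
batch = draw_sentences(corpus, 10)
print(len(batch))  # 10
```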


 * After the small list of random sentences is adequately translated/revised, a number of other sentences will be selected to be gradually added to the developing parallel corpus. By using statistics collected with the language models built above, particularly with the English one, two different approaches will be followed:
 * The first is to gradually cover all common n-grams of all orders (descending from 5-grams), so that the most common structures in the English wikipedia will be covered by the corpus (in other words, more pages should be translated with fewer problems -- think about very similar pages such as the ones about towns and cities, short biographies, etc.).
 * The second will be to gradually cover the n-grams found in the parallel corpus itself, in order to cover the different contexts in which those n-grams appear, particularly the contexts in which they occur in the sentences already included.
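The two selection strategies above can be sketched as a single greedy step: pick the candidate sentence whose not-yet-covered n-grams carry the most corpus frequency. The tiny corpus and frequency table below are purely illustrative:

```python
from collections import Counter

def sent_ngrams(sentence, orders):
    """Yield all n-grams of the given orders from a whitespace-tokenized sentence."""
    tokens = sentence.split()
    for n in orders:
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def next_sentence(candidates, freq, covered, orders=(5, 4, 3, 2, 1)):
    """Greedily pick the candidate whose uncovered n-grams sum to the
    highest corpus frequency, so common structures get covered first."""
    def gain(sent):
        return sum(freq.get(g, 0)
                   for g in set(sent_ngrams(sent, orders))
                   if g not in covered)
    return max(candidates, key=gain)

corpus = ["the town is in france", "the town is in spain", "born in paris"]
freq = Counter()
for s in corpus:
    freq.update(sent_ngrams(s, (1, 2, 3)))

pick = next_sentence(corpus, freq, set(), orders=(1, 2, 3))
print(pick)  # the town is in france
```

After each pick, the selected sentence's n-grams would be added to `covered` and the step repeated until the desired coverage is reached.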


 * After the translation of about 1,000 sentences, an actual system will start to be built weekly. Everyone with some experience in statistical machine translation will agree that 1,000 sentences is a ridiculously low number for statistical translation, but the idea is to set up a baseline and gradually improve it. Besides that, while the addition of sentences described above goes on, from time to time the system would be used to translate some of the top articles of the English Wikipedia not covered by the foreign one; these would be corrected (with a lot of pain at first) and added back to the corpus. After some time, we should finally be able to eat our own dog food, i.e., do the first raw translations with our statistical systems, no longer relying on non-free systems (however, the use of free software like Apertium for related languages is likely to remain a better alternative for the foreseeable future).


 * There will be a gradual integration with Wiktionary and Wikipedia's interlanguage links, covering not only the basic lemmas but, hopefully, the most common inflected forms for each language -- the results could then be contributed back to Wiktionary with carefully configured bots.
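A toy illustration of the lemma/inflection lookup this integration would enable; the lexicon below is invented for the example, while real entries would be parsed from Wiktionary dumps:

```python
# Hypothetical toy lexicon: inflected form -> (lemma, part of speech, feature).
# Real data would come from Wiktionary dumps, not be hand-written like this.
LEXICON = {
    "cities": ("city", "noun", "plural"),
    "ran": ("run", "verb", "past"),
}

def lemmatize(token):
    """Map an inflected form back to its lemma, if the lexicon knows it."""
    entry = LEXICON.get(token.lower())
    return entry[0] if entry else token

print(lemmatize("Cities"))  # city
print(lemmatize("table"))   # table
```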

Evaluating with Wikipedias
(as originally found in the Apertium Wiki)

One of the ways of improving an MT system, while at the same time improving and adding content to Wikipedias, is to use Wikipedias as a test bed. You can translate text from one Wikipedia to another, then either post-edit it yourself, or wait for or ask other people to post-edit the text. One of the nice things is that MediaWiki (the software Wikipedia is based on) allows you to view diffs between the versions (see the 'history' tab).

This strategy is beneficial both to Wikipedia and to any machine translation system, such as Apertium or a statistical one based on Moses. Wikipedia gets new articles in languages which might not otherwise have them, and the machine translation system gets information on how the software can be improved. It is important to note that Wikipedia is a community effort, and that people can rightly be concerned about machine translation. To get an idea of this, put yourself in the place of people having to fix a lot of "hit and run" SYSTRAN (a.k.a. BabelFish) or Google Translate translations, with little time and not much patience.

Guidelines

 * Don't just start translating texts and wait for people to fix them. The first thing you should do is create an account on the Wikipedia, and then find the "Community notice board". Ask there how regular contributors would feel about you using the Wikipedia for tests. The community notice board should be linked from the front page. It might be called something like "La tavèrna" in Occitan, or "Geselshoekie" in Afrikaans. When you are asking them, make the following clear:


 * This is free software / open source machine translation.
 * You would like to help the community and are doing these translations both to help their Wikipedia expand the range of articles, and to improve the translation software.
 * The translations will be added only with the consent of the community, you do not intend to flood them with poorly translated articles.
 * The translations will be added by a human not by a bot.
 * Ask them if there are any subjects that they prefer you would cover, perhaps they have a page of "requested translations".
 * One way of looking at it might be as a non-native speaker trying to learn the language. Point out that the initial translation will be done by machine, then you will try to fix the translation, but you would be grateful for other people to fix anything that you don't.

An example of the kind of conversation you might have is found here.

How to translate
In order to be most useful, when you create the page, first paste in the unedited machine translation output. Save the page with an edit summary saying that you're still working on it. Then proceed to post-edit the output. After you've finished, save the page again. If you go to the history tab at the top of the page and do "Compare selected versions" you will see the differences (diff) between the machine translation and the post-edited output. This gives a good indication of how good the original Apertium output was.
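The diff between the raw machine translation and the post-edited version can also be quantified; as a sketch, a word-level similarity ratio (standard-library `difflib`) serves as a rough proxy for how much post-editing was needed:

```python
import difflib

def postedit_similarity(mt_output, postedited):
    """Word-level similarity between raw MT output and its post-edited
    version; lower values mean more post-editing work was needed."""
    matcher = difflib.SequenceMatcher(None, mt_output.split(), postedited.split())
    return matcher.ratio()

mt = "the city are located at north of France"
fixed = "the city is located in the north of France"
print(round(postedit_similarity(mt, fixed), 2))  # 0.71
```

Tracked over time, such scores would show whether a language pair's output is actually improving.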

Existing free software

 * Apertium
 * Apertium is an open-source platform (engine, tools) to build machine translation systems, mainly between related languages. Code, documentation, language pairs available from the Apertium website.
 * Moses (licensed under the LGPL)
 * "Moses is a statistical machine translation system that allows you to automatically train translation models for any language pair. All you need is a collection of translated texts (parallel corpus)." (from the Moses website).
 * Wikipedia translation
 * Tool designed to help populate smaller Wikipedias, for example translating country templates quickly

General

 * Google Translate - Online free machine statistical translator.
 * Bing Translator - Online free machine translator.
 * Babel Fish - First free online translator, largely superseded.
 * GramTrans - Free online translator; its focus seems to be Scandinavian languages.
 * Promt - Free online translator.
 * WordLingo - Online translator, free for up to 500 words.

Dictionaries

 * Wiktionary

Corpora

 * Europarl - EU12 languages up to 44 million words per language (to be used only with English as source or target language, as many of the non-English sentences are translations of translations).
 * JRC-Acquis - EU22 languages.
 * Southeast European Times - English, Turkish, Bulgarian, Macedonian, Serbo-Croatian, Albanian, Greek, Romanian (approx. 9,000 aligned paragraphs, 90,000-120,000 words).
 * South African Government Services - English and Afrikaans (approx. 2,500 aligned sentences, 49,375 words).
 * IJS-ELAN - English-Slovenian.
 * Open Source multilingual corpora - Despite the name, some resources might not be eligible for Wikipedia given their license.
 * OpenTran - single point of access to translations of open-source software in many languages (downloadable as SQLite databases).
 * Tatoeba Project - Database of example sentences translated into several languages.