
Wiktionary is a multilingual, web-based project to create a free content dictionary of all words in all languages. It is collaboratively edited via a wiki, and its name is a blend of the words wiki and dictionary. It is available in 173 languages and in Simple English. Like its sister project Wikipedia, Wiktionary is run by the Wikimedia Foundation, and is written collaboratively by volunteers, dubbed "Wiktionarians". Its wiki software, MediaWiki, allows almost anyone with access to the website to create and edit entries.

Because Wiktionary is not limited by print space considerations, most of Wiktionary's language editions provide definitions and translations of words from many languages, and some editions offer additional information typically found in thesauri and lexicons. The English Wiktionary includes a Wikisaurus (thesaurus) of synonyms of various words.

Wiktionary data are frequently used in various natural language processing tasks.

History and development
Wiktionary was brought online on December 12, 2002, following a proposal by Daniel Alston and an idea by Larry Sanger, co-founder of Wikipedia. On March 28, 2004, the first non-English Wiktionaries were initiated in French and Polish. Wiktionaries in numerous other languages have since been started. Wiktionary was hosted on a temporary domain name (wiktionary.wikipedia.org) until May 1, 2004, when it switched to the current domain name. Wiktionary features over 25.9 million entries across its editions. The largest of the language editions is the English Wiktionary, with over 5 million entries, followed by the Malagasy Wiktionary with over 3.9 million bot-generated entries and the French Wiktionary with over 3 million. Forty-one Wiktionary language editions now contain over 100,000 entries each.

Most of the entries and many of the definitions at the project's largest language editions were created by bots that found creative ways to generate entries or (rarely) automatically imported thousands of entries from previously published dictionaries. Seven of the 18 bots registered at the English Wiktionary created 163,000 of the entries there.

Another of these bots, "ThirdPersBot," was responsible for the addition of a number of third-person conjugations that would not have received their own entries in standard dictionaries; for instance, it defined "smoulders" as the "third-person singular simple present form of smoulder." Of the 648,970 definitions the English Wiktionary provides for 501,171 English words, 217,850 are "form of" definitions of this kind. This means its coverage of English is slightly smaller than that of major monolingual print dictionaries. The Oxford English Dictionary, for instance, has 615,000 headwords, while Merriam-Webster's Third New International Dictionary of the English Language, Unabridged has 475,000 entries (with many additional embedded headwords). Detailed statistics show how many entries of various kinds the project contains.

The English Wiktionary does not rely on bots to the extent that some other editions do. The French and Vietnamese Wiktionaries, for example, imported large sections of the Free Vietnamese Dictionary Project (FVDP), which provides free content bilingual dictionaries to and from Vietnamese. These imported entries make up virtually all of the Vietnamese edition's contents. Almost all non-Malagasy-language entries of the Malagasy Wiktionary were copied by bot from other Wiktionaries. Like the English edition, the French Wiktionary has imported the approximately 20,000 entries from the Unihan database of Chinese, Japanese, and Korean characters. The French Wiktionary grew rapidly in 2006 thanks in large part to bots copying many entries from old, freely licensed dictionaries, such as the eighth edition of the Dictionnaire de l'Académie française (1935, around 35,000 words), and using bots to add words from other Wiktionary editions with French translations. The Russian edition grew by nearly 80,000 entries as "LXbot" added boilerplate entries (with headings, but without definitions) for words in English and German.

Logos
Wiktionary has historically lacked a uniform logo across its numerous language editions. Some editions use logos that depict a dictionary entry about the term "Wiktionary", based on the previous English Wiktionary logo, which was designed by Brion Vibber, a MediaWiki developer. Because a purely textual logo must vary considerably from language to language, a four-phase contest to adopt a uniform logo was held at the Wikimedia Meta-Wiki from September to October 2006. Some communities adopted the winning entry by "Smurrayinchester", a 3×3 grid of wooden tiles, each bearing a character from a different writing system. However, the poll did not see as much participation from the Wiktionary community as some community members had hoped, and a number of the larger wikis ultimately kept their textual logos.

In April 2009, the issue was resurrected with a new contest. This time, a depiction by "AAEngelman" of an open hardbound dictionary won a head-to-head vote against the 2006 logo, but the process to refine and adopt the new logo then stalled. In the following years, some wikis replaced their textual logos with one of the two newer logos. In 2012, 55 wikis that had been using the English Wiktionary logo received localized versions of the 2006 design by "Smurrayinchester". In July 2016, the English Wiktionary adopted a variant of this logo. A total of 135 wikis, representing 61% of Wiktionary's entries, use a logo based on the 2006 design by "Smurrayinchester", 33 wikis (36%) use a textual logo, and three wikis (3%) use the 2009 design by "AAEngelman".

Accuracy
To ensure accuracy, the English Wiktionary has a policy requiring that terms be attested. Terms in major languages such as English and Chinese must be verified by:


 * clearly widespread use, or
 * use in permanently recorded media, conveying meaning, in at least three independent instances spanning at least a year.

For smaller languages such as Creek and extinct languages such as Latin, one use in a permanently recorded medium or one mention in a reference work is sufficient verification.
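The attestation rules above can be sketched as a simple check. This is an illustrative simplification, not an actual Wiktionary tool; the function name and parameters are hypothetical:

```python
from datetime import date

def attested(use_dates, widespread=False, limited_documentation=False, mentions=0):
    """Simplified sketch of the English Wiktionary attestation criteria.

    Major languages: clearly widespread use, or at least three independent
    durably archived uses spanning at least one year. Languages with limited
    documentation (e.g. Creek) and extinct languages (e.g. Latin): a single
    use, or a single mention in a reference work, suffices.
    """
    if widespread:
        return True
    if limited_documentation:
        # One use or one mention is enough for these languages.
        return len(use_dates) >= 1 or mentions >= 1
    if len(use_dates) < 3:
        return False
    # The three (or more) uses must span at least a year.
    return (max(use_dates) - min(use_dates)).days >= 365

# Three uses spanning more than a year -> attested.
print(attested([date(2019, 5, 1), date(2020, 2, 3), date(2020, 6, 30)]))  # True
# Only two uses -> not attested for a major language.
print(attested([date(2019, 5, 1), date(2020, 6, 30)]))  # False
```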

Critical reception
Critical reception of Wiktionary has been mixed. In 2006 Jill Lepore wrote in the article "Noah's Ark" for The New Yorker,

There's no show of hands at Wiktionary. There's not even an editorial staff. "Be your own lexicographer!" might be Wiktionary's motto. Who needs experts? Why pay good money for a dictionary written by lexicographers when we could cobble one together ourselves?

Wiktionary isn't so much republican or democratic as Maoist. And it's only as good as the copyright-expired books from which it pilfers.

Keir Graff's review for Booklist was less critical:

"Is there a place for Wiktionary? Undoubtedly. The industry and enthusiasm of its many creators are proof that there's a market. And it's wonderful to have another strong source to use when searching the odd terms that pop up in today's fast-changing world and the online environment. But as with so many Web sources (including this column), it's best used by sophisticated users in conjunction with more reputable sources."

Coverage in other publications is fleeting, typically part of larger discussions of Wikipedia, and rarely progresses beyond a definition, although David Brooks in The Nashua Telegraph described it as "wild and woolly". One of the impediments to independent coverage of Wiktionary is the continuing confusion that it is merely an extension of Wikipedia. In 2005, PC Magazine rated Wiktionary as one of the Internet's "Top 101 Web Sites", although little information was given about the site.

A study measuring the correctness of the inflections for a subset of the Polish words in the English Wiktionary showed that this grammatical data is very stable: only 131 of the 4,748 Polish words examined have had their inflection data corrected.

Wiktionary data in natural language processing
Wiktionary's data are semi-structured: the lexicographic content must be converted to a machine-readable format before it can be used in natural language processing tasks.

Wiktionary data mining is a complex task, with several difficulties: (1) the constant and frequent changes to data and schemata, (2) the heterogeneity of schemata across Wiktionary language editions, and (3) the human-centric nature of a wiki.

There are several parsers for different Wiktionary language editions:


 * DBpedia Wiktionary: a subproject of DBpedia; data are extracted from the English, French, German and Russian Wiktionaries and include language, part of speech, definitions, semantic relations and translations. Information is extracted using a declarative description of the page schema, regular expressions, and a finite-state transducer.
 * JWKTL (Java Wiktionary Library): provides access to English Wiktionary and German Wiktionary dumps via a Java Wiktionary API. The data includes language, part of speech, definitions, quotations, semantic relations, etymologies and translations. JWKTL is available for non-commercial use.
 * wikokit: a parser for the English Wiktionary and the Russian Wiktionary. The parsed data include language, part of speech, definitions, quotations, semantic relations and translations. It is multi-licensed open-source software.
 * Etymological entries have been parsed in the Etymological WordNet project.
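As a minimal illustration of the kind of extraction these parsers perform, the following sketch (a hypothetical example, not code from any of the projects above) pulls language and part-of-speech headings out of English Wiktionary wikitext with regular expressions. The wikitext fragment is simplified for illustration:

```python
import re

# A simplified, hypothetical fragment of English Wiktionary wikitext.
wikitext = """
==English==

===Verb===
'''smoulder'''

# To burn slowly without flame.

===Noun===
'''smoulder'''

# A slow burning without flame.
"""

# The English Wiktionary marks languages with level-2 headings (==Language==)
# and parts of speech with level-3 headings (===Noun===, etc.).
LANG_RE = re.compile(r"^==([^=].*?)==\s*$", re.MULTILINE)
POS_RE = re.compile(r"^===([^=].*?)===\s*$", re.MULTILINE)

def extract_headings(text):
    """Return (languages, parts of speech) found in one entry's wikitext."""
    languages = LANG_RE.findall(text)
    parts = [p.strip() for p in POS_RE.findall(text)]
    return languages, parts

langs, pos = extract_headings(wikitext)
print(langs)  # ['English']
print(pos)    # ['Verb', 'Noun']
```

Real parsers must additionally cope with templates, nested heading levels, and per-edition schema differences, which is why dedicated projects like JWKTL and wikokit exist.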

Various natural language processing tasks have been addressed with the help of Wiktionary data:


 * Rule-based machine translation between Dutch and Afrikaans; data from the English Wiktionary, the Dutch Wiktionary and Wikipedia were used with the Apertium machine translation platform.
 * Construction of a machine-readable dictionary by the NULEX parser, which integrates open linguistic resources: the English Wiktionary, WordNet, and VerbNet. NULEX scrapes the English Wiktionary for tense information (verbs) and for plural forms and parts of speech (nouns).
 * Speech recognition and synthesis, where Wiktionary was used to automatically create pronunciation dictionaries. Word-pronunciation pairs were retrieved from six Wiktionary language editions (Czech, English, French, Spanish, Polish, and German). Pronunciations are given in the International Phonetic Alphabet. The ASR system based on the English Wiktionary had the highest word error rate, with roughly every third phoneme having to be changed.
 * Ontology engineering and semantic network construction.
 * Ontology matching.
 * Text simplification. Medero & Ostendorf assessed vocabulary difficulty (reading level detection) with the help of Wiktionary data, investigating properties of words extracted from Wiktionary entries (definition length and the number of parts of speech, senses, and translations). They expected that (1) very common words would be more likely to have multiple parts of speech, (2) common words would be more likely to have multiple senses, and (3) common words would be more likely to have been translated into multiple languages. These features proved useful in distinguishing word types that appear in Simple English Wikipedia articles from words that appear only in the comparable Standard English articles.
 * Part-of-speech tagging. Li et al. (2012) built multilingual POS taggers for eight resource-poor languages on the basis of the English Wiktionary and hidden Markov models.
 * Sentiment analysis.
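The pronunciation-dictionary construction mentioned above amounts to harvesting word-IPA pairs from the Pronunciation sections of entries. A minimal hypothetical sketch (the wikitext fragment and function are illustrative, not code from the cited systems):

```python
import re

# Simplified wikitext of a Pronunciation section (hypothetical example).
# The English Wiktionary marks pronunciations with {{IPA|<lang>|<transcription>}}.
wikitext = """
===Pronunciation===
* {{IPA|en|/ˈsməʊl.də(ɹ)/}}
* {{IPA|en|/ˈsmoʊl.dɚ/}}
"""

# Capture the language code and the transcription inside the template.
IPA_RE = re.compile(r"\{\{IPA\|([a-z-]+)\|([^}|]+)")

def pronunciations(word, text):
    """Yield (word, language code, IPA transcription) triples."""
    for lang, ipa in IPA_RE.findall(text):
        yield word, lang, ipa

pairs = list(pronunciations("smoulder", wikitext))
print(pairs)
```

A full pipeline would map the IPA transcriptions onto the acoustic model's phoneme inventory; template conventions also vary across language editions, which the cited work had to accommodate.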