Wikipedia:Wikipedia Signpost/2017-08-05/Recent research

"Wikipedia matters": a significant impact of user-generated content on real-life choices

 * Reviewed by Marco Chemello and Federico Leva

Improving Wikipedia articles may help increase local tourism. That is the result of a study published as a preprint a few weeks ago by M. Hinnosaar, T. Hinnosaar, M. Kummer and O. Slivko. This group of scholars from various universities – including Collegio Carlo Alberto, the Center for European Economic Research (ZEW) and the Georgia Institute of Technology – conducted a field experiment in 2014: they expanded 120 Wikipedia articles about 60 Spanish cities and measured the impact on local tourism through the number of hotel stays in those cities, broken down by the visitors' country of origin. The result was an average increase of 9 % (up to 28 % in the best cases). The randomly chosen city articles were expanded mainly by translating content from the Spanish or the English edition of Wikipedia into other languages, and by adding some photos. The authors wrote: "We found a significant causal impact of user-generated content in Wikipedia on real-life choices. The impact is large. A well-targeted two-paragraph improvement may lead to a 9 % increase in the visits by tourists. This has significant implications both in macroeconomic and microeconomic scale."

The study revises an earlier version, which reported that the data was inconclusive (not yet statistically significant) although there were hints of a positive effect. It is not entirely clear to this reviewer how the statistical significance was ascertained, but the method used to collect the data was sound:
 * 240 similar articles were selected, and 120 were kept as a control group (by leaving them unedited);
 * the sample only included mid-sized cities (big cities would be harder to affect, and small ones would be more susceptible to unrelated fluctuations in tourism);
 * hotel stays were measured by city and by country of origin, making it possible to isolate the subset of tourists affected by the edits (those in whose language the articles were expanded);
 * as expected, the impact was larger on cities whose articles were especially small at the outset;
 * the authors took care to make their contributions consistent with local policies and expectations, and verified the acceptance of their edits by measuring content persistence (about 96 % of their text survived in the long term).
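The design above resembles a difference-in-differences comparison: the change in hotel stays for treated cities is compared against the change for control cities over the same period. A minimal sketch of how such an effect could be estimated, using invented numbers rather than the paper's data:

```python
# Illustrative difference-in-differences sketch (invented numbers, NOT the
# paper's data): compare the change in log hotel stays between edited
# (treated) and unedited (control) city articles, before and after editing.
import math

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Return the DiD estimate on the log scale (approximately a % change)."""
    return ((math.log(treated_post) - math.log(treated_pre))
            - (math.log(control_post) - math.log(control_pre)))

# Hypothetical average monthly hotel stays per city.
effect = diff_in_diff(treated_pre=1000, treated_post=1150,
                      control_pre=1000, control_post=1050)
print(f"Estimated effect: {effect:.1%}")  # about +9.1% on the log scale
```

Subtracting the control group's change nets out tourism trends common to all cities, so only the treatment-correlated difference remains.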

Curiously, while the authors had no problem adding their translations and images to the French, German and Italian Wikipedias, all their edits were reverted on the Dutch Wikipedia. Local editors may want to investigate what made the edits unacceptable: perhaps the translator was not as good as those for the other languages, or the local community is reflexively hostile to new users editing a mid-sized group of pages at once, or some rogue user reverted edits that the larger community would have accepted? [PS: One of our readers from the Dutch Wikipedia has provided some explanations.]

Assuming that expanding 120 stubs by translating existing articles from other languages takes a few hundred hours of work and actually produces about €160,000 in additional revenue per year, as the authors estimate, it would seem a bargain for any country's tourism ministry to hire experienced translators with basic wiki editing skills to expand Wikipedia stubs in as many tourists' languages as possible, making sure each article has at least one image. Given that providing basic information is sufficient, and that neutral text is generally available in the source or local language's Wikipedia, complying with the neutral point of view and other content standards should be reasonably easy.

Improved article quality predictions with deep learning

 * Reviewed by Morten Warncke-Wang

A paper at the upcoming OpenSym conference titled "An end-to-end learning solution for assessing the quality of Wikipedia articles" uses recurrent neural networks (RNNs) with long short-term memory (LSTM) units, a popular deep learning approach, to make substantial improvements in our ability to automatically predict the quality of Wikipedia's articles.
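To make the building block concrete, the sketch below runs one toy LSTM cell forward over a sequence of token vectors in NumPy. All sizes and weights are arbitrary placeholders; the paper's actual model is trained end-to-end on article text and its architecture details are not reproduced here.

```python
# Toy forward pass of a single LSTM cell (invented sizes, random weights).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [input, forget, cell, output]."""
    n = h.size
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:n]), sigmoid(z[n:2*n])
    g, o = np.tanh(z[2*n:3*n]), sigmoid(z[3*n:])
    c_new = f * c + i * g          # gated update of the cell state
    h_new = o * np.tanh(c_new)     # gated view of the cell state
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_hid = 8, 16                          # toy embedding / hidden sizes
W = rng.normal(0, 0.1, (4 * d_hid, d_in))
U = rng.normal(0, 0.1, (4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)

h = c = np.zeros(d_hid)
for x in rng.normal(0, 1, (20, d_in)):       # a sequence of 20 token vectors
    h, c = lstm_step(x, h, c, W, U, b)
# In a quality classifier, a final layer would map h to article quality
# classes (e.g. stub, start, B, GA, FA).
```

The gating is what lets the network carry information across long articles: the forget gate decides how much of the running cell state to keep at each token.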

The two researchers from Université de Lorraine in France first published on using deep learning for this task a year ago (see our coverage in the June 2016 newsletter), where their performance was comparable to the state-of-the-art at the time, the WMF's own Objective Revision Evaluation Service (ORES) (disclaimer: the reviewer is the primary author of the research upon which ORES' article quality classifier is built). Their latest paper substantially improves the classifier's performance to the point where it clearly outperforms ORES. Additionally, using RNNs and LSTM means the classifier can be trained on any language Wikipedia, which the paper demonstrates by outperforming ORES in all three of the languages where it's available: English, French, and Russian.

The paper also contains a solid discussion of some of the current limitations of the RNN+LSTM approach. For example, the time it takes to make a prediction is too slow to deploy in a setting such as ORES where quick predictions are required. Also, the custom feature sets that ORES has allow for explanations on how to improve article quality (e.g. "this article can be improved by adding more sources"). Both are areas where we expect to see improvements in the near future, making this deep learning approach even more applicable to Wikipedia.

Recent behavior has a strong impact on content quality

 * Reviewed by Morten Warncke-Wang

A recently published journal paper by Michail Tsikerdekis titled "Cumulative Experience and Recent Behavior and their Relation to Content Quality on Wikipedia" studies how factors like an editor's recent behavior, their editing experience, experience diversity, and implicit coordination relate to improvements in article quality in the English Wikipedia.

The paper builds upon previous work by Kittur and Kraut that studied implicit coordination, where they found that having a small group of contributors doing the majority of the work was most effective. It also builds upon work by Arazy and Nov on experience diversity, which instead found that the diversity of experience in the group was the more important factor.

Arguing that it is not clear which of these factors is dominant, Tsikerdekis extends these models in two key areas. First, experience diversity is refined by measuring accumulated editor experience in three key areas: high-quality articles, the User and User talk namespaces, and the Wikipedia namespace. Second, editor behavior is refined by measuring recent participation in the same three areas. Lastly, he adds interaction effects, for example between these two refinements and implicit coordination.
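Models with interaction effects of this kind can be fitted with ordinary least squares by adding a product term. The sketch below uses simulated data and hypothetical variable names (not the paper's data or exact specification) purely to illustrate the structure:

```python
# Minimal sketch of a linear model with an interaction term, e.g.
#   quality ~ coordination + recent_exp + coordination:recent_exp
# All data is simulated; variable names are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 200
coordination = rng.uniform(0, 1, n)   # e.g. inequality of edits per editor
recent_exp = rng.uniform(0, 1, n)     # e.g. recent activity in key namespaces

# Simulated outcome with a negative interaction, echoing the finding that
# concentrated coordination helps mainly when recent experience is low.
quality = (1.0 + 0.5 * coordination + 0.3 * recent_exp
           - 0.8 * coordination * recent_exp + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), coordination, recent_exp,
                     coordination * recent_exp])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
# beta[3] estimates the interaction coefficient (negative here by design).
```

A negative interaction coefficient means each factor's benefit shrinks as the other grows, which is how "only effective when" findings show up in such models.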

Using the more refined model of experience diversity results in a significant improvement over baseline models, and an interaction effect shows that high coordination inequality (few editors doing most of the work) is only effective when contributors have little experience editing the User and User talk namespaces. However, the models that incorporate recent behavior improve substantially on all of these, indicating that recent behavior has a much stronger impact on quality than overall editor experience and experience diversity. Studying the interaction effects again, the findings are that implicit coordination is most effective when contributors have not recently participated in high-quality articles, and that contributors have a stronger impact on content quality when they edit articles that match their experience levels.

These findings raise important questions about how groups of contributors on Wikipedia can most effectively work together to improve article quality. Future work is needed to understand more about when explicit coordination is most useful, and the paper points to the possibility of using recommender systems to route contributors to groups where their experience level can make a difference.

Predicting book categories for Wikipedia articles

 * Reviewed by Morten Warncke-Wang

"Automatic Classification of Wikipedia Articles by Using Convolutional Neural Network" is the title of a paper published at this year's Qualitative and Quantitative Methods in Libraries conference. As the title describes, the paper applies convolutional neural networks (CNN) to the task of predicting the Nippon Decimal Classification (NDC) category that a Japanese Wikipedia article belongs to. This NDC category can then be used for example to suggest further reading, providing a bridge between the online content of Wikipedia and the books that are available in Japan's libraries.

In the paper, a Wikipedia article is represented as a combination of Word2vec vectors: one vector for the article's title, one each for the categories it belongs to, and one for the entire article text. These vectors combine to form a two-dimensional matrix, which the CNN is trained on. Combining the title and category vectors results in the highest performance, with 87.7% accuracy in predicting the top-level category and 74.7% accuracy for the second-level category. The results are promising enough that future work is suggested where these will be used for book recommendations.
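The representation described above can be sketched concretely: stack the title vector and category vectors into a two-dimensional matrix, then slide a convolutional filter over the rows. The dimensions and the single hand-rolled filter below are invented for illustration; the paper's trained CNN uses many learned filters.

```python
# Illustrative input representation for the CNN classifier (invented sizes):
# one word2vec-style vector for the title plus one per category, stacked
# into a 2-D matrix, with a single convolutional filter applied over rows.
import numpy as np

rng = np.random.default_rng(1)
dim = 50                                    # assumed word2vec dimensionality
title_vec = rng.normal(size=dim)
category_vecs = rng.normal(size=(3, dim))   # one vector per category

X = np.vstack([title_vec, category_vecs])   # shape (4, 50): the CNN input
kernel = rng.normal(size=(2, dim))          # a filter spanning 2 rows

# "Valid" convolution over rows: one activation per window position.
feature_map = np.array([np.sum(X[i:i+2] * kernel)
                        for i in range(X.shape[0] - 1)])
# A real model learns many such filters, pools their activations, and feeds
# a softmax over NDC categories.
```

Stacking the vectors lets filters pick up patterns that span the title and neighbouring category rows, which matches the finding that title plus categories is the strongest input combination.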

The work was motivated by "recent research findings [indicating] that relatively few students actually search and read books," and "aims to encourage students to read library books as a more reliable source of information rather than relying on Wikipedia article."

Conferences and events
See the research events page on Meta-wiki for upcoming conferences and events, including submission deadlines.

Other recent publications
''Other recent publications that could not be covered in time for this issue include the items listed below. Contributions are always welcome for reviewing or summarizing newly published research.''
 * Compiled by Tilman Bayer


 * "Open strategy-making at the Wikimedia Foundation: A dialogic perspective" From the abstract: "What is the role of dialogue in open strategy processes? Our study of the development of Wikimedia’s 5-year strategy plan through an open strategy process [in 2009/2010] reveals the endemic nature of tensions occasioned by the intersection of dialogue as an emergent, nonhierarchical practice, and strategy, as a practice that requires direction, focus, and alignment."
 * "Wikipedia: a complex social machine" From the abstract: "We examine the activity of Wikipedia by analysing WikiProjects [...] We harvested the content of over 600 active Wikipedia projects, which comprised of over 100 million edits and 15 million Talk entries, associated with over 1.5 million Wikipedia articles and Talk pages produced by 14 million unique users. Our analysis reveals findings related to the overall positive activity and growth of Wikipedia, as well as the connected community of Wikipedians within and between specific WikiProjects. We argue that the complexity of Wikipedia requires metrics which reflect the many aspects of the Wikipedia social machine, and by doing so, will offer insights into its state of health." (See also earlier coverage of publications by the same authors)
 * "Expanding the sum of all human knowledge: Wikipedia, translation and linguistic justice" From the abstract: "This paper .. begins by assessing the [Wikimedia Foundation's] Language Proposal Policy and Wikipedia’s translation guidelines. Then, drawing on statistics from the Content Translation tool recently developed by Wikipedia to encourage translation within the various language versions, this paper applies the concept of linguistic justice to help determine how any future translation policies might achieve a better balance between fairness and efficiency, arguing that a translation policy can be both fair and efficient, while still conforming to the ‘official multilingualism’ model that seems to be endorsed by the Wikimedia Foundation." (cf. earlier paper by the same author)
 * "Nation image and its dynamic changes in Wikipedia" From the abstract: "An ontology of nation image was built from the keywords collected from the pages directly related to the big three exporting countries in East Asia, i.e. Korea, Japan and China. The click views on the pages of the countries in two different language editions of Wikipedia, Vietnamese and Indonesian were counted."
 * "'A wound that has been festering since 2007': The Burma/Myanmar naming controversy and the problem of rarely challenged assumptions on Wikipedia" From the abstract: "The author’s approach to the study of the Wikipedia talk pages devoted to the Burma/Myanmar naming controversy is qualitative in nature and explores the debate over sources through textual analysis. Findings: Editors brought to their work a number of underlying assumptions including the primacy of the nation-state and the nature of a 'true' encyclopedia. These were combined with a particular interpretation of neutral point of view (NPOV) policy that unnecessarily prolonged the debate and, more importantly, would have the effect, if widely adopted, of reducing Wikipedia’s potential to include multiple perspectives on any particular topic."
 * "The double power law in human collaboration behavior: The case of Wikipedia" From the abstract: "We study [..] the inter-event time distribution of revision behavior on Wikipedia [..]. We observe a double power law distribution for the inter-editing behavior at the population level and a single power law distribution at the individual level. Although interactions between users are indirect or moderate on Wikipedia, we determine that the synchronized editing behavior among users plays a key role in determining the slope of the tail of the double power law distribution."
 * "Wikidata: la soluzione wikimediana ai linked open data" ("Wikidata: the Wikimedian solution to linked open data"; in Italian)
 * "Open-domain question answering framework using Wikipedia" From the abstract: "This paper explores the feasibility of implementing a model for an open domain, automated question and answering framework that leverages Wikipedia’s knowledgebase. While Wikipedia implicitly comprises answers to common questions, the disambiguation of natural language and the difficulty of developing an information retrieval process that produces answers with specificity present pertinent challenges. [...] Using DBPedia, an ontological database of Wikipedia’s knowledge, we searched for the closest matching property that would produce an answer by applying standardised string matching algorithms[...]. Our experimental results illustrate that using Wikipedia as a knowledgebase produces high precision for questions that contain a singular unambiguous entity as the subject, but lowered accuracy for questions where the entity exists as part of the object."
 * "Textual curation: Authorship, agency, and technology in Wikipedia and Chambers's Cyclopædia" (book) From the publisher's announcement: "Wikipedia is arguably the most famous collaboratively written text of our time, but few know that nearly three hundred years ago Ephraim Chambers proposed an encyclopedia written by a wide range of contributors—from illiterate craftspeople to titled gentry. Chambers wrote that incorporating information submitted by the public would considerably strengthen the second edition of his well-received Cyclopædia, which relied on previously published information. In Textual Curation, Krista Kennedy examines the editing and production histories of the Cyclopædia and Wikipedia, the ramifications of robot-written texts, and the issues of intellectual property theory and credit."