Wikipedia:Wikipedia Signpost/2023-05-08/Recent research

"How gender and race cloud notability considerations on Wikipedia" – or do they?

 * Reviewed by XOR'easter

The March 2023 paper "'Too Soon' to count? How gender and race cloud notability considerations on Wikipedia", by Lemieux, Zhang, and Tripodi claims to have unearthed quantitative evidence for gender and race biases in English Wikipedia's article deletion processes:  Applying a combination of web-scraping, deep learning, natural language processing, and qualitative analysis to pages of academics nominated for deletion on Wikipedia, we demonstrate how Wikipedia’s notability guidelines are unequally applied across race and gender. Specifically, the authors  "[...] explored how metrics used to assess notability on Wikipedia (WP:Search Engine Test; “Too Soon”) are applied across biographies of academics. To do so, we first web-scraped biographies of academics nominated for deletion from 2017 to 2020 (n = 843). Next, we created a numerical proxy for each subject's online presence score. This value is meant to emulate Wikipedia's “Search Engine Test,” (WP:Search Engine Test) a convenient and common way editors can determine probable notability before nominating a biography for deletion. [...] We also conducted a qualitative analysis of the discussions surrounding deleted biographies labeled “Too Soon,” (WP:Too soon). Doing so allowed our research team to assess if gender and/or racial discrepancies existed in deciding whether a biography was considered notable enough for Wikipedia. We find that both metrics are implemented idiosyncratically."

However, this rests on a manifestly and indefensibly incorrect claim about how Wikipedia editors judge topics for notability, and the paper attempts to back it up with misleading quotations and numbers that are dubious on multiple levels.

Background
On English Wikipedia, the status of being significant enough to warrant an article is, in the community lingo, notability. This is related to, but more specialized than, the everyday idea of "noteworthiness". Notability is evaluated with the aid of various guidelines that codify community norms and experience, prominent among which is the General Notability Guideline (GNG). Also of relevance for the matter at hand is the specialized guideline applicable to scholars, academics, and educators, known by various abbreviations like WP:PROF. This guideline lays out eight criteria, meeting any one of which is sufficient qualification for notability.

Notability is not based on counting raw search-engine results. As the documentation page for "Arguments to avoid in deletion discussions" succinctly puts it: 
 * Although using a search engine like Google can be useful in determining how common or well-known a particular topic is, a large number of hits on a search engine is no guarantee that the subject is suitable for inclusion in Wikipedia. Similarly, a lack of search engine hits may only indicate that the topic is highly specialized or not generally sourceable via the internet. WP:BIO, for instance, specifically states, Avoid criteria based on search engine statistics (e.g., Google hits or Alexa ranking). One would not expect to find thousands of hits on an ancient Estonian god.

Misrepresentation of Wikipedia policies, guidelines, and essays
The problems begin early in the paper. Referring to the article Katie Bouman, Lemieux et al. observe that it was put up for deletion a mere two days after its creation. They write, "Eventually, her page received a “snow keep” decision, indicating that her notability might be questionable but that deleting her page would be too much of an uphill battle to pursue (WP:SNOW)." This is not the meaning of the snowball clause. Instead, invoking WP:SNOW is a statement that the conclusion of a discussion is already obvious: the chance of its changing direction is the proverbial snowball's chance in hell, and letting it run for a deletion debate's typical period of a full week would be a waste of everyone's time. The conclusion is not that Bouman's notability was "questionable", but rather that it was obvious. The deletion debate for Bouman's biography did not end "eventually". It ended one hour and seventeen minutes after it began. Multiple !voters found the nomination sexist, either in intent (Jealous bros should not cry each time a woman is part of an achievement) or in effect (Shit like this is exactly why en.wp is such a sausage party).

In their literature review, Lemieux et al. state incorrectly that Of the more than 1.5 million biographies about notable writers, inventors, and academics on English-language Wikipedia, less than 20% are about women. There are over 1.5 million biographies total; most of them are not about writers, inventors, or academics. They provide two sources for this claim. The first, a primary source, states that the figure is for the total number. The second, a 2021 paper by Tripodi, makes the incorrect claim in its abstract and provides no citation for it.

Policies, guidelines, and essays are, in Wikipedia jargon, different types of behind-the-scenes documents. The terms denote a descending sequence:
 * Policies have wide acceptance among editors and describe standards all users should normally follow. [...] Guidelines are sets of best practices supported by consensus. Editors should attempt to follow guidelines, though they are best treated with common sense, and occasional exceptions may apply. [...] Essays are the opinion or advice of an editor or group of editors for which widespread consensus has not been established. They do not speak for the entire community and may be created and written without approval.

Lemieux et al. describe both WP:Search engine test and WP:Too soon as metrics used to assess notability on Wikipedia. Neither page is such a thing.

The "Search Engine Test"
Lemieux et al. write that the "Search Engine Test" (WP:Search engine test) is a convenient and common way editors can determine probable notability before nominating a biography for deletion. Despite its supposed convenience and commonality, they provide no examples where it is actually invoked by name. Any attempt to find such examples, or even a cursory inspection of the WP:Search engine test page itself, would reveal the truth: their description of it is completely erroneous. Lemieux et al. claim to emulate the WP:Search engine test by computing a numerical proxy for each subject's online presence score. The page contains no such procedure. As observed above, none exists, or ever should exist. WP:Search engine test merely explains some uses of search engines and some reasons why their results must be treated with caution. It explicitly states, "A raw hit count should never be relied upon to prove notability", and it provides multiple reasons why. Checking the discussions in which it is invoked is illuminating. For example, in one debate, a !voter admonished, articles are not assessed based on the number of search results that come up after Googling them. For obvious reasons, this is a poor criteri[on] as you may get tens of thousands of results for inconsequential searches, and some noteworthy topics may not receive a particularly large amount of search results. See Search_engine_test. Instead, articles should be assessed based on the criteria such as WP:GNG. In other words, pointing to WP:Search engine test has exactly the opposite meaning from the one Lemieux et al. give it. This misrepresentation of WP:Search engine test also occurs in Tripodi's earlier paper, but it is more prominent in this one.

As to its commonality, the numbers are stark enough to be indicative. The page WP:Search engine test was viewed 1,190 times over the year prior to this writing. The page WP:Notability, which includes the actual General Notability Guideline, was viewed 143,574 times over the same period. Even the specialized guideline WP:PROF was viewed 12,675 times, an order of magnitude more than the page that Lemieux et al. say is commonly used. As of this writing, 8,297 other pages link to the WP:PROF guideline, while only 1,090 link to the search engine test page, though according to Lemieux et al. the latter is what applies to any subject, not just the niche domain of academic biographies. If the goal is to determine whether or not Wikipedia's self-proclaimed standards are being applied equitably, then the evaluation should consider the standards that are actually being proclaimed.

A central assertion of Lemieux et al. is that they quantified whether the "Search engine test" is being equitably utilized using a metric they call the "Primer Index", after the software employed to calculate it. This metric is based upon the number of times that an individual is mentioned in a news article and in which context, which evidently is at best an indirect perspective upon any of the WP:PROF criteria. To confirm the validity of the “Primer Index,” we also created a "Google Index" to approximate the total number of hits that appear when an academic's full name and occupation are searched on Google. Using a custom Google Sheets code, we extracted an academic's full name and occupation from Wikidata and automatically searched Google for every instance of “full name + occupation” for each academic in our dataset on the same day. At this juncture, see again the cautionary words of "Search engine test" itself, and the more popular link for making the same point, the "Arguments to avoid" page quoted above. Lemieux et al. nominally validated their "Primer Index" based on an argument that is literally one of the arguments we tell everyone to avoid. This supposed confirmation rests on no foundation at all. Having made this claim, Lemieux et al. write that our data indicate that BIPOC biographies who meet Wikipedia's criteria (i.e., above the White Male Keep median Primer Index of 12.00) were among those deleted. Because the "Primer Index" and "Google Index" are completely unmoored from Wikipedia practice, there is no reason to equate passing some threshold of them with meeting "Wikipedia's criteria".

The evident limitations of the "Google Index" are, in fact, an excellent indication of why a guideline like WP:PROF is necessary. Any search of the form "full name + occupation" will omit, or at best provide a paucity of hits from within books, the text of paywalled journal articles, and bibliographies including citations to the person in question. Furthermore, it will penalize those who are written about in languages other than English. The various criteria of WP:PROF, which start with citation databases but decidedly do not end there, provide a far less superficial understanding of article-worthiness. It is also the case that the consensus-building in AfD's reliant upon WP:PROF allows for greater discernment and flexibility where differences between specializations are concerned. (Lemieux et al. do not individuate between disciplines, despite the strong likelihood that some fields are more heavily advertised and more charismatic than others. Think of pop psychology versus pure mathematics, for example.)

Even if we set aside the concerns about the basic premises, a problem with the analysis remains. Suppose, for the sake of the argument, that we grant that the "Primer Index" measures something of interest. Lemieux et al. write that white males whose biographies were kept rated significantly higher on the Primer Index than those whose biographies were deleted. They compare the median Primer Indices of these two groups using the Kruskal–Wallis test and report that the test gives a small p-value. In contrast, the p-values for white women, for BIPOC men, and for BIPOC women were all large. Lemieux et al. conclude that There was no statistically significant difference in the median Primer Index between kept and deleted pages for white women or for BIPOC academics, and thus that the Primer Index is not an accurate predictor of Wikipedia persistence for female and BIPOC academics. But a large p-value on the Kruskal–Wallis test is not itself sufficient to conclude that the distributions being compared are the same. (The documentation for the software the authors use says as much.) Indeed, per their figure 2, the difference in sample medians between kept and deleted biographies for BIPOC women was even larger than that for white men. The apparent differences in the statistics across these groups may be due, in whole or in part, to the large variations in sample sizes: 419 white men, but only 185 white women, 171 BIPOC men, and 69 BIPOC women.
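The sample-size point can be demonstrated with a few lines of code. The sketch below uses toy data (not the paper's) and SciPy's implementation of the Kruskal–Wallis test: the very same shift in medians that is "not significant" with five observations per group becomes highly significant once each group has twenty.

```python
# Toy illustration of the reviewer's statistical point: a large p-value on
# the Kruskal-Wallis test does not show two distributions are the same --
# it may simply reflect a small sample. Data here are made up for the demo.
from scipy.stats import kruskal

kept_small    = [1, 2, 3, 4, 5]       # n = 5 per group
deleted_small = [3, 4, 5, 6, 7]       # same shape, median shifted by +2

kept_large    = kept_small * 4        # n = 20 per group,
deleted_large = deleted_small * 4     # identical distributions as above

_, p_small = kruskal(kept_small, deleted_small)
_, p_large = kruskal(kept_large, deleted_large)

print(f"n=5  per group: p = {p_small:.3f}")   # ~0.09, "not significant"
print(f"n=20 per group: p = {p_large:.5f}")   # well below 0.05
```

With the same underlying difference, only the larger sample clears the conventional 0.05 threshold, which is why comparing p-values across groups of very different sizes (419 vs. 69) says little about whether the groups themselves differ.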

The Titular "Too Soon"
The page Too soon is an essay, not a guideline and certainly not a policy. It is largely a collection of pointers to other documentation pages with some commentary. The gist is given in the first section: Sometimes, a topic may appear obviously notable to you, but there may not be enough independent coverage of it to confirm that. In such cases, it may simply be too soon to create the article. Lemieux et al. emphasize this page from their title onward, but most of what they have to say about it is misguided.

“Too Soon” is a technical label developed by Wikipedians indicating that a subject lacks sufficient coverage in independent, high-quality news sources to have a page. It is hardly "technical": the meaning is an example of the everyday sense of the phrase. Moreover, the language of the essay is not news sources, but independent secondary reliable sources, with a clarifying link to the Reliable sources guideline. The latter phrasing is more general than the former. A mathematics textbook or a medical review article is most likely independent, secondary, and reliable, but they are not news. Not only do Lemieux et al. drastically oversell the importance of WP:Too soon, they fail to summarize its contents accurately. It is puzzling how anyone would come to believe that "too soon" means anything other than what it says on the tin. How, one wonders, would Wikipedia editors then describe topics that look like they will never be notable?

The initialism "AfD" is short for "Articles for deletion", the area in which deletion debates about articles occur. Lemieux et al. follow this usage, and so shall this comment. First, let us illustrate the usage of WP:Too soon per Wikipedia guidelines. The following excerpt is from the AfD for the biography of a white, male, assistant professor who was nominated for deletion under the tag WP:Too soon: “Most of the newspaper articles cited in the main article are not directly related to the subject, and apart from this brief article in the Dainik Jagran that borders on being a hagiography of the subject, there's no real coverage for WP:GNG. WP:Too soon perhaps.” The biography in question was for one Shivendu Ranjan. Moreover, the tag WP:TOOSOON was not used in the nomination. Instead, the nominator stated that Ranjan failed the GNG. The !voter being quoted began their rationale by saying that there is no evidence that the subject meets WP:ACADEMIC (another pointer to the same guideline as WP:PROF). Most of the !vote is a point-by-point explanation of why the editor believes that the WP:PROF criteria are not met. The plain reading of the WP:TOOSOON perhaps at the end is that the !voter believed Ranjan's situation could change in the future. Further examples of misleading quotation will be noted below.

They go on: Despite the academic being an assistant professor, the moderator [sic] focused on media coverage, not the career stage, of the subject which is in accordance with the Wikipedia guidelines of the tag WP:Too soon. The essay WP:TOOSOON says nothing specifically about academic career stages. Most of it is about movies. The only profession that is specifically discussed is acting.

Lemieux et al. state that they collected the career stages of each individual designated WP:Too soon, using a pair of research assistants to identify these stages manually. Since perceived notability among academics is highly contingent on their rank (Adams et al., 2019; WP:Notability (academics)), academic careers were scored based on stage, with assistant professors being scored as 1, associate professors with 2, and so on. This methodology is flawed from the outset. It confuses correlation with contingency: one cannot neglect citation metrics and the other success indicators described in WP:PROF when talking about how academic-biography AfD's evaluate career status. The only time that "career stage" as Lemieux et al. think of it factors into a WP:PROF judgment is if the subject has attained the named chair/Distinguished Professor level, because these indicate high levels of accomplishment. Lemieux et al. confuse both the status of WP:PROF versus WP:TOOSOON (guideline versus essay) and the logical roles those pages play in the very !votes they quote. WP:TOOSOON is not the means by which notability is evaluated; instead, invoking it is a means of speculating about the background of why the actual notability guideline is not met and noting that the situation may change.

The claim that Wikipedia does not count trainees, research scientists, and/or government workers as “academics” is flatly untrue. Plenty of IEEE Fellows work in industry and are notable per WP:PROF, for example. And when articles on students appear at AfD, the community files them with the other AfD's on academics and educators. Students and trainees are academics and are evaluated as such. They often fall short of notability, not because of the label "student", but because students infrequently stand out by the criteria that are actually relied upon. Because Lemieux et al. erroneously believe that Wikipedia regards all these people as "non-academics," they score all biographies thereof with a 0 on their career-stage scale, regardless of the subject's actual career stage. For example, a graduate student at a university would be scored the same as a senior research scientist at a major corporation. Upon these meaningless numbers, Lemieux et al. then do bad arithmetic, computing an average career stage [...] by calculating the sum of the career stage scores and dividing this by the total number of entries. Career stage is a qualitative or categorical variable, so taking the numerical mean is not likely to be indicative. What job is 0.76 of the way between assistant and associate professor?

David Eppstein, a computer-science professor, is a longtime participant in academic-bio AfD's and a member of the Women in Red project which aims to improve Wikipedia's biographical coverage of women. He writes,
 * [S]ince the article quotes me as invoking TOOSOON, perhaps I should explain what I generally mean by it. It is never the actual reason for a delete opinion, at least from me. In the cases under discussion, my choices are always grounded in notability guidelines and policies, not essays. When I use TOOSOON, it is not intended to strengthen the case for deletion. Maybe it is the opposite: it is a ray of hope in an otherwise negative opinion. If I think someone is not likely to ever be notable, I am probably just going to say delete, and explain why. If I think an academic does not currently meet our notability standards, but is on a trajectory on which they might well eventually do so, years later, I will say TOOSOON. We often see re-creations of the same articles, years apart, and including this in an opinion is a suggestion that if we discuss the same case again sometime we should check their accomplishments again more carefully instead of relying on past opinions.

In fairness, this was written in response to Lemieux et al., but it is also a natural implication of the everyday meaning of the phrase "too soon": not now, but maybe later. For an example of this involving David Eppstein and others, see Articles for deletion/Charles Steinhardt.

Quote mining
The quotes given after These examples from AfD discussions all failed to mention the presence or depth of media coverage are misleadingly presented. Two of the three are from Articles for deletion/Kate Killick. One of the !votes quoted actually began, Sadly, fails WP:NPROF. This is a significant omission, since it removes the actual reason the editor gave for believing the article should be deleted. Moreover, that AfD did discuss the presence or depth of media coverage, insofar as editors noted that there wasn't any. (The Irish Times reference is an opinion piece of which Killick's name is only mentioned once among many, many other names. I cannot locate any significant coverage from reliable sources that indicate notability.) The third quote is also truncated; the original is from here and concluded Tiny citations on GS do not pass WP:Prof and lack of independent in-depth sources fails WP:GNG.

Another !vote they quote also began with a rationale that Lemieux et al. do not reproduce: The only form of notability claimed in the article is academic, but our standards for academic notability explicitly exclude student awards. Merely having written a few review papers is inadequate for notability; the papers need to be heavily cited, and here they appear not to be.

Misrepresentation of career status and other AfD aspects
The sentence immediately after the blockquote that includes snippets from the K. Killick AfD is Our dataset revealed that men at similar early career stages were present on Wikipedia. Their example is Colin G. DeYoung, who at the time of his AfD had an h-index of 44. That is not "similar" to a postdoc who coauthored a respectable but unremarkable number of papers (no more than 10, according to Web of Science). At the times of their respective AfD's, Killick had possessed a PhD for roughly six years, and DeYoung had possessed his for roughly thirteen. Without making any suppositions about the worth of their research on some notional absolute scale of intellectual achievement, it is safe to say that these AfD's were not for individuals at "similar" stages of their careers.

For example, Tonya Foster, a professor of creative writing and Black feminist scholar at San Francisco State University had a high Primer Index of 41 yet her Wikipedia page was deleted. It is worth mentioning that the article Tonya Foster does exist today. At the time of the AfD in 2017, the consensus was that WP:AUTHOR was not met, but the article was recreated in July 2020 without complaint. Foster did not join San Francisco State University until 2020, at which point she was named one of the George and Judy Marcus Endowed Chairs, so the argument for her passing WP:PROF got significantly stronger. The AfD process can hardly be faulted for failing to consider evidence that would not exist for another three years.

Another example is the late Sudha Shenoy, an economist and professor of economic history at the University of Newcastle, Australia, who had a high Primer Index of 198 yet her page was also deleted. According to her profile at the Mises Institute, she was a "lecturer" at Newcastle, and "lecturer" in Australia is a lower academic rank than a professor. In any case, that AfD looked at WP:PROF and WP:GNG and found that neither was met; citation counts were low, no other academic notability criteria could be argued for, and the sources about her were unreliably published. This data point appears to be more an indictment of the "Primer Index" than anything else.

Lemieux et al. mention the drama surrounding the article about nuclear chemist Clarice Phelps. They state that her biography was deleted three times in the span of one week. While its existence was indeed contentious, that specific claim is not in the cited source, or in the source upon which that website relied. Examining the deletion log for the article and the "article milestones" list at Talk:Clarice Phelps indicates that there is no one-week span that could match. Instead, it was deleted once in February 2019 and twice in April, before being incubated as a draft page and then restored for good in February 2020. The cause of diversity on Wikipedia is a marathon, not a sprint; the inaccurate timeline and the omission of the eventual success make for a misleading portrayal of the challenge that is of no help in resolving it.

Conclusion
Whatever revolutionary claims have been made on its behalf, Wikipedia has a fundamentally institutionalist character. It layers on top of existing academic and journalistic systems of legitimacy. The same rhetorical fences that keep Wikipedia from being a toxic waste dump of advertising and conspiracy theories also mean that it is a bad place for social change to begin. The encyclopedia can only be as non-sexist as the least sexist institution. The question of which articles should exist and what they should say is a question of content moderation at scale, a task that is "impossible to do well". The failure modes of Wikipedia's written rules and subcultural practices deserve study. But ill-conceived studies can lead to ill-conceived advocacy that makes real problems no easier to solve.

Lemieux et al. misrepresent Wikipedia policies, guidelines, and essays; the content of deletion debates; news reporting; and the prior academic literature. How these errors could have transpired is, in a word, baffling. How they passed through peer review is likewise a puzzle (and a discouraging sign), but pass through they did. The problems are too pervasive to be addressed by an erratum or an expression of concern. The literature on the important subject of systemic bias in Wikipedia would be best served by a retraction and a careful re-examination of the editorial process.

Briefly

 * See the page of the monthly Wikimedia Research Showcase for videos and slides of past presentations.
 * Registrations are open for Wiki Workshop 2023 on May 11th, 2023. The virtual event is the tenth in this annual series (formerly part of The Web Conference), and is organized by members of the Wikimedia Foundation's research team with other collaborators including academics Francesca Tripodi (University of North Carolina) and Robert West (EPFL). Registration is free and subject to a WMF privacy statement.
 * "Infolettre wikil@b" is a newly launched French-language newsletter by Wikimedians-in-residence at French universities, covering research alongside open science and the use of Wikimedia projects in education.

Other recent publications
''Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.''
 * Compiled by Tilman Bayer

"Towards a Digital Reflexive Sociology: Using Wikipedia's Biographical Repository as a Reflexive Tool"
From the abstract:  "[...] we employ Wikipedia as a ‘reflexive tool’, i.e., an external artefact of self-observation that can help sociologists to notice conventions, biases, and blind spots within their discipline. We analyse the collective patterns of the 500 most notable sociologists on Wikipedia, performing structural, network, and text analyses of their biographies. Our exploration reveals patterns in their historical frequency, gender composition, geographical concentration, birth-death mobility, centrality degree, biographical clustering, and proximity between countries, also stressing institutions, events, places, and relevant dates from a biographical point of view. Linking these patterns in a diachronic way, we distinguish five generations of sociologists recorded on Wikipedia and emphasise the high historical concentration of the discipline in geographical areas, gender, and schools of thought."

"How can the social sciences benefit from knowledge graphs? A case study on using Wikidata and Wikipedia to examine the world’s billionaires"
From the abstract:  "This study examines the potentials of Wikidata and Wikipedia as knowledge graphs for the social sciences. The study demonstrates how social science research may benefit from these knowledge bases by examining what we can learn from Wikidata and Wikipedia about global billionaires (2010-2022). [...] We show that the English Wikipedia and, to a lesser extent, Wikidata exhibit gender and nationality biased [sic] in the coverage and information about global billionaires. Using the genealogical information that Wikidata provides, we examine the family webs of billionaires and show that at least 15% of all billionaires have a family member also being a billionaire."

"Can you trust Wikidata?"
From the abstract:  "The present work aims to assess how well Wikidata (WD) supports the trust decision process implied when using its data. WD provides several mechanisms that can support this trust decision, and our KG [Knowledge Graph] Profiling, based on WD claims and schema, elaborates an analysis of how multiple points of view, controversies, and potentially incomplete or incongruent content are presented and represented."

"A Study of Concept Similarity in Wikidata"
From the abstract:  "In light of the adoption of Wikidata for increasingly complex tasks that rely on similarity, and its unique size, breadth, and crowdsourcing nature, we propose that conceptual similarity should be revisited for the case of Wikidata. In this paper, we study a wide range of representative similarity methods for Wikidata, organized into three categories, and leverage background information for knowledge injection via retrofitting. We measure the impact of retrofitting with different weighted subsets from Wikidata and ProBase. Experiments on three benchmarks show that the best performance is achieved by pairing language models with rich information, whereas the impact of injecting knowledge is most positive on methods that originally do not consider comprehensive information. The performance of retrofitting is conditioned on the selection of high-quality similarity knowledge. A key limitation of this study, similar to prior work lies in the limited size and scope of the similarity benchmarks. While Wikidata provides an unprecedented possibility for a representative evaluation of concept similarity, effectively doing so remains a key challenge."

Matching non-notable Wikidata "orphans" to Wikipedia sections
From the abstract and paper:  "We present a transformer-based model, ParaGraph, which, given a Wikidata entity as input, retrieves its corresponding Wikipedia section. To perform this task, ParaGraph first generates an entity summary and compares it to sections to select an initial set of candidates. The candidates are then ranked using additional information from the entity’s textual description and contextual information. Our experimental results show that ParaGraph achieves 87% Hits@10 when ranking Wikipedia sections given a Wikidata entity as input. [...]

This mapping between Wikipedia and Wikidata is beneficial for both projects. On the one hand, it facilitates information extraction and standardization of Wikipedia articles across languages, which can benefit from the standard structure and values of their Wikidata counterpart, e.g., for populating infoboxes. On the other hand, Wikipedia articles are routinely updated, which in turn keeps Wikidata fresh and useful for online applications. However, the Wikipedia editorial guidelines require that an entity be notable or worthy of notice to be added to the encyclopedia, which does not hold for all Wikidata entities. [...] We refer to the remaining entities, which do not have an article in [any language] Wikipedia, as orphans. In the absence of a textual counterpart, orphans often suffer from incompleteness and lack of maintenance. Our present effort stems from the observation that a substantial number of orphan entities are indeed represented in Wikipedia, but not at the page level; orphan entities are often described within existing Wikipedia articles in the form of sections, subsections, and paragraphs of a more generic concept or fact. For example, the English Wikipedia does not have a dedicated page about “Tennis racket”, it is instead embedded in the “Racket” page as a section, whereas it can be found as a standalone (orphan) entity on Wikidata (“Q153362”)."