Wikipedia talk:Articles for deletion/Mikhail Katz

Bibliometry for mathematical scientists
There is a discussion and good bibliography on the use and abuse of bibliometry, particularly as concerns mathematicians in this report (and associated comments by distinguished mathematical scientists):
 * MR2561124 Hall, Peter Gavin. Comment: Citation statistics [MR2561120]. Statist. Sci. 24 (2009), no. 1, 25–26.
 * MR2561123 Spiegelhalter, David; Goldstein, Harvey. Comment: Citation statistics. Statist. Sci. 24 (2009), no. 1, 21–24.
 * MR2561122 Lehmann, Sune; Lautrup, Benny E.; Jackson, Andrew D. Comment: Citation statistics. Statist. Sci. 24 (2009), no. 1, 17–20.
 * MR2561121 Silverman, Bernard W. (4-OXSP) Comment: Bibliometrics in the context of the UK research assessment exercise. Statist. Sci. 24 (2009), no. 1, 15–16.

The first paper cites another paper, by Amim, which notes the abnormally low number of citations in mathematics, even lower than in the social sciences. Kiefer .Wolfowitz 01:23, 23 May 2011 (UTC)


 * Thanks, I for one will definitely take a gander. I think we are all agreed that mathematics has a low citation rate. But giving it a quick read-through can only help, I suppose. Thenub314 (talk) 02:12, 23 May 2011 (UTC)
 * Sadly the first one is behind a paywall, so it is not accessible to many of us. Certainly the view that has come up frequently before on these pages is that the citation rate for mathematicians is lower than in many other subjects. Xxanthippe (talk) 02:24, 23 May 2011 (UTC).
 * I added a free version. Kiefer .Wolfowitz 02:26, 23 May 2011 (UTC)
 * Thanks, this is immensely valuable and indeed makes it clear that mathematicians get fewer cites than others (pp. 7–8). It helps us compare the apples with oranges that appear in these AfD debates. However, I am sure that all of us will subscribe to the view expressed in the article: "Numbers are not inherently superior to sound judgments". Xxanthippe (talk) 02:50, 23 May 2011 (UTC).

Predictions using formula or judgment, following Paul Meehl
The important question is whether we should use our minds or a formula, and that question has been studied in psychology (Paul Meehl, Robyn Dawes). The answer: in nearly all cases, using linear regression saves time and offers some improvement! Kiefer .Wolfowitz 02:59, 23 May 2011 (UTC)

His 1954 book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence analyzed the claim that mechanical (formal, algorithmic) methods of data combination outperformed clinical (e.g., subjective, informal, "in the head") methods when such combinations are used to arrive at a prediction of behavior. The analysis favored mechanical modes of combination and caused a considerable stir amongst clinicians.

Meehl (1954) argued that mechanical methods of prediction would, used correctly, make more efficient decisions about patients' prognosis and treatment. Still today, however, clinicians make such decisions based on their professional judgment: that is, they combine all kinds of information "in their head" and arrive at a conclusion/prediction about a patient. Meehl (1954) theorized that clinicians would make more mistakes than a mechanical prediction tool created for a similar decision purpose.

Mechanical prediction methods are simply a mode of combination of data to arrive at a decision/prediction concerning the emission of behavior. Mechanical prediction does not exclude any type of data from being combined. Indeed, mechanical prediction tools often incorporate clinical judgments, properly coded, in their predictions. The defining characteristic is that, once the data to be combined is given, the mechanical tool will make a prediction that is 100% reliable. That is, it will make exactly the same prediction for exactly the same data every time. Clinical prediction, on the other hand, does not guarantee this.

A meta-analysis comparing clinical and mechanical prediction efficiency vindicates Meehl's (1954) claim that mechanical data combination and prediction outperforms clinical combination and prediction.
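The "mechanical combination" described above can be sketched in a few lines. The weights and predictor names below are hypothetical, chosen only to illustrate the defining property (identical data always yield the identical prediction); they are not drawn from Meehl's data:

```python
# Sketch of "mechanical" (formula-based) prediction in Meehl's sense:
# a fixed linear combination of coded predictors. Weights and predictor
# names are hypothetical, for illustration only.

WEIGHTS = {"test_score": 0.6, "prior_history": 0.3, "clinician_rating": 0.1}

def mechanical_predict(case):
    """Combine coded inputs with fixed weights; deterministic by design."""
    return sum(WEIGHTS[k] * case[k] for k in WEIGHTS)

case = {"test_score": 0.8, "prior_history": 0.5, "clinician_rating": 0.9}

# The defining characteristic: the same data always produce the same
# prediction, which clinical "in the head" combination cannot guarantee.
assert mechanical_predict(case) == mechanical_predict(dict(case))
```

Note that a clinician's rating can itself appear as one coded input, matching the point above that mechanical prediction does not exclude clinical judgments from the combination.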


 * OK, you certainly know who Gromov is and have the appropriate respect, which hints you're pretty savvy mathematically. I wish I shared your feeling that Gromov endorsed Katz to establish his notability, but so far I have only seen that he has written a section in one of his advisor's publications. That is a pretty common activity for former/current students, in my subjective experience. Mathematics is a low-citation field, and on this fact everyone here agrees. Of course it is not uniform, as some subfields of mathematics cite even less than others: graph theory versus harmonic analysis, etc. As Katz has not won any awards, nor been written about in other places, his notability seems to rest solely on how we estimate the impact of his work.
 * The real difficulty I see is that we have no real data with which to compare. There is Perchloric's average of citations, and Xxanthippe's estimate of what has passed before. The first has the issue that maybe everyone he looked at should have a page, and the second suffers from the same problem in the other direction. To be honest, if you had a way to come up with a formula, I would love to hear it. I will do my best to stay open-minded. Thenub314 (talk) 03:34, 23 May 2011 (UTC)


 * Comment. The data that have been made available to us by Wolf indicate that mathematicians get roughly half as many cites as the average. In my view this moves the current BLP from borderline to clear keep. We now and again get dire warnings that the sky will fall in if every eligible academic gets an article about themselves. Those outside the academic world will be surprised to learn that it is far from the case that every academic is desperate to get an article about themselves in Wikipedia. Xxanthippe (talk) 03:54, 23 May 2011 (UTC).
 * Thanks! Kiefer .Wolfowitz 15:20, 23 May 2011 (UTC)
 * @Xxanthippe: Who is making dire predictions, and why do you insist on treating those who disagree so dismissively and with so little respect? Might I ask: if math gets half of the average, can you tell me what the average is that qualifies for a Wikipedia page? Thenub314 (talk) 15:30, 23 May 2011 (UTC)
 * The average is indicated in at least one respect on page 8 of the report drawn to our attention by Kiefer.Wolfowitz. The average paper in mathematics gets cited about once, in life sciences about six times, with several subjects in between those limits. As noted above by me, GS shows that the subject of the BLP has 46 hits with 670 cites i.e. 14 cites per hit. This is way above the average for mathematicians. In the more comparable WoS, Agricola44 finds 17 papers with 100 citations. Again, with a cites/papers ratio of 6, this is also well above average. It is hard to escape the conclusion that the research output of the present subject is scoring well above average for his field and thereby passes the above average professor test. Xxanthippe (talk) 23:10, 23 May 2011 (UTC).
 * Thanks, I am still not convinced but I won't get into the details as to why. I think I have said enough at this point, but I appreciate the care put into your reply. Thenub314 (talk) 23:22, 23 May 2011 (UTC)
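The cites-per-paper arithmetic quoted in the exchange above can be checked directly. The figures are those given in the discussion (GS: 46 hits, 670 cites; WoS: 17 papers, 100 cites; report, p. 8: roughly one cite per mathematics paper on average):

```python
# Recompute the citation ratios quoted in the discussion.
gs_ratio = 670 / 46    # Google Scholar: about 14.6, quoted as "14 cites per hit"
wos_ratio = 100 / 17   # Web of Science: about 5.9, quoted as "a cites/papers ratio of 6"
math_average = 1       # average cites per mathematics paper, per the report (p. 8)

print(round(gs_ratio, 1), round(wos_ratio, 1))
# Both ratios are several times the field average of ~1 cite per paper,
# which is the basis of the "well above average" conclusion above.
```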

Peter Orno and John Rainwater
Since you BLP and WP:Professor experts are here, I note two articles about mathematicians.

I wrote articles about two pseudonyms, under whose names mathematicians publish papers. The lesser mathematician, Peter Orno, is well documented because Drmies asked a lot of questions, and so I gave a lot of citations. I could easily add more citations to Rainwater's article.

These articles have tentatively been accepted for April Fool's Day.

I mention them because to me it's obvious that Katz is more notable than either, and I hope that somebody adds citations to source statements that might be challenged by a good-faith reader. Kiefer .Wolfowitz 00:10, 24 May 2011 (UTC)