User talk:Karlapalemiiit

June 2023
Hello, Karlapalemiiit. We welcome your contributions, but if you have an external relationship with the people, places or things you have written about on the page Author-level metrics, you may have a conflict of interest (COI). Editors with a conflict of interest may be unduly influenced by their connection to the topic. See the conflict of interest guideline and FAQ for organizations for more information. We ask that you:


 * avoid editing or creating articles about yourself, your family, friends, colleagues, company, organization, clients, or competitors;
 * propose changes on the talk pages of affected articles (you can use the request edit template);
 * disclose your conflict of interest when discussing affected articles (see Conflict of interest);
 * avoid linking to your organization's website in other articles (see Spam);
 * do your best to comply with Wikipedia's content policies.

In addition, you are required by the Wikimedia Foundation's terms of use to disclose your employer, client, and affiliation with respect to any contribution which forms all or part of work for which you receive, or expect to receive, compensation. See Paid-contribution disclosure.

Also, editing for the purpose of advertising, publicising, or promoting anyone or anything is not permitted. Thank you. XOR'easter (talk) 00:13, 22 June 2023 (UTC)


 * I have written about recent research conducted by us, which has been published. Hence it is a public research statement. There is no personal agenda in this change/addition. Kindly get a neutral check done. Karlapalemiiit (talk) 04:33, 22 June 2023 (UTC)
 * Wikipedia is not the place to write about your own research. Moreover, because the arXiv lacks peer review, the Wikipedia community does not consider it a reliable source. XOR'easter (talk) 16:45, 22 June 2023 (UTC)
 * Actually, the paper has been published - the arXiv copy is an extended version. See below.
 * https://dl.acm.org/doi/abs/10.1145/3543873.3587597
 * The reviews of the paper are below. The work is very current and fits within author-level metrics.
 * Dear Kamalakar,
 * Congratulations! We are very pleased to inform you that your submission "hp-frac: An index to determine Awarded Researchers" has been accepted at the 3rd International Workshop on Scientific Knowledge: Representation, Discovery, and Assessment (Sci-K 2023).
 * The reviews for your submission are enclosed. Please read them carefully and modify your paper accordingly. The submission deadline for the camera ready version is set for:
 * ** March 15, 2023 (AOE) SHARP **
 * Below, there is a link to a Google document [1] containing further instructions on how to prepare the camera ready, where to submit it and also information about the rights.
 * Please note that addressing the reviewer comments and presenting your work at the workshop is mandatory for being included in the proceedings.
 * More information about the program and presentations will be provided shortly. Please do not hesitate to get in touch if you have further questions.
 * Congratulations again!
 * Best regards,
 * Angelo, Francesco, Ágnes, Andrea, Yujia, Dimitris, Feng, Thanasis, Daniel, Paolo, Yi, Meijun, Ying, Misha, Yong Organisers of Sci-K @ TheWebConf2023
 * [1] https://docs.google.com/document/d/1iNXY_qoaYqkmetuPqPml9nMleY9tuLF2D2dvnLzRfVw/edit
 * HERE FOLLOW THE REVIEWS:
 * SUBMISSION: 560
 * TITLE: hp-frac: An index to determine Awarded Researchers
 * --- REVIEW 1 ---
 * AUTHORS: Aashay Singhal and Kamalakar Karlapalem
 * --- Overall evaluation ---
 * SCORE: 2 (accept)
 * - TEXT:
 * This paper proposes two new indices to evaluate a scholar's academic capacity, aiming to mitigate the defects of the common h-index metric. The effectiveness of the proposed indices is verified by comparing the ranking lists of awardees in different research fields. There are some problems that need to be solved in this paper:
 * 1. The references are too old, some publications in the last 5 years should be added.
 * 2. The full name of RBO should be introduced when it first appears in the paper.
 * 3. The experimental results show that the proposed metrics have higher values than traditional metrics for awardees, which demonstrates the effectiveness of the proposed metrics. However, there is no statistical analysis of why the proposed metrics show better results. This should be discussed from the perspective of the calculation formulation and definition.
 * 4. There are still some spelling mistakes in this paper, please double check it and make sure all mistakes are corrected. e.g., large scale -> large-scale, 'This can be mitigated by using h-index of a paper to computer h-index of an author' -> 'This can be mitigated by using h-index of a paper to compute h-index of an author'
 * --- REVIEW 2 ---
 * SUBMISSION: 560
 * TITLE: hp-frac: An index to determine Awarded Researchers
 * AUTHORS: Aashay Singhal and Kamalakar Karlapalem
 * --- Overall evaluation ---
 * SCORE: 1 (weak accept)
 * - TEXT:
 * The paper presents h-index variations and demonstrates their use in ranking schemes that perform better in cases of awarded researchers. The evaluation results show merit, especially in cases of awarded researchers. There are some issues to consider, though:
 * (1) Some of the arguments used to back up the work (h-index drawbacks) still hold, e.g., self-citations, inclusion of articles whose conclusions are later disproved.
 * (2) The definition of hp index is vague: h-index is defined on a paper and not on a set of values (eq 1).
 * (3) There are many surveys on the h-index and its variants that could be used to better argue about the pros and cons of the suggested variation. E.g.:
 * https://sci2s.ugr.es/hindex
 * https://link.springer.com/article/10.1007/s11192-017-2633-2
 * (4) Can the h-index drawbacks be resolved by just combining the h-index with other scientometrics? A discussion would be useful.
 * --- REVIEW 3 ---
 * SUBMISSION: 560
 * TITLE: hp-frac: An index to determine Awarded Researchers
 * AUTHORS: Aashay Singhal and Kamalakar Karlapalem
 * --- Overall evaluation ---
 * SCORE: 1 (weak accept)
 * - TEXT:
 * This paper proposes the hp-index and hp-frac-index, variations of the h-index that are based on finding the h-index of individual papers of a researcher. The authors analyze the proposed indices using a dataset of the top 1000 most cited researchers in Google Scholar from the fields of Computer Science, Economics, and Biology. They also compare different indices in their ability to identify researchers who received major awards like the Nobel Prize, Turing Award, etc.
 * This paper is well written and easy to follow. The topic is interesting and well within the scope of the Sci-K workshop. I appreciated the analysis the authors performed, as evaluating any research metric is notoriously difficult. I do have two qualms with the paper. Firstly, the metrics the authors propose are not entirely novel. As the authors point out themselves, the h-index of a single paper was proposed previously by Egghe; for the hp-index the authors simply use a different aggregation function than Egghe did, and the hp-frac-index is an extension that additionally accounts for the number of authors on a paper. I was a bit bothered by the fact that this was not stated earlier in the paper; reading the paper currently gives the impression that both hp and hp-frac are entirely novel (the authors refer to Egghe's paper in the last paragraph before the conclusion).
 * Secondly, the analysis using awarded researchers was done on such a small set of individuals that I'm not sure the results can be considered statistically significant, and I wish the authors addressed the limitations of their approach in the paper. For example, for biology, the authors were able to match only 13 (according to table 5) out of the 842 researchers they have in their dataset to award recipients. They then evaluate all four metrics by looking at what percentage of the matched researchers is ranked in the top 5%, 10%, etc. Given that the set is so small, this means one metric may have placed 5 researchers in the top 5% vs. the other metric 8, but when talking about percentages (32 vs 61%), it sounds like a big difference (hopefully my understanding of this is correct). There are some more questions I had about the approach, such as why these particular awards and not others? Also, the fact that hp-frac ranks researchers differently from the other indices is in my opinion not necessarily interesting in itself, and it may be helpful if the authors discussed this in more detail. Finally, I would have appreciated a comparison with plain citation counts as well.
 * Despite these limitations, I think this is an interesting and well written paper that has the potential to spark a discussion at the workshop. Karlapalemiiit (talk) 05:41, 23 June 2023 (UTC)
 * It is still against Wikipedia policy for you to write about your own work to increase its visibility. Furthermore, your paper is a primary source, which makes it unsuitable for most purposes here regardless of who tries to add it, and we have no indication that it is due weight for inclusion. XOR'easter (talk) 16:01, 23 June 2023 (UTC)
 * ok 49.205.253.80 (talk) 03:38, 24 June 2023 (UTC)