User talk:Rspublication

RS PUBLICATION sets the standard for innovative, interdisciplinary and international scholarship. RS PUBLICATION recognizes that high-quality scholarship requires dedication and commitment, and works closely with authors and editors to produce outstanding work in various fields of research and education.

RS Publication operates a range of online, open-access, interdisciplinary journals, focusing on theories, methodologies, implementations, research and applications in various fields of education. Through all of its journals, it aims to contribute to the enhancement of research activity. The publications cover all areas of engineering and management.

RS PUBLICATION publishes original research articles, review articles and notes. Thanks to the joint efforts of its international, national and regional editorial and advisory boards, the review procedure is fast. RS PUBLICATION sincerely thanks all members of its editorial and advisory boards for their contributions since the very beginning. Journals and the h-index of each journal (http://www.rspublication.com/):
 * INTERNATIONAL JOURNAL OF EMERGING TRENDS IN ENGINEERING AND DEVELOPMENT (IJETED): h-index 5
 * INTERNATIONAL JOURNAL OF RESEARCH IN MANAGEMENT (IJRM): h-index 4.8
 * INTERNATIONAL JOURNAL OF ADVANCED SCIENTIFIC AND TECHNICAL RESEARCH (IJAST): h-index 6.2
 * INTERNATIONAL JOURNAL OF PHARMACEUTICAL SCIENCE AND HEALTH CARE (IJPHC): h-index 4.6
 * INTERNATIONAL JOURNAL OF COMPUTER APPLICATION (IJCA): h-index 4.9
 * AMERICAN JOURNAL OF SUSTAINABLE CITY AND SOCIETY (AJSCS): h-index 3.2

The h-index is an index that attempts to measure both the productivity and impact of the published work of a scientist or scholar. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department or university or country. The index was suggested by Jorge E. Hirsch, a physicist at UCSD, as a tool for determining theoretical physicists' relative quality[1] and is sometimes called the Hirsch index or Hirsch number. The index is based on the distribution of citations received by a given researcher's publications. Hirsch writes:

A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each.

In other words, a scholar with an index of h has published h papers each of which has been cited in other papers at least h times.[2] Thus, the h-index reflects both the number of publications and the number of citations per publication. The index is designed to improve upon simpler measures such as the total number of citations or publications. The index works properly only for comparing scientists working in the same field; citation conventions differ widely among different fields.
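The definition above translates directly into a short computation: sort the citation counts in decreasing order and find the last rank at which the count still meets or exceeds the rank. A minimal sketch (Python; the function name and sample citation counts are illustrative):

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:  # the paper at this rank still has >= rank citations
            h = rank
        else:
            break
    return h

# A scholar whose papers have been cited 10, 8, 5, 4 and 3 times has h = 4:
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that the fifth paper, with only 3 citations, is what stops h at 4 in the example: five papers with at least five citations each would be needed for h = 5.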

The h-index serves as an alternative to more traditional journal impact factor metrics in the evaluation of the impact of the work of a particular researcher. Because only the most highly cited articles contribute to the h-index, its determination is a relatively simple process. Hirsch has demonstrated that h has high predictive value for whether a scientist has won honors like National Academy membership or the Nobel Prize. In physics, a moderately productive scientist should have an h equal to the number of years of service,[citation needed] while biomedical scientists tend to have higher values. The h-index grows as citations accumulate, and thus it depends on the 'academic age' of a researcher.

Hirsch suggested (with large error bars) that, for physicists, a value for h of about 12 might be typical for advancement to tenure (associate professor) at major research universities. A value of about 18 could mean a full professorship, 15–20 could mean a fellowship in the American Physical Society, and 45 or higher could mean membership in the United States National Academy of Sciences.[3] Little systematic investigation has been made on how academic recognition correlates with h-index over different institutions, nations and fields of study.

Among the 22 scientific disciplines listed in the Thomson Reuters Essential Science Indicators Citation Thresholds, physics has the second most citations after space science.[4] During the period January 1, 2000 – February 28, 2010, a physicist had to receive 2073 citations to be among the most cited 1% of physicists in the world.[4] The threshold for Space Science is the highest (2236 citations), and Physics is followed by Clinical Medicine (1390) and Molecular Biology & Genetics (1229). Most disciplines, such as Environment/Ecology (390), have fewer scientists, fewer papers, and fewer citations.[4] Therefore, these disciplines have lower citation thresholds in the Essential Science Indicators, with the lowest citation thresholds observed in Social Sciences (154), Computer Science (149), and Multidisciplinary Sciences (147).[4]

Calculation

The h-index can be determined manually using citation databases or with automated tools. Subscription-based databases such as Scopus and the Web of Knowledge provide automated calculators, and Harzing's Publish or Perish program calculates the h-index from Google Scholar entries. In July 2011 Google trialled a tool which allows a limited number of scholars to keep track of their own citations and also produces an h-index and an i10-index. Each database is likely to produce a different h for the same scholar because of differing coverage: Google Scholar has more citations than Scopus and Web of Science, but the smaller citation collections tend to be more accurate. In addition, specific databases, such as the Stanford Physics Information Retrieval System (SPIRES), can automatically calculate the h-index for researchers working in high energy physics.

The topic has been studied in detail by Lokman I. Meho and Kiduk Yang.[5][6] Web of Knowledge was found to have strong coverage of journal publications, but poor coverage of high impact conferences. Scopus has better coverage of conferences, but poor coverage of publications prior to 1996; Google Scholar has the best coverage of conferences and most journals (though not all), but like Scopus has limited coverage of pre-1990 publications.[6] The exclusion of conference preprints is a problem for scholars in computer science, where conference preprints are considered an important part of the literature, but reflects common practice in most scientific fields where conference preprints are unrefereed and are accorded less weight in evaluating academic productivity. The Scopus and Web of Knowledge calculations also fail to count the citations that a publication gathers while 'in press' (i.e. after being accepted for publication but before being printed on paper);[citation needed][dubious – discuss] with electronic pre-publication and very long printing lags for some journals, these 'in press' citations can be considerable. Google Scholar has been criticized for producing "phantom citations," including gray literature in its citation counts, and failing to follow the rules of Boolean logic when combining search terms.[7] For example, the Meho and Yang study found that Google Scholar identified 53% more citations than Web of Knowledge and Scopus combined, but noted that because most of the additional citations reported by Google Scholar were from low-impact journals or conference proceedings, they did not significantly alter the relative ranking of the individuals. 
It has been suggested that, to deal with the sometimes wide variation in h for a single academic across the possible citation databases, one should assume that false negatives in the databases are more problematic than false positives, and take the maximum h measured for an academic.[8]

Advantages

Hirsch intended the h-index to address the main disadvantages of other bibliometric indicators, such as total number of papers or total number of citations. Total number of papers does not account for the quality of scientific publications, while total number of citations can be disproportionately affected by participation in a single publication of major influence (for instance, methodological papers proposing successful new techniques, methods or approximations, which can generate a large number of citations), or having many publications with few citations each. The h-index is intended to measure simultaneously the quality and quantity of scientific output.

Criticism

There are a number of situations in which h may provide misleading information about a scientist's output,[9] although most of these are not exclusive to the h-index:

 * The h-index does not account for the number of authors of a paper. In the original paper, Hirsch suggested partitioning citations among co-authors. Even in the absence of explicit gaming, the h-index and similar indexes tend to favor fields with larger groups, e.g. experimental over theoretical.
 * The h-index does not account for the typical number of citations in different fields. Different fields, or journals, traditionally use different numbers of citations.
 * The h-index discards the information contained in author placement in the authors' list, which in some scientific fields (but not in high energy physics, where Hirsch works) is significant.[10][11]
 * The h-index is bounded by the total number of publications. This means that scientists with a short career are at an inherent disadvantage, regardless of the importance of their discoveries. For example, Évariste Galois' h-index is 2, and will remain so forever. Had Albert Einstein died after publishing his four groundbreaking Annus Mirabilis papers in 1905, his h-index would be stuck at 4 or 5. This is also a problem for any measure that relies on the number of publications. However, as Hirsch indicated in the original paper, the index is intended as a tool to evaluate researchers in the same stage of their careers; it is not meant as a tool for historical comparisons.
 * The h-index does not consider the context of citations. For example, citations in a paper are often made simply to flesh out an introduction, otherwise having no other significance to the work. h also does not resolve other contextual instances: citations made in a negative context and citations made to fraudulent or retracted work. This is also a problem for regular citation counts.
 * The h-index gives books the same count as articles, making it difficult to compare scholars in fields that are more book-oriented, such as the humanities.
 * The h-index does not account for confounding factors such as "gratuitous authorship", the so-called Matthew effect, and the favorable citation bias associated with review articles. Again, this is a problem for all other metrics using publications or citations.
 * The h-index has been found to have slightly less predictive accuracy and precision than the simpler measure of mean citations per paper.[12] However, this finding was contradicted by another study.[13]
 * The h-index is a natural number, which reduces its discriminatory power. Ruane and Tol therefore propose a rational h-index that interpolates between h and h + 1.[14]
 * The h-index can be manipulated through self-citations,[15] and if based on Google Scholar output, then even computer-generated documents can be used for that purpose, e.g. using SCIgen.[16]
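One way to interpolate between h and h + 1, in the spirit of Ruane and Tol's rational h-index, is to credit partial progress toward the next integer: at most 2h + 1 additional citations are ever needed to raise an index of h to h + 1 (one more for each of the top h papers, plus up to h + 1 for the paper at rank h + 1), so the remaining shortfall can be divided by that maximum. A sketch under that formulation (Python; the function name is illustrative, and this is one reading of the interpolation, not necessarily the authors' exact definition):

```python
def rational_h(citations):
    """h-index plus fractional progress toward h + 1 (interpolated, Ruane-Tol style)."""
    counts = sorted(citations, reverse=True) + [0]  # pad so rank h + 1 always exists
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
    target = h + 1
    # citations still needed for the top h + 1 papers to each reach h + 1 citations
    needed = sum(max(0, target - counts[rank - 1]) for rank in range(1, target + 1))
    # needed ranges from 1 (one citation away) to 2h + 1 (no progress at all)
    return (h + 1) - needed / (2 * h + 1)
```

For citation counts [10, 8, 5, 4, 3], h is 4 and three more citations would yield h = 5, so the rational value lands between 4 and 5, discriminating between scholars who share the same integer h.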

Alternatives and modifications

Various proposals to modify the h-index in order to emphasize different features have been made.[17][18][19][20][21][22] As the variants have proliferated, comparative studies have become possible, and they demonstrate that most proposals do not differ significantly from the original h-index, with which they remain highly correlated.[23]

 * An individual h-index normalized by the average number of co-authors in the h-core has been introduced by Batista et al.[17] They also found that the distribution of the h-index, although it depends on the field, can be normalized by a simple rescaling factor. For example, taking the hs of biology as the standard, the distribution of h for mathematics collapses onto it if h is multiplied by three; that is, a mathematician with h = 3 is equivalent to a biologist with h = 9. This method has not been readily adopted, perhaps because of its complexity. It might be simpler to divide citation counts by the number of authors before ordering the papers and obtaining the h-index, as originally suggested by Hirsch.
 * The m-index is defined as h/n, where n is the number of years since the scientist's first published paper;[1] it is also called the m-quotient.[24][25]
 * A generalization of the h-index and some other indices that gives additional information about the shape of the author's citation function (heavy-tailed, flat/peaked, etc.) was proposed by Gągolewski and Grzegorzewski.[26]
 * The successive Hirsch-type index was introduced independently by Kosmulski[27] and Prathap.[28] A scientific institution has a successive Hirsch-type index of i when at least i researchers from that institution have an h-index of at least i.
 * Bornmann, Mutz, and Daniel proposed three additional metrics, h2 lower, h2 center, and h2 upper, to give a more accurate representation of the distribution shape. The three h2 metrics measure the relative area within a scientist's citation distribution in the low-impact area (h2 lower), the area captured by the h-index (h2 center), and the area from the publications with the highest visibility (h2 upper). Scientists with high h2 upper percentages are perfectionists, whereas scientists with high h2 lower percentages are mass producers. As these metrics are percentages, they are intended to give a qualitative description to supplement the quantitative h-index.[29]
 * The g-index proposed by Egghe[30] can be seen as the h-index for an averaged citation count.
 * K. Dixit and colleagues argue that "For an individual researcher, a measure such as Erdős number captures the structural properties of network whereas the h-index captures the citation impact of the publications. One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking." Several author ranking systems, such as eigenfactor (based on eigenvector centrality), have already been proposed, for instance the Phys Author Rank Algorithm.[31]
 * The c-index accounts not only for the citations but for the quality of the citations in terms of the collaboration distance between citing and cited authors. A scientist has c-index n if n of his/her N citations are from authors which are at collaboration distance at least n, and the other (N − n) citations are from authors which are at collaboration distance at most n.[32]
 * An s-index, accounting for the non-entropic distribution of citations, has been proposed and has been shown to correlate very well with h.[33]
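Egghe's g-index mentioned above also has a direct computation: under the common convention, g is the largest rank such that the g most-cited papers together have at least g² citations (bounded by the number of papers). A small sketch (Python; the function name is illustrative):

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together have >= g**2 citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c  # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g
```

For the citation counts [10, 8, 5, 4, 3] this gives g = 5 against h = 4: unlike the h-index, the g-index lets the excess citations of the most-cited papers count toward the threshold, which is the "averaged citations count" behavior described above.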

See also

 * Bibliometrics
 * Impact factor
 * g-index
 * h-b index
 * Eddington number (cycling), an earlier metric of the same form
 * Durfee square, a quantity defined in the same way for integer partitions

References


^ a b Hirsch, J. E. (15 November 2005). "An index to quantify an individual's scientific research output". PNAS 102 (46): 16569–16572. arXiv:physics/0508025. Bibcode 2005PNAS..10216569H. doi:10.1073/pnas.0507655102. PMC 1283832. PMID 16275915.
^ McDonald, Kim (8 November 2005). "Physicist Proposes New Way to Rank Scientific Output". PhysOrg. Retrieved 13 May 2010.
^ Peterson, Ivars (December 2, 2005). "Rating Researchers". Science News. Retrieved 13 May 2010.
^ a b c d "Citation Thresholds". Science Watch. May 1, 2010. Retrieved 13 May 2010.
^ Meho, L. I.; Yang, K. (2007). "Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science vs. Scopus and Google Scholar". Journal of the American Society for Information Science and Technology 58 (13): 2105–2125. doi:10.1002/asi.20677.
^ a b Meho, L. I.; Yang, K. (23 December 2006). "A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar". arXiv:cs/0612132 (preprint of the paper published as "Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar", Journal of the American Society for Information Science and Technology 58 (13), 2007: 2105–2125).
^ Jacsó, Péter (2006). "Dubious hit counts and cuckoo's eggs". Online Information Review 30 (2): 188–193. doi:10.1108/14684520610659201.
^ Sanderson, Mark (2008). "Revisiting h measured on UK LIS and IR academics". Journal of the American Society for Information Science and Technology 59 (7): 1184–1190. doi:10.1002/asi.20771.
^ Wendl, Michael (2007). "H-index: however ranked, citations need context". Nature 449 (7161): 403. Bibcode 2007Natur.449..403W. doi:10.1038/449403b. PMID 17898746.
^ Sekercioglu, Cagan H. (2008). "Quantifying coauthor contributions". Science 322 (5900): 371. doi:10.1126/science.322.5900.371a. PMID 18927373.
^ Zhang, C.-T. (2009). "A proposal for calculating weighted citations based on author rank". EMBO Reports 10 (5): 416–417. doi:10.1038/embor.2009.74. PMC 2680883. PMID 19415071.
^ Lehmann, Sune; Jackson, Andrew D.; Lautrup, Benny E. (2006). "Measures for measures". Nature 444 (7122): 1003–4. Bibcode 2006Natur.444.1003L. doi:10.1038/4441003a. PMID 17183295.
^ Hirsch, J. E. (2007). "Does the h-index have predictive power?". PNAS 104 (49): 19193–19198. Bibcode 2007PNAS..10419193H. doi:10.1073/pnas.0707962104. PMC 2148266. PMID 18040045.
^ Ruane, Frances; Tol, Richard S. J. (2008). "Rational (successive) h-indices: An application to economics in the Republic of Ireland". Scientometrics 75 (2): 395–405. doi:10.1007/s11192-007-1869-7.
^ Bartneck, Christoph; Kokkelmans, Servaas (2011). "Detecting h-index manipulation through self-citation analysis". Scientometrics 87 (1): 85–98. doi:10.1007/s11192-010-0306-5. PMC 3043246. PMID 21472020.
^ Labbé, Cyril (2010). Ike Antkare, one of the great stars in the scientific firmament. Laboratoire d'Informatique de Grenoble RR-LIG-2008 (technical report), Joseph Fourier University.
^ a b Batista, P. D. et al. (2006). "Is it possible to compare researchers with different scientific interests?". Scientometrics 68 (1): 179–189. doi:10.1007/s11192-006-0090-4.
^ Sidiropoulos, Antonis; Katsaros, Dimitrios; Manolopoulos, Yannis (2007). "Generalized Hirsch h-index for disclosing latent facts in citation networks". Scientometrics 72 (2): 253–280. doi:10.1007/s11192-007-1722-z.
^ Vaidya, Jayant S. (December 2005). "V-index: A fairer index to quantify an individual's research output capacity". BMJ 331: 1339-c–1340-c.
^ Katsaros, D.; Sidiropoulos, A.; Manolopoulos, Y. (2007). "Age-Decaying H-Index for Social Network of Citations". Proceedings of the Workshop on Social Aspects of the Web, Poznań, Poland, April 27, 2007.
^ Anderson, T. R.; Hankin, R. K. S.; Killworth, P. D. (2008). "Beyond the Durfee square: Enhancing the h-index to score total publication output". Scientometrics 76 (3): 577–588. doi:10.1007/s11192-007-2071-2.
^ Baldock, C.; Ma, R. M. S.; Orton, C. G. (2009). "The h index is the best measure of a scientist's research productivity". Medical Physics 36 (4): 1043–1045. Bibcode 2009MedPh..36.1043B. doi:10.1118/1.3089421. PMID 19472608.
^ Bornmann, L. et al. (2011). "A multilevel meta-analysis of studies reporting correlations between the h-index and 37 different h-index variants". Journal of Informetrics 5 (3): 346–359.
^ Harzing, Anne-Wil (23 April 2008). "Reflections on the h-index". Harzing.com, University of Melbourne. http://www.harzing.com/pop_hindex.htm. Retrieved March 1, 2011.
^ von Bohlen und Halbach, O. (2011). "How to judge a book by its cover? How useful are bibliometric indices for the evaluation of "scientific quality" or "scientific productivity"?". Annals of Anatomy 193 (3): 191–6. doi:10.1016/j.aanat.2011.03.011. PMID 21507617.
^ Gągolewski, M.; Grzegorzewski, P. (2009). "A geometric approach to the construction of scientific impact indices". Scientometrics 81 (3): 617–634. doi:10.1007/s11192-008-2253-y.
^ Kosmulski, M. (2006). "I—a bibliometric index". Forum Akademickie 11: 31.
^ Prathap, G. (2006). "Hirsch-type indices for ranking institutions' scientific research output". Current Science 91 (11): 1439.
^ Bornmann, L.; Mutz, R. D.; Daniel, H. D. (2010). "The h index research output measurement: Two approaches to enhance its accuracy". Journal of Informetrics 4 (3): 407–414. doi:10.1016/j.joi.2010.03.005.
^ Egghe, L. (2006). "Theory and practice of the g-index". Scientometrics 69 (1): 131–152.
^ Dixit, Kashyap; Kameshwaran, S.; Mehta, Sameep; Pandit, Vinayaka; Viswanadham, N. (February 2009). Towards simultaneously exploiting structure and outcomes in interaction networks for node ranking. IBM Research Report R109002; also appeared as Kameshwaran, S.; Pandit, V.; Mehta, S.; Viswanadham, N.; Dixit, K. (2010). "Outcome aware ranking in interaction networks". Proceedings of the 19th ACM International Conference on Information and Knowledge Management (CIKM '10): 229–238. doi:10.1145/1871437.1871470. ISBN 978-1-4503-0099-5.
^ Bras-Amorós, M.; Domingo-Ferrer, J.; Torra, V. (2011). "A bibliometric index based on the collaboration distance between cited and citing authors". Journal of Informetrics 5 (2): 248–264. doi:10.1016/j.joi.2010.11.001.
^ Silagadze, Z. (2010). "Citation entropy and research impact estimation". Acta Physica Polonica B 41: 2325–2333. arXiv.