EDITORIAL

**To share the fame in a fair way, h_m modifies h for multi-authored manuscripts**

Michael Schreiber

**Abstract.** The *h*-index was introduced by Hirsch as a useful measure to characterize the scientific output of a researcher. I suggest a simple modification in order to take multiple co-authorship appropriately into account, by counting each paper only fractionally according to (the inverse of) the number of authors. The resulting *h*_{m}-indices for eight famous physicists lead to a different ranking from the original *h*-indices.
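The fractional-counting rule described in the abstract can be sketched in a few lines. This is a minimal Python sketch, not the paper's own code: it assumes each paper is given as a `(citations, n_authors)` pair, and implements h_m as the largest effective rank (the running sum of 1/authors) that is still covered by the citation count at that rank, which is one natural reading of the abstract's description.

```python
def h_index(citations):
    """Classic Hirsch h: the largest h such that h papers have >= h citations each."""
    cits = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cits, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h


def hm_index(papers):
    """Fractional h_m: each paper counts as 1/(number of authors).

    Papers are sorted by citations; the effective rank r_eff is the running
    sum of 1/authors, and h_m is the largest r_eff still covered by the
    citation count of the paper at that rank.
    """
    papers = sorted(papers, key=lambda p: p[0], reverse=True)
    r_eff = 0.0
    hm = 0.0
    for cits, n_authors in papers:
        r_eff += 1.0 / n_authors
        if r_eff <= cits:
            hm = r_eff
        else:
            break
    return hm


# Invented toy record: (citations, number of authors) per paper.
papers = [(10, 1), (8, 4), (5, 2), (4, 4), (3, 1)]
print(h_index([c for c, _ in papers]))  # 4
print(hm_index(papers))                 # 3.0 (= 1 + 1/4 + 1/2 + 1/4 + 1)
```

Note that h_m need not be an integer, and a heavily co-authored record can see its index drop well below h.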

Received 16 November 2007

Published 10 April 2008

OK, OK, no one said that h_I was perfect, only that it was a step in the search for better indices. There are now three kinds of individual h-index (one of them, unpublished, is used by the Publish or Perish site, which computes several indices using Google Scholar). Popularizing them matters, because they are the only indices that counter the corrupt practice of listing too many people on papers.

We would need to study whether there is a universal curve for P(h_m), as was found for P(h_I).

**Overall, I’m more interested in physics than citations**

By Jorge Hirsch, Physics Professor, University of California San Diego, USA

Many people ask me why I came up with the “highly cited index,” or *h-index*, a method for quantifying a scientist’s publication productivity and impact. The truth is that I dislike impact factors: because of the controversial nature of my articles and research, I am often unable to get my work published in journals with high impact factors. Despite this, many of my articles have received large numbers of citations.

**Background of the *h-index***

At many institutions, including my own, citation counts are considered during decisions relating to hiring, promotion and tenure. Despite the fact that citation counts can contain misinformation, for example, when many co-authors or self-citations are involved, they form a basic quantitative measure of a researcher’s output and impact. Hence citation counts should play an important role in evaluations, even if (or maybe especially when) the papers are not published in “high-impact journals.”


In the summer of 2003, I first discussed the concept and mathematical calculation of the *x-coefficient*, as I initially called the *h-index*, with some colleagues at UCSD, and started to use it informally in evaluations. I wrote up a draft paper but wasn’t sure it would be of sufficient interest for publication. In the spring of 2005, I sent the paper to some colleagues and asked for comments. Some time later, a colleague from Germany emailed me inquiring about the index and expressing great interest. Then I decided to upload my *h-index* paper^{1} onto the Los Alamos server, which I did on August 3, 2005. I was still not sure whether to publish it in a refereed journal. To my surprise, the preprint received a very high level of interest. Before long, I found my email box filled with comments related to the article.

In essence, the *h-index* is about providing a simple objective measure for research evaluation. Since it is not related to the popularity of a journal, this index is a way to put more democracy into research performance measurements. In fact, papers that receive high numbers of citations in “low-impact” journals should be especially noteworthy.

**Possible improvements to the *h-index***

Naturally no single quantitative measurement is sufficient on its own. One can add other features of the citation distribution besides the *h-index* to reflect additional citation information. For example, one may also consider the slope (first derivative) and curvature (second derivative) of the distribution, as well as the integral (total number of citations), as additional criteria. In the relation *N*_{Total} = *ah*^{2}, *a* normally lies between 3 and 5, but deviations do occur.
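The relation *N*_{Total} = *ah*^{2} is easy to check on a concrete record. This is a minimal sketch with an invented citation list, not data from the article:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cits = sorted(citations, reverse=True)
    # With a descending sort, c >= rank holds exactly for the first h ranks.
    return sum(1 for rank, c in enumerate(cits, start=1) if c >= rank)


# Hypothetical citation record (one entry per paper).
citations = [50, 30, 22, 15, 12, 9, 7, 6, 4, 3, 2, 1, 1, 0]
h = h_index(citations)
n_total = sum(citations)
a = n_total / h**2
print(h, n_total, round(a, 2))  # 7 162 3.31
```

Here *a* ≈ 3.31 falls inside the typical 3–5 window; a record dominated by one hugely cited paper would push *a* well above it.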

The *h-index* does not normalize for the number of years that a researcher has been active. This can be done by dividing *h* by the time *t* since graduation or receipt of a PhD, i.e. modelling *h(t) = mt* (where the slope *m* is expected to be approximately time independent). It is also interesting to normalize the *h-index* taking into account the number of co-authors. Furthermore there are variations in the *h-index* between different disciplines and subdisciplines.
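The time normalization above reduces to a single division, *m = h/t*. A one-line sketch with invented sample values:

```python
def m_parameter(h, years_active):
    """Hirsch's m: slope of h versus career length, m = h / t."""
    return h / years_active


# Hypothetical researcher: h = 20, 10 years after the PhD.
print(m_parameter(20, 10))  # 2.0
```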

Will I continue my investigation into indicators of research evaluation? To some extent, yes; however overall, I am more interested in physics than citations.

## Comment!