Lists are only valuable if they are correct. The h-index top 20 in the last issue of Resource did a few scientists an injustice. To put that right, here's a second attempt.
Get it right first time: wasn't that possible? In principle, yes, but it is no easy matter to compile a list like this. Wageningen UR does not keep its own records of these scores, and a scientist who wants to know her own h-score has to look it up herself. In fact, there is only one good way to compile a top 20: expert judgement. In other words, sounding out a broad selection of people who could or should be in the know. So that is what was done. But it is a fallible method: among thousands of scientists, it is easy to overlook someone. Which means that even this amended and expanded list may still contain errors.
Besides, an h-score by no means tells the whole story. Total output, the total number of citations, the average number of citations per article and the citation count of the most frequently cited article give a much more complete picture of a researcher's scientific standing. The statistics in the table below reveal striking differences. Peter Holman, for example, achieves an h-score of 48 with just 120 articles, reflected in a remarkable average of more than 100 citations per article. Although, obviously, the 2,288 (!) citations for his top article bump up that average rather nicely.
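For readers who want to check a score themselves: a researcher has an h-score of n when n of his or her articles have each been cited at least n times. A minimal sketch in Python, with invented citation counts purely for illustration, shows the calculation:

    # Minimal sketch of an h-score calculation; the citation
    # counts below are made up for illustration.
    def h_index(citations):
        """Largest h such that h articles have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five articles cited 10, 8, 5, 4 and 3 times give h = 4:
    # four articles each have at least four citations.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4

This also shows why one much-cited article barely moves the h-score: it lifts the average considerably, but still counts as just one article.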
The total number of citations also varies greatly, with Daan Kromhout the real star at over 25,000 citations. Willem de Vos is not far behind, but he needed 180 more articles to get there. Incidentally, the data come from Web of Science, currently the most widely used and most complete source in the h-index field. On request, WoS will also display each scientist's citations per year as a graph. Staff of Wageningen UR have access to WoS via the library website (www.library.wur.nl).