Two h-Index Benchmarks for Evaluating the Publication Performance of Medical Informatics Researchers
Background: The h-index is a commonly used metric for evaluating the publication performance of researchers. However, in a multidisciplinary field such as medical informatics, interpreting the h-index is a challenge because researchers tend to have diverse home disciplines, ranging from clinical areas to computer science, basic science, and the social sciences, each with different publication performance profiles.
Main Authors: | El Emam, Khaled; Arbuckle, Luk; Jonker, Elizabeth; Anderson, Kevin |
Format: | Article |
Language: | English |
Published: | JMIR Publications, 2012-10-01 |
Series: | Journal of Medical Internet Research |
Online Access: | http://www.jmir.org/2012/5/e144/ |
DOI: | 10.2196/jmir.2177 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
El Emam, Khaled; Arbuckle, Luk; Jonker, Elizabeth; Anderson, Kevin |
title |
Two h-Index Benchmarks for Evaluating the Publication Performance of Medical Informatics Researchers |
publisher |
JMIR Publications |
series |
Journal of Medical Internet Research |
issn |
1438-8871 |
publishDate |
2012-10-01 |
description |
Background: The h-index is a commonly used metric for evaluating the publication performance of researchers. However, in a multidisciplinary field such as medical informatics, interpreting the h-index is a challenge because researchers tend to have diverse home disciplines, ranging from clinical areas to computer science, basic science, and the social sciences, each with different publication performance profiles.
Objective: To construct a reference standard for interpreting the h-index of medical informatics researchers based on the performance of their peers.
Methods: Using a sample of authors with articles published over the 5-year period 2006–2011 in the 2 top journals in medical informatics (as determined by impact factor), we computed their h-index using the Scopus database. Percentiles were computed to create a 6-level benchmark, similar in scheme to one used by the US National Science Foundation, and a 10-level benchmark (see the illustrative sketch below).
Results: The 2 benchmarks can be used to place medical informatics researchers in an ordered category based on the performance of their peers. A validation exercise mapped the benchmark levels to the ranks of medical informatics academic faculty in the United States. The 10-level benchmark tracked academic rank better (with no ties) and is therefore more suitable for practical use.
Conclusions: Our 10-level benchmark provides an objective basis to evaluate and compare the publication performance of medical informatics researchers with that of their peers using the h-index. |
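The Methods describe two simple computations: deriving each author's h-index from per-paper citation counts (the study obtained these counts from Scopus) and then binning the resulting h-index values into ordered benchmark levels at percentile cut points. The Python sketch below is only illustrative; the citation counts, cut points, and function names are assumed placeholders for demonstration, not values or code from the paper.

```python
# Minimal sketch of the two steps described in the Methods:
#   (1) compute an author's h-index from per-paper citation counts;
#   (2) place that h-index into an ordered benchmark level using
#       percentile-derived cut points.
# All numbers below are illustrative placeholders, not the paper's values.

from bisect import bisect_right


def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def benchmark_level(h: int, cut_points: list[int]) -> int:
    """Map an h-index to a 1-based level given ascending cut points.

    cut_points[i] is the smallest h-index that reaches level i + 2;
    an h-index below cut_points[0] falls in level 1.
    """
    return bisect_right(cut_points, h) + 1


if __name__ == "__main__":
    # Hypothetical author with six papers and their citation counts.
    papers = [25, 8, 5, 3, 3, 1]
    h = h_index(papers)  # 3: three papers have at least 3 citations each

    # Illustrative cut points for a 6-level scheme (levels 1 through 6);
    # in the study these would come from percentiles of the peer sample.
    six_level_cuts = [2, 5, 10, 18, 30]
    print(h, benchmark_level(h, six_level_cuts))  # prints: 3 2
```

In the study itself the cut points are derived from percentiles of the peer sample's h-index distribution (a 6-level scheme modeled on the US National Science Foundation's and a 10-level scheme); the sketch only shows how such thresholds would be applied once chosen.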
url |
http://www.jmir.org/2012/5/e144/ |