Summary: Metrics of journal quality, such as impact factors, are often used to make important judgements about journals. It is well known that reviews are cited more frequently than original research articles, so it is not surprising that review journals within a field tend to score highest on measures of journal impact and quality. However, many journals publish both reviews and original research, which can produce misleading rankings because the published metrics mix two potentially independent measures with different means. Moreover, some journals under pressure to increase their impact factors have suggested that changing publication practices to include more reviews is a legitimate form of manipulation, even though the proportion of reviews published is not directly related to journal quality. Using 20 top ecology journals, we measure the influence of reviews on impact factor and show that the proportion of reviews published by a journal can explain more than 75% of the observed variability in measures of journal quality. We suggest that these measures would be more useful if they were reported separately for articles and reviews. In contrast to other articles on the problems with impact factors, we offer a clear, simple solution that could be readily instituted with little change to the existing system.
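As a minimal sketch of the kind of analysis the summary describes (not the authors' actual method or data), the relationship can be illustrated as an ordinary least-squares regression of impact factor on the proportion of reviews a journal publishes, with R² measuring the explained variability. All values below are invented for illustration only.

```python
# Illustrative only: hypothetical data for 20 journals, mimicking a
# regression of impact factor on the proportion of reviews published.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical journals: proportion of reviews published (0-1) and impact factor.
review_prop = rng.uniform(0.0, 0.5, size=20)
impact_factor = 2.0 + 10.0 * review_prop + rng.normal(0.0, 0.8, size=20)

# Ordinary least-squares fit: impact_factor ~ intercept + slope * review_prop.
slope, intercept = np.polyfit(review_prop, impact_factor, 1)
predicted = intercept + slope * review_prop

# R^2: fraction of variance in impact factor explained by review proportion.
ss_res = np.sum((impact_factor - predicted) ** 2)
ss_tot = np.sum((impact_factor - impact_factor.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r_squared:.2f}")
```

An R² above 0.75 in such a fit would mean that most of the between-journal variation in impact factor is accounted for by review proportion alone, which is the pattern the summary reports for the 20 ecology journals.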