Reader Comments

Response from F1000

Posted by rpgrant on 19 Jun 2009 at 15:47 GMT

The impact factor is a well-established measurement of research ‘quality’, particularly in the context of assessment exercises such as the Research Excellence Framework http://www.hefce.ac.uk/Re.... Its deficiencies, however, are serious and well known (see, for example, http://dx.doi.org/10.1073... and http://network.nature.com...).
Allen et al.’s analysis is a valuable contribution to this debate. Perhaps its most important finding is that purely quantitative indicators based on what is essentially a meta-analysis of research (that is, requiring research to wait for a further round of research and publication before being assessed) are in danger of missing important publications. The assessment is also slow and necessarily retrospective. This is true for any citation-based metric, including the Hirsch Index or the Eigenfactor. Furthermore, the assumption that any single paper’s quality is necessarily linked to that of the journal is unwarranted.
At the Faculty of 1000 we assess the quality of individual articles through our network of over five thousand peer-selected researchers and clinicians. Our Faculty members score each article they review and provide contextual comment and brief analysis, on average within two months of publication of the original article. We are naturally pleased that our approach has been vindicated by the current study’s expert review college, and that this qualitative assessment has predictive power in the context of citation rates.
Although there is broad overall agreement between the F1000 assessment and that of the review college, Allen et al. level the valid criticism that F1000 did not highlight the same individual papers as the college. They point out that this is at least in part because we do not cover all the literature.
This, together with the perception that F1000 only covers ‘top tier’ journals (a claim that is unfounded: fewer than 20% of our evaluations are of articles published in such journals, and we have evaluated articles from over two thousand different journals), has led us to initiate a scanning project in which we systematically review over four hundred journals across biology and medicine, in addition to the current system. This list is constantly growing, and to aid our Faculty in their task we have recently started recruiting associate members to identify and evaluate important articles.
Nonetheless, we are well aware that any single system will have its disadvantages, and to increase our own value to researchers and clinicians we are currently examining how we might integrate machine-harvested bibliometric data with a human, qualitative assessment.

Competing interests declared: I am the Information Architect at the Faculty of 1000.