Research Article

Systematic Variation in Reviewer Practice According to Country and Gender in the Field of Ecology and Evolution

  • Olyana N. Grod,

    olyanag@yorku.ca

    Affiliation: Department of Biology, York University, Toronto, Ontario, Canada

  • Amber E. Budden,

    Affiliation: National Center for Ecological Analysis and Synthesis (NCEAS), Santa Barbara, California, United States of America

  • Tom Tregenza,

    Affiliation: Centre for Ecology and Conservation, University of Exeter, Tremough, Penryn, United Kingdom

  • Julia Koricheva,

    Affiliation: School of Biological Sciences, Royal Holloway University of London, Egham, Surrey, United Kingdom

  • Roosa Leimu,

    Affiliation: Section of Ecology, University of Turku, Turku, Finland

  • Lonnie W. Aarssen,

    Affiliation: Department of Biology, Queen's University, Kingston, Ontario, Canada

  • Christopher J. Lortie

    Affiliation: Department of Biology, York University, Toronto, Ontario, Canada

  • Published: September 12, 2008
  • DOI: 10.1371/journal.pone.0003202

Reader Comments (3)


These data seem flawed

Posted by tomwebb on 16 Sep 2008 at 13:12 GMT

The authors continue to present a strong argument for reform of the peer review system by publishing an article riddled with poor analysis and suspect data. Leaving aside the fact that the questionnaire respondents constitute a self-selected group, and that their responses are generally estimates with error (so that 'significant' differences of an hour or so in the time spent reviewing a paper might easily be swamped by 'measurement error' in the estimates), what really concerns me is that some respondents have clearly misread some questions, leading to nonsensical data reported without comment in Table S1.

As a first guess, one might expect that 12 hours is about the maximum that anyone - on average - spends reviewing a manuscript. About 10% of respondents report that they spend 12 or more hours reviewing each paper. Fine, some people might be really conscientious, particularly if they have only reviewed one or two papers - but the tail just keeps growing, so that the maximum value (reported five times) is 72 hours per paper! And one North American male apparently spends 3500 hours a year reviewing papers (50 papers a year, 70 hours a paper). That's 70-hour weeks, every week, just reviewing! Clearly he read the question as 'how much time in total do you spend reviewing', and several other respondents who report many reviews and many hours spent on each one have surely done the same.

It is worrying that there appears to have been so little quality control of the data prior to analysis, or indeed prior to sending out the questionnaire. For instance, how should I respond to the question 'what proportion of the manuscripts that you review do you reject'? I don't reject any papers that I review; I make a recommendation. Is that what they mean? Or are they referring to the editor's final decision?

Peer review is vital in assuring the quality of the published scientific record. We must therefore critically assess the system as it is, and keep an open mind as to how it might be improved. Unfortunately, as in their previous work promoting double-blind review (now thoroughly rebutted several times: Webb et al. 2008 TREE 23: 351-353; Whittaker 2008 TREE 23: 478-479; Hammerschmidt et al. 2008 Frontiers Ecol Env 6: 354; Engqvist & Frommen 2008 Anim Behav 76: e1-e2), this paper does not inspire great confidence that these authors have the data, the analytical techniques, or the necessary knowledge of the publication process to address this important issue.