Reader Comments

The "Hierarchy of the Sciences" May Reflect Publication Resources, Not Methodological Rigor

Posted by SLKoole on 09 Mar 2012 at 16:05 GMT

The different rates of confirmatory publications across scientific disciplines may have little to do with the "hardness" of those disciplines. Instead, they may reflect different levels of publication resources. The natural sciences have far more publication outlets than the behavioral sciences. For instance, Suls and Martin (2009) compared the numbers of journal articles in chemistry and psychology listed in Web of Science in 2006. There were 426 chemistry journals that together published 99,253 articles, compared to 112 psychology journals, which together published no more than 28,883 articles. Chemistry journals thus published more than three times as many articles as psychology journals in the same year.

Importantly, psychology's restricted publication resources seem to translate into greater selectiveness of its journals. For instance, the average rejection rate of journals of the American Psychological Association (APA), one of the largest publishers in the field, varied between 69% and 76% from 2005 to 2010 (APA Summary Reports of Journal Operations, 2005-2010). By comparison, average rejection rates in the physical sciences vary between 20% and 50% (Aarssen et al., 2008; Zuckerman & Merton, 1971; Schulz, 2010).

Given the much scarcer publication resources in the "softer" sciences, these sciences are likely to set higher criteria for publication. In particular, editors and reviewers will likely demand more substantive results and greater innovativeness. Comparative studies indeed show that editors in the behavioral sciences place higher demands on the innovativeness of studies than editors in the natural sciences (Madden, Easley, & Dunn, 1995). Thus, the apparent evidence for a "hierarchy of the sciences" in the article by Fanelli may in fact reflect only the greater access to publication resources (journal space) in the natural sciences, rather than greater methodological rigor. The natural sciences may be wealthier than other types of sciences, but they are not necessarily more methodologically rigorous.
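To make the arithmetic behind these comparisons explicit, here is a minimal sketch (in Python; purely illustrative, using only the Suls and Martin figures cited above):

```python
# Figures cited from Suls & Martin (2009): Web of Science, 2006.
chem_journals, chem_articles = 426, 99_253
psych_journals, psych_articles = 112, 28_883

# Chemistry's publication volume relative to psychology's.
article_ratio = chem_articles / psych_articles
journal_ratio = chem_journals / psych_journals

print(f"Article ratio: {article_ratio:.2f}x")  # ~3.44, i.e. more than three times as many articles
print(f"Journal ratio: {journal_ratio:.2f}x")  # ~3.80, i.e. nearly four times as many journals
```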

No competing interests declared.

RE: The "Hierarchy of the Sciences" May Reflect Publication Resources, Not Methodological Rigor

dfanelli replied to SLKoole on 23 Mar 2012 at 10:51 GMT

Thank you for the interesting and constructive comments.
Differences in rejection rates (along with all possible differences in editorial practices) are surely an important factor to consider. As you point out, at least some studies did suggest that rejection rates and editorial practices differ between “hard” and “soft” disciplines.
However, I am not convinced that "shortages of resources" offer a truly alternative explanation to differences in underlying hardness.
For one thing, I am not sure we have sufficient evidence to establish that the social sciences experience an actual shortage of space: as we know, the WOS database does not cover all existing journals (and it might cover a smaller proportion of journals from the social than from the physical sciences), and we don't know how many papers are actually submitted in different disciplines, so differences in the ratio of submissions to available journal space are, as far as I know, merely hypothesised.
As you note, a few studies do suggest higher rejection rates for the social sciences, but this evidence seems far from conclusive and, more importantly, it has itself been linked to differences in the level of consensus between disciplines (e.g., see Hargens, 1988, and Cole, Simon, & Cole, 1988). Evidence suggests that these differences in rejection rates are likely to be partially explained by different philosophies of error avoidance: "softer" disciplines tend to avoid any contribution of unclear importance, whereas "harder" disciplines tend to reject only papers that are clearly flawed. But where would such philosophical differences come from, if not from intrinsic differences in the level of certainty about theories and methods?
Most important of all, equating "substantive" results with "positive" ones is a theoretical fallacy. In principle, if you "test" a hypothesis, what you are looking for are conclusive results, not positive ones. In practice, a negative result tends to be viewed with suspicion when there is low confidence in the methods, theories, etc. In other words, when consensus is lower (i.e. when a discipline is "softer"), you would expect high rejection rates to create a stronger positive-outcome bias.
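A toy numerical sketch may make this selection mechanism concrete (purely illustrative; every probability below is an arbitrary assumption, not an estimate from any study):

```python
# Toy model of positive-outcome bias under selective rejection.
# All numbers are arbitrary assumptions, chosen only for illustration.

def published_positive_fraction(base_positive_rate, accept_pos, accept_neg):
    """Fraction of published papers reporting positive results, when
    acceptance probabilities differ between positive and negative outcomes."""
    pos_published = base_positive_rate * accept_pos
    neg_published = (1 - base_positive_rate) * accept_neg
    return pos_published / (pos_published + neg_published)

# Suppose 60% of submitted papers report positive results.
base = 0.60

# "Harder" field: low rejection rate (~29%), applied almost evenly.
print(published_positive_fraction(base, accept_pos=0.75, accept_neg=0.65))  # ~0.63

# "Softer" field: high rejection rate (75%), falling mostly on negative results.
print(published_positive_fraction(base, accept_pos=0.35, accept_neg=0.10))  # ~0.84
```

The same pool of submissions yields 63% versus 84% positive publications: a high rejection rate inflates the positive fraction only insofar as rejection falls disproportionately on negative results, which is precisely what one would expect where consensus is low.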
In sum, I agree on the importance of editorial practices and rejection rates, but I don’t think these count as an alternative explanation, at least given the present evidence.
Intriguingly, recent results suggest that the overall proportion of positive results has grown rapidly in many disciplines since 1990, particularly in the social sciences. This would support your point about a shortage of space (Negative results are disappearing from most disciplines and countries, Scientometrics, 90(3): 891-904, doi:10.1007/s11192-011-0494-7).

No competing interests declared.