
Role of Editorial and Peer Review Processes in Publication Bias: Analysis of Drug Trials Submitted to Eight Medical Journals

  • Marlies van Lent ,

    Marlies.vanLent@radboudumc.nl

    Affiliation Clinical Research Centre Nijmegen, Department of Pharmacology – Toxicology, Radboud University Medical Centre, Nijmegen, The Netherlands

  • John Overbeke,

    Affiliation Department of Primary and Community Care, Radboud University Medical Centre, Nijmegen, The Netherlands

  • Henk Jan Out

    Affiliations Clinical Research Centre Nijmegen, Department of Pharmacology – Toxicology, Radboud University Medical Centre, Nijmegen, The Netherlands, Teva Pharmaceuticals, Amsterdam, The Netherlands

Abstract

Background

Publication bias is generally ascribed to authors and sponsors failing to submit studies with negative results, but may also occur after submission. We evaluated whether submitted manuscripts on randomized controlled trials (RCTs) with drugs are more likely to be accepted if they report positive results.

Methods

Manuscripts submitted from January 2010 through April 2012 to one general medical journal (BMJ) and seven specialty journals (Annals of the Rheumatic Diseases, British Journal of Ophthalmology, Gut, Heart, Thorax, Diabetologia, and Journal of Hepatology) were included if at least one study arm assessed the efficacy or safety of a drug and a statistical test was used to evaluate treatment effects. Publication status was retrospectively retrieved from submission systems or provided by journals. Sponsorship and trial results were extracted from manuscripts and classified according to predefined criteria. The main outcome measure was acceptance for publication.

Results

Of 15,972 manuscripts submitted, 472 (3.0%) were drug RCTs, of which 98 (20.8%) were published. Among submitted drug RCTs, 287 (60.8%) had positive and 185 (39.2%) negative results. Of these, 60 (20.9%) and 38 (20.5%), respectively, were published. Of manuscripts on non-industry trials (n = 213), 138 (64.8%) reported positive results, compared with 71 of 149 (47.7%) on industry-supported trials and 78 of 110 (70.9%) on industry-sponsored trials. Twenty-seven (12.7%) non-industry trials were published, compared to 27 (18.1%) industry-supported and 44 (40.0%) industry-sponsored trials. After adjustment for other trial characteristics, manuscripts reporting positive results were not more likely to be published (OR, 1.00; 95% CI, 0.61 to 1.66). Submission to specialty journals, sample size, multicentre status, journal impact factor, and corresponding authors from Europe or the US were significantly associated with publication.

Conclusions

For the selected journals, there was no tendency to preferentially publish manuscripts on drug RCTs that reported positive results, suggesting that publication bias may occur mainly prior to submission.

Introduction

Publication bias refers to the selective publication of research findings depending on the nature and direction of results [1] and has been widely studied. Studies reporting positive results are more likely to be published [2]–[4], which may cause meta-analyses based on published reports to overestimate the size of apparent treatment effects. Pharmaceutical industry sponsorship has particularly been associated with publication of favourable outcomes.[5]–[8] Publication bias is generally ascribed to authors and sponsors failing to submit studies with negative results, but may also occur once manuscripts have been submitted to journals.[9], [10]

A limited number of studies have systematically evaluated publication bias in editorial decision making. Olson et al. assessed manuscripts submitted to JAMA, and found no difference in publication rates between manuscripts with positive versus negative results.[11] Lee et al. found similar results for manuscripts submitted to BMJ, the Lancet and Annals of Internal Medicine.[12] Lynch et al. and Okike et al. assessed submissions to The Journal of Bone and Joint Surgery, and found no evidence for publication bias by editors.[13], [14] Overall, these studies suggest that submitted manuscripts with positive results are not more likely to be published, which was confirmed by a recent meta-analysis.[15]

However, these studies had certain limitations. Most were prospective, so editors and reviewers may have been aware that an investigation was in progress.[11]–[13] This may have influenced their decision making, even if they were not informed of the study hypothesis. Olson et al. and Lee et al. included large general medical journals with high impact factors, and their results may not be generalizable to specialty journals or to journals with fewer submissions, fewer editors or lower circulation.[11] Two studies were limited to orthopaedic journals, and their findings may not apply to other specialties.[13], [14] Moreover, publication bias may affect studies with various designs and interventions differently. Olson et al. included manuscripts on controlled trials, while others enrolled manuscripts reporting original research regardless of study design.[12]–[14] None of the studies that followed manuscripts after submission selected papers based on the intervention tested, although publication bias has predominantly been researched and described for drug trials.[4], [6], [7], [16], [17]

Acceptance rates may also depend on sponsorship, in addition to study results. Publication of industry-sponsored trials has been associated with an increase in journal impact factors [18], as impact factors depend on citation rates and industry-sponsored trials are cited more frequently than non-profit trials.[19], [20] Moreover, journals generate revenue through reprint sales, and industry funding of trials has been associated with high numbers of reprint orders.[21], [22] Lynch et al. found that commercially funded research was more likely to be published, while Olson et al. reported no difference according to funding source.[11], [13] However, neither of these studies focused on drug research, in which industry funding appears to be most abundant.

In this study, we retrospectively assessed manuscripts on randomized controlled trials (RCTs) with drugs submitted to one general medical journal and seven specialty journals, and evaluated acceptance rates of manuscripts reporting positive versus negative results. We hypothesized that negative trials were less likely to be published. Submission rates of positive versus negative studies were compared by sponsor type and the influence of sponsorship on acceptance rates was determined.

Methods

Selection of journals

Editors of six major general medical journals were asked for their cooperation to provide access to submitted manuscripts, peer review comments, and final decisions on publication. BMJ agreed to participate and the BMJ Group also provided access to data of BMJ specialty journals. In addition, other European specialty journals were asked to participate. All journals were selected based on (1) impact factor (journals indexed with the highest impact factors within subject categories, according to the Institute for Scientific Information Journal Citation Report 2011) and (2) the number of drug RCTs published in 2010–2011, determined on the basis of a PubMed search. As a result, publication outcomes were studied for one general medical journal and seven specialty journals: BMJ, Annals of the Rheumatic Diseases, British Journal of Ophthalmology, Gut, Heart, Thorax (all from the BMJ Group), Diabetologia, and Journal of Hepatology.

Selection of submitted manuscripts

Original research manuscripts submitted between January 1, 2010 and April 30, 2012 were screened for eligibility by one author. The study time frame per journal was based on the retrospective period for which all required data, regardless of the publication status of manuscripts, were completely available in manuscript submission systems at the time of data extraction. Manuscripts reporting results of RCTs were selected if at least one study arm assessed the efficacy or safety of a drug intervention (including vaccines, biologics, dietary supplements, and herbal medicinal products) and a statistical test was used to evaluate treatment effects. Post-hoc and subgroup analyses and follow-up studies of drug RCTs were included.

Data extraction

Data were extracted retrospectively by one author using a standardized data extraction form. Primary outcome was acceptance for publication. Publication status and peer review details were retrieved from submission systems or provided by journals. Manuscripts were assessed as outright rejected, rejected after external peer review, or accepted for publication. Information on trial results and sponsorship was extracted from manuscripts. Data on study characteristics previously examined for association with publication (sample size, number of centres, corresponding author's country of residence [11]–[13]) were also retrieved. Manuscripts were searched for registration numbers to determine whether studies were registered in a trial registry that complies with requirements of the International Committee of Medical Journal Editors (ICMJE).[23] All included journals required trial registration in their instructions to authors.

Classification of results and sponsorship

Trial results and sponsorship were classified based on consensus between two authors according to predefined criteria.[24] Briefly, outcomes were scored as positive if results reported for the primary endpoint were statistically significant (p<0.05 or 95% confidence interval [CI] for difference excluding 0 or 95% CI for ratio excluding 1) and supported the efficacy of the test drug, and negative if they did not. For equivalence and non-inferiority trials, results were classified as positive if treatments were equivalent. If the primary endpoint was a safety parameter, trials were classified as positive if the test drug was as safe as or safer than control. When it was explicitly hypothesized that the test drug would be safer than control, results were categorized as negative if treatments were equally harmful. If no primary outcome was stated for a trial or multiple primary endpoints were selected, results were classified based on the statistical significance and direction of the majority (>50%) of (primary) outcomes. Studies were classified as non-industry, industry-supported or industry-sponsored trials. For non-industry trials, no associations with pharmaceutical companies were reported in the manuscript. Studies reporting donation of study medication or placebos by a manufacturer, studies stating receipt of financial support from a pharmaceutical company and studies with authors affiliated to industry were classified as industry-supported trials. For industry-sponsored trials, a pharmaceutical company was explicitly described as the study sponsor, or the company funding the trial was reported to have participated in the study design, data collection, analysis, preparation of the manuscript, and/or the decision to publish. When doubt remained over sponsorship, information in the trial registry (if the trial was registered) took precedence over other information.
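
The classification rules above are essentially decision logic, and can be summarized as a small sketch. This is an illustrative reconstruction only, not the authors' actual tooling; all function and parameter names are hypothetical.

```python
# Illustrative sketch of the predefined classification criteria [24];
# names and data representation are assumptions, not the authors' code.

def classify_result(primary_endpoints):
    """Classify trial results from a list of primary endpoints.

    Each endpoint is a (statistically_significant, favours_test_drug) pair.
    A single endpoint decides directly; with multiple primary endpoints,
    the majority (>50%) of outcomes decides, per the rule described above.
    """
    positives = sum(1 for sig, favours in primary_endpoints if sig and favours)
    return "positive" if positives > len(primary_endpoints) / 2 else "negative"

def classify_sponsorship(industry_is_sponsor, industry_support_reported):
    """Three-level sponsorship classification used in the study."""
    if industry_is_sponsor:
        # Company explicitly named as sponsor, or involved in design,
        # data collection, analysis, manuscript, or decision to publish.
        return "industry-sponsored"
    if industry_support_reported:
        # Drug/placebo donation, financial support, or industry-affiliated authors.
        return "industry-supported"
    return "non-industry"
```

A superiority trial with a significant primary endpoint favouring the test drug would classify as positive, e.g. `classify_result([(True, True)])`, while a significant result against the test drug, `classify_result([(True, False)])`, classifies as negative.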

Statistical analysis

The association between publication and trial results and other characteristics was first analyzed using univariate logistic regression. Associations between acceptance (versus rejection) and trial characteristics were estimated with odds ratios (ORs) and 95% CIs. P-values were not adjusted for multiple comparisons and P<.05 was considered statistically significant. To control for several characteristics simultaneously, multiple logistic regression was used and ORs were calculated. As 98 submitted manuscripts were accepted in this study, nine predictors could be entered in the model simultaneously, maintaining ten acceptances per predictor. Besides the primary analysis (accepted vs all rejected manuscripts), two additional multivariable analyses were performed to compare accepted manuscripts with those outright rejected or rejected after peer review. These sensitivity analyses were conducted to assess whether the effects of the covariates were dependent on the type of rejection, i.e. whether the decision to reject manuscripts after initial editorial screening versus after peer review influenced the association between positive results and acceptance. Statistical analyses were performed using SPSS software (version 20; Chicago, Illinois).
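
As a check on the arithmetic, an unadjusted OR of this kind can be reconstructed directly from the 2×2 counts reported in the Results section. The sketch below is illustrative (plain Python with a standard Wald interval, not the SPSS procedure the authors used); it assumes, consistent with the reported counts, that the one withdrawn manuscript had positive results, leaving 226 rejected positive manuscripts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a accepted / b rejected in the exposed group, c / d in the comparator."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Positive results: 60 accepted, 226 rejected (withdrawn manuscript excluded);
# negative results: 38 accepted, 147 rejected.
or_, lo, hi = odds_ratio_ci(60, 226, 38, 147)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # OR 1.03 (95% CI 0.65 to 1.62)
```

This reproduces the unadjusted OR of 1.03 (0.65 to 1.62) reported for positive versus negative results in the univariate analysis.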

Ethics

To assure confidentiality of information in manuscripts and submission systems, the authors signed confidentiality agreements before gaining access to the data. As standard editorial processes were unchanged, authors and peer reviewers were not informed about this study. Approval from a research ethics committee was not required, as this study involved no human participants.

Results

From January 2010 through April 2012, 15,972 manuscripts reporting original research were submitted to eight journals, of which 472 (3.0%) met all inclusion criteria. Ninety-eight manuscripts (20.8%) were published, 221 (46.8%) were outright rejected and 152 (32.2%) were rejected after peer review. One manuscript (0.2%) was withdrawn by authors before editorial decisions were made (Figure 1).

Figure 1. Publication status of manuscripts submitted to eight medical journals during the study time frame.

https://doi.org/10.1371/journal.pone.0104846.g001

Among the 472 drug RCTs, 287 (60.8%) had positive results and 185 (39.2%) had negative results (Table 1). Of these, 135 (47.0%) and 86 (46.5%), respectively, were rejected immediately, and 91 (31.7%) and 61 (33.0%) after peer review. In total, 60 (20.9%) submitted manuscripts with positive results and 38 (20.5%) with negative results were published. Publication outcomes of manuscripts submitted to each individual journal are shown in Table 1. For all journals except Thorax, submitted manuscripts with positive results outnumbered those with negative results. In BMJ, British Journal of Ophthalmology, Diabetologia, Gut, Heart, and Journal of Hepatology, a higher proportion of submitted manuscripts with negative results were published, while in Annals of the Rheumatic Diseases and Thorax a higher proportion of positive studies were published.

Table 1. Publication status of submitted manuscripts reporting positive vs negative results by journal.

https://doi.org/10.1371/journal.pone.0104846.t001

Of the submitted manuscripts reporting non-industry trials (n = 213), 138 (64.8%) had positive results, compared with 71 of 149 (47.7%) reporting industry-supported trials and 78 of 110 (70.9%) reporting industry-sponsored trials (Table 2). When all trials with industry involvement (n = 259) were taken together, 149 submitted manuscripts (57.5%) reported positive results. Twenty-seven (12.7%) non-industry trials were published, compared to 27 (18.1%) industry-supported trials and 44 (40.0%) industry-sponsored trials.

Table 2. Publication status of submitted manuscripts reporting positive versus negative results by sponsor type.

https://doi.org/10.1371/journal.pone.0104846.t002

In the univariate analysis, manuscripts reporting positive results were not more likely to be published than those with negative results (OR, 1.03; 95% CI, 0.65–1.62) (Table 3). Sponsorship was significantly associated with publication; industry-sponsored trials were more likely to be published than non-industry trials (OR, 4.59; 95% CI, 2.64–8.00). Trial registration, sample size, being a multicentre trial or follow-up study of an RCT, a corresponding author from Europe or the US, and the journal to which manuscripts were submitted were also associated with the chance of publication (Table 3).

Table 3. Characteristics of submitted manuscripts and their association with publication: univariate analysis (accepted vs all rejected).

https://doi.org/10.1371/journal.pone.0104846.t003

In the multivariable analysis, accepted versus rejected manuscripts were compared after controlling for characteristics that were significantly associated with publication in the univariate analysis, or otherwise deemed important in relation to publication (Table 4). After adjustment for these variables, acceptance rates were not higher for trials with positive results than for trials with negative results (OR, 1.00; 95% CI, 0.61–1.66). The association of other factors with publication is shown in Table 4. In the multivariable analysis, industry sponsorship and trial registration were no longer significantly associated with publication, while journal impact factor and submission to specialty journals were associated with an increased chance of acceptance. In the multivariable analyses comparing accepted manuscripts with those outright rejected or rejected after peer review, positive studies were not more likely to be published (Table 4). Findings of these analyses confirmed the primary analysis, as the direction of effects was the same in all analyses. However, most associations were not statistically significant when comparing accepted manuscripts with those rejected after peer review.

Table 4. Characteristics of submitted manuscripts associated with publication: multivariable analysis.

https://doi.org/10.1371/journal.pone.0104846.t004

Discussion

This is the first study to evaluate publication bias in manuscripts submitted to both a general medical journal and multiple specialty journals. Submitted manuscripts on drug RCTs were not more likely to be published if they reported positive results, regardless of whether rejected manuscripts were peer reviewed or not. This confirms the findings of previous studies that followed manuscripts submitted to journals.[11]–[14] Submitted manuscripts with positive results outnumbered those with negative results, suggesting that publication bias mainly occurs prior to submission. This corresponds to surveys among investigators on reasons for non-publication, which showed that studies primarily remained unpublished due to investigator-related factors.[25], [26]

Both submitted non-industry and industry-sponsored trials were more likely to report positive results, in contrast to study findings indicating that industry sponsorship in particular is associated with favourable outcomes.[5], [6] Interestingly, industry sponsorship was associated with publication in the univariate analysis, as previously found by Lynch et al.[13] This could be related to editorial decisions, as incentives such as citation rates [19], [20] and reprint revenue [21], [22] could favour the acceptance of these studies. Trial registration was associated with an increased unadjusted OR for publication, which may reflect that the included journals adhere to ICMJE policy requiring registration as a condition of consideration for publication. Multicentre trials and studies enrolling more than 100 participants were more likely to be published, in agreement with previous studies.[11], [12]

Previous studies found that manuscripts whose corresponding author was from the same country as the publishing journal were more likely to be accepted.[12], [13], [27] We included European journals only and found that having a corresponding author from either Europe or the US increased the chance of publication. This may result from a ‘familiarity effect’, leading reviewers and editors to be more accepting of trials with familiar interventions, clinical relevance, and language use.[13], [28]

After adjustment for other trial characteristics, submission to specialty journals was associated with publication. This seems plausible, as acceptance rates of general medical journals are known to be lower than those of specialty journals. A higher journal impact factor increased the chance of publication, even though high-impact journals generally have low acceptance rates. The direction of this association may be explained by the relatively high acceptance rates of two journals (Annals of the Rheumatic Diseases, Journal of Hepatology). Studies with negative results submitted to Annals of the Rheumatic Diseases and Thorax seemed less likely to be published. Because BMJ was the only general medical journal included and the number of accepted manuscripts per journal was relatively low, these data need to be interpreted with caution.

The retrospective design of this study overcomes a key limitation of prospective studies of publication bias in editorial decision making. To study publication bias after manuscript submission, collaboration from editors is essential. In prospective studies, the decision-making behaviour of editors may be influenced by awareness of an ongoing investigation [11], introducing bias into the selection of manuscripts that are published. However, the retrospective design also limited our study time frame to the period for which data were available in manuscript submission systems.

We included a general medical journal and specialty journals across different medical specialties, which increases the generalizability of our results compared to studies that only included large general medical journals or an orthopaedic journal.[11]–[14] However, we acknowledge that the journals included in our study are merely a sample of all peer reviewed medical journals. It might be that the journals that agreed to participate did so based on an existing editorial policy to publish papers of scientific worth regardless of the direction of results. As both BMJ and five of the seven included specialty journals are published by the BMJ Group, the results of this study may have been affected by clustering effects based on publisher policy. Furthermore, investigators may prefer to submit large, multicentre, well-conducted studies to high-impact journals like those included in our study. If publication bias is more likely to affect smaller studies, the inclusion of lower-impact journals that more commonly receive smaller, single-centre or negative studies might have influenced our results. However, no study has found evidence for publication bias in editorial decision making, irrespective of design or included journals.

Other strengths include the objective selection criteria for journals and manuscripts, analysis of confounding characteristics, and classification of results and sponsorship based on predefined criteria.[24] Assessment of results and establishing the role of the funding source may appear to be straightforward, but in most studies on publication bias, methods for classification of results and sponsorship are only reported to a limited extent and definitions used are inconsistent across studies.[24]

This study has certain limitations. During the assessment of results and other characteristics, there was no blinding for publication status. In a retrospective study, blinding for publication status would require editors to redact information made available to investigators, which could introduce substantial bias. Furthermore, the screening and selection of manuscripts and the extraction of manuscript characteristics were performed by one author, while this would ideally have been done by two independent assessors. We focused on drug RCTs, and our results may not be generalizable to studies with different designs or interventions. The number of submitted drug RCTs varied between journals. This could be related to medical specialty and journal impact factor, but may also reflect differences in the retrospective availability of data in journals' submission systems. However, the proportion of drug RCTs among submitted manuscripts was comparable for all journals. We included European journals only, and editorial processes might differ slightly from those of US journals. Nevertheless, our study included a representative sample of drug RCTs, as more than half of all submitted trials were from outside Europe.

In this study, we evaluated the overall editorial process after manuscript submission and have not specifically examined the role of peer reviewers in publication bias. Abbot and Ernst tested whether publication bias was present during peer review, and found that reviewers were no more likely to recommend publication of a fictitious manuscript with positive results.[29] However, Emerson et al. showed that a fabricated manuscript reporting positive results was more often recommended for publication than an otherwise identical manuscript reporting no effect.[10] It is difficult to assess the extent to which editors' decisions were reliant on reviewers' comments in this study. Kravitz et al. found that editors tend to place considerable weight on reviewers' recommendations.[30]

Finally, we have not determined quality scores for included trials. Lee et al. found an increased chance of acceptance for manuscripts with high quality scores.[12] The fact that multicentre and large (>100 participants) trials were more likely to be published can be seen as a proxy for quality. However, Lynch et al. found no relation between quality scores and publication.[13] Though observed acceptance rates did not favour manuscripts with positive results in our study, negative studies may have been of higher quality than positive studies, as was found by Lynch et al.[13] This could result from authors believing that negative papers are less likely to be accepted, therefore only submitting those of high quality. As a consequence, submitted negative manuscripts may be of higher quality than positive manuscripts. Editorial bias occurs if submitted negative studies, although superior in quality, are not more likely to be published.[31] However, we found no differences between positive and negative manuscripts regarding sample size and multicentre status.

To reduce potential publication bias after submission, editors and peer reviewers could be blinded to the results and discussion sections of manuscripts.[9], [32], [33] Preliminary decisions would be based on review of the introduction and methods sections; if a manuscript passed this initial stage, the full article would be provided for a final evaluation. An RCT in which submitted manuscripts are randomized to either traditional review or review with initial blinding to results could test whether editors are more likely to accept positive studies. However, no journal has implemented such a two-stage review so far.

In conclusion, for the sample of selected journals, we found no tendency to preferentially publish submitted manuscripts on drug RCTs that reported positive results. Submitted manuscripts with positive results outnumbered those with negative results irrespective of sponsor type, suggesting that publication bias may occur mainly before manuscripts are submitted to journals.

Acknowledgments

We thank the BMJ, Annals of the Rheumatic Diseases, British Journal of Ophthalmology, Gut, Heart, Thorax, Diabetologia, and Journal of Hepatology for participating in this study. We thank Sara Schroter (BMJ) for her suggestions regarding the study design and for being our contact for the BMJ Group. We thank Joanna IntHout (Radboud university medical center) for her statistical advice. This research was orally presented on September 9, 2013 at the Seventh International Congress on Peer Review and Biomedical Publication in Chicago.

Author Contributions

Conceived and designed the experiments: HJO JO MvL. Performed the experiments: HJO MvL. Analyzed the data: MvL. Wrote the paper: HJO JO MvL.

References

  1. Sterne J, Egger M, Moher D, the Cochrane Bias Methods Group (2011) Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration.
  2. Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group (2013) Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS One 8: e66844.
  3. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K (2009) Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev: MR000006.
  4. Rising K, Bacchetti P, Bero L (2008) Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med 5: e217.
  5. Lundh A, Sismondo S, Lexchin J, Busuioc OA, Bero L (2012) Industry sponsorship and research outcome. Cochrane Database Syst Rev 12: MR000033.
  6. Bourgeois FT, Murthy S, Mandl KD (2010) Outcome reporting among drug trials registered in ClinicalTrials.gov. Ann Intern Med 153: 158–166.
  7. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL (2003) Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events? JAMA 290: 921–928.
  8. Bhandari M, Busse JW, Jackowski D, Montori VM, Schunemann H, et al. (2004) Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ 170: 477–480.
  9. Sridharan L, Greenland P (2009) Editorial policies and publication bias: the importance of negative studies. Arch Intern Med 169: 1022–1023.
  10. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, et al. (2010) Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med 170: 1934–1939.
  11. Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, et al. (2002) Publication bias in editorial decision making. JAMA 287: 2825–2828.
  12. Lee KP, Boyd EA, Holroyd-Leduc JM, Bacchetti P, Bero LA (2006) Predictors of publication: characteristics of submitted manuscripts associated with acceptance at major biomedical journals. Med J Aust 184: 621–626.
  13. Lynch JR, Cunningham MR, Warme WJ, Schaad DC, Wolf FM, et al. (2007) Commercially funded and United States-based research is more likely to be published; good-quality studies with negative outcomes are not. J Bone Joint Surg Am 89: 1010–1018.
  14. Okike K, Kocher MS, Mehlman CT, Heckman JD, Bhandari M (2008) Publication bias in orthopaedic research: an analysis of scientific factors associated with publication in the Journal of Bone and Joint Surgery (American Volume). J Bone Joint Surg Am 90: 595–601.
  15. Song F, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, et al. (2009) Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol 9: 79.
  16. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008) Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358: 252–260.
  17. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003) Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326: 1171–1173.
  18. Lundh A, Barbateskovic M, Hrobjartsson A, Gotzsche PC (2010) Conflicts of interest at medical journals: the influence of industry-supported randomised trials on journal impact factors and revenue - cohort study. PLoS Med 7: e1000354.
  19. Conen D, Torres J, Ridker PM (2008) Differential citation rates of major cardiovascular clinical trials according to source of funding: a survey from 2000 to 2005. Circulation 118: 1321–1327.
  20. Kulkarni AV, Busse JW, Shams I (2007) Characteristics associated with citation rate of the medical literature. PLoS One 2: e403.
  21. Handel AE, Patel SV, Pakpoor J, Ebers GC, Goldacre B, et al. (2012) High reprint orders in medical journals and pharmaceutical industry funding: case-control study. BMJ 344: e4212.
  22. Hopewell S, Clarke M (2003) How important is the size of a reprint order? Int J Technol Assess Health Care 19: 711–714.
  23. International Committee of Medical Journal Editors (2013) Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Available: http://www.icmje.org/icmje-recommendations.pdf. Accessed 2014 Jul 22.
  24. van Lent M, Overbeke J, Out HJ (2013) Recommendations for a uniform assessment of publication bias related to funding source. BMC Med Res Methodol 13: 120.
  25. Song F, Parekh S, Hooper L, Loke YK, Ryder J, et al. (2010) Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 14: iii, ix–xi, 1–193.
  26. Dickersin K, Min YI, Meinert CL (1992) Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 267: 374–378.
  27. Okike K, Kocher MS, Mehlman CT, Heckman JD, Bhandari M (2008) Nonscientific factors associated with acceptance for publication in The Journal of Bone and Joint Surgery (American Volume). J Bone Joint Surg Am 90: 2432–2437.
  28. Wager E, Williams P, the OPEN Project Consortium (2013) "Hardly worth the effort"? Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: quantitative and qualitative study. BMJ 347: f5248.
  29. Abbot NC, Ernst E (1998) Publication bias: direction of outcome is less important than scientific quality. Perfusion 11: 182–184.
  30. Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, et al. (2010) Editorial peer reviewers' recommendations at a general medical journal: are they reliable and do editors care? PLoS One 5: e10072.
  31. Senn S (2012) Misunderstanding publication bias: editors are not blameless after all. F1000Research 1.
  32. Glymour MM, Kawachi I (2005) Review of publication bias in studies on publication bias: here's a proposal for editors that may help reduce publication bias. BMJ 331: 638.
  33. Smulders YM (2013) A two-step manuscript submission process can reduce publication bias. J Clin Epidemiol 66: 946–947.