Journals, by which I mean Editors, have shaped modern science, particularly in medicine. The publication policies of journals now direct the kinds of ideas that are acceptable, how those ideas are presented, and the ethical frameworks that should govern data collection, authorship, treatment of participants, and data sharing. Journals will refuse to publish a paper if they are not satisfied that the authors have fulfilled those requirements. The journals have become both arbiters and gatekeepers of sound scientific practice. A recent issue of the Journal of the American Medical Association (JAMA) devoted to conflicts of interest (2 May 2017) is a case in point.
Editors will also self-publish encyclicals of good conduct, laying down the rules of engagement for the future. The JAMA editorial supporting the recent special issue is one such example. When the journals involved are at the top of their fields, these views reverberate. In medicine, the Big Five journals in general and internal medicine are the New England Journal of Medicine (NEJM), Lancet, JAMA, British Medical Journal, and Annals of Internal Medicine. When the Editors speak, the field listens. Their role is revelatory: an imperfect conduit for nature's voice, whispered to researchers in their labs and clinics.
The rules do not, unfortunately, prevent the publication of bad science. The MMR–autism paper in the Lancet is a notorious example of bad science slipping into the field. In general, however, failures of science are laid at the feet of the scientists. The journals rise above it. A retraction here, a commentary there, and the stocks and pillory of peer humiliation are reserved for the authors.
Deflecting criticism is easy when it can be laid at the feet of authors. What, however, should the response be when researchers identify a bias in the Big Five journals themselves? Bias in medicine is a serious issue. It indicates a skew in the published science – a tendency to emphasise one kind of science over another, or the promotion of one interest over another. It risks skewing future practice and funding.
In 2016, Giovanni Filardo and colleagues identified a gender bias in the first authors of research articles published in the Big Five. The journals were more likely to publish articles with a man as first author than a woman. The most biased journal was NEJM. You will not have read about the research in that journal, however, because it rejected the paper when it was submitted. Unfortunately, the bias in the gender of published first authors is not a local, journal-level issue. The bias has a larger and more insidious career effect. Women are less likely to hold the prestigious first-author position in the Big Five journals, and, ceteris paribus, they are thereby disadvantaged in funding applications, job applications, receipt of awards, and recognition.
My co-authors and I recently published an investigation of gender bias in clinical case reports. You may be unsurprised to learn that clinical case reports are more likely to be about men. Apparently, a clinical case about a man is just more interesting than one about a woman. All but one of the investigated journals showed a gender bias, and the most biased journal was NEJM.
Of course, journals can and should reject research papers that are irrelevant or deficient in quality. And our paper may have been both. Nonetheless, that a journal like the NEJM should reject two recent papers identifying it as the most gender-biased of the Big Five begins to look like an avoidance of criticism.
If there is a tendency to avoid self-reflection, particularly in an area as important as bias in science, then the editorial decisions take on much greater significance, and at least a whiff of hypocrisy. The origins of a bias may be authorial: a greater proportion of the articles submitted to a journal may be written by men than by women, and a greater proportion of the submitted clinical case reports may be about men rather than women. But the Editors are in a position to correct such a submission bias, just as they vigorously correct other biases. The Big Five have acceptance rates below 10%; they presumably exercise a bias towards higher rather than lower quality science. We are suggesting that, in exercising their editorial judgment, they could include factors they have (presumably) hitherto not noticed in their own behaviour. They might find it easier to explain such editorial shifts if they based them on scientific research published in their own journals. At the very least, doing so would indicate that the issue is taken seriously.
This article was co-written by Daniel D Reidpath and Pascale Allotey