Join the Q: Chasing journal indicators of academic performance

Universities are predisposed to rank each other (and be ranked) by performance, including research performance. Rankings are not merely about quality; they are about perception. And perception translates nominal prestige into cash through student fees, government block grants, and research income. As a consequence, there is a danger that universities chase indicators of prestige rather than thinking about the underlying data that inform the indicator, and what those data might mean for understanding and improving performance.

Image: from an article published in The Conversation under a Creative Commons license; see https://bit.ly/2ExLDNB

Ranking has become so crucial in the life of universities that it infuses the brickwork and is absorbed by us each time we brush along the walls. At one time, when evaluating research performance, the essential metric was the number of publications. That calculus has shifted, and it is no longer enough to publish. Now we have to publish in Q1 journals, i.e., journals ranked by impact factor in the top 25% of their comparison pool. Those in the 26th to 50th percentile are Q2, and so forth. Unfortunately, Q-ranking encourages indicator chasing. It has a level of arbitrariness that discourages thoughtful choices about where to publish, and it leads to such unhelpful advice as, “publish in more Q1 journals”.
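As a rough sketch of the arithmetic (not the official JCR calculation, and with invented journal names and numbers), a journal's Q rank is simply its percentile position by impact factor within whatever comparison pool it happens to be placed in:

    # Illustrative only: assign Q1-Q4 from a journal's impact-factor rank
    # within a chosen comparison pool (names and numbers are invented).
    def q_rank(journal, pool):
        ranked = sorted(pool, key=pool.get, reverse=True)        # highest impact factor first
        percentile = (ranked.index(journal) + 1) / len(ranked)   # 1.0 = bottom of the pool
        if percentile <= 0.25:
            return "Q1"
        if percentile <= 0.50:
            return "Q2"
        if percentile <= 0.75:
            return "Q3"
        return "Q4"

    pool = {"Journal A": 6.1, "Journal B": 3.4, "Journal C": 2.2, "Journal D": 0.9}
    print(q_rank("Journal B", pool))   # "Q2": second of four, i.e. the 50th percentile

Everything that follows hinges on that innocuous word “pool”.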

The Q-ranking game was brought home to me in a recent discussion among colleagues in medical education research about where they should publish. In that discussion, BMC Medical Education was identified as a poorly ranked journal (Q3) that should not be considered.

I like, and entirely approve of, publishing research in the very best journals that one can, and encouraging staff to publish in high-quality journals is a good thing. “Best” and “high quality”, however, are not just about the impact factor and the Q-ranking of a journal. The best journal for an article is the one that can create the greatest impact from the work, in the right area, be that research, policy, or practice. Personally, I think a highly cited article in a low impact factor journal may be better than a poorly cited paper in a high impact factor journal.

Some years ago I was invited by a government research council to review the performance of a university’s Health Policy Unit. One of my fellow panel members was very focused on the poor ranking of most of the journals into which this unit was publishing. The director of the unit tried to defend the record. She argued that it was more important that the publications were policy-relevant than that they were published in a prestigious journal. The argument was cut down by the research council representative. From the representative’s point of view, the government had to allocate funds, and the journal ranking was an important mechanism for evaluating the return on investment.

I did a quick back-of-the-envelope calculation. It was true, the unit had published in some pretty ordinary journals (not an article in The Lancet among them). However, if one treated the unit's collected papers as if they were a stand-alone journal, the impact factor exceeded that of PLoS Medicine, a highly regarded Q1 journal. My argument softened the opposition to re-funding the unit, but it did not remove it, because the research council didn't care about the individual papers. They wanted prestige, and Q-ranking marked prestige.
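The calculation was essentially the following sketch (the numbers here are invented). A two-year impact factor is the number of citations in one year to items published in the previous two years, divided by the number of those items, and the same ratio can be computed for a unit's output as if the unit were a journal:

    # Back-of-the-envelope "impact factor" for a research unit (invented numbers).
    # Two-year impact factor = citations this year to papers from the previous
    # two years, divided by the number of those papers.
    papers_previous_two_years = 40     # hypothetical count of the unit's recent papers
    citations_this_year = 480          # hypothetical citations this year to those papers
    unit_impact_factor = citations_this_year / papers_previous_two_years
    print(unit_impact_factor)          # 12.0, comparable with any journal's published figure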

So, which medical education journal should you publish in? The advice was blunt. The university uses Journal Citation Reports (JCR), a Clarivate (formerly Thomson Reuters) product, to determine the Q-ranking of journals. BMC Medical Education ranks quite poorly in JCR (Q3), so don't publish there. The ranking in this case, however, was based on journals bundled into a comparison pool that JCR calls “Education, Scientific Disciplines”. This pool includes such probably excellent (and completely irrelevant) journals as Physical Review Special Topics-Physics Education Research and Studies in Science Education. However, if one adopts the “Social Sciences, General” pool of comparison journals, which JCR also reports, BMC Medical Education jumps from Q3 to Q1. And this raises the obvious question: what is the true ranking of BMC Medical Education?

The advice about where to publish explicitly dismissed an alternative source for the Q-ranking of journals, the SCImago Journal Rank (SJR), because it was too generous, with the implication that “generous” meant “not as rigorous”. In fact, the difference between SJR and JCR appears to lie largely in the pool of journals used for the comparison. Both SJR and JCR treat the pool against which a chosen journal is compared as relatively static. But it is not. The pool against which a journal should be compared (assuming one should do this at all) depends on the kind of research being reported and the intended audience. Consider potential journals for publishing a biomedical imaging paper. The Q-ranking pool could be (1) general medical journals, (2) journals dealing with medical imaging, (3) radiology journals, (4) radiography journals, or (5) some more refined subset of journals. As with BMC Medical Education, the Q-rank of prospective journals could be quite different in each pool.
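Here is a toy illustration of the pool effect, using the same quartile arithmetic as above (the journal names and impact factors are invented; the point is the mechanism, not the numbers): the same journal, with the same impact factor, lands in different quartiles depending on the pool it is compared against.

    # Same hypothetical journal, same impact factor, two different comparison pools.
    def quartile(journal, pool):
        ranked = sorted(pool, key=pool.get, reverse=True)
        p = (ranked.index(journal) + 1) / len(ranked)
        return "Q1" if p <= 0.25 else "Q2" if p <= 0.50 else "Q3" if p <= 0.75 else "Q4"

    target = "Hypothetical Education Journal"
    broad_pool = {target: 1.6, "A": 4.8, "B": 3.9, "C": 3.1, "D": 2.7,
                  "E": 2.4, "F": 2.0, "G": 1.8}              # a large, mostly irrelevant field
    narrow_pool = {target: 1.6, "H": 1.4, "I": 1.1, "J": 0.8}  # its actual peers

    print(quartile(target, broad_pool))    # "Q4"
    print(quartile(target, narrow_pool))   # "Q1"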

One might reasonably, and rhetorically, ask: did the value of the science or the quality of the work change because the comparison pool changed? This leads to a small thought experiment. Imagine a world in which every journal below Q1 suddenly disappeared. The quality of the remaining journals has not changed, but three-quarters of them are suddenly Q2 and below. (As an aside, this is reminiscent of the observation that half of all doctors are below average.)
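To make the thought experiment concrete (same quartile arithmetic as before, invented numbers): take four journals that were all comfortably Q1 in the old pool and re-rank them among themselves. Nothing about them has changed, yet three of the four drop out of Q1.

    # Four surviving journals, all formerly Q1, re-ranked among themselves.
    survivors = {"W": 9.3, "X": 8.1, "Y": 7.6, "Z": 7.2}
    ranked = sorted(survivors, key=survivors.get, reverse=True)
    for position, journal in enumerate(ranked, start=1):
        p = position / len(ranked)
        q = "Q1" if p <= 0.25 else "Q2" if p <= 0.50 else "Q3" if p <= 0.75 else "Q4"
        print(journal, q)    # W Q1, X Q2, Y Q3, Z Q4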

If a researcher works in a single discipline, learning which journals are the preferred places to publish becomes second nature. If one works across disciplines, the choice is less clear. The question is no longer, “which are the highest ranked journals?”, but “which are the highest ranked journals given this article and these possible disciplinary choices?”. If the question is, “which journal will have the most significant impact of the type that I seek?”, then Q-ranking is only relevant if the outcome sought is publication in a Q1 journal from a particular comparison pool. If one seeks some other kind of impact, such as policy relevance or a change in practice, then the Q-rank may be of no value.

Indicators of publishing quality should not drive strategy. Strategy should be inspired by a vision of excellence and institutional purpose. If you want an example of how chasing indicators can have a severe and negative impact, have a look at this (Q1!!!!) paper.