Category Archives: Sociology

In the words of Wikipedia: Sociology is the study of social behaviour or society, including its origins, development, organization, networks, and institutions. In the context of this blog, it usually relates to the sociology of health.

The “underserved” are “undeserved”

I hate the phrase “the underserved”. I would love to remove it from the lexicon of public health, but it appears to be here to stay, particularly in North America, where there is even a journal devoted to the underserved.

A girl with kwashiorkor during the Nigerian-Biafran War (Public Domain; Wikipedia).

On a number of occasions in public lectures I have played with the phrase, comparing the “undeserved” and the “underserved”. It usually takes listeners a few minutes to work out that I am not repeating myself over and over again. And if you thought I had typed the same thing twice, look again: “underserved” ≠ “undeserved”.

My spell-checker knows the difference. It tells me that “underserved” is a spelling error and that I almost certainly mean “undeserved”, and herein lies the problem. It is not simply that the two words look and sound similar; there is an unpleasant semantic connection between them. Which term you use to refer to the same group of people seems to depend on where you sit on the political spectrum.

On the left of politics, the powerless and the left-behind, those with poor access to services and care, would be characterised as the underserved. On the right (or on a nationalist left where refugees and migrants are vilified), anyone in need, the powerless and the left-behind, those with poor access to services and care, are more typically characterised as the undeserved. The same people, the same need, and the same suffering, but a more or less generous view of our social obligations.

 

To share power, someone has to give up power

Over the past few years I have been peripherally involved in various discussions with male colleagues about gender equity. The conversations have had a predictable ebb and flow.

Women’s empowerment. It’s great in theory, but who wants to give up power? Not these men. [source: reddit; https://bit.ly/2wd2AJC]

The consensus, at least among my colleagues, is that gender equity is a good idea. In the abstract, we fully endorse it. The practice is another matter. It is not that we don’t want to share power. We’re enlightened! We know there is a problem, but can it be someone else’s power that is shared?

The reasoning goes something like this. I should not have to share power. I’m talented, I got here on merit, and I deserve everything I achieved. It is an absolute social good when I have power. For me to give up power would not be good, because I wield it benignly and actively promote gender equity. It would be great if another man gave up power because that would support gender equity.

At a fundamental level, power is a zero-sum-game. There are only so many seats around the high tables of power, and if someone gets a seat at the table, someone loses a seat. Sure, we can squeeze an extra seat in here or there — but there are limits. If someone sits on a panel, someone else cannot sit on the panel. If 50% of the world’s population suddenly achieved fair access to power, power that had been largely controlled by the other 50%, competition would increase sharply.

In 2017, the World Health Organization Director-General Tedros Adhanom Ghebreyesus, tried to fudge the arithmetic [it has since changed]. He appointed a substantial number of women to senior positions in WHO. He did this by increasing the pool of senior positions, and he appointed women to the new positions. Unfortunately, many of the new positions were without substantive portfolios, and without real power. In effect he dragged some extra stools to the table. Chairs for men. Stools for newly appointed women.

The strategy had all the right visuals without the structural capacity to support gender equity; i.e., the fair distribution of power.

Gender equity is a good idea. It will be achieved through structural changes that share power and resources, not through appeals to people’s better nature nor through empty gestures. The test of whether one person’s power has increased is whether another person’s power has been diminished.

Globalisation and health

The past has already been written and the accolades distributed. We now need to decide whether the next century is going to be good or bad for our health, and the role of globalisation in helping us to determine our destiny. People living in failed states do not enjoy utopian, anarchic freedom. They die young. Healthy populations need the goods and services of society to be shared in a broadly inclusive fashion. They need health systems that can respond rapidly and flexibly to emerging disease. They need environments that support human life.

 

The zombie apocalypse is our least likely but most entertaining future. [image from proprofs.com]

Some 70,000 years ago, our ancestors took their first steps out of Africa. With those steps they initiated the binding link between globalisation and health. The difference between then and now is a matter of temporal and geographical scale. Then, nothing moved faster than a walking pace. Now, a person can traverse the globe in 24 hours. A city thousands of kilometres away can be destroyed in 30 minutes. An idea can be everywhere in seconds.

The technological advances of the last century have been matched by extraordinary improvements in human health. Average life expectancy barely moved until the beginning of the last century; over the next hundred years, it doubled. In 2016, the global average life expectancy was 71.4 years. We had achieved the biblical entitlement of three score and ten years promised in Psalm 90. The improvements in health were achieved because of globalisation: reductions in poverty, improvements in food supply, advances in healthcare, and sophisticated infrastructure delivering clean water and carrying away waste. Those advances have also been accompanied by large inequalities in health outcomes and significant environmental degradation.

I suggest there are three broad intersections between globalisation and health. First, there are the real (and sometimes imagined) disease outbreaks: Ebola or the zombie apocalypse. Infectious disease, however, is only one part of the relationship between health and globalisation. The second, very modern concern is the interconnection between our global activities and environmental change, and by extension its impact on human health. The third is our relationships with each other, how those relationships can shift, and the effect the changes may have on the availability of health-supporting resources.

I sketched these ideas out in a 3,000-word essay in early 2017 at the invitation of the editors of Vanguardia Dossier, a Spanish-language magazine published in Catalonia. Many people (including myself) cannot read the published Spanish version, but you can get the slightly rough, English-language preprint here.

Reidpath DD. Globalización y salud [Globalisation and health]. Vanguardia Dossier. 2017;65:76-81.

Join the Q: Chasing journal indicators of academic performance

Universities are predisposed to rank each other (and be ranked) by performance, including research performance. Rankings are not merely about quality, they are about perception. And perception translates nominal prestige into cash through student fees, block grants from government, and research income. As a consequence, there is a danger that universities may chase indicators of prestige rather than thinking about the underlying data that inform the indicator, and what those data might mean for understanding and improving performance.

This image comes from an article published in The Conversation under a creative commons license. see https://bit.ly/2ExLDNB

Ranking has become so crucial in the life of universities that it infuses the brickwork and is absorbed by us each time we brush along the walls. At one time, when evaluating research performance, an essential metric was the number of publications. That calculus has shifted, and it is no longer enough simply to publish. Now we have to publish in Q1 journals; i.e., journals ranked by impact factor in the top 25% of their comparison pool. Journals in the 26th to 50th percentile are Q2, and so forth. Unfortunately, Q-ranking encourages indicator chasing. It has a level of arbitrariness that discourages thoughtful choices about where to publish, and it leads to such unhelpful advice as, “publish in more Q1 journals”.
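The mechanics of Q-ranking are simple enough to sketch in a few lines of code. The snippet below is a minimal illustration (the function name and the impact-factor numbers are invented): sort a comparison pool by impact factor, then assign quartiles by position. Note that a journal's Q-rank depends entirely on which pool it is placed in, not on anything intrinsic to the journal.

```python
def q_rank(impact_factors):
    """Assign Q1-Q4 labels to journals by their impact-factor percentile
    within a comparison pool. Input order is preserved in the output."""
    n = len(impact_factors)
    # Rank journals from highest to lowest impact factor.
    order = sorted(range(n), key=lambda i: impact_factors[i], reverse=True)
    ranks = [None] * n
    for position, idx in enumerate(order, start=1):
        percentile = position / n
        if percentile <= 0.25:
            ranks[idx] = "Q1"      # top 25%
        elif percentile <= 0.50:
            ranks[idx] = "Q2"      # 26th-50th percentile
        elif percentile <= 0.75:
            ranks[idx] = "Q3"
        else:
            ranks[idx] = "Q4"
    return ranks

# A hypothetical pool of eight journals and their impact factors.
pool = [8.2, 3.1, 1.4, 0.9, 5.6, 2.2, 0.5, 4.0]
print(q_rank(pool))  # ['Q1', 'Q2', 'Q3', 'Q4', 'Q1', 'Q3', 'Q4', 'Q2']
```

Dropping a journal into a different pool (say, one stacked with high-impact physics titles) changes its quartile without changing the journal at all, which is the arbitrariness the advice above glosses over.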

The Q-ranking game was brought home to me in a recent discussion among colleagues in medical education research about where they should publish. In this discussion BMC Medical Education was identified as a poorly ranked journal (Q3) that should not be considered.

I like and entirely approve of publishing research in the very best journals that one can, and encouraging staff to publish in high-quality journals is a good thing. “Best” and “high quality”, however, are not just about the impact factor and the Q-ranking of a journal. The best journal for an article is the journal that can create the greatest impact from the work, in the right area, be that in research, policy, or practice. A highly cited article in a low impact factor journal may be better than a poorly cited paper in a high impact factor journal.

Some years ago I was invited by a government research council to review the performance of a university’s Health Policy Unit. One of my fellow panel members was very focused on the poor ranking of most of the journals into which this unit was publishing. The director of the unit tried to defend the record. She argued that it was more important that the publications were policy-relevant than that they were published in a prestigious journal. The argument was cut down by the research council representative. From the representative’s point of view, the government had to allocate funds, and the journal ranking was an important mechanism for evaluating the return on investment.

I did a quick back-of-the-envelope calculation. It was true, the unit had published in some pretty ordinary journals; not an article in The Lancet among them. However, if one treated the collection of papers published by the unit as if the unit were a stand-alone journal, its impact factor exceeded that of PLoS Medicine, a highly regarded Q1 journal. My argument softened the opposition to refunding the unit, but it did not completely deal with it, because the research council did not care about the individual papers. They wanted prestige, and Q-ranking marked prestige.
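That back-of-the-envelope calculation can be made explicit. The sketch below applies the standard two-year impact-factor formula (citations received in year Y to items published in years Y-1 and Y-2, divided by the number of items published in those two years) to a unit's publication list. The function name and all the numbers are invented for illustration.

```python
def unit_impact_factor(papers, year):
    """Two-year impact factor for a set of papers, treated as if the
    set were a stand-alone journal: citations in `year` to papers
    published in the two prior years, divided by the count of those papers."""
    window = [p for p in papers if p["year"] in (year - 1, year - 2)]
    if not window:
        return 0.0
    citations = sum(p["citations_in_year"].get(year, 0) for p in window)
    return citations / len(window)

# A hypothetical unit's output: publication year and citations by year.
papers = [
    {"year": 2015, "citations_in_year": {2017: 12}},
    {"year": 2015, "citations_in_year": {2017: 3}},
    {"year": 2016, "citations_in_year": {2017: 30}},
    {"year": 2016, "citations_in_year": {2017: 0}},
]
print(unit_impact_factor(papers, 2017))  # 45 citations / 4 papers = 11.25
```

The point of the exercise is that the same calculation journals are judged by can be applied to any coherent body of work, and a unit publishing in "ordinary" journals can still be collectively well cited.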

So, which medical education journal should you publish in? The advice was blunt. The university uses a Thomson Reuters product, Journal Citation Reports (JCR), to determine the Q-ranking of journals. BMC Medical Education ranks quite poorly (Q3), so don’t publish there. The ranking in this case, however, was based on journals bundled into a comparison pool that JCR calls “Education, Scientific Disciplines”. This pool includes such probably excellent (and completely irrelevant) journals as Physical Review Special Topics-Physics Education Research and Studies in Science Education. If one instead adopts the “Social Sciences, General” pool of comparison journals, which JCR also reports, BMC Medical Education jumps from Q3 to Q1. This raises the obvious question: what is the true ranking of BMC Medical Education?

The advice about where to publish explicitly dismissed an alternative source for the Q-ranking of journals, the Scimago Journal Rank (SJR), because it was too generous, with the implication that “generous” meant “not as rigorous”. In fact, the difference between SJR and JCR appears to come down to the pool of journals used for the comparison. Both SJR and JCR treat the pool of journals against which a chosen journal is compared as relatively static. But it is not. The pool against which a journal should be compared (assuming one should do this at all) depends on the kind of research being reported and the audience. Consider potential journals for publishing a biomedical imaging paper. The Q-ranking pool could be (1) general medical journals, (2) journals dealing with medical imaging, (3) radiology journals, (4) radiography journals, or (5) some more refined subset of journals. As with BMC Medical Education, the Q-rank of a prospective journal could be quite different in each pool.

One might reasonably, and rhetorically, ask: did the value of the science or the quality of the work change because the comparison pool changed? This leads to a small thought experiment. Imagine a world in which every journal below Q1 suddenly disappeared. The quality of the remaining journals has not changed, but three-quarters of them are suddenly Q2 and below. (As an aside, this is reminiscent of the observation that half of all doctors are below average.)

If a researcher works in a single discipline, learning which journals are the preferred outlets becomes second nature. If one works across disciplines, the question is not as clear. It is no longer, “which are the highest ranked journals?”, but “which are the highest ranked journals given this article and these possible disciplinary choices?”. If the question is, “which journal will have the most significant impact of the type that I seek?”, then Q-ranking is only relevant if the outcome sought is publication in a Q1 journal from a particular comparison pool. If one seeks some other kind of impact, such as policy relevance or a change in practice, then the Q-rank may be of no value.

Indicators of publishing quality should not drive strategy. Strategy should be inspired by a vision of excellence and institutional purpose. If you want an example of how chasing indicators can have a severe and negative impact, have a look at this (Q1!!!!) paper.