Category Archives: Research

Ideology and the Illusion of Disagreement in Empirical Research

There is deep scepticism about the honesty of researchers and their capacity to say things that are true about the world. If one could demonstrate that their interpretation of data was motivated by their ideology, that would be powerful evidence for the distrust. A recent paper in Science Advances ostensibly showed just that. The authors, Borjas and Breznau (B&B), re-analysed data from a large experiment designed to study researchers. The participant-researchers were each given the same dataset and asked to analyse it to answer the same question: "Does immigration affect public support for social welfare programs?" Before conducting any analysis of the data, the participant-researchers also reported their own views on immigration policy, ranging from very anti- to very pro-immigration. B&B reasoned that, if everyone were answering the same question, they would be able to infer something about the impact of prior ideological commitments on the interpretation of the data.

Each team independently chose how to operationalise variables, select sub-samples from the data, and specify statistical models to answer the question, which resulted in over a thousand distinct regression estimates. B&B used the observed diversity of modelling choices as data, examining how the research process unfolded and how the answers related to the participant-researchers' prior views on immigration.

B&B suggested that participant-researchers with moderate prior views on immigration find the truth, although they never actually say it that cleanly. Indeed, in the Methods and Results they demonstrate appropriate caution about making causal claims. From the Title through to the Discussion, however, the narrative framing is that immoderate ideology distorts interpretation, and this is exactly the question their research does not and, by design, cannot answer.

Readers of the paper did not miss the narrative spin in which B&B shrouded their more cautious science. Within a few days of publication, the paper had attracted hundreds of posts and been picked up by international news feeds and blogs. Commentaries tended to frame pro-immigration positions as more ideologically suspect.

There are, however, significant problems with the B&B study that have been overlooked or not given sufficient weight. To understand the problems more clearly, it helps to step away from immigration altogether and consider a simpler case. Suppose researchers are given the same dataset and asked to answer the question: "Do smaller class sizes improve student outcomes?" The data they are given include class size, test scores, and graduation rates (a proxy for student outcomes). On the surface, this looks like a single empirical question posed to multiple researchers using the same data.

Now introduce a variable that is both substantively central and methodologically ambiguous: a measure of the students' socio-economic disadvantage. Some researchers treat socio-economic disadvantage as a covariate, adjusting for baseline differences to estimate an average effect of class size across all students. Others restrict the sample to disadvantaged pupils, on the grounds that education policy is primarily about remediation or equity. Still others model heterogeneity explicitly, asking whether smaller classes matter more for some students than for others. Each of these choices is orthodox. None involves questionable practice, and all of them are "answering" the same surface question. But each corresponds to a different definition of the effect being studied and, more precisely, to a different question being answered. By definition, different models answer different questions.
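To see how concrete the divergence is, here is a minimal sketch in Python (using statsmodels, with hypothetical variable names such as test_score and disadvantage); each of the three orthodox choices above becomes a different regression specification, and each specification estimates a different quantity:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical dataset with columns: test_score, class_size, and
    # disadvantage (1 = socio-economically disadvantaged pupil).
    df = pd.read_csv("class_size_study.csv")

    # 1. Disadvantage as a covariate: an average class-size effect
    #    across all students, adjusted for baseline differences.
    m_avg = smf.ols("test_score ~ class_size + disadvantage", data=df).fit()

    # 2. Restricted sample: the class-size effect for disadvantaged
    #    pupils only (a remediation or equity framing).
    m_sub = smf.ols("test_score ~ class_size",
                    data=df[df["disadvantage"] == 1]).fit()

    # 3. Explicit heterogeneity: does the class-size effect differ
    #    for disadvantaged pupils?
    m_het = smf.ols("test_score ~ class_size * disadvantage", data=df).fit()

Three defensible specifications, one surface question, and three different estimands: an adjusted average effect, a subgroup effect, and an interaction effect.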

In this setting, differences between researchers' analyses would not normally be described as researchers answering the same question differently. Nor would we infer that analysts who focus on disadvantaged students are "biased" toward finding larger effects, or that those estimating population averages are distorting inference. We would recognise instead that the original prompt was under-specified, and that researchers made reasonable, if normatively loaded, decisions about which policy effect should be evaluated. B&B explicitly acknowledge this problem in their own work, writing: "[a]lthough it would be of interest to conduct a study of exactly how researchers end up using a specific 'preferred' specification, the experimental data do not allow examination of this crucial question" (p. 5). Even with this insight, however, they persist with the fiction that the researchers were indeed answering the same question, treating two different "preferred specifications" as if they answer the same question. It would be like our educationalists treating an analysis of outcomes for children from socio-economically deprived families as if it answered the same question as an analysis that included all family types.

B&B's immigration experiment goes a step further, and in doing so introduces an additional complication. Participant-researchers' prior policy positions on immigration were elicited in advance of their data analysis, and B&B then used those positions as an organising variable in their analysis of the participant-researchers.

Imagine a parallel design in the education case. Before analysing the data, researchers are asked whether they believe differences in educational outcomes are primarily driven by school resources or by family deprivation. Their subsequent modelling choices (whether to focus on disadvantaged pupils, whether to emphasise average effects, whether to model heterogeneity) are then correlated with these priors. Such correlations would be unsurprising. If you think disadvantage matters more than school resources to student outcomes, you may well focus your analysis on students from deprived backgrounds. It would be a mistake, however, to conclude that researchers with strong views are biasing results, rather than pursuing different, defensible conceptions of the policy problem.

Once prior beliefs are foregrounded in this way, a basic ambiguity arises. Are we observing ideologically distorted inferences over the same shared question, or systematic differences in the questions being addressed given an under-specified prompt? Without agreement on what effect the analysis is meant to capture, those two interpretations cannot be disentangled. Conditioning on ideology (as B&B did) therefore risks converting a problem of an under-specified prompt into a story about ideologically biased reasoning. This critique does not deny that motivated reasoning exists, or that B&B's participant-researchers were engaged in it. B&B simply do not show it, and the alternative explanation is more parsimonious.

The problems with the B&B paper are compounded when they attempt to measure "research quality" through peer evaluations. Participant-researchers in the experiment are asked to assess the quality of one another's modelling strategies, which introduces a second and distinct issue: the evaluation process is confounded by the distribution of views within the participant pool.

To see this, return again to the education example. Suppose researchers' views about the importance of family deprivation for educational outcomes are normally distributed, with most clustered around a moderate position and fewer at the extremes. A randomly selected researcher asked to evaluate another randomly selected researcher will, with high probability, be paired with someone holding broadly similar views (around the middle of the distribution). In such cases, the modelling choices are likely to appear reasonable and well motivated, and to receive high quality scores. The evaluation implicitly invites the following reasoning: "you're doing something similar to what I was doing, and I was doing high quality research, therefore you must be doing high quality research as well".

By contrast, models produced by researchers in the tails of the distribution will more often be evaluated by researchers further from their ideological position. Those models may be judged as poorly framed or unbalanced, not because they violate statistical standards, but because they depart from the modal conception of what the broadly framed question is about. Under these conditions, lower average quality scores for researchers with more extreme priors may reflect distance from the dominant framing, not inferior analytical practice. B&B, however, argued that the results show that being ideologically in the middle produced higher quality research.
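The confound is easy to demonstrate with a toy simulation (a sketch of the mechanism, not of B&B's actual design): give every researcher identical true quality, score each one by a randomly drawn evaluator's ideological distance, and moderates still come out looking better.

    import numpy as np

    rng = np.random.default_rng(0)

    # Researchers' prior views, normally distributed around a moderate centre.
    views = rng.normal(loc=0.0, scale=1.0, size=100_000)

    # Pair each researcher with a random evaluator from the same pool;
    # the rating penalises ideological distance and nothing else.
    evaluators = rng.permutation(views)
    score = -np.abs(views - evaluators)

    moderate = np.abs(views) < 0.5   # near the middle of the distribution
    extreme = np.abs(views) > 1.5    # in the tails
    print(f"mean score, moderates: {score[moderate].mean():.2f}")
    print(f"mean score, extremes:  {score[extreme].mean():.2f}")

    # Moderates reliably receive higher mean scores even though quality is
    # identical by construction: the gap measures distance from the modal
    # worldview, not analytical merit.

Under these conditions, "being in the middle produces better research" is exactly what the scores would show even if every analysis were equally good.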

The issue here is not bias but design. When both peer reviewers and reviewees are drawn from the same population, and when quality is assessed without a fixed external benchmark for what counts as a good answer to the question, peer scores inevitably track conformity to the field’s modal worldview. Interpreting these scores as evidence that ideology degrades research quality is wrong.

B&B's paper is useful. It shows that ideological commitments are associated with the questions that researchers answer. But that is as far as it goes. Researchers answer the questions they think are important. This small, accurate interpretation is not as impressive a finding as "ideology drives interpretation", but B&B's research is most valuable where it is most restrained. The further it moves from the firm ground of describing correlations in researchers' modelling choices towards the quicksand of diagnosing ideological distortion of inference, the worse it gets. What they present as evidence of bias is more reasonably understood as evidence that their framing question itself was never well defined. Through its narrative style, and notwithstanding quiet disclaimers about causal inference, the paper invites the conclusion that researchers working on divisive, politically salient topics simply find what their ideologies lead them to find. Taken at face value, it licenses distrust of empirical research on contested policy questions.

 

On becoming a decolonial scholar

I have observed young academics taking their first tentative steps toward becoming world-class decolonial scholars in global health. This is a rich and rewarding area of endeavour that has real potential to launch a career without the baggage of narrow disciplinary boundaries, rigid methodological commitments, or premature demands for epistemic closure. When approached carefully, decolonial scholarship allows emerging researchers to engage critically with power, history, and knowledge while retaining considerable flexibility in analytic approach. What follows is offered as practical guidance for those who wish to navigate this space with confidence and coherence.

Decolonising global health has become a central ethical orientation for contemporary scholarship in the field. For early-career researchers, the challenge is not whether to adopt a decolonial stance, but how to do so convincingly within existing academic norms. You do not want a piece buried in the Malawi Medical Journal when global recognition can be found in The Lancet, PLOS, or BMJ. This brief guide offers practical advice on positioning oneself as a decolonising scholar, drawing on common techniques that are widely recognised as markers of both epistemic and moral alignment.

A successful decolonial paper begins with the scholar rather than the question. Reflexive positioning statements have become an essential opening move, allowing authors to locate themselves within global hierarchies of power, privilege, and complicity. Personal proximity to marginalisation is an asset. These declarations are most effective when they are candid yet indeterminate, signalling awareness without constraining interpretation. Once positionality has been established, subsequent analytic choices are more readily received as ethically grounded rather than methodologically contingent.

Research problems should then be framed in historical or structural terms rather than as empirical hypotheses. Questions that invite falsification risk reintroducing evaluative standards associated with dominant epistemic traditions linked to globalised practice. By contrast, concepts such as colonial residue, epistemic silencing, or inherited injustice resist closure and encourage moral elaboration. Where findings introduce ambiguity or contradiction, this can be interpreted not as analytic weakness but as evidence of the complexity produced by colonial histories themselves.

Within this framing, epistemic injustice can be treated as an outcome rather than a proposition requiring demonstration. The presence of asymmetry—between disciplines, regions, or forms of expertise—may be taken as sufficient indication of harm. Distinguishing between the unjust exclusion of valid insight and the justified rejection of erroneous claims is rarely necessary and may inadvertently reinscribe colonial distinctions between knowledge and belief. Moral recognition, once granted, does much of the epistemic work.

Lived experience occupies a privileged place in this literature and should be elevated accordingly. Personal and communal narratives can be used generously as data, though care should be taken to avoid subjecting them to processes such as validation, triangulation, or comparative assessment. Such techniques imply the possibility of error, which sits uneasily with commitments to epistemic plurality. Where accounts conflict, the tension may be presented as evidence of multiple ways of knowing rather than as a problem requiring resolution.

Ontological language offers particular flexibility. Early declaration of commitment to multiple ontologies allows scholars to accommodate divergent claims without adjudication. Later, when universal commitments are invoked—such as equity, justice, or health for all—these can be treated as ethical aspirations rather than propositions dependent on a shared reality. The absence of an explicit bridge between ontological plurality and universal goals rarely attracts critical scrutiny.

Power should be rendered visible throughout the paper, though preferably without becoming too specific. Abstractions such as “Western science”, “biomedicine”, or “the Global North” serve as effective explanatory devices while minimising the risk of implicating proximate institutions, funding structures, or professional incentives. Authorship practices, by contrast, provide a concrete and manageable site for decolonial intervention, often with greater symbolic return than methodological reform.

Papers should conclude with a call for transformation that exceeds immediate implementation. Appeals to reimagining, unsettling, or dismantling signal seriousness of intent, while the absence of operational detail preserves the moral horizon of the work. Evaluation frameworks, metrics, and timelines may be deferred as future tasks, once the appropriate epistemic shift has been achieved.

Finally, dissemination matters. Publishing in high-impact international journals ensures that critiques of epistemic dominance reach those best positioned to recognise them. Should access be restricted by paywalls, a brief acknowledgement of the irony is sufficient to demonstrate reflexive awareness.

In this way, decolonising global health can be practised as a scholarly orientation that aligns ethical seriousness with professional viability. The goal is not to resolve uncertainty or to determine what works, but to occupy the correct stance toward history and power. When that stance is convincingly performed, the work will speak for itself.

Parsing the NIH Reform Debate

I was recently alerted to Martin Kulldorff’s Blueprint for NIH Reform — a document that’s stirred some intense reactions among my colleagues. A few view it as a needed critique of systemic inefficiencies. Most regard it as an ideological Trojan horse—an attack on science dressed as reform. So where does the truth lie?

The short answer is: it’s complicated—and the messenger matters.

Kulldorff, a biostatistician and former Harvard professor, became a polarising figure during the COVID-19 pandemic for promoting ideas widely dismissed by the mainstream scientific community, including opposition to lockdowns, masking, and even some aspects of vaccination policy. He was also a co-author of the controversial Great Barrington Declaration, which called for herd immunity through natural infection — a strategy many experts considered unscientific and dangerous at the time.

This background understandably colours how his recent proposals are received.

But here’s the nuance: the Blueprint itself raises a number of ideas that aren’t inherently fringe. Calls for reforming NIH grant structures, enhancing academic freedom, incentivising open science, and streamlining peer review are echoed by many researchers across disciplines — including those with no ties to politicised public health debates. Frustrations with bureaucratic inefficiencies and perverse incentives in scientific funding are real and shared.

Where it becomes tricky is in the framing. Kulldorff doesn’t just argue for reform — he implies that current structures are suppressing truth, and that controversial views (like his own during the pandemic) have been silenced not because they lack merit, but because of groupthink or institutional bias. That framing, for many, crosses the line from constructive critique into undermining the scientific process itself.

There’s also a risk that pushing for more “openness” in what research gets funded — while laudable in theory — could result in resources being diverted to low-evidence, high-noise pursuits. Or, as one colleague aptly put it, “sending the ferret down an empty warren.” Science thrives on curiosity, but it also requires discipline and evidence-based filters.

Venue choice also matters. If this proposal were intended as a serious intervention into science policy, it might have been published in a mainstream medical or policy journal where it could be openly debated across the full spectrum of scientific opinion. Instead, it was published in the Journal of the Academy of Public Health — a platform co-founded and edited by Kulldorff himself, with close ties to politically conservative and contrarian public health figures. That choice raises questions about whether the article is seeking reform through consensus, or carving out space for alternative narratives that have struggled to find support in mainstream science.

So how should we engage with this?

  • Acknowledge the valid points: There is room — and need — for reform in how science is funded, reviewed, and communicated.

  • Be vigilant about context: Not all calls for reform are neutral. Motivations and affiliations matter, especially when public trust is on the line.

  • Defend the integrity of science: We can advocate for better systems without abandoning the core principles of evidence, rigor, and accountability — including fair peer review and a balance of risk and reward.

In the end, this is not a binary question of “pro-science” vs “anti-science.” It’s about how science evolves, who gets to shape that evolution, and what values we prioritise along the way — openness, yes, but always in service of evidence and public good.


This is an independent submission, edited by D.D. Reidpath.

[Image: a dark, dystopian government data center in which a lone researcher, lit by the cold blue light of a monitor, tries to recover lost data from a corrupted drive, symbolising the slow decay of knowledge in a forgotten digital vault.]

The Purge

The Trump administration has started one of the most significant assaults on human knowledge in centuries. Well-collected, curated and communicated data are facts—an evidence base. When facts contradict a political narrative, they are dangerous. The US government has realised the danger and begun The Purge. The government will now establish new “facts” to replace old facts. Purge-and-replace is part of the process of state capture. Evidence represents dissent, and the government must crush dissent. Reality is altered.

Until a week ago, successive US governments had invested in a data-rich, evidence-based policy enterprise with generous global access. It was a resource for the world that supported research and evidence-based decision-making. And, unless the information was classified or subject to privacy laws (e.g., HIPAA for health data), anyone could look at everything from labour and criminal justice statistics to environmental and health data.

Going, going … !

Starting late last week, government websites began to disappear; among them, the USAID website vanished without a trace. All the development evidence USAID published has disappeared. If you try to reach the website today (2 Feb, 2025), you will get a message from your internet provider informing you the site does not exist. Perhaps you have the wrong address…or maybe it was never really there. (Cue spooky music.)

Individual pages on government websites are also disappearing. The Centers for Disease Control and Prevention (CDC) page providing evidence-based contraceptive guidelines, for example, has vanished. A week ago, the guidelines helped people exercise their reproductive choice using the best available evidence. But facts are dangerous. The idea of personal autonomy in reproduction runs counter to the authoritarian narrative of the current US administration. The CDC is being scrubbed clean.

Data are also disappearing. The Youth Risk Behavior Surveillance System (YRBSS) is a long-running survey of adolescent health risks coordinated by the CDC. If I search the CDC website for "YRBSS", I get links. If I follow the links: "The page you're looking for was not found". This loss of data is a tragedy. A quick look at PubMed reveals the kind of research that has used YRBSS data: everything from adolescent mental health to smoking. Without those data, no one today could do the same kind of research that was done before. Trends in adolescent health are lost, and we will not know about emerging health risk factors. It is hard to know precisely why the YRBSS has disappeared. However, in keeping with the religiously conservative nature of the current US government, maybe it is adolescent sex that is too dangerous for people to know about.

The US government is not content with just removing facts. They also want CDC scientists to rewrite their research to adopt a single, approved, authoritarian view of the world. Their research must conform to the Trump government's ideology, an approach oddly reminiscent of Stalin's insistence that Soviet researchers adopt the dead-end genetic science of Trofim Lysenko.

The CDC has instructed its scientists to retract or pause the publication of any research manuscript being considered by any medical or scientific journal, not merely its own internal periodicals…. The move aims to ensure that no “forbidden terms” appear in the work. The policy includes manuscripts that are in the revision stages at a journal (but not officially accepted) and those already accepted for publication but not yet live.

It hasn’t happened yet, but I have to wonder what will happen when the US Government targets PubMed and PubMed Central—exceptional scientific resources provided free of charge to the world by the National Library of Medicine (NLM)? NLM could be directed to purge from the database all abstracted data on every journal article that contains ideas that do not support the government’s worldview: gender, transgender, climate change, vaccines, air pollution (from fossil fuels)…. Commercial providers could still abstract those articles, but the damage would be enormous.

The vaccine denier Robert F. Kennedy Jr is currently being confirmed as Secretary of Health and Human Services. He believes the widely debunked, fraudulent claim that vaccines cause autism. What happens when he decides that the National Library of Medicine should selectively purge evidence debunking the vaccine-autism link? Will that mean vaccines cause autism in the US (a "US-fact") but not in the rest of the world (a "fact-fact")? Researchers in universities and institutions that can afford subscription services can avoid such excesses, but that will not be the case for many Global South researchers who rely on PubMed for their research, nor for the general public, who also have free access to PubMed.

I have focused on health because it is the domain I know the best. There is, however, almost no factual resource of the US government that will be safe from the purge. Facts that endanger a Trump administration political narrative must not be allowed to exist.

The US government is a climate-denying administration that has again pulled out of the Paris Climate Accord. It has already targeted climate change research. Justice, labour, and population statistics that do not conform to the US government’s socially conservative, racist and xenophobic views about the world will also be in danger. Trade data that don’t support Trump’s political narrative of a “golden age” will need to be adjusted.

One of the great tragedies is that, now that the US government has shown itself to be institutionally uninterested in (or actively opposed to) facts, it has endangered the value of its entire evidence-based policy enterprise. If you visit a US government website in a year, will you trust the content? You shouldn't. Instead, you should ask yourself what political interest influenced the information. Researchers, policymakers, journalists: everyone will need to parse US government websites as they parse information from any other authoritarian regime. Sadly, research coming out of US universities will also require extra scrutiny. Where we trusted these voices before, we will now need to ask whether US government policy has biased the work, what the nature of the bias is, and whether the bias can be managed.

Sometimes, it will be easier to ignore US research altogether because verification carries a cost.

There are small glimmers of hope. Archive.org (the Wayback Machine) holds historical snapshots of US government websites, including some data snapshots, such as the YRBSS. These snapshots are BTP (before the purge). Unfortunately, the archive is not as easy to navigate as the live web or as dedicated government websites. The value of the archived information also depends on a snapshot having been taken at the right time to capture the latest BTP information. The CDC contraceptive-use guidelines, purged a few days ago, are available on archive.org from a snapshot taken on 25 December 2024. Assuming the CDC made no updates after that snapshot, the information is up to date…for now. Of course, contraceptive guidelines evolve with new data and new technology, and the archived version will fall out of date in the coming years.
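For anyone trying to retrieve BTP material programmatically, here is a minimal sketch using the Wayback Machine's public availability API; the CDC path and the target date below are illustrative, not the guidelines' actual address.

    import requests

    # Ask the Wayback Machine for the snapshot closest to a given date.
    API = "https://archive.org/wayback/available"
    params = {
        "url": "www.cdc.gov/contraception/guidelines",  # hypothetical page
        "timestamp": "20241225",  # aim for a snapshot near 25 December 2024
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()

    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        print("Snapshot:", closest["url"], "taken at", closest["timestamp"])
    else:
        print("No archived snapshot found for this page.")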

If we are to survive the worst damage of The Purge, other government and non-government institutions worldwide will have to step into the breach. Historical data may need to be reconstructed and curated from sources such as archive.org. The PubMed and PubMed Central databases should be copied before the US government corrupts them. Where US data are still available, copy them. Outside the US, we will need to put in place prospective mechanisms to collect valuable global data that we can no longer trust from US sources.

…going…

We cannot assume that the facts from US government sources will remain uncorrupted tomorrow because they are uncorrupted today. The preservation of the truth will require resources and investment.

… GONE!

Welcome to The Purge