Author Archives: Daniel Reidpath

Image: a DALL·E-generated editorial illustration of a female government official as a marionette, her strings held by a faceless figure in a suit. She smiles and says “empowerment”, while her shadow on the wall kneels in chains, labelled “vessel”, against a muted world map threaded with dark, vein-like tendrils.

Parasitising Human Rights

A snail glides slowly from the shelter of the underbrush into the sunlight. One of its eye stalks (ommatophore) pulses with an unnatural rhythm, swollen, brightly coloured and weirdly attractive. A thrush spots the movement and, drawn to the flickering lure, swoops down, pecks off the stalks and flies away.

The thrush was fooled. What it mistook for a juicy caterpillar was a parasite seeking a new host. The parasite, Leucochloridium paradoxum, is a trematode that infects a snail and turns it into a self-destructive zombie. The life cycle is simple: bird eats parasitised snail, parasite reproduces in bird’s gut, bird defecates, snail eats infected droppings. Once the parasite has been eaten by the snail, it hijacks the snail’s behaviour. It migrates to the snail’s eye stalks and drives it out of the safety of the underbrush and into the sunlight, where it will lure a bird to eat it. Rinse and repeat.

It was only very recently that I realised that Christian far-right groups have adopted an analogous strategy to attack the international human rights framework, and women’s rights in particular.

The Geneva Consensus Declaration (GCD) and its companion, the Women’s Optimal Health Framework (WOHF), function with unnerving similarity to the apparently tasty snail. They are each packaged in the shiny and appealing language of “optimal health”, “human dignity”, and “family”. They infiltrate the human rights system—not to strengthen it, but to hijack it, disguising regressive aims as legitimate rights discourse. Once absorbed, the host State is zombified into re-presenting the regressive framework in the same shiny, deceptively appealing language, waiting to parasitise the next State.

The GCD was first presented to the United Nations as a letter during Donald Trump’s first term as the 45th President of the United States. It was an initiative of the Secretary of State, Mike Pompeo, a fundamentalist Christian. Borrowing the name of the City of Geneva, made famous by its association with refugees, human rights and the Geneva Conventions, the GCD is neither supported nor endorsed by Switzerland or the Republic and Canton of Geneva, nor has it been adopted by the UN.

The GCD document opens with lofty and appealing commitments to universal human rights and gender equality—pulling deceptively and disingenuously on the Universal Declaration of Human Rights. It declares that “all are equal before the law” and that the “human rights of women are an inalienable, integral, and indivisible part of all human rights and fundamental freedoms”.

Once consumed, there is a parasitic turn. The GCD reverts to a framework that reduces women to vessels and vassals in service to cells and states. The foetus is elevated. It is endowed with rights that eclipse those of the woman herself. She becomes a fleshy bag—nutrients in, baby out—stripped of the autonomy to define her own purpose or direction. The role of the State shifts. It is no longer the guarantor of individual freedom but the authority that dictates what a woman may or may not be allowed to do. “The family”—a surprisingly labile cultural concept—is suddenly reified, declared “the fundamental group unit of society,” as if its meaning were fixed and universal. The document commits fully to a vision of a society where the population serves the State, and women serve the population—with the least autonomy.

Health is a human right, as is the right to healthcare. The GCD and the WOHF want to parse this, playing a game of reductio ad absurdum. You might have a right to healthcare, they argue, but you do not have a right to an abortion. As if it makes sense to say you have a right to healthcare, but not if you have scabies, rabies, HIV, or malaria. Pregnancy is not a disease, but it does require healthcare, and that care may include the termination of the pregnancy. A woman’s purpose is not reproduction, nor servitude to a foetus.

Men, too, are caught in the parasitic zombification. They should not mistake their apparent elevation in these structures for freedom. They lose something fundamental. Choice. Authoritarian gender orders assign roles to everyone. Power is not granted—it is rationed and always conditional. The State grants status for obedience and identity in exchange for submission. Those assigned dominance are especially bound by its terms. This constraint brooks no dissent. In a society of freedom, you can find your own place. In a society of roles, your place determines you.

These zombified States do not act alone. The US-backed Institute for Women’s Health promotes the destruction of women’s rights, replacing evidence with sleek visuals and rhetoric-driven policy tools. The materials are presented as neutral frameworks but embed deeply conservative ideologies—valorising motherhood, framing women’s worth through familial roles, and avoiding any substantive discussion of sexual rights.

States that adopt these frameworks serve as megaphones, amplifying anti-abortion and anti-diversity policies in UN negotiations and global fora. This is not a grassroots movement for gender justice. It is a top-down project of moral, political, and social control, disguised as health policy.

The GCD and WOHF are not neutral initiatives. They are a parasitic ideological vehicle that masquerades as progressive while advancing regressive policies. Their true function is to infiltrate human rights systems, hijack the language of empowerment, and turn States into agents of restriction.

We must name this strategy for what it is: a parasitic ideology—designed to deceive, manipulate, and replicate. Human rights advocates must remain alert, resist co-option, and expose these frameworks not just for their content, but for the insidious strategies they deploy.

The only antidote to such parasitism is clarity, resistance, and the refusal to surrender universal human rights to the State.

Building Research Capacity with AI

Over 25 years ago, the “10/90 gap” was used to illustrate the global imbalance in health research: only 10% of global health research resources were directed at the regions where 90% of preventable deaths occurred. Since then, efforts to improve research capacity in low- and middle-income countries (LMICs) have made important gains; nonetheless, significant challenges remain. A quarter of a century later, there are still too few well-trained researchers in LMICs, and research infrastructure and governance remain inadequate. The scope of the problem increased dramatically in 2025, when North American and European governments cut overseas development assistance (ODA, i.e., foreign aid) precipitously. That aid—however inadequate—supported improvements in research capacity.

Traditional approaches to improving research capacity, such as training workshops and degree scholarship programs, have gone some way to addressing the expertise challenge. However, they fall short because they are not scalable. The relatively recent introduction of massive open online courses (MOOCs), such as TDR/WHO’s MOOCs in implementation research, goes a long way towards overcoming that scalability problem—at least for instruction-based learning. Nonetheless, for many LMIC researchers, major bottlenecks remain because of poor or limited access to mentorship, quick one-off advice, bespoke training, research assistance, and inter- and intra-disciplinary collaboration. The scalability problem can leave them at a persistent disadvantage compared to their high-income country counterparts. Research is not done well in isolation or in ignorance.

The rise of large language model artificial intelligences (LLM-AIs) such as ChatGPT, Mistral, Gemini, Claude, and DeepSeek offers an unprecedented opportunity… and some additional risks. LLM-AIs are advanced AI models trained on vast amounts of text data to understand and generate human-like language. They are flexible, multilingual, and always available (24/7), offering researchers in LMICs immediate access to knowledge and assistance. If used well, LLM-AIs could revolutionise approaches to building research capacity and democratise access to skills, knowledge, and global scientific discourse. Many online educational providers already integrate LLM-AIs into their instructional pipelines as tutors and coaches.

Unfortunately, LMICs risk further entrenching or increasing the 10/90 gap if they cannot take advantage of the benefits of LLM-AIs.

AI as a game changer

Researchers in resource-limited settings can, for the first time, access an always-on, massively scalable assistant. By massively scalable, I mean that every researcher could have one or more decent, 24/7 research assistants for a monthly subscription of less than $20. They offer scalability and flexibility that traditional human research assistants cannot (and should not) match. However, they are not human and may not fully replicate a human research assistant’s nuanced understanding and critical thinking—and they are certainly less fun to have a cup of coffee with. Furthermore, the effectiveness of LLM-AIs depends on the sophistication of the user, the complexity of the task, and the quality of the input the user provides.

I read a recent post on LinkedIn by a UCLA professor decrying the inadequacies of LLM-AIs. However, a quick read of the post revealed that the professor had no idea how to engage appropriately with the technology.

Unfortunately, like all research assistants, senior researchers, and professors, LLM-AIs can be wrong. And like any tool, they take time and practice to learn to use with sophistication.

Despite these inadequacies, LLM-AIs can remove barriers to research participation by offering tutoring on complex concepts, assisting with literature reviews and data analysis, and supporting the writing and editing of manuscripts and grant proposals.

Reid Hoffman, the AI entrepreneur, described on a podcast how he used LLM-AIs to learn about complex ideas. He would upload a research paper onto the platform and ask, “Explain this paper as if to a 12-year-old”. Hoffman could then “chat” with the LLM-AI about the paper at that level. Once comfortable with the concepts, he would ask the LLM-AI to “explain this paper as if to a high school senior”. He could use the LLM-AI as a personal tutor by iterating-up in age and sophistication.
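As a rough sketch only: the same iterating-up approach can be scripted against any chat-capable LLM-AI. The example below assumes the OpenAI Python client, a plain-text copy of the paper saved as paper.txt, and the gpt-4o-mini model—all illustrative choices rather than recommendations; any of the LLM-AIs mentioned above would serve equally well.

```python
# A minimal sketch of the "iterating up" tutoring loop described above.
# Assumptions (not from the original post): the openai package is installed,
# OPENAI_API_KEY is set in the environment, and the paper is available as
# plain text in "paper.txt".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
paper_text = open("paper.txt", encoding="utf-8").read()

audiences = ["a 12-year-old", "a high-school senior", "a final-year undergraduate"]
for audience in audiences:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a patient research tutor."},
            {"role": "user",
             "content": f"Explain this paper as if to {audience}:\n\n{paper_text}"},
        ],
    )
    print(f"--- Explanation for {audience} ---")
    print(response.choices[0].message.content)
```

In practice, the value comes from chatting back and forth at each level before stepping up—that is what turns the LLM-AI into a tutor rather than a summariser.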

Researchers can also use LLM-AIs to support the preparation of scientific papers. This is already happening, as the explosion of generically dull (and sometimes fraudulent) scientific papers hitting the market attests. The explosion has delighted the publishing houses and created existential ennui among researchers. The problem is not the LLM-AIs—it lies in how they are used, and it will take time for the paper production cycle to settle.

While access to many LLM-AIs requires a monthly subscription, some, like DeepSeek, significantly lower cost and accessibility barriers by distributing “open-weights models”. Researchers can download these open-weights models freely and run them on personal or university computing infrastructure without paying a monthly subscription. They make AI-powered research assistance viable for most LMIC research settings, and universities and research institutes can potentially lower the costs further.
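To make the idea concrete, here is a rough sketch—under stated assumptions, not a recipe—of how a downloaded open-weights model can be run locally using the widely used Hugging Face transformers library. The model name is an example only, and in practice a reasonably capable machine (or a smaller, quantised model) is needed for usable speed.

```python
# A rough sketch of running a downloaded open-weights model locally,
# with no monthly subscription. Assumptions (not from the original post):
# the "transformers" and "torch" packages are installed, and the example
# model name exists on the Hugging Face Hub; substitute whichever
# open-weights model your institution has chosen.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/deepseek-llm-7b-chat"  # example only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "List three common pitfalls in designing a cluster-randomised trial."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights have been downloaded, everything runs on local infrastructure, which is precisely what makes the open-weights route attractive for institutions that cannot sustain per-seat subscriptions.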

LLM-AIs allow researchers in LMICs to become less dependent on high-income countries for training and mentorship, shifting the balance towards scientific self-sufficiency. AI-powered tools could accelerate the development of a new generation of LMIC researchers, fostering homegrown expertise and leadership in relevant global science. They are no longer constrained by the curriculum and interests of high-income countries and can develop contextually relevant research expertise.

The Double-Edged Sword

Despite its positive potential, the entry of LLM-AIs into the research world could have significant downsides. Without careful implementation, existing inequalities could be exacerbated rather than alleviated. High-income countries are already harnessing LLM-AIs at scale, integrating them into research institutions, project pipelines, training, and funding systems. LMICs, lacking the same level of investment and infrastructure, risk being left behind—again. The AI revolution could widen the research gap rather than close it, entrenching the divide between well-resourced and under-resourced institutions.

There is also a danger in how researchers use LLM-AIs. They are the cheapest research assistants ever created, which raises a troubling question: will senior researchers begin to rely on AI to replace the need for training junior scientists? If an LLM-AI can summarise the literature, draft proposals, and assist with the analysis, there is a real risk that senior researchers will neglect mentorship, training, and hands-on learning. Instead of empowering a new generation of LMIC researchers, LLM-AIs could be used as a crutch to maintain existing hierarchies. If institutions see LLM-AIs as a shortcut to productivity rather than an investment in building research capacity, that could stall the development of genuine human expertise.

Compounding these risks, AI is fallible. LLM-AIs can “hallucinate”, generating false information with complete confidence. They always write with confidence. I’ve never seen one write, “I think this is the answer, but I could be wrong”. They can fabricate references, misinterpret scientific data, and reflect biases embedded in their training data. If used uncritically, they could propagate misinformation and skew research findings.

The challenge of bias is not to be underestimated. LLM-AIs are trained on the corpus of material currently available on the web, and they reflect all the biases of the web: who creates the content, what content they create, and so on.

Furthermore, while tools like DeepSeek reduce cost barriers, commercial AI models still pose a financial challenge. LMIC institutions will need to negotiate sustainable access to AI tools or risk remaining locked out of their benefits—particularly the leading-edge models. The worst outcome would be a scenario where high-income countries use AI to accelerate their research dominance while LMICs struggle to afford the very tools that could democratise access.

A Strategic Approach

To ensure LLM-AIs build rather than undermine research capacity in LMICs, they must be integrated strategically and equitably. Training researchers and students in AI literacy is paramount. Knowing how to ask the right questions, validate AI outputs, and integrate results into research workflows is essential. This is not a difficult task, but it takes time and effort, like all learning. The LLM-AIs can help with the task—effectively bootstrapping the learning curve.

Rather than replacing traditional research capacity building, LLM-AIs should be embedded into existing frameworks. MOOCs, mentorship programs, and research fellowships should incorporate LLM-AI-based tutoring, iterative feedback, and language support to enhance—not replace—human mentorship. The focus should be on areas where LLM-AI can offer the greatest immediate impact, such as brainstorming, editing, grant writing support, statistical assistance, and multilingual research dissemination.

Institutions in LMICs should also push for local, ethical LLM-AI development that considers regional needs. This push is easier said than done, particularly in a world of fracturing multilateralism. However, appropriately managed, LLM-AI models can be adapted to recognise and integrate local research priorities rather than merely reinforcing an existing scientific discourse. The fact that a research question is of no interest in high-income countries does not mean it is not critically urgent in an LMIC context.

Finally, securing affordable and sustainable access to AI tools will be essential. Governments, universities, and research institutions must lobby for cost-effective AI licensing models or explore open-source alternatives to prevent another digital divide. Disunited lobbying efforts are weak, but together, across national boundaries, they could have significant power.

An Equity Tipping Point

The LLM-AI revolution is a key juncture for building research capacity in LMICs. Harnessed correctly, LLM-AIs could break down long-standing barriers to participation in science, allowing LMIC researchers to compete on (a more) equal footing. The rise of models like DeepSeek suggests a future where AI is not necessarily a privilege of the few but a democratised resource for the many.

Fair access will not happen automatically. Without deliberate, ethical, and strategic intervention, LLM-AIs could reinforce existing research hierarchies. The key to harvesting the benefits of the technology lies in training researchers, integrating LLM-AIs into programs to build research capacity and securing equitable access to the tools. Done well, LLM-AIs could be a transformative force, not just in scaling research capacity but in redefining who gets to lead global scientific discovery.

LLM-AIs offer an enormous opportunity. They could either empower LMIC researchers to chart their own scientific futures, or they could become another tool to push them further behind.


Acknowledgment: This blog builds upon insights from a draft concept note developed by me (Daniel D. Reidpath), Lucas Sempe, and Luciana Brondi from the Institute for Global Health and Development (Queen Margaret University, Edinburgh), and Anna Thorson from the TDR Research Capacity Strengthening Unit (WHO, Geneva). Our work on AI-driven research capacity strengthening in LMICs informed much of the discussion presented here.

The original draft concept note is accessible here.

US Aid: Strategic Transactionalism

The U.S. Government is showing all the compassion of a loan shark, and the Rubio Rule rules—”What’s in it for me?” Tragically, any international social capital the U.S. built over the past 80 years has been torched in a bonfire of pointless cruelty.

Yesterday, Politico published a draft document obtained from a government aide describing a revamped USAID. It is entirely about the U.S.—safer, stronger, more prosperous—with the benefits to others only arising en passant, if at all.

According to the document, international assistance will focus on investments that deliver “first-order benefits back home [in the U.S.]”. That is, there must be an immediate, directly attributable gain to the U.S. from its humanitarian “investment”. This is in marked contrast to the decades of successful soft power developed by the U.S. after World War II.

Global Health would sit in a new agency for International Humanitarian Assistance (IHA).

“By responding rapidly to natural disasters, preventing famines, containing disease outbreaks, and securing peace, IHA would demonstrate American values, prevent instability that could threaten our interests, distinguish ourselves from our geo-political adversaries (such as China), enhance U.S. leadership on the global stage, and increase safety at home.”

That sounds great! I wonder what those American values are? They are not the values of compassion or empathy, nor the values established by the U.S. in the international human rights instruments. Those international values established by the U.S. are the ones that the Trump Administration has “put through the wood chipper”.

The values must be those of “strategic transactionalism”. This is my newly minted term that refers to the idea that no agreement with the U.S. can be trusted because it has shown itself to have no regard for such things. Agreements, contracts, and treaties exist only to the extent that the administration chooses to honour them. Here today, gone tomorrow.

What’s in it for me?

According to the document, “in all cases, countries would have to demonstrate high levels of ‘commitment’ to be eligible for any U.S. assistance engagement”.

“High levels of ‘commitment’” is code in the world of crime bosses and authoritarian leaders for absolute subjugation. A country will do what it is told, when it is told—without question. Give up resources. Cede territory. Treat some people as less worthy than others.

“Success would be measured by concrete metrics: lives saved, outbreaks of infectious diseases contained, pandemics prevented, famines averted, and measurable increases in positive perception of the United States in emerging markets.”

The irony is that metrics like “lives saved” are exactly what were being measured before the conflagration. The new metric appears to be “measurable increases in positive perception of the United States in emerging markets”.

Rest assured, any positive perception of the U.S. is now only a form of “strategic transactionalism”. It is performative, because money is still money, but anything else from the U.S. risks bullying, bluster, and betrayal.

Image: the Star Trek captain Jean-Luc Picard as the Borg character Locutus.

Resistance is necessary

I complain. A lot. I am not a happy person. But you will never die wondering what I thought or where I stood. Still, complaining isn’t enough, and whinging can feel futile.

The Borg and the rising authoritarian states of the 21st century want you to believe that “resistance is futile.” It isn’t. Resistance is not only necessary; resistance is an obligation.

Small acts of everyday resistance can raise the costs of authoritarianism so high that the system collapses. In the late Soviet Union, acts of passive resistance—from workers deliberately slowing down production to citizens openly defying censorship laws—contributed to the erosion of state control. These acts of everyday resistance helped to chip away at the crumbling foundations. Authoritarian regimes rely on compliance to function. When enough people withdraw their cooperation, the system becomes so grindingly inefficient and ineffective that inefficiency turns into paralysis, and paralysis into collapse. The unwillingness of the people to work in the interests of an illegitimate state is that state’s undoing.

Small acts of everyday resistance need not rise to criminality. There are ways of resisting that work, that keep the pressure up, and that allow you to control your level of exposure.

The power of the authoritarian state does not lie in compliance alone. It also lies in isolation—your sense of being alone in your unhappiness. Why do you think the Chinese state is so quick to remove online complaints and hide protests? The protest itself is not the problem; the problem is its effect: letting others know they are not alone in their unhappiness. And if you do not feel alone, you are also more likely to engage in small acts of everyday resistance.

Work to rule is a classic form of everyday resistance. This tactic has been historically effective in labour movements, such as the bureaucratic slowdowns under oppressive regimes, where workers deliberately followed every regulation to the letter to hinder authoritarian efficiency. Do your job. To the letter. No more. No less. When only one person works to rule, they are a miserable, unhelpful arse. When large numbers of people work to rule, unhappiness shows. It is palpable. In a government department that is engaging in immoral and cruel behaviour (“within the law”), you can slow it down, throw sand in the gearbox, and make it less cruel by being less effective and less efficient.

In the U.S., ICE agents could do their jobs—badly. Administrative staff supporting ICE agents can slow things down by moving paper at an excruciatingly necessary pace. The word “expedite” should be struck from the vocabulary.

Singapore in the late ’80s and ’90s was a highly (overly) regulated society. Many would say it has persisted. But in the ’90s, chewing gum became a tool of everyday resistance. People would stick it over the door sensors on the MRT trains. The doors couldn’t close, and it would bring the system to a grinding halt. The act was small, non-specific in its target, and (back then) unidentifiable.

Posters, protests, badges, public art, and internet memes have all been used to demonstrate everyday resistance. Remembering when the state wants to forget or reimagine a truth is a powerful corrective. Archive the truth on the internet.

In a digital age, careful choices about how and when to use devices, credit cards, and online accounts can disrupt data collection and tracking. Using burner phones where you can get them, paying with cash instead of cards, and setting up anonymous online accounts are small but effective ways to limit surveillance and maintain privacy. Resistance is not about criminality; it is about the right to privacy, the freedom to think, and the quiet power of refusing to comply—to engage in cruelty. Even small acts of everyday resistance remind others they are not alone.

It is possible to resist and chew gum at the same time.

Resources:

If you want some ideas, have a look at these two.