
“AI Wrote That!”

“Some run from brakes of ice, and answer none; And some condemned for a fault alone.” [Measure for Measure, Act 2, Scene 1]

If only we could all write like Shakespeare: sonorous, timeless, replete with metaphor and meaning. Instead, we have AI slop swilling around the internet. Finding something written by a human, something genuine, something worth reading, is like trying to pick out the orations of Cicero from a sports crowd roaring for a touchdown. If you let yourself, you could drown in that cacophony.

The appearance of generative AI, with its effectively infinite capacity to…well…generate, has meant that you, poor reader, are now faced with the literary equivalent of a deli full of lunchmeat—homogenised words with colouring and preservatives.

We need better ways of writing. We need to return to the old ways—a kind of writing where the artist, steeped in craft, can mould and form a narrative or argument and render it in a single draft. I am thinking here, at best, of a cuneiform tablet. But I would settle for ink, quill and vellum. That is the true measure of the art.

We blame AI, but things really started to go wrong in the late 19th Century. The combination of wood-pulp technology and the Fourdrinier machine made paper cheap and available. And as paper became more affordable, thinking got lazier. Loose, ill-considered mutterings and on-the-fly musings could now be committed to paper and reworked through multiple drafts. There was no allegiance to de novo refined precision.

László Bíró, inventor of the ballpoint pen (patented 1938), and Marcel Bich, who mass-produced it from 1950, need to shoulder some of the blame. Even with the ready availability of paper, the blotches and smudgings of the maladroit had kept many wannabe writers out of the market. Some thought they had good ideas, but manual dexterity was a solid benchmark for well-constructed prose.

The manual typewriter became a ubiquitous domestic item in the 1960s. Liquid Paper had already been invented, which meant we could all become monkeys at the keyboard, randomly pecking in the hope of producing Shakespeare. These were followed in rapid succession by the electric typewriter and the electric typewriter with correction tape.

Between 1978 and 1983, authorship was no longer bound to paper. WordStar, WordPerfect, and Microsoft Word, running on personal computers, freed the illiterate to create everything from a letter to Grandma, to a eulogy, to a first novel. Effort and thought were gone. "Writing" became a mindless process of rinse-and-repeat: spellcheck, grammar check, a thesaurus (for the truly illiterate—or, as I like to call them, the analphabetic), and endless "suggestions".

And here we are—2025. Editors are inundated with crap because everyone is now a writer.

Claude, write me a bawdy limerick proving the infinity of primes.

A strumpet proved primes never cease
By shagging each one for a piece,
She’d finish the set,
Find one larger yet,
Her clients increased without peace.

OK! It's not Shakespeare. But it is a curiosity—a two-minute amusement. It's also worth thinking about how that limerick came to exist. The GenAIs are not monkeys at a typewriter. They are constrained. They respond to prompts. The outputs are not random. You might get lucky, and one of the generative AI engines will immediately produce a limerick worth two minutes of your life. The chances are, however, that you will get dross; or it will be a proof, but it won't be bawdy; or it will be bawdy, but it won't be a proof. You will need to go back and forth with the AI, refining, editing, and selecting. It was your idea—a bawdy proof. You refined and selected. For a five-line limerick, it might not take much time and effort, but it does require some—and that process is creative.
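For anyone curious, the mathematics the limerick compresses is Euclid's classic argument, and it fits in a few lines. A sketch:

Suppose $p_1, p_2, \ldots, p_n$ were all the primes, and let
\[
N = p_1 p_2 \cdots p_n + 1 .
\]
No $p_i$ divides $N$ (each leaves remainder 1), so any prime factor of $N$ lies outside the list: "one larger yet". The supposed complete set can never be finished, and the primes never cease.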

When photography first appeared on the scene in the second half of the 19th Century, it was seen as the end of painting, because all painting was an attempt to reproduce reality perfectly (Not!), and all photography was the perfect reproduction of reality (also Not!). Photography is now accepted as an art form, though not universally. The technology, however, is mechanical, and… where is the art?

I heard a story told of the renowned art photographer Robert Mapplethorpe. A woman commissioned him to take her photograph. He took dozens and dozens of photos on the day. When the woman returned some weeks later to receive her portrait, she was not entirely happy with it and asked Mapplethorpe if she could see the other photos taken on the day. He refused. The other photographs were not "Mapplethorpes", he explained.

The production of the art might rely on a mechanical device—but the composition, the lighting, the post-production, and, most importantly, the aesthetic choice are entirely in the hands of the artist. Mapplethorpe might have been able to render a portrait in a fraction of the time it would take to paint the same picture—that is a matter of medium, however, not artistic merit.

If Shakespeare be the measure of literary art, then, Houston, we have a problem. Who in 2025 knows what that line from Measure for Measure means: “Some run from brakes of ice, and answer none; And some condemned for a fault alone”?

The Bard himself is unintelligible to the modern reader, and yet he is rarely, if ever, translated into modern English. Translation is treated as an affront to the author as artist, which is ironic, because Shakespeare almost certainly would have embraced the idea.

If he were translated, we might get any of the following three forms. There is the poetic and adherent: "Some hide in icy coverts, shun the call; and some are judged for but a single fall". There is the plainer meaning: "The guilty hide and prosper; the unlucky answer once and fall". And there is the prosaic: "Some people evade justice entirely by hiding and refusing to answer charges, while others are condemned for committing just one offence".

The problem with AI is not that working closely with it cannot produce things of merit and worth: curated, thoughtful, and illuminating—things artistic and authored. The problem is the volume. We are looking for grains of black sand on a shore of white sand.

To judge "AI Wrote That!" as a dismissive and condemnatory act is as useful as looking at a Mapplethorpe and declaring, "That's a Photograph!"


PS: AI did not write this, except where it did.

Building Research Capacity with AI

Over 25 years ago, the "10/90 gap" was coined to illustrate the global imbalance in health research: only 10% of global research addressed the health problems of the regions where 90% of preventable deaths occurred. Since then, efforts to improve research capacity in low- and middle-income countries (LMICs) have made important gains; nonetheless, significant challenges remain. A quarter of a century later, there are still too few well-trained researchers in LMICs, and research infrastructure and governance remain inadequate. The scope of the problem increased dramatically in 2025, when North American and European governments precipitously cut overseas development assistance (ODA, i.e., foreign aid). That aid—however inadequate—had supported improvements in research capacity.

Traditional approaches to improving research capacity, such as training workshops and degree scholarship programs, have gone some way to addressing the expertise challenge. However, they fall short because they are not scalable. The relatively recent introduction of massive open online courses (MOOCs), such as TDR/WHO's MOOCs in implementation research, goes a long way towards overcoming that scalability problem—at least in instruction-based learning. Nonetheless, for many LMIC researchers, major bottlenecks remain because of poor or limited access to mentorship, quick one-off advice, bespoke training, research assistance, and inter- and intra-disciplinary collaboration. The scalability problem can leave them at a persistent disadvantage compared with their high-income country counterparts. Research is not done well in isolation and ignorance.

The rise of large language model artificial intelligences (LLM-AIs) such as ChatGPT, Mistral, Gemini, Claude, and DeepSeek offers an unprecedented opportunity…and some additional risks. LLM-AIs are advanced AI models trained on vast amounts of text data to understand and generate human-like language. They are flexible, multilingual, and always available (24/7), offering researchers in LMICs immediate access to knowledge and assistance. If used correctly, LLM-AIs could revolutionise approaches to building research capacity and democratise access to skills, knowledge, and global scientific discourse. Many online educational providers already integrate LLM-AIs into their instructional pipelines as tutors and coaches.

Unfortunately, if LMICs cannot take advantage of the benefits of LLM-AIs, the 10/90 gap risks becoming further entrenched, or even wider.

AI as a Game Changer

Researchers in resource-limited settings can, for the first time, access an always-on, massively scalable assistant. By massively scalable, I mean that every researcher could have one or more decent, 24/7 research assistants for a monthly subscription of less than $20. LLM-AIs offer scalability and flexibility that traditional human research assistants cannot (and should not) match. However, they are not human and may not fully replicate a human research assistant's nuanced understanding and critical thinking—and they are certainly less fun to have a cup of coffee with. Furthermore, the effectiveness of LLM-AIs depends on the sophistication of the user, the complexity of the task, and the quality of the input the user provides.

I read a recent post on LinkedIn by a UCLA professor decrying the inadequacies of LLM-AIs. However, a quick read of the post revealed that the professor had no idea how to engage appropriately with the technology.

Unfortunately, like all research assistants, senior researchers, and professors, LLM-AIs can be wrong. Like all tools, one needs to learn how to use them with sophistication.

In spite of any inadequacies, LLM-AIs can remove barriers to research participation by offering tutoring on complex concepts, assisting with literature reviews and data analysis, and supporting the writing and editing of manuscripts and grant proposals.

Reid Hoffman, the AI entrepreneur, described on a podcast how he used LLM-AIs to learn about complex ideas. He would upload a research paper to the platform and ask, "Explain this paper as if to a 12-year-old". Hoffman could then "chat" with the LLM-AI about the paper at that level. Once comfortable with the concepts, he would ask the LLM-AI to "explain this paper as if to a high school senior". By iterating up in age and sophistication, he could use the LLM-AI as a personal tutor.
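A minimal sketch of that tutoring loop, assuming the OpenAI Python SDK (any provider's chat interface works the same way; the model name, file name, and prompts are illustrative, not Hoffman's actual workflow):

# Hoffman-style "iterating up": the same paper explained at rising levels.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
paper = open("paper.txt").read()  # the research paper to be tutored on

for level in ["a 12-year-old", "a high school senior", "a first-year PhD student"]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you can access
        messages=[
            {"role": "system", "content": "You are a patient personal tutor."},
            {"role": "user", "content": f"Explain this paper as if to {level}:\n\n{paper}"},
        ],
    )
    print(f"--- As explained to {level} ---")
    print(response.choices[0].message.content)

In interactive use, you would chat back and forth at each level before moving up; the loop above only shows the ladder.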

Researchers can also use LLM-AIs to support the preparation of scientific papers. This is already happening: an explosion of generically dull (and sometimes fraudulent) scientific papers is hitting the market. The explosion has delighted the publishing houses and created existential ennui among researchers. The problem is not the LLM-AIs—it lies in their utilisation, and it will take time for the paper-production cycle to settle.

While access to many LLM-AIs requires a monthly subscription, some, like DeepSeek, significantly lower cost and accessibility barriers by distributing "open-weights" models. Researchers can download these models freely and run them on personal or university computing infrastructure without paying a monthly subscription. Open weights make AI-powered research assistance viable for most LMIC research settings, and universities and research institutes can potentially lower the costs further.
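To make "open weights" concrete, here is a minimal sketch of running such a model locally, assuming the Hugging Face transformers library; the model name is illustrative, and in practice you would pick whatever open-weights model your hardware can hold:

# Download an open-weights model once, then query it locally with no
# per-request fees. Assumes the Hugging Face `transformers` library.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # illustrative open-weights model
)

prompt = "Explain, in plain terms, what confounding means in an epidemiological study."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])

A university could host one shared model on a departmental server, dropping the per-researcher cost to effectively zero.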

LLM-AIs allow researchers in LMICs to become less dependent on high-income countries for training and mentorship, shifting the balance towards scientific self-sufficiency. AI-powered tools could accelerate the development of a new generation of LMIC researchers, fostering homegrown expertise and leadership in globally relevant science. LMIC researchers are no longer constrained by the curriculum and interests of high-income countries and can develop contextually relevant research expertise.

The Double-Edged Sword

Despite its positive potential, the entry of LLM-AIs into the research world could have significant downsides. Without careful implementation, existing inequalities could be exacerbated rather than alleviated. High-income countries are already harnessing LLM-AIs at scale, integrating them into research institutions, project pipelines, training, and funding systems. LMICs, lacking the same level of investment and infrastructure, risk being left behind—again. The AI revolution could widen the research gap rather than close it, entrenching the divide between well-resourced and under-resourced institutions.

There is also a danger in how researchers use LLM-AIs. They are the cheapest research assistants ever created, which raises a troubling question: will senior researchers begin to rely on AI to replace the need for training junior scientists? If an LLM-AI can summarise the literature, draft proposals, and assist in the analysis, there is a real risk that senior researchers will neglect mentorship, training, and hands-on learning. Instead of empowering a new generation of LMIC researchers, LLM-AIs could be used as a crutch to maintain existing hierarchies. If institutions see LLM-AIs as a shortcut to productivity rather than an investment in building research capacity, the technology could stall the development of genuine human expertise.

Compounding these risks, AI is fallible. LLM-AIs can “hallucinate”, generating false information with complete confidence. They always write with confidence. I’ve never seen one write, “I think this is the answer, but I could be wrong”. They can fabricate references, misinterpret scientific data, and reflect biases embedded in their training data. If used uncritically, they could propagate misinformation and skew research findings.

The challenge of bias is not to be underestimated. LLM-AIs are trained on the corpus of material currently available on the web, and they reflect all the biases of the web—who creates the content, what content they create, and so on.

Furthermore, while tools like DeepSeek reduce cost barriers, commercial AI models still pose a financial challenge. LMIC institutions will need to negotiate sustainable access to AI tools or risk remaining locked out of their benefits—particularly the leading-edge models. The worst outcome would be a scenario where high-income countries (HICs) use AI to accelerate their research dominance while LMICs struggle to afford the very tools that could democratise access.

A Strategic Approach

To ensure LLM-AIs build rather than undermine research capacity in LMICs, they must be integrated strategically and equitably. Training researchers and students in AI literacy is paramount. Knowing how to ask the right questions, validate AI outputs, and integrate results into research workflows is essential. This is not a difficult task, but it takes time and effort, like all learning. The LLM-AIs can help with the task—effectively bootstrapping the learning curve.

Rather than replacing traditional research capacity building, LLM-AIs should be embedded into existing frameworks. MOOCs, mentorship programs, and research fellowships should incorporate LLM-AI-based tutoring, iterative feedback, and language support to enhance—not replace—human mentorship. The focus should be on areas where LLM-AI can offer the greatest immediate impact, such as brainstorming, editing, grant writing support, statistical assistance, and multilingual research dissemination.

Institutions in LMICs should also push for local, ethical LLM-AI development that considers regional needs. This push is easier said than done, particularly in a world of fracturing multilateralism. However, appropriately managed, LLM-AI models can be adapted to recognise and integrate local research priorities rather than merely reinforcing an existing scientific discourse. The fact that a research question is of no interest in high-income countries does not mean it is not critically urgent in an LMIC context.

Finally, securing affordable and sustainable access to AI tools will be essential. Governments, universities, and research institutions must lobby for cost-effective AI licensing models or explore open-source alternatives to prevent another digital divide. Disunited lobbying efforts are weak, but together, across national boundaries, they could have significant power.

An Equity Tipping Point

The LLM-AI revolution is a key juncture for building research capacity in LMICs. Harnessed correctly, LLM-AIs could break down long-standing barriers to participation in science, allowing LMIC researchers to compete on (a more) equal footing. The rise of models like DeepSeek suggests a future where AI is not necessarily a privilege of the few but a democratised resource for the many.

Fair access will not happen automatically. Without deliberate, ethical, and strategic intervention, LLM-AIs could reinforce existing research hierarchies. The key to harvesting the benefits of the technology lies in training researchers, integrating LLM-AIs into programs to build research capacity and securing equitable access to the tools. Done well, LLM-AIs could be a transformative force, not just in scaling research capacity but in redefining who gets to lead global scientific discovery.

LLM-AIs offer an enormous opportunity. They could either empower LMIC researchers to chart their own scientific futures, or they could become another tool to push them further behind.


Acknowledgment: This blog builds upon insights from a draft concept note developed by me (Daniel D. Reidpath), Lucas Sempe, and Luciana Brondi from the Institute for Global Health and Development (Queen Margaret University, Edinburgh), and Anna Thorson from the TDR Research Capacity Strengthening Unit (WHO, Geneva). Our work on AI-driven research capacity strengthening in LMICs informed much of the discussion presented here.

The original draft concept note is accessible here.

A Christmas Story

In the last year of the reign of Biden, there was a ruler in Judea named Benyamin. He was a man of great cunning and greater cruelty.

In those days, Judea, though powerful, was a vassal state. Its strength came from alliances with distant empires. It wielded its might with a fierce arm and harboured a deep hatred for its neighbours. Benyamin, fearing the loss of his power, sought to destroy the Philistines on that small strip of land called Gaza, and claim it for himself.

For over four hundred and forty days and nights, he commanded his armies to bomb their towns and villages, reducing them to rubble. The Philistines were corralled, trapped within walls and wire, with no escape. Benyamin promised them safety in Rafah and bombed the people there. He offered refuge in Jabalia, and bombed the people there.

In Gaza, there was no safety and there was no food.

Even as leaders wept for the Philistines, they sold weapons to Benyamin and lent him money to prosecute his war. Thus, the world watched in silence as the Philistines endured great suffering. Their cries rose up to heaven, seemingly unanswered.

And so it came to pass, in the last days of the last year of Biden, there was a humble Philistine named Yusouf, born of the family of Dawoud. Before the war, Yusouf had been a mechanic. He worked hard each day fixing tyres and carburettors, changing brake pads and exhaust systems. And at the end of each day, he would return home to his young wife, Mariam. The same Mariam, you may have heard of her, who was known for her inexhaustible cheerfulness.

That was before the war. Now Mariam was gaunt and tired, and heavy with child.

On the night of the winter solstice, in a dream, a messenger came to Yusouf. “Be not afraid, Yusouf”, the messenger said. “Be not afraid for yourself, for the wife you love so very much, or for your son—who will change the world. What will be, will be and was always meant to be”. Yusouf was troubled by this dream, and found himself torn between wonder, happiness, and fear. Mariam asked him why he looked troubled, but he said nothing and kept his own counsel.

The following night the same messenger visited Mariam in her dreams. Mariam was neither afraid nor troubled. The next morning she had a smile on her face that Yusouf had not seen for so long he had almost forgotten it. “It is time, Yusouf”, she said. “We have to go to the hospital in Beit Lahiya.”

Yusouf was troubled. Long ago he had learned to trust Mariam, but his motorbike had no fuel and it was a long walk. Too far for Mariam, and they were bombing Beit Lahiya. He remembered the words of the messenger in his dreams, and he went from neighbour to neighbour. A teaspoon of fuel here, half a cup there. No one demanded payment. If they had any fuel, no one refused him. Having little, they shared what they had. These were the small acts of kindness that bind communities. Yusouf wept at their generosity.

When he had gathered enough fuel, he had Mariam climb on the bike. Shadiah, the old sweet seller who had not made a sweet in over a year and could barely remember the smell of honey or rosewater, helped her onto the back.

Yusouf rode carefully. He weaved slowly around potholes and navigated bumps. In spite of his care, he could feel Mariam tense and grip him tighter. And then the motorbike stopped. A last gasping jerk and silence. The fuel was spent.

The late afternoon air was cooling as he helped Mariam walk towards the hospital. When they arrived at the gate, a porter stopped them. "They're evacuating the hospital. You can't go in", the porter told them. Yusouf begged. "My wife, she is going to give birth," he told the porter—who could plainly see this for himself. The porter looked at Mariam and took pity. "You can't go in, but there is a small community clinic around the corner. It was bombed recently, but some of it, a room or two, is still standing. I'll send a midwife."

Yusouf gently guided Mariam to the clinic. He found an old mattress on a broken gurney, and a blanket. He laid the mattress on the floor and settled Mariam.

If there had been a midwife—if she had ever arrived… if she had ever got the porter's message—she would have been eager to retell the story of the birth. Sharing a coffee, with a date-filled siwa, she would have painted the picture. Mariam's face was one of grace. Yusouf anxiously held her hand. The baby came quickly, with a minimum of fuss, as if Mariam were having her fifth and not her first.

Yusouf quickly scooped up the baby as it began to vocalise its unhappiness with the shock of a cold Gaza night. He cut the cord crudely but effectively with his pocket knife. And it was only as he was passing the baby to Mariam that he looked confused. He did not have the son he was promised; he had a daughter. The moment was so fleeting that quantum physicists would have struggled to measure its breadth, and Yusouf smiled at the messenger's joke.

Because there was no midwife to witness this moment, we need to account for the witnesses who were present. There was a mangy dog with a limp, looking for warmth. He watched patiently and, once the birth was completed, found a place at Mariam's feet. There were three rats that crawled out of the rubble looking for scraps. They gave a hopeful sniff of the night air and sat respectfully and companionably on a broken chair. As soon as the moment passed, they disappeared into the crevices afforded by broken brick and torn concrete. Finally, there was an unremarkable cat. In comfortable fellowship, they all watched the moment of birth, knowing that tomorrow or the next day they would be mortal enemies again, but tonight there was peace.

"Nasrin", Yusouf whispered in Mariam's ear as he kissed her forehead. "We'll call her Nasrin." The wild rose that grows and conquers impossible places.

There was a photojournalist called Weissman, who heard from the porter that there was a very pregnant woman at the clinic. "She's about to pop", the porter said. Weissman hurried to the bombed-out clinic so that he could bear witness to this miracle in the midst of war.

He missed the birth. And when he arrived, he did not announce his presence. It seemed rude. An intrusion on a very private moment. It did not, however, stop him from taking photos for AAP.

He later shared those images with the world. Yusouf lay on the gurney mattress, propped against a half-destroyed wall. Mariam was lying against him, exhausted, eyes closed, covered in a dirty blanket. The baby Nasrin was feeding quietly, just the top of her head with its shock of improbably thick dark hair peeking out. Yusouf stared through the broken roof at the stars in heaven. The blackness of a world without electricity made resplendent. He looked up with wonderment and contentment on his face. He was blessed, he thought. No. They were blessed. The messenger was right.

As Weissman picked his way in the dark towards the hospital gate, where he had last seen the porter, he shared the same hope that he had seen on Yusouf’s face. New life can change things.

The night sky lit up, brightening his path to the hospital. He turned back and was awed by a red flare descending slowly over the remains of the clinic as if announcing a new beginning to the world. A chance for something different was born here today.

The explosion shook the ground, and Weissman fell. Cement and brick dust from where the clinic had stood rose sharply into the air. An avalanche of dust raced towards him.

UKRI got its A.I. policy half right

UKRI AI policy: Authors on the left, assessors on the right (image generated by DALL·E)

When UKRI released its policy on using generative artificial intelligence (A.I.) in funding applications this September, I found myself nodding along until I wasn't. Like many in the research community, I've been watching the integration of A.I. tools into academic work with excitement and trepidation. UKRI's approach, it turns out, is a puzzling mix of Byzantine architecture and modern chic.

The modern chic, the half they got right, is on using A.I. in research proposal development. By adopting what amounts to a “don’t ask, don’t tell” policy, they have side-stepped endless debates that swirl about university circles. Do you want to use an A.I. to help structure your proposal? Go ahead. Do you prefer to use it for brainstorming or polishing your prose? That’s fine, too. Maybe you like to write your proposal on blank sheets of paper using an HB pencil. You’re a responsible adult—we’ll trust you, and please don’t tell us about it.

The approach is sensible. It recognises A.I. as just one of the many tools in the researcher’s arsenal. It is no different in principle from grammar-checkers or reference managers. UKRI has avoided creating artificial distinctions between AI-assisted work and “human work” by not requiring disclosure. Such a distinction also becomes increasingly meaningless as A.I. tools integrate into our daily workflows, often completely unknown to us.

Now let’s turn to the Byzantine—the half UKRI got wrong—the part dealing with assessors of grant proposals. And here, UKRI seems to have lost its nerve. The complete prohibition on using A.I. by assessors feels like a policy from a different era—some time “Before ChatGPT” (B.C.) was released in November 2022. The B.C. policy fails to recognise the enormous potential of A.I. to support and improve human assessors’ judgment.

You're a senior researcher who's agreed to review for UKRI. You have just submitted a proposal of your own, using an A.I. to clean, polish, and improve the work. As an assessor, you are now juggling multiple complex proposals, each crossing traditional disciplinary boundaries (which is increasingly regarded as a positive). You're probably doing this alongside your day job, because that's how senior researchers work. Wouldn't it be helpful to have an A.I. assistant to organise key points, flag potential concerns, help clarify technical concepts outside your immediate expertise, act as a sounding board, or provide an intelligent search of the text?
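To make the idea concrete, here is a minimal sketch of such an assistant, again assuming the OpenAI Python SDK; the prompt, file name, and model are illustrative, and a real deployment would use a locally hosted model so that confidential proposals never leave the institution:

# A sketch of the reviewer-support tool the current policy forbids.
# It organises and flags; the scoring judgment stays human.
from openai import OpenAI

client = OpenAI()
proposal = open("proposal.txt").read()  # illustrative file name

instructions = (
    "You are supporting a human grant assessor. Do not score or rank. "
    "1) Organise the proposal's key points. "
    "2) Flag internal inconsistencies or unsupported claims for the human to check. "
    "3) List technical terms a reviewer from another discipline might want explained."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": proposal},
    ],
)
print(response.choices[0].message.content)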

The current policy says no. Assessors must perform every aspect of the review manually, potentially reducing the time they can spend on a deep evaluation of the proposal. The restriction becomes particularly problematic when considering international reviewers, especially those from the Global South. Many brilliant researchers who could offer valuable perspectives might struggle with English as a second language and miss some nuance without support. A.I. could help bridge this gap, but the policy forbids it.

The two-track policy leads to an ironic situation. Applicants can use A.I. to write their proposals, but assessors can't use it to support the evaluation of those proposals. It is like allowing Formula 1 teams to use bleeding-edge technology to design their racing cars while insisting that race officials use an hourglass and the naked eye to monitor the race.

Strategically, the situation worries me. Research funding is a global enterprise; other funding bodies are unlikely to maintain such a conservative stance for long. As other funders integrate A.I. into their assessment processes, they will develop best-practice approaches and more efficient workflows. UKRI will fall behind. This could affect the quality of assessments and UKRI’s ability to attract busy reviewers. Why would a busy senior researcher review for UKRI when other funders value their reviewers’ time and encourage efficiency and quality?

There is a path forward. UKRI could maintain its thoughtful approach to applicants while developing more nuanced guidelines for assessors. One approach would be a policy that clearly outlines appropriate A.I. use cases at different stages of assessment, from initial review to technical clarification to quality control. By adding transparency requirements, proper training, and regular policy reviews, UKRI could lead the way with approaches that both protect research integrity and embrace innovation.

If UKRI is nervous, they could start with a pilot program. Evaluate the impact of AI-assisted assessment. Compare it to a traditional approach. This would provide evidence-based insights for policy development while demonstrating leadership in research governance and funding.

The current policy feels half-baked. UKRI has shown it can craft sophisticated policy around A.I. use; the approach to applicants proves this. It needs to extend that same thoughtfulness to the assessment process. The goal is not to use A.I. to replace human judgment but to enhance it, allowing assessors to focus their expertise where it matters most.

This is about more than efficiency and keeping up with technology. It’s about creating the best possible system for identifying and supporting excellent research. If A.I. is a tool to support this process, we should celebrate. When we help assessors do their job more effectively, we help the entire research community.

The research landscape is changing rapidly. UKRI has taken an important first step in allowing A.I. to support the writing of funding grant applications. Now it’s time for the next one—using A.I. to support funding grant evaluation.