Category Archives: Epidemiology

The study of the causes and distribution of disease, and a methodological branch of the health sciences.

Software for cost-effective community data collection

In 2012 we started enumerating a population of about 40,000 people in the five mukim (sub-districts) of the District of Segamat, in the state of Johor in Peninsular Malaysia. This marked the start of data collection for establishing a Health and Demographic Surveillance Site (HDSS), grandly called the South East Asia Community Observatory (SEACO).

When establishing SEACO we had the opportunity to think, at length, about how we should collect individual and household data. Should we use paper-based questionnaires or Android tablets? Should we use a commercial service or open source software? Once the data were collected, how would we move them from paper or tablets into a usable database? One of the real challenges in thinking this through was that an HDSS involves the longitudinal follow-up of individuals, households, and communities. Whatever data collection system we chose, therefore, had to simplify the linking of data about Person_1 at Time_1 with the data about Person_1 at Time_2.
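
Whatever the platform, that linkage requirement boils down to a stable person identifier that persists across data collection rounds. The sketch below is purely illustrative (it is not SEACO's data model; the IDs and fields are invented), but it shows the kind of join the system has to make easy.

```python
# A minimal sketch (not SEACO's actual system) of the linkage requirement:
# each person keeps a persistent ID, so a round-2 record can be joined
# back to the round-1 record for the same individual.
import pandas as pd

round_1 = pd.DataFrame({
    "person_id": ["P001", "P002", "P003"],
    "household_id": ["H01", "H01", "H02"],
    "age": [34, 7, 61],
})

round_2 = pd.DataFrame({
    "person_id": ["P001", "P002", "P004"],  # P003 lost to follow-up, P004 new
    "household_id": ["H01", "H01", "H03"],
    "age": [35, 8, 28],
})

# An outer join keeps people seen in either round; suffixes mark the time point.
linked = round_1.merge(round_2, on="person_id", how="outer",
                       suffixes=("_t1", "_t2"))
print(linked)
```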

I eventually settled on OpenDataKit (ODK). ODK is a marvellous piece of software developed by a team of researchers at the University of Washington; it runs on Android tablets and is released under an open source license. We hacked the original codebase to allow the encryption of data on the tablet (encryption later became a mainstream option in ODK), I wrote a small Python script for downloading the data from the tablets, and the IT manager wrote a PHP script to integrate the data with a MySQL database. We managed the entire process from collection to storage, and it worked extremely well. I hate the idea of using proprietary software if I don't have to, and when we set up SEACO we decided that, as much as possible, we would use open source software so that others could replicate our approach.
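
Our original scripts are not reproduced here, but the shape of the pipeline was roughly: pull the finalised form instances off the tablet, parse the XML, and stage the records for loading into the database. The sketch below is an assumption-laden illustration rather than our actual code: it assumes (as older versions of ODK Collect did) that finalised forms are stored as XML files under an odk/instances directory on the device, and it stages the flattened records as CSV rather than inserting them into MySQL.

```python
# Illustrative sketch only: pull ODK Collect instance XML files from a tablet
# with adb, flatten each form into a row, and stage the rows as CSV for
# later loading into a database. Paths and field handling are assumptions.
import csv
import subprocess
import xml.etree.ElementTree as ET
from pathlib import Path

local_dir = Path("pulled_instances")
local_dir.mkdir(exist_ok=True)

# Copy the instances directory from the connected tablet (older ODK Collect layout).
subprocess.run(["adb", "pull", "/sdcard/odk/instances/", str(local_dir)], check=True)

rows = []
for xml_file in local_dir.rglob("*.xml"):
    root = ET.parse(xml_file).getroot()
    # Flatten the top-level elements of the submission into a field -> value dict.
    rows.append({child.tag: (child.text or "") for child in root})

# Stage as CSV; in practice this step would insert into MySQL instead.
fieldnames = sorted({key for row in rows for key in row})
with open("staged_submissions.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```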

 

A SEACO data collector using an Android tablet with ODK to complete a household census, 2012

Recently we moved away from ODK to a proprietary service: SurveyCTO. Unlike ODK, we have to pay for it, but for reasons I will go into, it has thus far been worth the move.

ODK did not do exactly what we needed, and this meant that the IT team regularly made adjustments to the codebase (written in Java). The leading hacker on the team moved on. That left us short-handed, and without anyone with his familiarity with the ODK codebase. I was torn between trying to find a new person who could take on the role of ODK hacker and moving to proprietary software. The final decision rested on a few factors, factors that are worth keeping in mind should the question arise again. First, our operation had grown quite large, with multiple projects going on at any one time, and we needed a full-time ODK person. SurveyCTO maintained most of the functionality we already had, and it also offered some additional features that were useful for monitoring the data as they came in and for managing access to the data. Second, the cost of using SurveyCTO was considerably lower than the staff costs of an in-house developer. We would lose the capacity for some of our de novo development, but gain a maintained service at a fraction of the cost.

If I had more money, my preference would be to maintain the capacity for in-house development. If I were only doing relatively small or one-off cross-sectional studies, I would use ODK without hesitation. For a large, more complex operation like ours, a commercial service made economic and functional sense.

One of the other services I considered was Magpi. At the time I made the decision, it was more expensive than SurveyCTO for our needs. If, however, you are just beginning to look at the problem, you should look at all the options; I am sure there are now other providers we did not consider.

Fat on the success of my country

When I first visited Ghana in the early 1990s, there was a very noticeable relationship between BMI and wealth. Rich people were far more likely to be overweight or obese than poor people. That visit took place about ten years after the 1982-1984 famine. Some of the roots of the famine lay in natural causes resulting in crop failure, some lay in local and regional politics, and it was small children who bore the brunt of it. Less than ten years after the famine, it was perhaps unsurprising to see that (on average) the thinnest were the poorest and the fattest were the richest.

When I was working in Australia in the early 2000s, however, the relationship appeared to be exactly the opposite: the poorest were more likely to be overweight or obese, and the wealthiest to be of normal weight. This observation was certainly borne out at an ecological level when my colleagues and I found an unmistakable relationship between area-level socioeconomic disadvantage and obesogenic environments: fast food chain “restaurants” were more likely to be found in poorer areas.

So which is it? Are the poor more likely to be overweight and obese, or is it the rich? One of the challenges in working out this relationship is that it appears to differ between countries. Neuman and colleagues conducted a multilevel study of low- and middle-income countries (LMICs) looking at this very problem using Demographic and Health Surveys (DHS) data. They found an interaction between country-level wealth, individual-level wealth, and BMI. Unfortunately, the study was limited to LMICs because the DHS does not operate in high-income countries. While it would be tempting to extrapolate the interaction into high-income countries, without the data it would just be a guess.
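
To make the idea of a cross-level interaction concrete, the sketch below shows the general form such a model can take: a random intercept for country, with the individual-level wealth effect on BMI allowed to vary with country-level income. This is not Neuman and colleagues' actual specification, and the file and column names are invented.

```python
# Illustrative sketch of a multilevel model with a cross-level interaction:
# individual BMI as a function of household wealth quintile, country income
# (log GNI), and their interaction, with a random intercept for country.
# This is not Neuman et al.'s specification; column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("individual_bmi_wealth.csv")  # hypothetical analysis file

model = smf.mixedlm(
    "bmi ~ C(wealth_quintile) * log_gni",   # cross-level interaction term
    data=df,
    groups=df["country"],                   # random intercept by country
)
result = model.fit()
print(result.summary())
```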

We don’t have the definitive answer, but a recent paper by Mohd Masood and me, based on his PhD research, provides some nice insights into the issue. We were able to bring together data on 206,266 individuals in 70 low-, middle- and high-income countries using 2003 World Health Survey (WHS) data. The WHS data are now getting a little old, but it was the only dataset we knew of that provided BMI and wealth measures from a wide sample of countries, collected with a consistent methodology over a similar period of time.

 

Mean BMI of the five quintiles of household wealth in countries ranging from the poorest to the richest (GNI-PPP). [https://doi.org/10.1371/journal.pone.0178928]

The analysis showed that as country-level wealth increased, mean BMI increased in all wealth groups, except the very wealthiest group.  The mean BMI of the wealthiest 20% of the population declined steadily as the wealth of the country increased.  In the wealthiest countries, the mean BMI converged for the poorest 80% of the population around a BMI of 24.5 (i.e., near the WHO cut-off for overweight of 25).  The wealthiest 20% had a mean BMI comfortably below that, around 22.5.
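
The descriptive pattern behind the figure is essentially a two-way summary: mean BMI within each household-wealth quintile, stratified by country income. A minimal sketch of that computation is below; it is not the paper's code, and the extract and column names are assumptions.

```python
# Illustrative sketch: mean BMI by household-wealth quintile across bands of
# country income (GNI-PPP). Not the paper's code; column names are assumptions.
import pandas as pd

df = pd.read_csv("whs_2003_extract.csv")  # hypothetical WHS analysis file

# Band the countries into five groups by national income (GNI-PPP).
country_gni = df.groupby("country")["gni_ppp"].first()
bands = pd.qcut(country_gni, q=5,
                labels=["poorest", "low", "middle", "high", "richest"])
df["gni_band"] = df["country"].map(bands)

# Mean BMI within each household-wealth quintile, by country-income band.
summary = (df.groupby(["gni_band", "wealth_quintile"], observed=True)["bmi"]
             .mean()
             .unstack("wealth_quintile")
             .round(1))
print(summary)
```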

It is obviously not inevitable that, as the economic position of countries improves, everyone except the very richest puts on weight. There are thin, poor people and fat, rich people living in the wealthiest of countries. Nonetheless, the data do point to structural drivers creating obesogenic environments. My colleagues and I had argued, at least in the context of Malaysia, that the increasing prevalence of obesity was an ineluctable consequence of development. The development agenda pursued by the government of the day decreased physical activity, promoted a sedentary lifestyle, and did nothing to moderate the traditional fat-rich, simple-carbohydrate diet associated with the historically rural lifestyle of intensive agriculture.

We really need more data points (i.e., a repeat of the WHS) to try to tease out the effect of economic development on obesity in the poorest to the richest quintiles of the population. I suspect, however, that countries need to think more deeply about what it is they are pursuing (for their population) when they pursue national wealth.


Guidelines for the reporting of COde Developed to Analyse daTA (CODATA)

I was reviewing an article recently for a journal in which the authors referenced a GitHub repository for the Stata code they had developed to support their analysis. I had a look at the repository. The code was there in a complex hierarchy of nested folders.  Each individual do-file was well commented, but there was no file that described the overall structure, the interlinking of the files, or how to use the code to actually run an analysis.

I have previously published code associated with some of my own analyses. The code for a recent paper on gender bias in clinical case reports was published here, and the code for the Bayesian classification of ethnicity based on names was published here. None of my code had anything like the complexity of the code referenced in the paper I was reviewing. It did get me thinking, however, about how the code for statistical analyses should be written. The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network has 360 separate guidelines for reporting research. These cover everything from randomised trials and observational studies through to diagnostic studies, economic evaluations, and case reports. There is nothing on the reporting of code for the analysis of data.

On the back of the move towards making data available for re-analysis, and the reproducible research movement more generally, it struck me that guidelines for structuring code for simultaneous publication with articles would be enormously beneficial. I started to sketch the idea out on paper and to write it up as an article; ideally, I would be able to enrol some others as contributors. In my head, the code should have good metadata at the start describing the structure and interrelationship of the files. I now tend to break my code up into separate files, with one file describing the workflow (data importation, data cleaning, setting up factors, analysis) and separate files for each element of that workflow; a sketch of the kind of layout I mean is below. The analysis files are further divided with specific references to parts of the paper: “This code refers to Table 1”. I write the code this way for two reasons. It makes it easier for collaborators to pick it up and use it, and I often have a secondary, teaching goal in mind: if I can write the code nicely, it may persuade others to emulate the idea. Having said that, I often use fairly unattractive ways to do things, because I don’t know any better, and I sometimes deliberately break an analytic process down into multiple inefficient steps simply to clarify the process; this is the anti-Perl strategy.
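
Something like the following, entirely hypothetical, top-level script is what I mean by one file describing the workflow, with the work itself living in separate step files (all of the file names here are invented for illustration):

```python
# run_analysis.py -- hypothetical top-level workflow file (names are illustrative).
# Each step lives in its own script; this file documents the order and the
# interrelationship of the pieces, so a reader can see the whole pipeline.
import runpy

steps = [
    "01_import_data.py",    # read the raw data files
    "02_clean_data.py",     # drop or repair problem records
    "03_setup_factors.py",  # recode variables, set up factors
    "04_table1.py",         # "This code refers to Table 1"
    "05_models.py",         # regression models reported in the Results
]

for step in steps:
    print(f"Running {step} ...")
    runpy.run_path(step)    # execute each step script in order
```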

I then started to review the literature and stumbled across a commentary written by Nick Barnes in 2010 in the journal Nature. He has completely persuaded me that my idea is silly.

It is not silly to hope that people will write intelligible, well-structured, well-commented code for the statistical analysis of data. It is not silly to hope that people will include this beautiful code in their papers. The problem with guidelines published by the EQUATOR Network lies in the way that journals require authors to comply with them. They become exactly the opposite of guidelines: they become rules, an ironic inversion of the observation by Geoffrey Rush’s character, Hector Barbossa, in Pirates of the Caribbean that the pirate code is “more what you’d call guidelines than actual rules”.

Barnes wrote, “I want to share a trade secret with scientists: most professional computer software isn’t very good.”  Most academics/researchers feel embarrassed by their code.  I have collaborated with a very good Software Engineer in some of my work and spent large amounts of time apologising for my code.  We want to be judged for our science, not for our code.  The problem with that sense of embarrassment is that the perfect becomes the enemy of the good.

The Methods sections of most research articles make fairly vague allusions to how the data were actually managed and analysed. They may refer to statistical tests and theoretical distributions, but for a reader to move from that to a re-analysis of the data is often not straightforward. The actual code, however, explains exactly what was done: “Ah! You dropped two cases, collapsed two factors, and used a particular version of an algorithm to perform a logistic regression analysis. And now I know why my results don’t quite match yours.”
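
The sort of thing I mean is below: a few explicit lines (purely illustrative, with invented variable and file names) record the analytic decisions that a Methods section would gloss over.

```python
# Purely illustrative: the code records analytic decisions that Methods
# sections usually gloss over. Variable and file names are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")

# "You dropped two cases": exclusions are explicit and countable.
df = df[df["age"].notna() & (df["bmi"] > 10)]

# "You collapsed two factors": the recode is visible, not just described.
df["smoking"] = df["smoking"].replace({"former": "ever", "current": "ever"})

# "A particular version of an algorithm": the model call pins down exactly
# which estimator and specification produced the published estimates.
model = smf.logit("obese ~ C(smoking) + age + C(sex)", data=df).fit()
print(model.summary())
```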

It would be nice to have an agreed set of guidelines for the reporting of COde Developed to Analyse daTA (CODATA). It would be great if some authors followed the CODATA guidelines when they published. But it would be even better if everyone published their code, no matter how bad or inefficient it was.

 

Babies have less than a 1 in 3 chance of recovery from a poor 1-minute Apgar score

We recently completed a study of 272,472 live, singleton, term births without congenital anomalies recorded in the Malaysian National Obstetrics Registry (NOR). We wanted to know what proportion of births had a poor 1-minute Apgar score (<4), and the likelihood that those babies would recover (an Apgar score ≥7) by 5 minutes.

As we noted in the paper:

While the Apgar score at 5 minutes is a better predictor of later outcomes than the Apgar score at 1 minute, there is a necessary temporal process involved, and a neonate must pass through the first minute of life to reach the fifth. Understanding the factors associated with the transition from intrauterine to extrauterine life, particularly for neonates with 1 min Apgar scores <4, has the potential to improve care.

Surprisingly, to me at least, we could find no research looking at that 1-minute to 5-minute transition. Ours was a first.

From the 270,000+ births, you can see (Figure 1) that the probability of a 5-minute Apgar score ≥7 rises dramatically as the 1-minute Apgar score increases. The relationship between the 1-minute Apgar score, from 1 through 6, and the chance of a 5-minute Apgar score ≥7 is almost a straight line.

Fig 1: The probability (with 95% CI) of a 5-minute Apgar score ≥7, given each Apgar score at 1 minute

A 1-minute Apgar score of 6 almost guarantees a 5-minute Apgar score ≥7; in contrast, a 1-minute Apgar score of 3 carries only a 50% chance of recovery, and a 1-minute Apgar score of 1 less than a 10% chance.
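
The calculation behind Figure 1 is straightforward: for each 1-minute score, the proportion of those babies reaching a 5-minute score ≥7, with a binomial confidence interval. The sketch below is not the published analysis; the registry extract and column names are assumptions.

```python
# Illustrative sketch: probability of a 5-minute Apgar score >=7 for each
# 1-minute Apgar score, with 95% binomial confidence intervals.
# Not the published analysis; file and column names are assumptions.
import pandas as pd
from statsmodels.stats.proportion import proportion_confint

births = pd.read_csv("nor_births.csv")  # hypothetical registry extract

births["recovered"] = (births["apgar_5min"] >= 7).astype(int)

for score, grp in births.groupby("apgar_1min"):
    n = len(grp)
    k = grp["recovered"].sum()
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"1-min Apgar {score}: P(5-min >=7) = {k / n:.2f} ({lo:.2f}-{hi:.2f}), n={n}")
```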

Fortunately, only 0.6% of births had poor Apgar scores (<4). The type of delivery (Caesarean section or vaginal delivery) and the staff conducting the delivery (doctor or midwife) were both significantly associated with the chance of recovery. The challenge is working out the causal order. Do certain kinds of delivery cause poor recovery, or are babies who are likely to recover poorly delivered in particular ways? Does the training of doctors or midwives exacerbate or reduce the risk of poor recovery, or are babies who are likely to recover poorly delivered by particular personnel?

Our study cannot answer these questions, but it does raise interesting points for future studies of actual labour-room practice, questions not easily answered with registry-type data.