Category Archives: Biostatistics

A branch of statistics with a focus on biological systems — in my case almost always human health.

Donald Trump’s BMI: getting the measure of the man.

I find myself fascinated by a pointless lie because it is inescapably tragic. All it can do is diminish the person in the eyes of others. And this brings us to Donald Trump’s height. In January 2018, the Physician to the President, Ronny L. Jackson MD, asserted that Donald Trump was 6’3″ tall (1.90m). This is so unlikely to be true that it stretches credulity. There is no reason for Jackson to lie spontaneously about a patient’s height, and it seems probable that he was encouraged to add a few inches by the President himself.

When asked to self-report their height, both men and women in the US tend to overstate it. Burke and Carman have suggested that overstating height is motivated by social desirability — you can never be too tall. There is ample evidence of Donald Trump’s (misplaced) search for the socially desirable with respect to his hair, his tan, his ethnicity, his intelligence and now his height.

In 2018 we learnt that Donald Trump was officially not quite obese (body mass index (BMI) just under 30), and in 2019 he had nudged over the line into the obese range (BMI ≥ 30). Overstating height creates a problem in the calculation of BMI — which is mass in kilograms divided by the square of height in metres. Given that Donald Trump is likely shorter than 1.9m (6’3″), and probably closer to 1.854m (6’1″), this will have implications for whether he was really obese in 2018 (not just overweight, as stated by his Physician) and just how obese he probably is (Figure 1).

Figure 1: Donald Trump’s BMI in 2018 and 2019 given different assumptions about his height [R-code here].
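
For anyone who wants to check the arithmetic without opening the linked script, here is a minimal sketch in R. The heights are the two candidates discussed above; the weights are the widely reported figures from the two physicals (239 lb in 2018 and 243 lb in 2019), which I am treating as an assumption rather than something taken from the linked script.

```r
# BMI = mass in kilograms divided by the square of height in metres.
bmi <- function(mass_kg, height_m) mass_kg / height_m^2

weights_kg <- c(`2018` = 239, `2019` = 243) * 0.453592   # reported weights in pounds (my assumption)
heights_m  <- c(`6ft3` = 75, `6ft1` = 73) * 0.0254       # the two candidate heights, in inches

round(outer(weights_kg, heights_m, bmi), 1)
#>      6ft3 6ft1
#> 2018 29.9 31.5
#> 2019 30.4 32.1
```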

In 2018 Donald Trump was just below the obese category if and only if he was really 6’3″ (1.9m) tall. At any height less than that he was obese in 2018, and he is obese today. His most likely true height, given comparisons with others (cf. Barack Obama), is 6’1″, and this puts him comfortably in the obese range.

Misrepresenting one’s height does not create a problem if the lie is reserved for others — except perhaps in a political sense. Problems arise if one deludes oneself. Telling others that you are taller and healthier than you really are is one thing; if you lie to yourself you cannot properly manage your health.


Prevalence of sexual assault at Australian Universities is … non-zero.

A few days ago the Australian Human Rights Commission (AHRC) launched Change the course, a national report on sexual assault and sexual harassment at Australian universities led by Commissioner Kate Jenkins. Sexual assault and sexual harassment are important social and criminal issues, and the AHRC report is misleading and unworthy of the gravity of the subject matter.

It is a statistical case study in “how not to.”

The report was released to much fanfare, receiving national media coverage including TV and newspapers, and a quick response from universities. “At a glance …” the report highlights among other things:

  • 30,000+ students responded to the survey — remember this number, because (too) much is made of it.
  • 21% of students were sexually harassed in a university setting.
  • 1.6% of students were sexually assaulted in a university setting.
  • 94% of sexually harassed and 87% of sexually assaulted students did not report the incidents.

From a reading of the survey’s methodology, any estimates of sexual harassment/assault should be taken with a shovelful of salt, and should generate no response other than that of the University of Queensland’s Vice-Chancellor, Peter Høj: that any number greater than zero is unacceptable. What we did not have before the publication of the report was a reasonable estimate of the magnitude of the problem and, notwithstanding the media hype, we still don’t. The AHRC’s research methodology was weak, and it looks like they knew the methodology was weak when they embarked on the venture.

Where does the weakness lie?  The response rate!!!

A sample of 319,252 students was invited to participate in the survey. It was estimated at the design stage that between 10% and 15% of students would respond (i.e., 85–90% would not respond) (p.225 of the report). STOP NOW … READ NO FURTHER. Why would anyone try to estimate prevalence using a strategy like this? Go back to the drawing board. Find a way of obtaining a smaller, representative sample of people who will respond to the questionnaire.
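
To make “smaller but representative” concrete, here is a back-of-the-envelope sketch in R, assuming simple random sampling, near-complete response, and prevalences in the ballpark of the report’s headline figures. It is an illustration of scale, not a survey design.

```r
# Sample size needed to estimate a prevalence p to within +/- e at 95% confidence,
# assuming simple random sampling and a high response rate.
n_prevalence <- function(p, e, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  ceiling(z^2 * p * (1 - p) / e^2)
}

n_prevalence(p = 0.02, e = 0.005)  # ~3,012 for an assault-like prevalence of 2%
n_prevalence(p = 0.20, e = 0.010)  # ~6,147 for a harassment-like prevalence of 20%
```

Thousands, not hundreds of thousands; the hard part is the representativeness and the response rate, not the n.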

Giant samples with poor response rates are useless. They are a great way for market research companies to make money, but they do not advance knowledge in any meaningful way, and they are no basis for formulating policy. The classic example of a large sample with a poor response rate misleading researchers was the Literary Digest poll to predict the outcome of the 1936 US presidential election. They sent out 10 million surveys and received 2.3 million responses. By any measure, 2.3 million responses to a survey is an impressive number. Unfortunately for the Literary Digest, there were systematic differences between responders and non-responders. The Literary Digest predicted that Alf Landon (Who?) would win the presidency with 69.7% of the electoral college votes. He won 1.5% of the electoral college votes. This is a lesson about the US electoral college system, but it is also a significant lesson about non-response bias.

The Literary Digest had a 77% non-response rate; the AHRC had a 90.3% non-response rate. Who knows how the 90.3% who did not respond compare with the 9.7% who did? Maybe people who were assaulted were less likely to respond, and the number is a gross underestimate of assaults. Maybe they were more likely to respond, and it is a gross overestimate. The point is that we are neither wiser nor better informed for reading the AHRC report.
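
To see how damaging differential non-response can be, here is a toy simulation; the true prevalence and the response probabilities are invented purely for illustration.

```r
# Toy simulation of non-response bias (all numbers invented for illustration).
set.seed(2017)
N <- 319252                                       # invited sample, as in the AHRC survey
assaulted <- rbinom(N, 1, 0.05)                   # 'true' prevalence of 5%
p_respond <- ifelse(assaulted == 1, 0.05, 0.10)   # the affected respond at half the rate
responded <- rbinom(N, 1, p_respond) == 1

mean(assaulted)              # ~0.050 : the truth
mean(assaulted[responded])   # ~0.026 : the estimate you get from responders alone
```

Flip the two response probabilities and the estimate roughly doubles instead. With a 90% non-response rate there is simply no way of knowing which of these worlds you are in.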

Sadly, whoever produced that (terrible) estimate of the response rate was, even then, overly optimistic. The response rate was significantly lower than the worst-case scenario of 10% [response rate = 9.7%, 95% CI: 9.6%–9.8%].
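
That interval can be reproduced, at least approximately, from the published figures. I do not have the exact respondent count to hand, so I back one out of the quoted 9.7% rate; treat it as approximate.

```r
# Approximate reconstruction of the quoted confidence interval.
invited    <- 319252
responders <- round(0.097 * invited)       # ~30,967, backed out of the quoted rate

prop.test(responders, invited)$conf.int    # roughly 0.096 to 0.098
```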

In sharp contrast to the bad response rate of the AHRC study, the Crime Victimisation Survey (CVS) 2015–2016, conducted by the Australian Bureau of Statistics (ABS), had a nationally representative sample and a 75% response rate — fully completed! That’s a survey you could actually use for policy. The CVS is a potentially less confronting instrument, which may account for the better response rate. It seems more likely, however, that recruiting students by sending them emails is neither a sophisticated nor an adequate strategy.

Poorly conducted crime research is not merely a waste of money; it trivialises the issue. The media splash generates an illusion of urgency and seriousness, and the poor methodology means it can be quickly dismissed.

If there is a silver lining to this cloud, it is that the AHRC has created an excellent learning opportunity for students involved in quantitative (social) research.

Addendum

It was pointed out to me by Mark Diamond that a better ABS resource is the 2012 Personal Safety Survey, which tried to answer the question about the national prevalence of sexual assault. A crime victimisation survey is likely to receive a better response rate than a survey looking explicitly at sexual assault. I reproduce the section on sample size from the explanatory notes because it highlights the difference between a well-conducted survey and the pile of detritus reported by the AHRC.

There were 41,350 private dwellings approached for the survey, comprising 31,650 females and 9,700 males. The design catered for a higher than normal sample loss rate for instances where the household did not contain a resident of the assigned gender. Where the household did not contain an in scope resident of the assigned gender, no interview was required from that dwelling. For further information about how this procedure was implemented refer to Data Collection.

After removing households where residents were out of scope of the survey, where the household did not contain a resident of the assigned gender, and where dwellings proved to be vacant, under construction or derelict, a final sample of around 30,200 eligible dwellings were identified.

Given the voluntary nature of the survey a final response rate of 57% was achieved for the survey with 17,050 persons completing the survey questionnaire nationally. The response comprised 13,307 fully responding females and 3,743 fully responding males, achieving gendered response rates of 57% for females and 56% for males.


Guidelines for the reporting of COde Developed to Analyse daTA (CODATA)

I was reviewing an article recently for a journal in which the authors referenced a GitHub repository for the Stata code they had developed to support their analysis. I had a look at the repository. The code was there in a complex hierarchy of nested folders.  Each individual do-file was well commented, but there was no file that described the overall structure, the interlinking of the files, or how to use the code to actually run an analysis.

I have previously published code associated with some of my own analyses. The code for a recent paper on gender bias in clinical case reports was published here, and the code for the Bayesian classification of ethnicity based on names was published here. None of my code had anything like the complexity of the code referenced in the paper I was reviewing. It did get me thinking, however, about how the code for statistical analyses should be written. The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network has 360 separate guidelines for reporting research. These cover everything from randomised trials and observational studies through to diagnostic studies, economic evaluations and case reports. Nothing on the reporting of code for the analysis of data.

On the back of the move towards making data available for re-analysis, and the reproducible research movement, it struck me that guidelines for the structuring of code for simultaneous publication with articles would be enormously beneficial.  I started to sketch it out on paper, and write the idea up as an article.  Ideally, I would be able to enrol some others as contributors.  In my head, the code should have good meta-data at the start describing the structure and interrelationship of the files.  I now tend to break my code up into separate files with one file describing the workflow: data importation, data cleaning, setting up factors, analysis.  And then I have separate files for each element of the workflow. My analysis is further divided into specific references to parts of papers. “This code refers to Table 1”.  I write the code this way for two reasons.  It makes it easier for collaborators to pick it up and use it, and I often have a secondary, teaching goal in mind.  If I can write the code nicely, it may persuade others to emulate the idea.  Having said that, I often use fairly unattractive ways to do things, because I don’t know any better; and I sometimes deliberately break an analytic process down into multiple inefficient steps simply to clarify the process — this is the anti-Perl strategy.
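
For what it is worth, a skeleton of the structure I am describing might look something like this (the file names are illustrative, not a proposed standard):

```r
# 00_workflow.R -- master file: describes and drives the whole analysis.
# Each element of the workflow lives in its own script and is run in order.
source("01_import.R")            # data importation: read the raw data files
source("02_clean.R")             # data cleaning: recodes, exclusions, missing values
source("03_factors.R")           # setting up factors and value labels
source("04_analysis_table1.R")   # "This code refers to Table 1"
source("05_analysis_figure1.R")  # "This code refers to Figure 1"
```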

I then started to review the literature and stumbled across a commentary written by Nick Barnes in 2010 in the journal Nature. He has completely persuaded me that my idea is silly.

It is not silly to hope that people will write intelligible, well-structured, well-commented code for statistical analysis of data. It is not silly to hope that people will include this beautiful code in their papers. The problem with guidelines published by the EQUATOR Network is in the way that journals require authors to comply with them. They become exactly the opposite of guidelines: they become rules — an ironic twist on the observation by Geoffrey Rush’s character, Hector Barbossa, in Pirates of the Caribbean, that the code “is more what you’d call ‘guidelines’ than actual rules.”

Barnes wrote, “I want to share a trade secret with scientists: most professional computer software isn’t very good.”  Most academics/researchers feel embarrassed by their code.  I have collaborated with a very good Software Engineer in some of my work and spent large amounts of time apologising for my code.  We want to be judged for our science, not for our code.  The problem with that sense of embarrassment is that the perfect becomes the enemy of the good.

The Methods sections of most research articles make fairly vague allusions to how the data were actually managed and analysed. They may refer to statistical tests and theoretical distributions. For a reader to move from that to a re-analysis of the data is often not straightforward. The actual code, however, explains exactly what was done. “Ah! You dropped two cases, collapsed two factors, and used a particular version of an algorithm to perform a logistic regression analysis. And now I know why my results don’t quite match yours.”
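
A hypothetical fragment makes the point; none of this refers to a real paper, and the data here are simulated just so the snippet runs.

```r
# Hypothetical example: the code states exactly what the Methods section glosses over.
set.seed(1)
dat <- data.frame(
  id       = 1:200,
  exposure = factor(sample(c("none", "mild", "moderate", "severe"), 200, replace = TRUE)),
  age      = rnorm(200, 45, 12),
  outcome  = rbinom(200, 1, 0.3)
)

dat <- subset(dat, !id %in% c(107, 112))                                    # "we dropped two cases"
levels(dat$exposure)[levels(dat$exposure) %in% c("none", "mild")] <- "low"  # "we collapsed two factors"
fit <- glm(outcome ~ exposure + age, data = dat, family = binomial)         # "a logistic regression"
summary(fit)
```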

It would be nice to have an agreed set of guidelines for the reporting of COde Developed to Analyse daTA (CODATA). It would be great if some authors followed the CODATA guidelines when they published. But it would be even better if everyone published their code, no matter how bad or inefficient it was.


Sharing data while not sharing data

There has been a major shift among journals towards making data available at the time of publication. The PLoS stable of journals, which includes PLoS Medicine, PLoS Biology, and PLoS One, for example, has a uniform publication policy that is quite forthright about the need to share data.

I have mixed feelings about this. I have certainly advocated for data sharing and (with Pascale Allotey) conducted one of the earliest empirical investigations of data sharing in medicine. I can understand, however, why researchers are reluctant to provide open access to data. The data can represent hundreds, thousands, or tens of thousands of person-hours of collection and curation. The data also represent a form of intellectual property in the development of the ideas and methods that led to the data collection. For many researchers, there may be a sense that others are going to swoop in and collect the glory with none of the work. There have certainly been strong advocates for data sharing whose motivation looked to be potentially exploitative (see our commentary).

I recently stumbled across a slightly different issue in data sharing. It arose in an article in PLoS One by Buttelmann and colleagues. Their study looked at whether great apes (orangutans, chimpanzees and bonobos) could distinguish, in a helping task, between another’s true and false beliefs. The data set comprised 378 observations from 34 apes in two different studies, and they made their data available … as a jpg file. A small portion of it appears below, and you can download the whole image from PLoS One.

Partial data from Buttelmann et al. (2017)

It seems strange to me to share the data as an image file. If you wanted people to use the data, surely you would share it as a text file, CSV, xlsx, etc. If the intention was to satisfy the journal requirements but discourage use, then an image file looks (at first glance) to be a perfect medium. Fortunately, there are some excellent online tools for optical character recognition (OCR), and the one I used made quick work of the image file. I downloaded it in xlsx format, read it into R, and cleaned up a few typographical errors introduced by the OCR. You can download their data in a machine-readable form here. I have included in the download an R script for reading the data in and running a simple mixed effects model to re-analyse their study data. My approach was a little better than theirs, but the results look pretty similar. I am not sure why they did not account for the repeated measurement within ape, but ignoring that seems to be the typical approach taken within the discipline.
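
For the curious, the re-analysis amounts to something like the sketch below, assuming a binary outcome (did the ape help at the correct box). The column names and file name are placeholders; the real ones are in the download linked above. The point is the random intercept for each ape, which is what accounts for the repeated measurement.

```r
# A sketch of the re-analysis: mixed effects logistic regression with a random
# intercept per ape. Column and file names are placeholders, not the real ones.
library(readxl)   # read the OCR'd spreadsheet
library(lme4)     # mixed effects models

apes <- read_excel("buttelmann_2017_data.xlsx")

fit <- glmer(helped_correctly ~ belief_condition + study + (1 | ape_id),
             data = apes, family = binomial)
summary(fit)
```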