Category Archives: Education

Topics related to the education sector (usually the tertiary or Higher Education sector).

Prevalence of sexual assault at Australian Universities is … non-zero.

A few days ago the Australian Human Rights Commission (AHRC) launched Change the course, a national report on sexual assault and sexual harassment at Australian universities, led by Commissioner Kate Jenkins. Sexual assault and sexual harassment are important social and criminal issues, and the AHRC report is misleading and unworthy of the gravity of the subject matter.

It is a statistical case study in “how not to.”

The report was released to much fanfare, receiving national media coverage including TV and newspapers, and a quick response from universities. “At a glance …” the report highlights among other things:

  • 30,000+ students responded to the survey — remember this number, because (too) much is made of it.
  • 21% of students were sexually harassed in a university setting.
  • 1.6% of students were sexually assaulted in a university setting.
  • 94% of sexually harassed and 87% of sexually assaulted students did not report the incidents.

From a reading of the survey’s methodology, any estimate of the prevalence of sexual harassment or assault should be taken with a shovelful of salt, and should generate no response other than that of the University of Queensland’s Vice-Chancellor, Peter Høj: any number greater than zero is unacceptable. What we did not have before the publication of the report was a reasonable estimate of the magnitude of the problem and, notwithstanding the media hype, we still don’t. The AHRC’s research methodology was weak, and it looks like they knew it was weak when they embarked on the venture.

Where does the weakness lie?  The response rate!!!

A sample of 319,252 students was invited to participate in the survey. It was estimated at the design stage that between 10 and 15% of students would respond (i.e., 85–90% would not respond) (p.225 of the report). STOP NOW … READ NO FURTHER. Why would anyone try to estimate prevalence using a strategy like this? Go back to the drawing board. Find a way of obtaining a smaller, representative sample of people who will respond to the questionnaire.

Giant samples with poor response rates are useless. They are a great way for market research companies to make money, but they do not advance knowledge in any meaningful way, and they are no basis for formulating policy.

The classic example of a large sample with a poor response rate misleading researchers is the Literary Digest poll to predict the outcome of the 1936 US presidential election. The Digest sent out 10 million surveys and received 2.3 million responses. By any measure, 2.3 million responses to a survey is an impressive number. Unfortunately for the Literary Digest, there were systematic differences between responders and non-responders. The Literary Digest predicted that Alf Landon (Who?) would win the presidency with 69.7% of the electoral college votes. He won 1.5% of the electoral college votes. This is partly a lesson about the US electoral college system, but it is mostly a lesson about non-response bias. The Literary Digest had a 77% non-response rate; the AHRC had a 90.3% non-response rate. Who knows how the 90.3% who did not respond compare with the 9.7% who did? Maybe people who were assaulted were less likely to respond, and the number is a gross underestimate of assaults. Maybe they were more likely to respond, and it is a gross overestimate. The point is that we are neither wiser nor better informed for reading the AHRC report.
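To see how differential non-response distorts a prevalence estimate, here is a minimal simulation. The response rates for affected and unaffected students (15% and 9.6%) are pure assumptions for illustration; each is individually plausible, and together they roughly reproduce the survey’s overall 9.7% response rate.

```python
import random

random.seed(1)
N = 319_252            # invited sample, as in the AHRC survey
true_prev = 0.016      # pretend the reported 1.6% were the true prevalence

# Assumed-for-illustration response rates: affected students respond at 15%,
# unaffected students at 9.6%.
population = [random.random() < true_prev for _ in range(N)]
responders = [x for x in population if random.random() < (0.15 if x else 0.096)]

observed = sum(responders) / len(responders)
print(f"true prevalence {true_prev:.1%}; observed among responders {observed:.2%}")
```

Even these modest differences in willingness to respond push the observed prevalence among responders well above the true 1.6% — and of course the bias could just as easily run the other way.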

Sadly, whoever estimated the (terrible) response rate was, even then, overly optimistic. The response rate was significantly lower than the worst-case scenario of 10% [Response Rate = 9.7%, 95% CI: 9.6%–9.8%].
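That interval is easy to check. A minimal sketch using a normal approximation to the binomial; the responder count of 30,967 is an assumption, chosen to be consistent with the reported 9.7% of 319,252 invitees:

```python
import math

invited = 319_252     # students invited (from the report)
responded = 30_967    # assumed: consistent with the reported 9.7% response rate

p = responded / invited
se = math.sqrt(p * (1 - p) / invited)          # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se          # normal-approximation 95% CI
print(f"response rate = {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

With a denominator this large the interval is extremely narrow — which is precisely the point: the precision of the estimate is not the problem; the 90.3% who never answered are.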

In sharp contrast to the bad response rate of the AHRC study, the Crime Victimisation Survey (CVS) 2015–2016, conducted by the Australian Bureau of Statistics (ABS), had a nationally representative sample and a 75% response rate — fully completed! That’s a survey you could actually use for policy. The CVS is a potentially less confronting instrument, which may account for the better response rate. It seems more likely, however, that recruiting students by sending them emails is simply not an adequate strategy.

Poorly conducted crime research is not merely a waste of money, it trivialises the issue.  The media splash generates an illusion of urgency and seriousness, and the poor methodology means it can be quickly dismissed.

If there is a silver lining to this cloud, it is that the AHRC has created an excellent learning opportunity for students involved in quantitative (social) research.

Addendum

It was pointed out to me by Mark Diamond that a better ABS resource is the 2012 Personal Safety Survey, which tried to answer the question of the national prevalence of sexual assault. A Crime Victimisation Survey is likely to receive a better response rate than a survey looking explicitly at sexual assault. I reproduce the section on sample size from the explanatory notes because it highlights the difference between a well-conducted survey and the pile of detritus reported by the AHRC.

There were 41,350 private dwellings approached for the survey, comprising 31,650 females and 9,700 males. The design catered for a higher than normal sample loss rate for instances where the household did not contain a resident of the assigned gender. Where the household did not contain an in scope resident of the assigned gender, no interview was required from that dwelling. For further information about how this procedure was implemented refer to Data Collection.

After removing households where residents were out of scope of the survey, where the household did not contain a resident of the assigned gender, and where dwellings proved to be vacant, under construction or derelict, a final sample of around 30,200 eligible dwellings were identified.

Given the voluntary nature of the survey a final response rate of 57% was achieved for the survey with 17,050 persons completing the survey questionnaire nationally. The response comprised 13,307 fully responding females and 3,743 fully responding males, achieving gendered response rates of 57% for females and 56% for males.

Guidelines for the reporting of COde Developed to Analyse daTA (CODATA)

I was reviewing an article recently for a journal in which the authors referenced a GitHub repository for the Stata code they had developed to support their analysis. I had a look at the repository. The code was there in a complex hierarchy of nested folders.  Each individual do-file was well commented, but there was no file that described the overall structure, the interlinking of the files, or how to use the code to actually run an analysis.

I have previously published code associated with some of my own analyses. The code for a recent paper on gender bias in clinical case reports was published here, and the code for the Bayesian classification of ethnicity based on names was published here. None of my code had anything like the complexity of the code referenced in the paper I was reviewing. It did get me thinking, however, about how the code for statistical analyses should be written. The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network has 360 separate guidelines for reporting research. These include guidelines for everything from randomised trials and observational studies through to diagnostic studies, economic evaluations and case reports. There is nothing on the reporting of code for the analysis of data.

On the back of the move towards making data available for re-analysis, and the reproducible research movement, it struck me that guidelines for the structuring of code for simultaneous publication with articles would be enormously beneficial.  I started to sketch it out on paper, and write the idea up as an article.  Ideally, I would be able to enrol some others as contributors.  In my head, the code should have good meta-data at the start describing the structure and interrelationship of the files.  I now tend to break my code up into separate files with one file describing the workflow: data importation, data cleaning, setting up factors, analysis.  And then I have separate files for each element of the workflow. My analysis is further divided into specific references to parts of papers. “This code refers to Table 1”.  I write the code this way for two reasons.  It makes it easier for collaborators to pick it up and use it, and I often have a secondary, teaching goal in mind.  If I can write the code nicely, it may persuade others to emulate the idea.  Having said that, I often use fairly unattractive ways to do things, because I don’t know any better; and I sometimes deliberately break an analytic process down into multiple inefficient steps simply to clarify the process — this is the anti-Perl strategy.
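The layout I describe can be sketched as a single-file toy (in Python rather than Stata, and with every name and number hypothetical): one function per would-be workflow file, a docstring tying the analysis step to a specific part of the paper, and a master listing that documents the pipeline order.

```python
# Toy sketch of the workflow structure described above. In a real project each
# function would live in its own file; all names and data here are hypothetical.

def import_data():
    """Step 1: data importation (here, a hard-coded stand-in for a raw file)."""
    return [("A", 1), ("A", 2), ("B", None), ("B", 3)]

def clean(rows):
    """Step 2: data cleaning -- drop incomplete cases."""
    return [r for r in rows if r[1] is not None]

def table1(rows):
    """This code refers to Table 1: counts of cases by group."""
    counts = {}
    for group, _ in rows:
        counts[group] = counts.get(group, 0) + 1
    return counts

# The master 'workflow' listing: one entry per step, in running order.
WORKFLOW = [import_data, clean, table1]
print(table1(clean(import_data())))
```

The deliberately small, step-by-step decomposition is the point — the anti-Perl strategy — even though a one-liner could do the same job.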

I then started to review the literature and stumbled across a commentary written by Nick Barnes in 2010 in the journal Nature. He has completely persuaded me that my idea is silly.

It is not silly to hope that people will write intelligible, well-structured, well-commented code for the statistical analysis of data. It is not silly to hope that people will include this beautiful code with their papers. The problem with guidelines published by the EQUATOR Network lies in the way journals require authors to comply with them. They become the opposite of guidelines: they become rules, inverting the observation of Geoffrey Rush’s character, Hector Barbossa, in Pirates of the Caribbean, that the code “is more what you’d call ‘guidelines’ than actual rules.”

Barnes wrote, “I want to share a trade secret with scientists: most professional computer software isn’t very good.” Most academics and researchers feel embarrassed by their code. I have collaborated with a very good software engineer on some of my work and spent large amounts of time apologising for my code. We want to be judged on our science, not on our code. The problem with that sense of embarrassment is that the perfect becomes the enemy of the good.

The Methods sections of most research articles make fairly vague allusions to how the data were actually managed and analysed. They may refer to statistical tests and theoretical distributions. For a reader to move from that to a re-analysis of the data is often not straightforward. The actual code, however, explains exactly what was done. “Ah! You dropped two cases, collapsed two factors, and used a particular version of an algorithm to perform a logistic regression analysis. And now I know why my results don’t quite match yours.”

It would be nice to have an agreed set of guidelines reporting COde Developed to Analyse daTA (CODATA).  It would be great if some authors followed the CODATA guidelines when they published.  But it would be even better if everyone published their code, no matter how bad or inefficient it was.

Babies have less than a 1 in 3 chance of recovery from a poor 1 minute Apgar score

We recently completed a study of 272,472 live, singleton, term births without congenital anomalies recorded in the Malaysian National Obstetrics Registry (NOR). We wanted to know what proportion of births had a poor 1 minute Apgar score (<4); and the likelihood that they would recover (Apgar score ≥7) by 5 minutes.

As we noted in the paper:

While the Apgar score at 5 minutes is a better predictor of later outcomes than the Apgar score at 1 minute, there is a necessary temporal process involved, and a neonate must pass through the first minute of life to reach the fifth. Understanding the factors associated with the transition from intrauterine to extrauterine life, particularly for neonates with 1 min Apgar scores <4, has the potential to improve care.

Surprisingly, to me at least, we could find no research looking at that 1 minute to 5 minute transition.  Ours was a first.

From the 270,000+ births, you can see (Figure 1) that the probability of a 5 minute Apgar score ≥7 rises dramatically as the 1 minute Apgar score increases. The relationship between 1 minute Apgar scores from 1 through 6 and the chance of a 5 minute Apgar score ≥7 is almost a straight line.

Fig 1: The probability (with 95% CI) of an Apgar score at 5 min (≥7) given any Apgar score at 1 minute

A 1 minute Apgar of 6 almost guarantees a 5 minute Apgar score ≥7; in contrast, a 1 minute Apgar of 3 carries only a 50% chance of recovery, and a 1 minute Apgar of 1 less than a 10% chance.
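The per-score probabilities (and the confidence intervals in Figure 1) are simple binomial proportions. A minimal sketch, with entirely hypothetical counts — the real counts are in the paper:

```python
import math

def recovery_ci(recovered, total):
    """Proportion recovering (5 min Apgar >= 7) with a normal-approx 95% CI."""
    p = recovered / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

# Hypothetical counts, loosely consistent with the pattern in Figure 1
for score, (rec, tot) in {1: (8, 100), 3: (50, 100), 6: (97, 100)}.items():
    p, lo, hi = recovery_ci(rec, tot)
    print(f"1 min Apgar {score}: {p:.0%} recover (95% CI {lo:.0%}-{hi:.0%})")
```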

Fortunately, only 0.6% of births had poor Apgar scores (<4). The type of delivery (Caesarean section or vaginal delivery) and the staff conducting the delivery (doctor or midwife) were both significantly associated with the chance of recovery. The challenge is working out the causal order. Do certain kinds of delivery cause poor recovery, or are babies who are unlikely to recover delivered in particular ways? Does the training of doctors and midwives exacerbate or mitigate the risk of poor recovery, or are babies who are unlikely to recover delivered by particular personnel?

Our study cannot answer these questions, but it does raise interesting points for future studies of actual labour-room practice — questions not easily answered with registry-type data.

Zika Causes Birth Defects In 1 In 10 Pregnancies

Well … not really. But that was the misleading headline of an article I saw in the “healthy living” section of The Huffington Post. I chased it up to its source: an article by Reuters journalist Julie Steenhuysen.

There were 3,978,497 births in the US in 2015. Assuming similar numbers in 2017 (and, implausibly, no seasonal variation), you would be looking at a whopping 400,000 births with a Zika virus related birth defect. The usual rate of birth defects in the US from all causes is about 3 per 100, so with a cumulative total more than three times the current number, one could anticipate a swift, dramatic (and possibly ineffective) response from the government.
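The back-of-the-envelope comparison is easy to reproduce; the 10% and 3% rates are the headline rate and the usual all-cause rate quoted above:

```python
births_2015 = 3_978_497   # US births in 2015
headline_rate = 0.10      # "1 in 10", as the headline implies for all pregnancies
baseline_rate = 0.03      # usual US birth-defect rate from all causes, ~3 per 100

print(f"headline implies ~{births_2015 * headline_rate:,.0f} Zika-related defects")
print(f"usual defects from all causes: ~{births_2015 * baseline_rate:,.0f}")
```

Roughly 400,000 Zika-related defects on top of roughly 120,000 from all causes — a number so large that its absence from daily life should itself have flagged the headline as wrong.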

Moving down from the headline, however, a very different story is revealed:

About one in 10 pregnant women with confirmed Zika infections had a fetus or baby with birth defects, offering the clearest picture yet of the risk of Zika infection during pregnancy, U.S. researchers said on Tuesday.

No longer is it 1 in 10 pregnancies; it’s 1 in 10 pregnancies with Zika. The facts are not half as dramatic as the headline. What am I talking about? “Not half as dramatic”? The total number of pregnancies in the US Zika Pregnancy Register for 2016–2017 (on 8 April 2017) was 1,311. Fifty-six of the pregnancies resulted in liveborn infants with birth defects, and 7 of the pregnancies ended in losses with birth defects. That just doesn’t sound as impressive as the headline suggested. Undoubtedly personally tragic, but far less significant as a population health issue.