A few days ago the Australian Human Rights Commission (AHRC) launched Change the course, a national report on sexual assault and sexual harassment at Australian universities led by Commissioner Kate Jenkins. Sexual assault and sexual harassment are important social and criminal issues, yet the AHRC report is misleading and unworthy of the gravity of its subject matter.
It is a statistical case study in “how not to.”
The report was released to much fanfare, receiving national media coverage on TV and in newspapers, and a quick response from universities. In its “At a glance” section, the report highlights, among other things:
- 30,000+ students responded to the survey — remember this number, because (too) much is made of it.
- 21% of students were sexually harassed in a university setting.
- 1.6% of students were sexually assaulted in a university setting.
- 94% of sexually harassed and 87% of sexually assaulted students did not report the incidents.
From a reading of the survey’s methodology, any estimates of sexual harassment/assault should be taken with a shovelful of salt and should generate no response other than that of the University of Queensland’s Vice-Chancellor, Peter Høj: that any number greater than zero is unacceptable. What we did not have before the publication of the report was a reasonable estimate of the magnitude of the problem and, notwithstanding the media hype, we still don’t. The AHRC’s research methodology was weak, and it looks like they knew it was weak when they embarked on the venture.
Where does the weakness lie? The response rate!!!
A sample of 319,252 students was invited to participate in the survey. It was estimated at the design stage that between 10% and 15% of students would respond (i.e., 85–90% would not respond) (p.225 of the report). STOP NOW … READ NO FURTHER. Why would anyone try to estimate prevalence using a strategy like this? Go back to the drawing board. Find a way of obtaining a smaller, representative sample of people who will respond to the questionnaire.
Giant samples with poor response rates are useless. They are a great way for market research companies to make money, but they do not advance knowledge in any meaningful way, and they are no basis for formulating policy. The classic example of a large sample with a poor response rate misleading researchers is the Literary Digest poll to predict the outcome of the 1936 US presidential election. The Digest sent out 10 million surveys and received 2.3 million responses. By any measure, 2.3 million responses to a survey is an impressive number. Unfortunately for the Literary Digest, there were systematic differences between responders and non-responders. The Literary Digest predicted that Alf Landon (Who?) would win the presidency with 69.7% of the electoral college votes. He won 1.5% of the electoral college votes. This is partly a lesson about the US electoral college system, but it is also a significant lesson about non-response bias.

The Literary Digest had a 77% non-response rate; the AHRC had a 90.3% non-response rate. Who knows how the 90.3% who did not respond compare with the 9.7% who did respond? Maybe people who were assaulted were less likely to respond, and the number is a gross underestimate of assaults. Maybe they were more likely to respond, and it is a gross overestimate of assaults. The point is that we are neither wiser nor better informed for reading the AHRC report.
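The problem can be made concrete with a back-of-the-envelope bounding exercise. This is my own illustrative sketch, not anything the AHRC computed: it takes the report’s 9.7% response rate and the 1.6% assault figure among responders, and asks what the true prevalence could be if the non-responders’ (unknown) rate is allowed to range over its full extremes.

```python
# Illustrative sketch (not the AHRC's analysis): bounding true prevalence
# when 90.3% of the invited sample is missing.
response_rate = 0.097     # fraction of invited students who responded (report)
prev_responders = 0.016   # assault prevalence among responders (report)

# True prevalence is a weighted average of the responder and
# non-responder rates; the non-responder rate is unknown.
def true_prevalence(prev_nonresponders):
    return (response_rate * prev_responders
            + (1 - response_rate) * prev_nonresponders)

lower = true_prevalence(0.0)  # if no non-responder was assaulted
upper = true_prevalence(1.0)  # if every non-responder was assaulted

print(f"True prevalence could lie anywhere in [{lower:.1%}, {upper:.1%}]")
```

Without some defensible assumption about the non-responders, the data alone bound the true prevalence only to an interval spanning roughly 0.2% to 90%, which is to say they bound it hardly at all.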
Sadly, whoever estimated the (terrible) response rate was, even then, overly optimistic. The response rate was significantly lower than the worst-case scenario of 10% [Response Rate = 9.7%, 95%CI: 9.6%–9.8%].
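The quoted interval is easy to sanity-check. A minimal sketch of my own arithmetic (a simple normal approximation; the report may have used a different method), using the 319,252 invitations and the 9.7% rate:

```python
import math

n_invited = 319_252
response_rate = 0.097  # as reported

# Standard error of a proportion, normal approximation
se = math.sqrt(response_rate * (1 - response_rate) / n_invited)
lower = response_rate - 1.96 * se
upper = response_rate + 1.96 * se

print(f"95% CI: {lower:.1%} to {upper:.1%}")
```

This reproduces the quoted 9.6%–9.8%. The interval is tiny because the denominator is huge, which is exactly the seduction of giant samples: the response *rate* is estimated with great precision, while the quantity of interest is not.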
In sharp contrast to the bad response rate of the AHRC study, the Crime Victimisation Survey (CVS) 2015–2016, conducted by the Australian Bureau of Statistics (ABS), had a nationally representative sample and a 75% response rate, fully completed. That’s a survey you could actually use for policy. The CVS is a potentially less confronting instrument, which may account for the better response rate. It seems more likely, however, that recruiting students by sending them emails is simply not an adequate strategy.
Poorly conducted crime research is not merely a waste of money; it trivialises the issue. The media splash generates an illusion of urgency and seriousness, and the poor methodology means the findings can be quickly dismissed.
If there is a silver lining to this cloud, it is that the AHRC has created an excellent learning opportunity for students involved in quantitative (social) research.
It was pointed out to me by Mark Diamond that a better ABS resource is the 2012 Personal Safety Survey, which tried to answer the question of the national prevalence of sexual assault. A crime victimisation survey is likely to receive a better response rate than a survey looking explicitly at sexual assault. I reproduce the section on sample size from the explanatory notes because it highlights the difference between a well-conducted survey and the pile of detritus reported by the AHRC.
> There were 41,350 private dwellings approached for the survey, comprising 31,650 females and 9,700 males. The design catered for a higher than normal sample loss rate for instances where the household did not contain a resident of the assigned gender. Where the household did not contain an in scope resident of the assigned gender, no interview was required from that dwelling. For further information about how this procedure was implemented refer to Data Collection.
>
> After removing households where residents were out of scope of the survey, where the household did not contain a resident of the assigned gender, and where dwellings proved to be vacant, under construction or derelict, a final sample of around 30,200 eligible dwellings were identified.
>
> Given the voluntary nature of the survey a final response rate of 57% was achieved for the survey with 17,050 persons completing the survey questionnaire nationally. The response comprised 13,307 fully responding females and 3,743 fully responding males, achieving gendered response rates of 57% for females and 56% for males.