I was reviewing an article recently for a journal in which the authors referenced a GitHub repository for the Stata code they had developed to support their analysis. I had a look at the repository. The code was there in a complex hierarchy of nested folders. Each individual do-file was well commented, but there was no file that described the overall structure, the interlinking of the files, or how to use the code to actually run an analysis.
I have previously published code associated with some of my own analyses. The code for a recent paper on gender bias in clinical case reports was published here, and the code for the Bayesian classification of ethnicity based on names was published here. None of my code had anything like the complexity of the code referenced in the paper I was reviewing. It did, however, get me thinking about how the code for statistical analyses should be written. The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network has 360 separate guidelines for reporting research. These cover everything from randomised trials and observational studies through to diagnostic studies, economic evaluations and case reports. There is nothing, however, on the reporting of the code used to analyse the data.
On the back of the move towards making data available for re-analysis, and the reproducible research movement, it struck me that guidelines for structuring code for simultaneous publication with articles would be enormously beneficial. I started to sketch the idea out on paper and write it up as an article. Ideally, I would be able to enrol some others as contributors. In my head, the code should have good meta-data at the start describing the structure and interrelationship of the files. I now tend to break my code up into separate files, with one file describing the workflow: data importation, data cleaning, setting up factors, analysis. I then have separate files for each element of the workflow. My analysis is further divided into specific references to parts of papers: “This code refers to Table 1”. I write the code this way for two reasons. It makes it easier for collaborators to pick it up and use it, and I often have a secondary, teaching goal in mind. If I can write the code nicely, it may persuade others to emulate the idea. Having said that, I often use fairly unattractive ways to do things, because I don’t know any better; and I sometimes deliberately break an analytic process down into multiple inefficient steps simply to clarify the process — this is the anti-Perl strategy.
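As a sketch of what I mean, a master do-file might look something like this (the file names and comments are invented for illustration, not taken from any particular project):

```stata
* master.do — workflow for the analysis
* Run this file to reproduce the analysis from the raw data.
* Structure: import -> clean -> factors -> analysis
version 17
clear all

do "01_import.do"    // read the raw data files
do "02_clean.do"     // drop duplicates, fix coding errors
do "03_factors.do"   // set up factor variables and value labels
do "04_table1.do"    // this code refers to Table 1
do "05_models.do"    // regression analyses for Tables 2 and 3
```

The master file doubles as the meta-data: a reader can see the whole pipeline, and each step can be inspected or re-run on its own.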
It is not silly to hope that people will write intelligible, well-structured, well-commented code for the statistical analysis of data. It is not silly to hope that people will include this beautiful code with their papers. The problem with guidelines published by the EQUATOR Network lies in the way that journals require authors to comply with them. They become exactly the opposite of guidelines: they become rules. It is an ironic inversion of the observation by Geoffrey Rush’s character, Hector Barbossa, in Pirates of the Caribbean, that the pirate code is “more what you’d call guidelines than actual rules”.
Nick Barnes wrote, “I want to share a trade secret with scientists: most professional computer software isn’t very good.” Most academics and researchers feel embarrassed by their code. I have collaborated with a very good software engineer in some of my work and spent large amounts of time apologising for my code. We want to be judged for our science, not for our code. The problem with that sense of embarrassment is that the perfect becomes the enemy of the good.
The Methods sections of most research articles make fairly vague allusions to how the data were actually managed and analysed. There may be references to statistical tests and theoretical distributions, but moving from those to a re-analysis of the data is often not straightforward. The actual code, however, explains exactly what was done. “Ah! You dropped two cases, collapsed two factors, and used a particular version of an algorithm to perform a logistic regression analysis. And now I know why my results don’t quite match yours.”
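Those decisions, which a Methods section typically glosses over, are exactly what published code makes explicit. A made-up fragment (variable names and values are hypothetical) shows the kind of detail a reader would recover:

```stata
* Decisions the Methods section summarised in one sentence:
drop if id == 1042 | id == 2188           // the two dropped cases
recode exposure (3 = 2), generate(exp2)   // collapse two factor levels into one
logit outcome i.exp2 age i.sex, or        // the exact model specification
```

Three lines of code answer questions that no amount of prose re-reading could settle.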
It would be nice to have an agreed set of guidelines for reporting COde Developed to Analyse daTA (CODATA). It would be great if some authors followed the CODATA guidelines when they published. But it would be even better if everyone published their code, no matter how bad or inefficient it was.