It is time to make methods papers open
Changing the way we publish and review methods papers
This month (June 2015), I was both asked to review a paper about a tool for data sharing and attended the 'Meet the Editors Roundtable' at the annual OHBM meeting. These two events prompted me to think that there is something wrong with the way we publish methods, in neuroimaging and elsewhere.
The issue for readers
How many times have you read about a new method, only to find that it is not available anywhere? That simply doesn't make sense. If I propose a new method, I also write the code and test it, so why wouldn't I make the code available?
One very simple explanation often given is that the code is not user-friendly. Why then develop a method if no one can use it? [1] If only a handful of people can understand the equations and possibly recode them (and this is clearly not going to be possible in many cases, think e.g. non-linear warping, eddy current correction, etc.), then the method goes unused. It is not useless, as others might be inspired by it to develop something else, but surely, as a methods person, you'd like to see your method used. Yes, making the code user-friendly is hard, but it is rewarding too.
The issue for reviewers
If I have to review a method, how do I know that it works as described? Of course, a new method is tested on simulated data to show that. In neuroimaging and statistics, we also often apply the new method to a case study. But maybe it only works for the data used because of special properties or exceptionally high quality; I'd like to be able to see that it works on another dataset. The code sent for review doesn't have to be user-friendly: as long as (i) it has comments telling how to use it and (ii) it takes arguments in and spits results out, that should be enough (see the sketch just below).
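To make this concrete, here is a minimal, hypothetical Python sketch of what such 'review-ready' code could look like; the function name, the statistic computed and the parameters are assumptions for illustration, not any published tool:

# Hypothetical sketch of 'review-ready' code: a documented function that takes
# arguments in and spits results out, no graphical interface needed.
import numpy as np

def new_method(data, threshold=2.0):
    """Apply the (hypothetical) method to data of shape (n_observations, n_variables).

    Inputs:  data      - 2D numpy array, observations in rows
             threshold - absolute statistic value above which a variable is flagged
    Outputs: t_values  - one test statistic per variable
             flagged   - boolean mask of variables exceeding the threshold
    """
    n = data.shape[0]
    t_values = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n))
    flagged = np.abs(t_values) > threshold
    return t_values, flagged

# minimal usage example a reviewer could run on their own data
t_values, flagged = new_method(np.random.randn(20, 100))

Nothing more is required for a reviewer to swap in another dataset and check the claims of the paper.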
Why should we review the code? Because people make mistakes. Just as we make mistakes when we write a blog or a paper, we make mistakes when we code. More importantly, mistakes are not always detectable, even in the simulations. Take for example my own code that implements TFCE [3] for EEG. In this code, I have default parameters which I believe do a good job at keeping the family-wise error rate (FWER) at the nominal level. I could have made a typo and used different values, but this would not have shown up in the simulations, because small numerical differences don't make a huge difference under the null in this particular case [4]. The problem is that without checking the code, there might be discrepancies between what is described in the article and what is actually coded and used, which can affect, in the long run, the results published using such tools.
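To illustrate the kind of check involved, here is a minimal Python sketch (not my TFCE code; the test, the correction and the parameters are illustrative assumptions) that estimates the FWER under the null by simulation: generate pure-noise datasets, apply a multiple-comparison correction, and count how often anything comes out significant.

# Minimal sketch: Monte Carlo estimate of the family-wise error rate (FWER)
# under the null. The test (mass-univariate one-sample t-tests) and the
# correction (Bonferroni) are stand-ins for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2015)
n_simulations = 1000   # number of simulated null datasets
n_subjects = 20        # observations per dataset
n_tests = 100          # e.g. electrodes x time frames
alpha = 0.05           # nominal FWER

# Bonferroni-corrected two-sided critical value
crit = stats.t.ppf(1 - alpha / (2 * n_tests), df=n_subjects - 1)

family_wise_errors = 0
for _ in range(n_simulations):
    data = rng.standard_normal((n_subjects, n_tests))   # pure noise: null true everywhere
    t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_subjects))
    family_wise_errors += np.any(np.abs(t) > crit)      # any false positive = one FW error

print("Estimated FWER: %.3f (nominal %.2f)" % (family_wise_errors / n_simulations, alpha))

A simulation like this shows that the error control behaves as claimed, but, as noted above, it would not catch a typo in the defaults that barely changes the null behaviour, which is exactly why reading the code matters.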
Proposal
As already advocated on this blog, we must share. For data analysis, this can be tricky because not everybody writes scripts to analyse imaging data, but it is possible to some extent and I have proposed some solutions, at least for fMRI [2]. However, if someone writes a methods paper, he/she definitely writes code, and this must be made available.
As reviewers, following the Agenda for Open Research, I recommend not accepting an article if you cannot test the code.
As reviewers, if some aspects of the code are fundamental, it is worth spending time reading it. For editors, that means finding reviewers willing to read the code; such a reviewer need not bother with the manuscript and simulations, while another reviewer reads and comments on the paper itself.
References
[1] Stromberg, A. (2004). Why write statistical software? The case of robust statistical methods. J. Stat. Softw. 10, 1–8.
[2] Pernet, C. & Poline, JB. (2015). Improving functional magnetic resonance imaging reproducibility. GigaScience, 4, 15
[3] Smith, S. & Nichols, T. (2009). Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage, 44, 83–98
[4] Pernet, C., Latinus, M., Nichols, T. & Rousselet, G. (2015). Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study. J. Neurosci. Methods, 250, 85–93