Reproducible NeuroImaging
New year, new calls for Reproducible NeuroImaging: what's up for 2017
It's January 2017 and we already have two calls for better science: Russ Poldrack et al. in Nature Reviews Neuroscience and Munafo et al. in Nature Human Behaviour. Below is a summary of these 'new year resolutions' (and the one I don't think we will keep). For a different take on this, see also NeuroSkeptic's blog.
Which issues to deal with?
Overall, there is nothing new in the message: we need to be more reproducible, and for that we need (1) properly powered studies, (2) to distinguish exploratory from confirmatory analyses, ideally via pre-registration, and (3) to report all analyses performed, along with sharing data and code. What I found really nice in Poldrack et al. 2017 was the detailed power analysis, both by year/sample size and by type of effect (motor vs. cognitive). The paper is, however, very technique-oriented, while Munafo et al. offer a more general view of what needs to be done, including training and funding.
Power
To avoid issues related to computing post-hoc power, Poldrack et al. computed the minimum effect size detectable at 80% power given the N of each study since 1995. The tricky bit here is that this has to hold for the whole brain, so they made the reasonable (IMO) assumption that the analysis in each study is a t-map in which the smallest cluster to be detected is 200 voxels above a threshold specified by random field theory. With that, they show that as sample sizes increase, smaller effect sizes can be detected (that's a given), but that with current study sizes (median N=28) only effects of d ~0.75 can be confidently detected. And this is a problem. Why? Because they later show, using the super clean data from the HCP, that average effect sizes range from about 0.5 (gambling task) to 1 (motor). The implication is that most of the time our data have effects of maybe 0.3 to 0.8, and that we therefore need more subjects (yet again).
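To make the numbers concrete, here is a minimal sketch of the same idea (mine, not the authors' actual procedure): it solves for the smallest Cohen's d detectable at 80% power with a one-sample t-test, using a stringent per-test alpha as a crude stand-in for the whole-brain, RFT-corrected threshold used in the paper.

```python
# Minimal sketch: smallest effect size detectable at 80% power for a
# one-sample t-test, as a function of sample size. The whole-brain
# correction is only approximated here by a stringent alpha (0.001 is
# an arbitrary stand-in for the RFT-corrected threshold).
from statsmodels.stats.power import TTestPower

power_solver = TTestPower()
for n in (16, 28, 50, 100):
    d_min = power_solver.solve_power(effect_size=None, nobs=n,
                                     alpha=0.001, power=0.80,
                                     alternative='two-sided')
    print(f"N={n:3d}: minimum detectable d ~ {d_min:.2f}")
```

For the median N=28, this crude version lands in the same ballpark as the d ~0.75 quoted above; the exact number depends on how the whole-brain threshold is modelled.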
Analytical flexibility
I have been advocating data sharing, code sharing and full reporting myself (see my GigaScience paper). Like many recent papers, they make the point that analytical flexibility is a big problem, and that pre-registration is a solution. I particularly like Munafo's approach, i.e. starting from the cognitive biases: we run an analysis (actually several) that leads to a nice result that we can explain, in line with a hypothesis we formulate given the results... except we have been p-hacking and HARKing because of confirmation bias (the analysis path that led to no result was necessarily wrong, or there was a bug - have you checked there is no bug in the code that gives the 'right' result?) and hindsight bias (yes, the hypothesis we just made up could have been made in advance based on the literature; no cheating, just fooling ourselves). Pre-registration alleviates these biases by forcing us to specify hypotheses, main outcomes, and the analysis path (or paths - nothing says you cannot pre-specify two analysis strategies) in advance, blinded from the data.
Reporting
For NeuroImaging people like me, I'd say one word: COBIDAS. There are plenty of guidelines out there, more or less followed, on what to report and how. For MRI, the COBIDAS report (specifically the annexes) tells you what to do. I guess the important message is 'report it all'. Unless we report derived data (i.e. effect sizes) along with the negative and positive statistical results that go with them, we will make little progress in accumulating evidence.
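As a trivial example of what 'report it all' buys us: even when only the test statistic is reported, a standardized effect size can be reconstructed and fed into meta-analyses. The snippet below uses made-up numbers and covers only the one-sample (or paired) t-test case.

```python
# Illustration with made-up numbers: recovering Cohen's d from a reported
# t-statistic and sample size (one-sample or paired t-test: d = t / sqrt(N)).
# Reporting d (or the raw effect) directly spares readers this reconstruction.
import math

def cohens_d_one_sample(t_value, n):
    """Cohen's d for a one-sample (or paired) t-test, from t and sample size."""
    return t_value / math.sqrt(n)

print(f"d = {cohens_d_one_sample(t_value=3.9, n=28):.2f}")  # ~0.74
```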
In addition to this, Poldrack et al. see the paper of the future as a software container that runs the analysis code, written using literate programming (and therefore produces the paper and figures). I'm afraid that's not going to happen soon - unless the way science is done completely changes. I'm not talking about using literate programming, which is really what we need (i.e. your comments go beyond saying what the code computes and state what hypothesis you are testing). I'm talking about having the whole thing containerized and programmed. I think it is, and will remain, a step too far for most of us. That means that to have such papers of the future, we will need larger multidisciplinary teams, including and recognizing the role of research software engineers, and we will need to change how people are credited and evaluated (I'm not against it, on the contrary - I just can't see it happening tomorrow); only then will we see these papers of the future.
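For the literate-programming part, a toy fragment I made up (not the authors' proposal) gives the flavour: the comments state the hypothesis being tested, not just what the code computes.

```python
import numpy as np
from scipy import stats

# Hypothesis (stated in the code, not just what it computes): the motor
# contrast is positive, with an expected effect size around d = 0.8.
rng = np.random.default_rng(2017)                 # fixed seed for reproducibility
motor_contrast = rng.normal(0.8, 1.0, size=28)    # placeholder data, N = 28
res = stats.ttest_1samp(motor_contrast, popmean=0.0)
d = res.statistic / np.sqrt(len(motor_contrast))  # standardized effect (one-sample d)
print(f"t(27) = {res.statistic:.2f}, p = {res.pvalue:.4g}, d = {d:.2f}")
```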
Will a fixed (pre-registered) method prevent discovery?
Munafo et al. 2017 touch on this in the introduction, and it is worth spending time thinking about it, because this is the typical argument of people against the adoption of these changes: if you specify it all in advance, you won't discover anything. First, this completely misses the point: pre-registration allows us to distinguish confirmatory from exploratory analyses; it does not prevent exploratory analyses. Second, you can specify in advance that you are doing completely new research with no prior, and that you will therefore perform N exploratory analyses. Third, no one said that once you have published your pre-registered confirmatory analysis, you cannot show additional, possibly cool, effects (exploratory / discovery). Last but not least, pre-registration might in fact increase discovery. Here is my reasoning: 'breakthroughs are achieved by interpreting observations in a new way' (Munafo et al. 2017). The authors do not say breakthroughs are achieved by doing a million different analyses until we find something, but by inferring differently from data.
How can we infer differently? This might be based on newly gained knowledge (via a meta-analysis, a review of disparate studies, etc.), on new observations (which necessarily have been planned to have the right power for the expected 'new' effect), or on new methods (preferably developed on available data unless there is a reason not to, in which case the analysis must be pre-registered - otherwise the new method is just tuning an algorithm until significant results are obtained). The proposed changes (pre-registration, higher power, etc.) are therefore needed for breakthroughs, i.e. they will not hinder discovery; on the contrary, they will increase it.
References
Poldrack et al. (2017). Scanning the horizon: towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience. doi:10.1038/nrn.2016.167 [preprint here]
Munafo et al. (2017). A manifesto for reproducible science. Nature Human Behaviour. doi:10.1038/s41562-016-0021