high impact factor journals and academics have opposite interests



My temptation to publish a high IF article: 

Reflection on a past mistake


This post is a reflection on how the interests of high impact factor (IF) journals can conflict with the interests of academics. In particular, high IF journals seem to look for controversial stories, and one can get lured into writing such an article.

The story


My colleague Dr G. Rousselet and I were preparing an article on brain-behaviour correlations and the numerous problems that plague such analyses (this ended up being published as ‘Improving standards in brain-behavior correlation analyses’ in Frontiers in Human Neuroscience). In the original version of the paper we focused on high IF journals rather than specialized ones, so we sent it to Nature Neuroscience.

The editor did not accept the article - fine. However, because we had taken our examples from papers published in Nature Neuroscience, the editor asked us to write a ‘refutation’ of one of the articles mentioned (in the Frontiers in Human Neuroscience paper, we do not cite any paper directly). Here are our mistakes: (i) during the preparation of the initial paper, we asked authors for their data, saying it was for general purposes (as it was), but we should also have said that the new analyses might challenge their conclusions directly; (ii) lured by the prospect of a high IF paper, we accepted the invitation to write a refutation without telling the authors first; (iii) we thought we could still distil the essence of our original article into the refutation.

Here is our reply to the editor, agreeing to write a refutation:

Dear Dr XXX,

We agree with changing our article category from 'commentary' to 'refutation' and understand the procedure. However, we would like to stress that our paper goes beyond refuting the XXX et al. paper. Our article also highlights more fundamental statistical analysis problems that plague neuroscience research. 

In particular, we emphasise that:
[1] test assumptions are almost never checked;
[2] effect sizes are very rarely reported and taken into consideration; 
[3] inferences about the data are not always appropriate. 

In the XXX et al. paper, the data contained outliers and the variances were not homogeneous (assumption problems), which led to incorrect statistical results. However, even when results are valid and show significant effects, as in the YYY et al. paper, no confidence intervals or effect sizes are reported, which does not allow readers to compare those results to published ones or to make up their own minds about the size and importance of the effects. For these reasons, we provide alternative analysis strategies. Thus, despite accepting the change to 'refutation', we hope you will consider keeping the other constructive criticisms, such as those concerning YYY’s analyses.

Silly us – once revised, the paper had been stripped of its essence. I am not convinced that the journal wanted to set the record straight. I think their goal was to spark a controversy and generate buzz to boost their IF. Of course, debating a paper and its methods is healthy for science, but only if it is done in a community spirit, with both parties working together, not fighting each other through a non-open-access, interest-driven referee.
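
For readers unfamiliar with the statistical point at stake, here is a minimal, hypothetical sketch (synthetic data only, not the data from any of the papers discussed) of how a single extreme observation can inflate a Pearson correlation between two otherwise unrelated variables, while a rank-based alternative such as Spearman’s rho is much less affected. The robust techniques we actually advocate are described in the Frontiers paper; this is just an illustration of the general issue.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic, unrelated "brain" and "behaviour" scores for 30 participants
brain = rng.normal(size=30)
behaviour = rng.normal(size=30)

# Add one extreme participant who scores high on both measures
brain = np.append(brain, 5.0)
behaviour = np.append(behaviour, 5.0)

r, p = stats.pearsonr(brain, behaviour)         # pulled up by the single outlier
rho, p_rho = stats.spearmanr(brain, behaviour)  # rank-based, much less affected

print(f"Pearson  r   = {r:.2f} (p = {p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```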

Unintended consequences

Once we had accepted the invitation and written the refutation, we sent an email to the authors to let them know that Nature Neuroscience was going to contact them. 

Dear Dr XXX and Dr XXX,

We recently asked for the data from your paper titled ‘xxxxxxxxxxxxx', published in Nature Neuroscience. Our goal was to apply robust statistical methods to your results and possibly find other relationships between brain measures and behaviour. The same request was sent to other groups, including one that also published recently in Nature Neuroscience. After running these analyses, we found that your data might be biased by outliers and that there is no strong evidence for a linear brain-behaviour relationship in your data. Similar statistical problems could be observed in other studies. Following these results, we decided to send a commentary to Nature Neuroscience about statistical problems related to standard analyses such as Pearson correlations, in which we proposed alternative methods. This initial paper was intended to be didactic and used your data only as an illustration. We did not intend to criticize your work, for which we otherwise have great respect.

After consideration by the editors, we have been asked to write a 'refutation', which is a short letter to the editor directly questioning your work. Whilst we have tried to keep the article open and didactic, it is now of course a more direct attack: this was never our intention. You will find the originally submitted article attached. We are now submitting the new version, which will be passed on to you by Nature Neuroscience. If you have any questions, please feel free to contact us by phone or e-mail.

We expected an unsympathetic answer; nobody likes being criticized. The answer was more hostile than unsympathetic:

Thanks for forwarding the article - having read it briefly it is difficult to interpret as anything other than a deliberate and very direct attack. I personally feel that you have deceived me in misrepresenting the purpose for which you asked for the data. While I applaud scientific debate, obtaining data by subterfuge in this way disturbs me. We will consider our next move.

A few other heated emails followed. The authors did not agree with our article or analyses (obviously), but whatever one thinks of the debate, there was one interesting piece:

‘It is regrettable that you chose to submit to Nature Neuroscience without engaging us in any prior correspondence that could have detected and resolved these major errors in your reanalysis.’

We still think there was no mistake (and a few million simulations later, we had another paper demonstrating why alternative correlations should be used: ‘Robust correlation analyses: false positive and power validation using a new open source Matlab toolbox’). The point, however, is that it is a much better approach to contact the authors, discuss their methods, and test alternative methods on their data together. In the best scenario, an agreement is reached and a joint paper is published. If the parties still disagree, a joint refutation plus response can be prepared; this way no party feels threatened or under attack. But that is not what we did, so we changed our minds and wrote to the editor again, trying to illustrate the problem:

Dear Dr XXX, 

After extensive discussion with senior colleagues and exchanges with Prof. XXX (he is cc’ed), we believe that writing a 'refutation' of XXX et al. might not be the most productive way to get our message across to the scientific community. As initially planned, we would very much prefer to write a more general 'comment' on the proper use of statistics, using several data sets as multiple illustrations of the same problem. We do not particularly wish to follow your suggestion of writing a refutation, because we feel that it is unfair to discredit XXX in particular when the statistical flaws we illustrate exist in many other published papers.  

Our position is as follows [ …]
We could go down the route of refutation if these flaws were specific to this study. Unfortunately, the flaws are generic and present in other studies. To illustrate the pervasiveness of the problem, the latest issue of Nature Neuroscience contains suspicious correlations in [and we listed 5 studies]

We are here reminded of your Editorial in the context of the Grill-Spector debacle, which ended with: “We hope that one positive outcome of this correction will be to give authors, referees and editors an increased awareness of the hazards of this approach, so that such mistakes may be avoided in future studies.”  http://www.nature.com/neuro/journal/v10/n1/full/nn0107-1.html. Although this Editorial was written to describe problems that were not related to correlations, it would be very much appropriate in the present context.  [ … ]

Clearly, the editorial board did not follow its own recommendation, because generating buzz is good for the IF. In the end, the paper was not published, which is not a bad thing, as we never intended to attack XXX et al. Note that I am not saying Nature Neuroscience are the bad guys either; among all publishers, the Nature group is, to my eyes, at the top of the list. What I am saying is that publishers of high IF journals have a strong interest in publishing heated discussions and articles, while we (academics) do not.

How this can be avoided

1 – When working on someone else’s data, I think it is always good to have their feedback on your analysis before submitting any paper.
2 – If your paper is a direct criticism, make it constructive and, if possible, write it with the original authors.
3 – In both cases, go open. We (academics) really don’t need journal editors to tell us how right or wrong things are – sharing data and code is the only way forward. You could deposit the data, together with common code applying and comparing the analyses of both parties, on a repository like FigShare or Dryad. This way, readers can test the methods and decide for themselves (a minimal sketch of what such shared code might look like follows this list).
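
As a hypothetical illustration of point 3, here is a short sketch of what such shared code might look like. The file name, column names and choice of estimators are all invented for the example; the idea is simply that both parties run the same script on the deposited data and report effect sizes with bootstrap confidence intervals, rather than only p-values, so readers can judge for themselves.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical deposited data set (file and column names are made up for this sketch)
data = pd.read_csv("brain_behaviour.csv")
x = data["brain_measure"].to_numpy()
y = data["behaviour_score"].to_numpy()

def percentile_bootstrap_ci(x, y, corr, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a correlation coefficient."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample participants with replacement
        estimates.append(corr(x[idx], y[idx])[0])
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Report both a classic and a rank-based estimate, each with a bootstrap CI
for name, corr in [("Pearson", stats.pearsonr), ("Spearman", stats.spearmanr)]:
    r = corr(x, y)[0]
    lo, hi = percentile_bootstrap_ci(x, y, corr)
    print(f"{name}: r = {r:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

Because the script runs unchanged on the deposited file, anyone can rerun it, swap in a different correlation estimator, and see how much the conclusions depend on the analysis choices.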



