Outcome-Unbiased Journals

Chris Said, a neuroscientist, prods the NIH to support outcome-unbiased journals:

The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. In some rare cases, there is even outright fraud. This waste of resources is unfair to the general public that pays for most of the research.

The Times article places the blame for this trend on the sharp competition for grant money and on the increasing pressure to publish in high impact journals. While both of these factors certainly play contributing roles…the cause is not simply that the competition is too steep. The cause is that the competition points scientists in the wrong direction.

…scientific journals favor surprising, interesting, and statistically significant experimental results. When journal editors give preference to these types of results, it is obvious that more false positives will be published by simple selection effects, and it is obvious that unscrupulous scientists will manipulate their data to show these types of results. These manipulations include selection from multiple analyses, selection from multiple experiments (the “file drawer” problem), and the formulation of ‘a priori’ hypotheses after the results are known.

…the agencies should favor journals that devote special sections to replications, including failures to replicate. More directly, the agencies should devote more grant money to submissions that specifically propose replications….I would [also] like to see some preference given to fully “outcome-unbiased” journals that make decisions based on the quality of the experimental design and the importance of the scientific question, not the outcome of the experiment. This type of policy naturally eliminates the temptation to manipulate data towards desired outcomes.
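The selection effect Said describes is easy to see with a quick simulation. The sketch below (my own illustration, not from Said's post; the prior, power, and significance threshold are assumed round numbers) imagines a pool of studies where only 10% of tested hypotheses are true, and a journal that publishes only statistically significant results. Even though each individual null study has only a 5% chance of a false positive, the published literature ends up with a much larger share of false positives, simply because significance filtering discards most true negatives:

```python
import random

random.seed(0)

N = 100_000        # hypothetical studies submitted
prior_true = 0.1   # assumed fraction of tested hypotheses that are actually true
alpha = 0.05       # chance a null study comes out significant (false positive)
power = 0.8        # chance a true effect comes out significant

published_true = published_false = 0
for _ in range(N):
    is_true = random.random() < prior_true
    # a study reaches significance with probability `power` if the
    # effect is real, and probability `alpha` if it is not
    significant = random.random() < (power if is_true else alpha)
    if significant:  # the journal only accepts significant results
        if is_true:
            published_true += 1
        else:
            published_false += 1

share_false = published_false / (published_true + published_false)
print(f"False positives among published results: {share_false:.0%}")
```

With these assumed numbers the analytic answer is (0.9 × 0.05) / (0.9 × 0.05 + 0.1 × 0.8) ≈ 36%: roughly a third of the published findings are false, with no fraud or p-hacking required. An outcome-unbiased journal, by publishing regardless of significance, removes this filter entirely.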
