Outcome-Unbiased Journals

Chris Said, a neuroscientist, prods the NIH to support outcome-unbiased journals:

The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. In some rare cases, there is even outright fraud. This waste of resources is unfair to the general public that pays for most of the research.

The Times article places the blame for this trend on the sharp competition for grant money and on the increasing pressure to publish in high impact journals. While both of these factors certainly play contributing roles…the cause is not simply that the competition is too steep. The cause is that the competition points scientists in the wrong direction.

…scientific journals favor surprising, interesting, and statistically significant experimental results. When journal editors give preferences to these types of results, it is obvious that more false positives will be published by simple selection effects, and it is obvious that unscrupulous scientists will manipulate their data to show these types of results. These manipulations include selection from multiple analyses, selection from multiple experiments (the “file drawer” problem), and the formulation of ‘a priori’ hypotheses after the results are known.

…the agencies should favor journals that devote special sections to replications, including failures to replicate. More directly, the agencies should devote more grant money to submissions that specifically propose replications….I would [also] like to see some preference given to fully “outcome-unbiased” journals that make decisions based on the quality of the experimental design and the importance of the scientific question, not the outcome of the experiment. This type of policy naturally eliminates the temptation to manipulate data towards desired outcomes.
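Said's selection-effect point is easy to quantify. Here is a minimal sketch in Python (the lab counts and the "five analyses per lab" figure are illustrative assumptions, not anything from Said's post); it relies on the fact that under a true null effect, p-values are uniform on [0, 1], so reporting the best of k analyses inflates the false-positive rate:

```python
import random

random.seed(0)

N_LABS = 10_000     # labs studying an effect that is truly null
K_ANALYSES = 5      # alternative analyses each lab could try
ALPHA = 0.05        # significance threshold journals reward

# Under a true null, each analysis yields a p-value ~ Uniform(0, 1).
def best_p(k):
    """Smallest p-value among k alternative analyses of the same null data."""
    return min(random.random() for _ in range(k))

honest = sum(random.random() < ALPHA for _ in range(N_LABS)) / N_LABS
cherry = sum(best_p(K_ANALYSES) < ALPHA for _ in range(N_LABS)) / N_LABS

print(f"false-positive rate, one pre-registered analysis: {honest:.3f}")
print(f"false-positive rate, best of {K_ANALYSES} analyses:       {cherry:.3f}")
# Analytically, the second rate is 1 - (1 - ALPHA)**K_ANALYSES, about 0.226.
```

Nothing here requires fraud: each individual analysis is legitimate, and the inflation comes entirely from which result gets written up.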


But I heard that the right was CRAZY when they suggested that incentives could be influencing climate change research.

It is, but that's beside the point.

What is relevant is the number of studies on one side and the paucity of studies on the other (even with the support of the coal and oil industries), and the fact that the side with fewer studies claims there is a big dispute among experts.

What do you think a poll of climatologists would say?

Jeff, so as not to keep you in suspense: 97% of active world climatologists believe that human activity is warming the atmosphere.


97% of climatologists "actively publishing on climate change" - 88% of climatologists - said that human activity "is a significant contributing factor in changing mean global temperatures".

I guess we know what Bill thinks a poll of climatologists would say.

"Science is the belief in the ignorance of experts." Richard Feynman

Know why you've never seen a poll on the atomic weight of cesium? Because real science doesn't need polls, it has evidence.

Bill, the spectrum is not between "is climate change real" versus "is climate change a hoax." The spectrum is the extent to which increasing carbon dioxide will warm the earth, and the extent to which warming the earth will produce net harm for the planet.

I would be interested to see what a poll of climatologists thought on the extent to which research can be influenced by the factors mentioned above. However, if these things do exist (and I don't know that they do, that research is over my head, but it certainly looks like the incentives are similar), would we really expect researchers to be aware of them? If they were, how would this system develop in neuroscience?

The spectrum among climatologists is between "climate change will warm the planet by a couple of degrees over the next century, producing only minor dislocations and negligible economic harm" and "climate change will warm the planet by a dozen degrees over the next century, wiping out all human life."

The spectrum among the public is between "climate change is real" and "climate change is a hoax."

People who make arguments about incentives frequently fall into the "hoax" camp, often because they have their own incentives not to accept that human activity can affect the climate.

No, the actual spectrum is between

"Any human externalities (even and especially emotionally overwrought ones) must be aggressively managed" and "No, let's have some sanity."

You see this repeated from zoning out treehouses up to shutting off the energy that feeds and moves the economy.

What you guys are talking about is just marketing.

Would a dozen degrees really wipe out all human life?

By 'degree' do you refer to PhD holders?

Energy companies spend a small fraction of the money spent by Big Green on climate research. When an energy company does submit a result, it is immediately denounced by the left with ad hominem attacks.

Energy companies are in the business of providing energy. They aren't supposed to be in the business of refuting academic kooks whose obscure field of study was hardly noticeable until they concocted an Earth destroying catastrophe to justify their continued existence.

97% of African American Studies professors believe in reparations for slavery.

97% of Women's Studies professors believe in the Equal Rights Amendment.

100% of leftists are leftists Bill. "Climatologists" are defined by their beliefs, not their function. Try getting a grant for a research proposal that aims to dispute AGW.

You can't easily get a man to believe anything when his paycheck demands that he believe otherwise.

I'm so glad that you rationally laid this out with incontrovertible evidence, rather than simply making up numbers.

You're the one who started it with the 97% poll.

TmC--At least MY poll was real. In case you didn't figure it out, Willis's were fiction.

"…scientific journals favor surprising, interesting, and statistically significant experimental results."

This provides no incentive for someone to publish an article confirming anthropogenic climate change (not surprising). However, it does provide incentive for someone to publish an article showing that a different process is responsible for the observations.

It's not exactly that simple. True conversation.
Me: What if I showed this?
Advisor: Not novel enough
Me: What if I showed that?
Advisor: No one would buy it.

That was actually identified as a major problem in cancer research -- it is much easier to work on something that has a high probability of extending patients' lives by a couple months than a low (but real) possibility of curing them. Incrementalism has become the rule.

There are many people and corporations with both the money and incentive to influence climate change research. Generally speaking*, they don't. They direct most of their effort to public relations campaigns. Why, do you think, would they do that?

* A notable exception is the Berkeley Earth Surface Temperature study, funded in part by the Koch brothers. It found that the earth's surface is warming and that the temperature increase is as reported by NASA, NOAA and HadCRU.

BEST also showed that the earth has been just as warm in the past, and has experienced faster climate changes in the last 1000 years - adding to the evidence refuting that CO2 is the issue.

Here is a quotation from one of the BEST reports.

"In this paper, this framework is applied to the Global Historical Climatology Network land temperature dataset to present a new global land temperature reconstruction from 1800 to present with error uncertainties that include many key effects. In so doing, we find that the global land mean temperature has increased by 0.911 ± 0.042 C since the 1950s (95% confidence for statistical and spatial uncertainties). This change is consistent with global land-surface warming results previously reported, but with reduced…"

Have you read any of them?

I would imagine that it's important not to toot that particular horn too loudly if you're a climate change skeptic, for fear of self-hoisting.

You might think of some solutions to this problem, one of which was not readily available ten to fifteen years ago:

Require the publication of the data along with the paper, as a PDF file or Excel sheet.

There is another benefit of this: you could do meta-analyses just using Google to assemble data across many studies, without having to contact the author for his data.

Most science is less about extracting a non-obvious signal from straightforwardly collected data, than about collecting difficult data itself.

There is a tradition in some fields (but not others) of granting laboratories that do the latter a monopoly over the former, which complicates demands to "release the data" (the Lenski affair, etc. - scientific practice standards will happily accept that while Lenski's lab is obliged to release an ancestral strain of the E. coli sample, or the results of any statistical analysis relevant to the paper's claims, Lenski is wholly entitled to refuse to release the raw output of any machines used to analyze the bacteria, or any potentiated strains at all - otherwise, everyone would want to run the final step but not do the gruntwork. Scientists in these fields naturally regard the intermediate steps as not "data" in this context).

Yes, but if the data is disclosed, you can replicate (or try to replicate a small set) to determine if the larger set is valid. Releasing the data reduces the costs of challenging it.

In those fields where the main difficulty is collecting data, the "data" is "we took this strain of bacteria and did X, Y and Z to it". How do you package this in a PDF or an Excel sheet? The relevant fields already regard describing the required steps as having released the data, by both prevailing ethical and legal interpretation.

What you're really demanding is that any electronic records generated during the analytical stage be released as well, which is not quite the same as demanding "the data". Like Lenski, the laboratory would just respond that the records alone are scientifically meaningless. Such is how the experimental sciences view the meaning of "data".


My experience with economists generally, as compared to economists when they serve as expert witnesses, is that the survey or dataset is not published. As to science, if the issue is a format question, i.e., Excel or PDF, the solution would be to define the disclosure to address the problem: to borrow from deposition subpoenas, "documents and data sufficient to show and permit a third party to verify". Hope this answers your Excel-or-PDF issue.

It doesn't, because most sciences are not like economics and you seem to be failing to understand this. Surveying is straightforward, but most experimental sciences are not.

david, I confess that I do not do life or physical science, but social science. But if you followed the comments below, the differences regarding data may be small if you specify how you created the data, as well as posting it.

If you require data revelation and a description of how the data were generated, then at worst no one will find the truth any more costly than they do now.

But, I believe the cost will be lower with the suggestions I made above.

How about a try?

Once a strain is described in a publication, the lab is expected to share it upon request. They may drag their heels a bit if they don't want competition, but denying the request would look very bad.

For this reason, there is substantial pressure to quickly publish any follow-up studies once the initial findings are published. The initial lab always has a good head start (for many reasons, including that it takes several months to get something published).

I think the point that may be missed here is that if you think of the problem as one of fudged or poorly analyzed data, then requiring disclosure at the outset deters parties from submitting the article in the first place. At least in the areas I deal with, if an economic expert for my side gets the other expert's data, he or she can quickly turn around a counter-analysis. The problem is often getting the data. But that problem is eliminated if the expert is required to file the supporting data with his submission, or the submission isn't accepted.

but denying the request would look very bad

And yet, as far as direct competition is concerned, denying the request is what more often than not happens.

You would have this public data published in a proprietary format?

Anyway, supplemental PDFs and spreadsheets are already common, but they don't really enable re-analysis of any data. Raw DNA sequence is the most useful for reanalysis, and is held by many repositories (e.g., GenBank).

See my comment above. You can specify the data submission as data sufficient to permit a third party to verify, and be silent on the format. Sometimes seeing how someone formats the data -- how data is fit into categories, rounded, or extrapolated, or the sample size for categories, or what data was excluded as outliers -- can tell you much.

Assuming of course that the data sheet itself isn't already biased. If I were going to tip the scales of the study, I wouldn't start at the analysis stage. I'd fiddle with the diagnostic criteria and patient selection. Maybe unblind a couple of patients. That kind of thing. Ultimately, the only guard against experimental bias is replication.

You have to include how you cooked, er, collected the data, the methodology, what is included and excluded. You just don't get to give me a number on faith, nor is that what I'm asking. Give me the data and how you got it.

By way of background, I'm a lawyer who uses and abuses economic experts. You don't take their data without looking at how it was generated and from what sources. If they have survey data, you look at the questions, the tests, what is excluded as an outlier. If data is composed, you look at how it is aggregated into categories, what the selection criteria are, how outliers are handled, and what the range of measurement errors and tolerances is. If it is analyzed and run through stat software, you always ask, because they will be surprised: did you check for heteroskedasticity? Stops them every time. I still don't know what it means.

I still don't see any argument against requiring disclosure of the data, the raw data, and the methodology for collecting the data along with the paper.

I have often wondered how many times studies which show no statistical result are replicated, simply because the first study was not published and so no one knows that it's been tried before. I know that I've had more than one project die because the null hypothesis was not rejected. An outcome-unbiased journal would be very welcome.

Confused. The link in the first line goes to a 2005 post on John Ioannidis's "Why Most Published Research Findings Are False." Where is the link to Chris Said?

Here is the original post: http://filedrawer.wordpress.com/2012/04/17/its-the-incentives-structure-people-why-science-reform-must-come-from-the-granting-agencies/

What does Said suggest that journals do to encourage casual observers of science to purchase as many copies of outcome-unbiased journals as they do of standard "surprising, interesting, significant" journals? Because, let's face it -- replications, failures to replicate, statistically insignificant results, et al. are BORING. Occasional readers are just going to bypass all that stuff, and occasional readers are where the income is for journals -- not in subscriptions, which are discounted anyway.

What is needed is outcome-insensitive journal funding mechanisms, not just outcome-insensitive selection criteria (or outcome-insensitive science grants).

I think the solution lies in having non-journal models for replication and validation. A magazine prints the story; fact checkers do the grunt work out of the limelight.

Maybe funding agencies can retain in house validation teams? Or some similar model.

Grants get funded before the research is done, so at least for a given study, the grant funding is outcome-independent. However, your chances of getting the next grant depend on whether you seem to have done good work with the previous ones, and that's going to be partly dependent on the outcome of the experiments you've done before. The guy who got a bunch of interesting results in the past is probably a better bet for getting new interesting results in the future than the guy who has done a lot of solid experiments, but never seems to find anything new.

Probably the way to do an outcome independent journal would be to get some kind of tentative approval for the paper based on a description of the proposed experiment, ideally before the experiment is even done. Then, the researcher does the experiment, follows up on it, and eventually has a paper that is submitted for publication. And the reviewers start with the notion that if they did at least what was claimed for pre-approval and seem to have done a good job, it should get published regardless of what the result was.

Failures to produce expected results even once should also be published. Blind alleys should be marked.


I proposed that here: http://www.overcomingbias.com/2010/11/results-blind-peer-review.html

Not novel enough.

(that's a joke, it's funny)

Or have authors submit the Introduction, Methods and study-goals / protocols before they start the study. The referees do not even have to read them until later.

How 'bout caps for principal investigators. It kind of makes a mockery of academia for researchers to "run" hundred member labs.

Andrew....increasingly, even in neuroscience, there are "big science" problems, akin to the human genome, that require Craig Venter-sized labs to get the job done. The connectome project (http://www.humanconnectomeproject.org) is one such problem in neuroscience. Henry Markram's brain simulation project (and its competition from Dharmendra Modha at IBM) are others.

I agree that there are big projects, but I disagree that these projects are not distributed or cannot be distributed. The individual as a research subunit will always be with us, as long as academia grants doctorates and dissertations, and papers with first authors are produced. There are, for example, many people contributing to what we may at some point call the human connectome. Basically anyone with an MRI can contribute. I don't know why it must be done under one roof or under central direction. Also, as with the human genome, it's not clear what the benefits were of completing the project relative to the opportunity costs.

My sense is that even for most of these big projects they are still distributed; it's just that a figurehead gets a lot of the credit, partly due to how research is funded, which creates a positive feedback loop. So the implication that these big projects must be done under one roof may just be a self-fulfilling prophecy. The trendy professor gets the big grant when the grants could just as well, though not quite as conveniently, have been spread around.

The purpose of papers is to communicate across the research universe. Paper production is also the way we judge whether we have successfully produced a new original researcher. It seems like one of the main losses of standard research communication is the inefficiencies we are talking about here with positive result bias. So how much is saved by having big projects in-house? What is lost by having everyone in a lab work on a singular project and what is gained? If a project is so big and interconnected that it has to be run like a corporation, maybe it should be run by a corporation. Is a large research lab like a city with positive spillovers and cheaper transaction costs, or is the requirement of communication through papers a feature rather than a bug? Those are economics questions.

Hi Andrew....responding to just a piece of your post, "Basically anyone with an MRI can contribute"....actually this isn't true. MRIs aren't really mass produced, and each one is somewhat unique. For example, the 3T Siemens Allegra that is 15 feet below me right now sits in its own unique Faraday cage to shield it from all the various radio frequency and magnetic inhomogeneities in this building. The bottom line: it's really difficult to share data between MRIs (one of the unstated problems, perhaps, with replicating fMRI results). In general, this technical variability (introduced by instrumentation or location) is a very tough problem with any high-performance technology, so all things being equal, scientists like to "zero out" that confound by running everything on the same machine in the same place.

Look, it's a fully feudal system: the PI's "land" is his position in the university, which allows him to find money; he uses the money to buy labor, and then he owns 100% of the fruits of the labor and pretty much controls the future scientific life of the laborer.

Would you agree that a significant 'abstract submission bias' exists on the part of authors too, who have equivocal results from research? Is there then an incremental bias, first from the authors and then the reviewers, both weeding out the 'BORING' stuff?

The scientist's incentive is to publish highly-cited papers. Publishing in heavily-read journals is a means to that end, but even if the journal is outcome-unbiased, the readership is not, and an exciting result will garner more attention and more citations than a boring result.

The scientist's incentive structure remains.

Yep. Even if the journal publishes the "we tried to find out if X causes Y and saw no evidence that it does" papers, the readers will be a lot more interested in the 5%+epsilon "we tried to find out if X causes Y, and saw evidence (p=0.05) that it does."
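This is also why a literature filtered on p < 0.05 can be mostly wrong, which was Ioannidis's point: the positive predictive value of a significant result depends on how often the hypotheses being tested are true in the first place. A rough back-of-the-envelope sketch (the prior and power values below are illustrative assumptions, not estimates from any field):

```python
# Positive predictive value of a "significant" finding, Ioannidis-style.
# prior = fraction of tested hypotheses that are actually true
# power = chance a real effect is detected
# alpha = significance threshold
def ppv(prior, power=0.8, alpha=0.05):
    true_pos = prior * power          # rate of true positives
    false_pos = (1 - prior) * alpha   # rate of false positives
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:5.2f}  ->  PPV of a p<0.05 result: {ppv(prior):.2f}")
```

When researchers mostly test long-shot hypotheses (a low prior), the bulk of the "significant" results readers flock to are false positives, even with no misconduct at all.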

Publishing is a bad incentive. Profit is a better one. Most publishing scientists are academics, and therefore subsidized. They're competing for status, not for money. Stop subsidizing higher education and see whether that alleviates the problem.

Those of us who believe in an objective reality (heat flows from a high temperature to a low temperature, hereditary information is encoded in DNA, chemical reactions are re-arrangements of atoms) would say that exactly the same facts would eventually be uncovered. The schedule might be different.

The schedule is the important part.

Perhaps, but is academia doing it right? A lot of human capital is wasted way out on the edge of knowledge on narrow paths leaving a lot of applications "understaffed."

If we really want results-blind peer review, the review stage (for experimental work, anyway) needs to come *before the experiments are run*. Experimenters would submit their design and protocols to journals, journals would accept or reject them, and then experimenters run the experiment. I do not see this as plausible, but any other attempt to hide the results I do not see working, either.

I like this idea. Of course, you still have selection bias.

This hits it right on the nose. Not only would you take a stab at biased results this way, you'd also encourage publication of null results, which would help prevent duplication of failed efforts.

It would also help for reviewers to not know authors' identities rather than just the other way around. That would be even harder to accomplish successfully, though; you can often guess at the author of a paper based on writing style, citations, etc.

@Prof Hanson, I am not holding my breath for journals to be outcome-indifferent, but my idea is to consider instead an alternative and complementary, invited paper journal format as follows:

Editor identifies promising research problem, and assigns it to 5 established researchers (and their teams or coauthors), invites them to write a paper. All papers will be published after peer review, to fix small kinks and ensure a reasonable level of polish. Readers, and future authors, can contrast the 5 papers in terms of methods, results, caveats, etc. Some will have positive results, others negative.

I am not suggesting this approach to replace entirely the current system. It would not work. But I think it would play an important role to complement the traditional, positive-outcome-biased journals.

This waste of resources is unfair to the general public that pays for most of the research.

One of the inevitable downsides to a growing gov't role in science is that scientific integrity largely gives way before politics -- just look at climate science. At this point even the evidence humans have had any significant effect on CO2 levels is (at best) contradictory because of confusion over isotopic origins and the smoothness of the trend, to say nothing of the unreliability of GCMs, yet nearly all the scientists involved strongly espouse spending trillions based on a CAGW viewpoint only weakly supported by scientific evidence, to the point of colluding to suppress skeptics and even calling for "deniers" to be tried for war crimes.

Public trust in "science" is not surprisingly falling. I think this is very dangerous; it took Mankind many millennia to struggle past superstition and unscientific thinking into the scientific light that is largely responsible for our much higher living standards.

I work on phytoplankton. With respect to your claim that "evidence humans have had any significant effect on CO2 levels is (at best) contradictory because of confusion over isotopic origins and the smoothness of the trend," could you kindly direct me to papers demonstrating that humans may not be having an effect on atmospheric CO2 levels?

I'm also not sure how GCMs relate to the human contribution to atmospheric CO2 concentration, perhaps you could clear that up as well?

Thanks in advance.

How about a Journal of Negative Results? There'd be so much competition for entries that it could become the most prestigious journal out there.
