Does the NIH fund edge science?

That is a new paper by Mikko Packalen and Jay Bhattacharya, here is the abstract:

The National Institutes of Health (NIH) plays a critical role in funding scientific endeavors in biomedicine that would be difficult to finance via private sources. One important mandate of the NIH is to fund innovative science that tries out new ideas, but many have questioned the NIH’s ability to fulfill this aim. We examine whether the NIH succeeds in funding work that tries out novel ideas. We find that novel science is more often NIH funded than is less innovative science but this positive result comes with several caveats. First, despite the implementation of initiatives to support edge science, the preference for funding novel science is mostly limited to work that builds on novel basic science ideas; projects that build on novel clinical ideas are not favored by the NIH over projects that build on well-established clinical knowledge. Second, NIH’s general preference for funding work that builds on basic science ideas, regardless of its novelty or application area, is a large contributor to the overall positive link between novelty and NIH funding. If funding rates for work that builds on basic science ideas and work that builds on clinical ideas had been equal, NIH’s funding rates for novel and traditional science would have been the same. Third, NIH’s propensity to fund projects that build on the most recent advances has declined over the last several decades. Thus, in this regard NIH funding has become more conservative despite initiatives to increase funding for innovative projects.


At current levels of funding, for the NIH to take risks on truly cutting-edge research, most funded research would have to be classed as a "failure," i.e., a result indicating the need for a new sophisticated wild-ass guess. And the grants would need to be awarded quickly, skipping over research that merely follows up on promising positive results.

The NIH and other agencies combining forces on non-cutting-edge research for the Human Genome Project, for one, produced an explosion of new methods for research and diagnosis and new kinds of treatments. Sequencing parts of DNA was cutting edge, but doing it at the scale of sequencing all of the DNA drove commercial innovation. Since then, many cutting-edge projects have produced promising directions that remain too risky for private funding, after biotech startups failed at roughly a 99% rate.

Even the "successful" commercial biotech startups are still in startup mode, like 23andMe. It and the half dozen other companies using the same gene-chip technology that came out of the Human Genome Project are still overpromising on ancestry origins while producing hints for law enforcement from "big data" on millions of users, a dataset many times larger than any available for medical research.

So, to provide big data for medical research, the NIH is funding the decidedly non-cutting-edge "All of Us" project to collect medical data on one million U.S. resident volunteers. It is merely doing what commercial companies have been doing for profit, but with informed medical consent, so that medical researchers can access the data.

Is George Will's column today a coincidence?

It's not only the NIH that is risk-averse (see the authors' conclusions on pages 18-19); Silicon Valley prefers to invest in the sure thing (digital advertising) rather than in novel ideas.

My favorite news stories coming out of Silicon Valley relate to driverless cars. Now that Silicon Valley has given up on building a reliable car, it is concentrating on more important technology, such as turning the car's windshield into an advertising platform. Here's a representative article (though a simple Google search will produce dozens):

The NIH grant review process is not designed to support "innovative science that tries out new ideas." The review assigns each application an "impact score," which represents the study section's assessment of the project's potential to advance knowledge or improve health. New ideas rarely do either of those things: they are high-risk, high-reward ventures. Most of them fail.

The impact score is, in turn, a synthesis of the study section's assessment of the project along a number of other dimensions. Although "innovation" is one of them, most of them are focused on verifying that the proposal, if funded, has a high probability of achieving its stated aims, and that the achievement of those aims would make an important contribution to knowledge or would directly improve health. Those criteria are largely inconsistent with trying out new things: those criteria will select for slow but steady incremental progress.

Given the need for the NIH to supplicate Congress for funding each year, and given the glare of publicity associated with that process, it is hardly surprising that its research portfolio is heavily weighted toward sure things. Truly targeting innovative, high-risk, high-reward ideas would be institutional suicide.

And, in my opinion, that is how it should be. Our tax dollars should not be disbursed on large-scale, expensive speculation. Keep the speculation small-scale and privately funded. Then, among the speculative ideas that show early promise, public funding can come in for larger-scale exploration of projects that have a mature basis.

Finally, remember that scientific knowledge is a public good, and the private sector will under-produce it. So it seems to me quite appropriate for the NIH to fund the operation of "normal science," and leave it to the private sector to, on the one hand, speculate on a small scale with innovative ideas, and, on the other hand, develop established scientific ideas into marketable goods and services.

COI Disclosure: I am an NIH-funded investigator, and I serve on a study section.

+1 This is why the comments were turned back on.

Dr. Schechter's comment is spot on. There are some private philanthropies that do fund speculative research: the Howard Hughes Medical Institute, for example, funds a small number of investigators who are at the cutting edge of their fields.

George Will: "It has been said that the great moments in science occur not when a scientist exclaims “Eureka!” but when he or she murmurs “That’s strange.” [Abraham] Flexner thought the most fertile discoveries come from scientists “driven not by the desire to be useful but merely the desire to satisfy their curiosity.” He wanted to banish the word “use” to encourage institutions of learning to be devoted more to “the cultivation of curiosity” and less to “considerations of immediacy of application.” It is axiomatic that knowledge is the only resource that increases when used, and it is a paradox of prosperity that nations only reap practical innovations from science by regarding them as afterthoughts, coming long after basic science. The practical lesson from Flexner’s hymn to impracticality is this: Indifference to immediate usefulness is a luxury central to the mission of some luxuries of our civilization — the great research universities, free from the tyranny of commercial pressures for short-term results. Only government can have the long time horizon required for the basic research that produces, in time, innovations that propel economic growth." Flexner was the author of an essay, published in 1939, titled "The Usefulness of Useless Knowledge". Some folks might call the obsession with immediate usefulness "complacency".


Thanks for providing the context.

Small steps towards a better comments section.

Why not eliminate the franchise to distribute grants and confine the NIH to in-house research? See Philip Johnson on the distortions incorporated into grant distribution by the central government.

Negative feedback loop. Better to let other countries do the edge science than to land on Rand Paul's Waste Report.
