How to fix the incentive structure of science

This is from David L. Stern, who is not the David Stern who was formerly commissioner of the NBA:

…I am won over by the arguments that science papers should be made available freely to everyone as soon as authors feel that the work is complete. Posting papers to preprint servers is one good solution; I imagine there are others. (I prefer to call such documents open papers to remove the stigma associated with calling the work “pre” anything.) However, the discussion about the future of open papers has been imbalanced, with too much emphasis on the consequences of open papers for peer review and too little discussion of the fact that scientists are driven to publish in journals because of the existing incentive structure. The CV, and, specifically, journal names (and impact factors, journal reputation, etc.) are used extensively to judge scientists in competitions for jobs, promotions, and grant money. This is the main impediment to widespread adoption of open papers. I have heard many arguments about how it is too hard to change the structure of these competitions and that we should, instead, focus on producing great science in open papers, and let the culture-shift follow. In contrast, I think it is easier to change the incentive structure first; widespread adoption of open-papers will follow, like water flowing downhill.

There are further suggestions at the link.  Hat tip goes to Jeffrey Flier.


I still like the Reddit style open forum for scientific papers idea which allows commentary. Quality can be judged via a combo of up votes and peer reviews.

Not all upvotes are equal. The whole point of peer review is to have a knowledgeable person critique the study.
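To make the "not all upvotes are equal" point concrete, here is a minimal sketch of what a combined score might look like. Everything here is a hypothetical illustration: the function name, the weights, and the scoring scale are assumptions, not a description of any existing system.

```python
# Hypothetical sketch: a quality score that combines raw lay upvotes
# with scored expert peer reviews, weighting the latter far more
# heavily. The weight of 25 is an arbitrary illustrative choice.

def quality_score(upvotes, reviews, expert_weight=25.0):
    """Combine anonymous upvotes with scored reviews.

    upvotes: count of lay upvotes (each worth 1 point)
    reviews: list of (reviewer_is_expert: bool, score: float in [-1, 1])
    """
    total = float(upvotes)
    for is_expert, score in reviews:
        weight = expert_weight if is_expert else 1.0
        total += weight * score
    return total

# A paper with 50 lay upvotes but two strongly negative expert
# reviews can rank below one with 10 upvotes and two positive
# expert reviews.
popular = quality_score(50, [(True, -1.0), (True, -1.0)])  # 0.0
vetted = quality_score(10, [(True, 1.0), (True, 0.8)])     # 55.0
```

The design choice is the objection in the comment above made explicit: expert judgment has to dominate the popularity signal, or the system degenerates into plain Reddit voting.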

Aren't knowledgeable eyeballs too rare for this approach? I once knew of a journal that sent its low-quality submissions (those that would have been desk-rejected outright anywhere else) to reviewers whose own papers had been rejected by that very journal. It got cheap proof of peer review that way.

I've been saying for decades now that, instead of spending hundreds of thousands on journal subscriptions, academic libraries should host part of an online distributed system of open research, to include many forms. The whole notion of a scientific "paper" being the standard unit of output is quite outdated. Sure, individual studies (with archived data) would still be a part of it, along with commentary, wiki-ized summary articles, etc. Just think of all the different metrics of academic productivity this could produce (along the lines of Scholarship Reconsidered). Put the journals out of business. You have to pay them to publish your work, and then people have to pay them to get it... what a racket.

Let's not forget open data either; *carefully watches the climate scientist curiously shoveling dirt onto something*

That is an interesting observation. Something can be good science and still subject to a swarm of partisan attacks. Should vaccine safety be judged by a social network? Perhaps contra-George it's better to make a GitHub type repository for science, and constrain it as social medium.

Open data is critical - which is why this is such an interesting web site to visit -

Admittedly, they aren't 'climate scientists.' Instead, they are researchers interested in polar conditions (reliably predicting when and for how long the Northwest Passage opens would be a gold mine for numerous commercial interests, for example, which is just one of the reasons such research is valuable) using all available empirical data to create an accurate picture of what is occurring in the polar regions of the Earth.

All the data sets are available here -

Though this might make you feel better - not a single climate model is accurate in terms of recent or current Arctic conditions. The satellite and ground based empirical data shows a reality exceeding the parameters of even the most pessimistic assumptions associated with those utterly flawed climate models and their incorrect predictions.

"Though this might make you feel better – not a single climate model is accurate in terms of recent or current Arctic conditions. The satellite and ground based empirical data shows a reality exceeding the parameters of even the most pessimistic assumptions associated with those utterly flawed climate models and their incorrect predictions. "

There have been various predictions that the Arctic would be ice-free before 2016. So this statement is empirically wrong.

"BBC December 12, 2007: Arctic Summers Ice-Free by 2013
Their latest modelling studies indicate northern polar waters could be ice-free in summers within just 5-6 years. Professor Wieslaw Maslowski told an American Geophysical Union meeting that previous projections had underestimated the processes now driving ice loss.

"Our projection of 2013 for the removal of ice in summer is not accounting for the last two minima, in 2005 and 2007," the researcher from the Naval Postgraduate School, Monterey, California, explained to the BBC.
"So given that fact, you can argue that may be our projection of 2013 is already too conservative."

Remarkably, this stunning low point was not even incorporated into the model runs of Professor Maslowski and his team, which used data sets from 1979 to 2004 to constrain their future projections."

Words have meaning, but "could be" has little.

It certainly is counter to the narrative that the current Arctic conditions are worse than any models predicted. However, it does lend credence to prior_test's point regarding the "utterly flawed climate models and their incorrect predictions".

Jump, true. That 'could have' been written by Mother Teresa.

Which climate models predict no reduction in polar ice or predict increases in polar ice?

If all climate models predict loss of ice in the polar regions, where are the inaccuracies? The failure to predict exactly where the ice will be lost, instead only being correct in predicting aggregate loss of ice?

I find it quite funny when economists throw stones at climate theory and models, given that they stand inside glass houses whose windows have already been busted out by their own failed economic models.

When will the 70% cut in oil and gas prices result in a huge surge in GDP growth?

When will 15 years of tax cuts, starting from a balanced budget, pay for themselves with much higher GDP growth than during the 90s?

When will lower labor costs boost consumer spending?

Climate theory and models predict more rain, more drought, more weather extremes, more ice loss, more sea-level rise, more heat waves... which of these predictions is not true? Is anyone arguing that the weather is not changing at all, and that the only reason people see record weather conditions in Atlanta, Wichita, Los Angeles, and Dallas is that those are new cities built in the past decade or two, so no one lived in those places to record the extremes that have occurred every few years for a thousand years?

By that logic, a couple of incidents across tens of thousands of papers means we should ban climate science research, since it's obviously all lies and manipulation. Similarly, the VW scandal is proof that we need to ban cars outright.

Seriously though, open data is a really big deal. Too much work is happening with proprietary data these days, making replicability, a core principle of science, difficult if not practically impossible in many cases.

You mean like this?

People respond to incentives. The analogy isn't exact, but this would be like asking oil companies to cut production while refusing to entertain a carbon tax or cap and trade. While a few scientists who have already secured their positions might jump into the open-paper concept, it is not reasonable to ask others to sacrifice their individual career advancement to set an example for the broader betterment of science.

Perhaps some sort of upvote/downvote system by the broader academy (professors at top 1000 global universities plus editors of a large variety of publications?) could serve as an indicator of quality to stand in if/until eventual journal publication? However, this would penalize innovative research and there would be problems with fairly evaluating research on controversial issues.

it is not reasonable to ask others to sacrifice their individual career advancement to set an example for the broader betterment of science.

You do not stop, huh?

Have you considered that said sacrifice wouldn't even "set an example for the broader betterment of science?"

Have you considered that "scientific community" doesn't even really work in the way it has been described? That the scientific method must flow through humans who must be concerned with things other than "investigating reality?"

How is speeding up science not conducive to the betterment of science? You publish the open paper sooner, get feedback sooner, produce a better version sooner, and can refocus your efforts on a new project sooner.

It often takes years between initial submission to a journal and publication, and post-publication, almost no one without a subscription to the journal will ever read it.

Faster. More accessible. That's good.

"Perhaps some sort of upvote/downvote system by the broader academy (professors at top 1000 global universities plus editors of a large variety of publications?)"

Why editors and Uni professors? These are neither the most talented nor disinterested scientists out there.

No reason not to include some others. The editors and uni professors are less likely to be in a position to profit from partisan or ideological signalling, I believe. Their interests presumably relate more to career advancement and reputation as an objectively "good" researcher/editor.

Peer review isn't just a matter of reputation or upvotes/downvotes.

There are at least three things that a good review should do:

1) Test correctness -- raise potential technical problems with the manuscript.

2) Enforce quality control -- make sure that relevant work is cited fairly, evaluate whether the presentation, language, and formatting of the paper meet community standards.

3) Suggest improvements that will help the author to realize their vision more effectively.

Each of these is much more valuable when the reviewer invests more time in the review, which means that an upvote/downvote system will give mostly useless information.

Having an attached comment section is also a bad idea, for the simple reason that most objections raised by the reviewers under category #1 are wrong. That's expected -- the author has put in several months of work and really ought to be the authority on their own paper. So an "open peer review" system is likely to mix the high value content of the paper with low value content which mostly consists of the reviewer raising objections that they haven't thought through completely. Not unlike the comment section following a newspaper article.

As #overlyhonestmethods put it: "We did not make the corrections suggested by Reviewer A because we think Reviewer A is a *****ing idiot."

Perhaps an additional upvote system for feedback quality, through which an academic might establish themselves as a good reviewer? I'm loosely aware that the open-source software development community has such tools, and it might be the place to look for existing strategies.

One could imagine a future where high reviewer status and good reputation in open papers (including citations) would contribute to career advancement for academics. Some combination of innovation and "competitive" forces would eventually winnow the number of competing systems down to a handful, or perhaps just one. JSTOR could probably do it, but they might not like to kill their business model.
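The reviewer-reputation idea floated above could be sketched very simply: reviewers accrue points whenever their reviews are marked helpful, and the accumulated total feeds into their standing. This is an assumed design for illustration only, not a description of any existing platform; all names are hypothetical.

```python
# Illustrative sketch of reviewer reputation, modeled loosely on
# review karma in open-source communities: helpful-review votes
# accumulate into a per-reviewer score.

from collections import defaultdict

class ReviewerReputation:
    def __init__(self):
        self._points = defaultdict(int)

    def record_helpful_vote(self, reviewer, votes=1):
        """Credit a reviewer each time one of their reviews is marked helpful."""
        self._points[reviewer] += votes

    def reputation(self, reviewer):
        return self._points[reviewer]

    def top_reviewers(self, n=3):
        """Rank reviewers by accumulated helpfulness votes."""
        return sorted(self._points, key=self._points.get, reverse=True)[:n]

rep = ReviewerReputation()
rep.record_helpful_vote("alice", 5)
rep.record_helpful_vote("bob", 2)
rep.record_helpful_vote("alice", 3)
# alice now has 8 points and ranks above bob.
```

A real system would need safeguards the sketch omits, such as vote weighting and protection against reciprocal voting rings, which is exactly where existing open-source tooling might offer lessons.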

At its heart, you're still bundling the high value information of the manuscript with the reviewer's comments, which are mostly low value and extremely variable in quality.

I think many people think of peer review as some kind of test or guarantee of quality, which it really isn't. It's a couple of people of average ability and some prior experience in the field reading a manuscript to see if it meets community standards and maybe offering a few suggestions for improvement.

Peer review with a view to journal publication is certainly no panacea, but it's virtually certain to deliver better results than bought and paid for research.

There's a definite gating factor to peer-review, but the bigger problem by far in science is that many papers are not reproducible due to this effect: (95% confidence limit publication bias demo)

How about mandatory retirement for tenured professors at 60? Fraud has become so widespread because of the extreme competition for tenure-track positions. If you're 30 and entering academia, the message is: either become an all-star in your field or starve. The shift is primarily demographic: boomer professors refuse to retire from their cushy tenured positions.

That's right. It's akin to Federal Circuit judges, who should face mandatory retirement since they often become senile on the bench. It might require a tweak to the US Constitution, however.

I think it would be better for us all if unfireable academics could maximize the span of their careers in the pursuit of various intellectual engagements, without all the troubles of political meddling. Young researchers cannot be so bold because they do not yet have tenure.

Tenure is a bedrock of intellectual freedom of the academy. I consider any attack on tenure to be suspect, except in the case where universities are playing negotiation games to try to keep salaries down.

The worst harm a tenured professor can do is write some inconvenient research, and maybe get a little boring in teaching the same class he/she's taught for the last 30-40 years.

The incentive problem in science isn't the pressure to publish in journals, it's the pressure to publish novel findings, regardless of whether they're true. It doesn't matter if they're published for free or behind a paywall.

The author wants to change academic hiring so radically that it wouldn't even use CVs (or interviews, since he wants anonymous applications). And such a change is supposed to be the easy option?

As a PhD scientist, I have to say that modern science is a remarkably innovative and highly competitive environment, despite the fact that there is almost no profit motive. To me, it's an amazing example of emergent market principles, well outside what we normally think of as "economics."

The original post here reminds me a lot of socialists who criticize capitalism by pointing out that it's not perfect. True, it's not perfect... but you have to make a lot of assumptions to believe you can improve it by radically changing it.

The data transparency issue, also raised here, is highly important, but separate from the core of the original post. We can get more data transparency within the current system, and there has already been clear progress towards this.

"Almost no profit motive." The profit motive is what motivates the vast majority of academics and university scientists. Lucrative salary, tenure, extremely cushy working conditions, and getting paid to do what you enjoy...this is pure profit seeking. The Foucaults and Einsteins...ars gratia artis is rare in the academy, as it is in all segments of society.

OMG, Trump should have totally become a scientist.

"Lucrative salary, tenure, extremely cushy working conditions" are not motivations to compete, but in my experience the average professor at a top university is at least as competitive and certainly more innovative than the average business man I've met at a top companies.

As a mathematician, I don't see any stigma attached to the word "preprint." It serves exactly the purpose it's supposed to: it not only announces but in fact displays a new result before it has had the chance to go through peer review.

Mathematicians these days cite preprints, which I think gives us sufficient incentive to publish them. I'm not particularly concerned about this practice, as I don't think there's a very high risk of propagating erroneous results. I'm not sure about the other sciences.

I have a longer response here. In short: When it comes to identifying new good ideas in the chaff of terrible research, the valuable commodity is the attention and careful consideration of an expert. Any change that impedes the careful rationing of that resource, or that squanders the fruit of that resource (e.g., on badges of approval), is very likely making things worse. Committees don't rely on counting Nature articles because they think the Nature reviewers are smarter than them — everyone considers themselves the smartest — they do it because reading and understanding papers from dozens (or hundreds!) of applicants is an enormous burden. There are conceivable ways to fix this (e.g., radical suggestions like replacing the hiring committees at many institutions with a cross-institution evaluation and hiring board), but just taking away the information from the hiring committee members does nothing except force them to rely on even noisier signals.

