A market system for improving peer review?

Paul Frijters and Benno Torgler have a new six-page paper (pdf) on that topic; here is the abstract:

The current peer review system suffers from two key problems: promotion of an in-crowd whose methods, opinions and innovations it protects; and failure to represent the opinions and interests of non-peer clients. As a result, whole disciplines orient themselves toward navel-gazing research questions of little import to society or even science as a whole, and new methods and concepts must be unusually persuasive to break through. We thus suggest a more efficient and integrity-preserving system based on an open two-sided market in which buyers and sellers of peer review services would both be subject to a set of recursive quality indicators. We lay out key features we think would be important to reduce the opportunities for gaming and that improve the signals about the societal value of a contribution. Our suggestions include a level of reward offered by the author of a paper to get refereed and a level of desired quality of the referee. They include randomly selecting from a group of referees that express a willingness to accept the offered contract. They include the possibility that papers are put up by non-authors for peer-review for assessment on different criteria, such as societal relevance. And they finally include the possibility that referee reports themselves become refereed by other referees. What we envisage is that such an open market in which all elements are subject to peer review will over time lead to specialized reviewers in different criteria, and more useful signals about the nature and quality of any individual piece of work. Our incentivized market set-up would both professionalize the peer review process and make it completely transparent, an innovation long overdue.

Interesting, but the main problem with the idea is simply that no one cares.

For the pointer I thank Ben Southwood.


My favorite phrase from the abstract: " the possibility that referee reports themselves become refereed by other referees."

"Our suggestions include a level of reward offered by the author of a paper to get refereed ... " Have the authors considered how this will promote "an in-crowd whose methods, opinions and innovations" are protected?" Think about the young professor who hits it big with a new method, gets an enormous salary increase, and is able to offer a reward for subsequent papers. Talk about first mover advantage. This is enough to shoot down this proposal. One might respond that we can build this cost into our research proposals, but lots of research is done without grant funding.

Or consider the situation that I speculate about at my school. One of my colleagues has a rich spouse (this I am not speculating about). I suspect that she farms out all her sophisticated statistical analysis. You can see where this is going.

I don't see your point. Surely you will acknowledge that big schools with big R&D budgets do better research than a mad inventor in his mother's basement? I'm not talking about pioneering work, just research. And isn't first-mover advantage exactly what is rewarded in science anyway? Nobody gets credit (sadly) for confirming others' work.

A bit rich that Torgler, who was caught literally abusing the peer review system, is the one writing this paper. Here's an idea to fix peer review: don't be a jerk that abuses the mutual trust on which academia relies, and instead honestly represent the novel content of one's work and the contribution to the literature.

http://olafstorbeck.blogstrasse2.de/?p=949 ; an official rebuke of Torgler and Frey was published by Autor in the JEP http://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.25.3.239

Agreed with anon above. There are no less-respected economists in Australia than Torgler (plagiarist) and Frijters:

It's in the "front matter" to the March 2005 AER:


To save you looking it up, here's the text:

"The June 2004 issue of the American Economic Review includes an article by Frijters, Haisken-DeNew, and Shields titled "Money Does Matter! Evidence from Increasing Real Income and Life Satisfaction in East Germany Following Reunification." This article is similar in content to a paper by the same authors published in the Summer 2004 issue of the Journal of Human Resources. Authors submitting papers to this journal are reminded that all prior work and its relationship to the paper being submitted should be fully described. "

I smell BS.

A) "navel gazing" is what 99.9% of science is. Navel gazing is precisely...why...science is undertaken in universities. If there were any direct "social value" to their research, they wouldn't need to peer review it and publish it in journals that no human being will ever read, other than those who referee for the journal.

B) What exactly does this have anything to do with what is of importance to...society? Society finds it important to write blog posts on Beyonce. Pretty sure the entire point of the academic publishing business is to...not...do things that society finds of importance.

Jesus H Christ! If we all had to write papers of "importance to society", we'd all write papers on health outcomes and happiness, like Frijters does. And yet, some of us...who may or may not be part of society...could give two s**ts about health outcomes or happiness (other people's at least). I want to write about WTF I want to write about, Frijters. That's the point of academia.

C) How do...any...of these suggestions lead to any of the supposed intended outcomes the author suggests? I don't see it.

D) Are the authors aware that authors can write directly to the editor, suggesting reviewers, or suggesting who should not be a reviewer, precisely due to conflicts between their paper and those of potential reviewers?

E) New concepts and methods must...by definition...be unusually persuasive to pass muster.

F) Even if what they are saying were remotely true, those who find their "worthwhile" ideas hard to publish in in-crowd journals will go publish them elsewhere. And if those ideas were indeed good ideas, they will be cited elsewhere. And the previously crappy journal will rise in rank relative to the good in-crowd journal, because it snatched a quality paper. The fact that this happens fewer times than one can count on one hand indicates that it's not really a problem.

And yet, strangely, contradictory theories and evidence are published in the same journal all the time. Heck, if you wrote a convincing paper that contradicts prior theory or evidence, I'm pretty sure you'd be publishing in AER not in AER ;) (American Economic Review instead of Australian Economic Review)

"I want to write about WTF I want to write about, Frijters. That’s the point of academia."

That's great as long as you aren't relying on any funders that expect your work to have an impact.

Is that what funders are looking for?

According to this PDF, by far the biggest category of funding is Medical, where we would want concrete results.

Interestingly, research money does tail off from the most practical to least, with humanities receiving a fraction of 1%.

Social sciences usually don't receive many grants or much external funding, because 95% of our work requires no money to do.

"but the main problem with the idea is simply that no one cares"

And no one should care. Social sciences are already full of 20-something bright eyed dimwits who think they are going to change the world and revolutionize everything with their...sh**ty dissertation on cell phone markets in Gabon, or transgender bathroom use in Tanzania, SAT outcomes in rural West Virginia, lumber yard industry in North-Eastern Oregon....or God forbid....the 67,000th dissertation on residential segregation (which should have been resolved 67,000 dissertations ago, but strangely wasn't)

Social science journals are already too full of completely irrelevant papers...like 90% of Frijters papers...simply because the authors and the editors think that they "make an impact on society", and they "matter". Irrelevant, to the discipline that is. Quite relevant to...well absolutely no one actually.

Doing something that actually advanced knowledge...for knowledge's sake...is far more interesting, far more influential, and far more useful than writing yet another sh**ty paper on health outcomes using the world's most obscure and idiosyncratic data source.

God forbid there come a day when papers are published on the merit of their "social value".

I used to believe that 98 percent of academia was utter useless horseshit, by and for people who simply hated being outside of their bedrooms since they were six years old.
Now that I teach at a top university (for fun) I'd amend that to say 99.5 percent of academia is utter horseshit.
What a racket! I have to admire it. A truly successful con job.

The point of academia is precisely to do things that people like you find no use for whatsoever.

"Ew!! Hippocrates, WTF are you cutting up dead bodies for? What use is that to anyone? What a dumb f**k you are Hippocrates!"

- Facebook, 400BC

Some research that appears useless might later turn out to be useful. But that doesn't mean all research will be.

As someone in the field, I think those who worry that a large portion of research is not useful are correct. I wouldn't go so far as to say "con job", but there is definitely a big problem here.

Private businesses also often go down 100 dead ends before getting a big hit. There's just more disclosure in the public stuff, which leads to a degree of bias due to what may be observed.

Of course a lot of promising ideas don't work out. But there is plenty of science being done that clearly has little or no promise in the first place. Subcategories of this:

1) The ideas just weren't good ideas.

2) The topic being studied wouldn't have any real-world impact.

3) Not enough rigor is applied in the methods, so that useful evidence is not being gathered (even if the topic is worthy).

The third category might be the biggest problem.

This is a lot worse in the academy than in the business world, because the incentives (grant funding and recognition via publication) are poorly aligned with real-world "success".

"1) The ideas just weren’t good ideas."

Don't get published except in C-level journals. So nobody cares.

"2) The topic being studied wouldn’t have any real-world impact."

Beyonce has more "real world impact" than Acemoglu's combined volumes of papers. So?

"3) Not enough rigor is applied in the methods, so that useful evidence is not being gathered (even if the topic is worthy)."

And that stuff never gets published except at C-level journals.

When we're talking "academia", we're talking A-level journal stuff (maybe some Bs). The rest is irrelevant by function of being irrelevant even to academia.

@AIG, now saying that only top work counts as "academia" is seriously shifting the goalposts. But even ignoring that, lots of stuff published in top-level journals has insufficient rigor to be useful. The main criterion for getting into a top journal is a result that is "interesting", not the rigor of the methods or the strength of the finding.

See recent controversies about failure to replicate research findings from top journals.

dan - perhaps you're right on all those counts. But what happens in academia is available for the public record, whereas this does not apply in business. And, I think it's reasonable to group together in "business", the armies of tinkerers and wannabe inventors who invest significant time to try to find stuff that works.

The headline statistics can only overestimate the rate of failure in academic research and can only underestimate the rate of failure in business.

"Some research that appears useless might later turn out to be useful. But that doesn’t mean all research will be."

The point is...you don't know ex-ante what will be useful, or when. Hence, there's no requirement for it to be "useful". If it were "useful", it wouldn't be done by lame academics.

"As someone in the field, I think those who worry that a large portion of research is not useful are correct"

They're not correct because that's the intent of academic research (especially in social sciences).

This is a ludicrous position. It is possible to tell ex ante that studying cancer treatments has more potential benefit than studying acne treatments. It is possible to tell ex ante that one study is built on established evidence, while another doesn't have any likely basis. It is possible to tell ex ante that an underpowered study using poor methods is unlikely to generate any useful evidence, while a large rigorous study is much more likely to generate evidence.

Your claim that social scientists don't intend to be useful is amusing, since this is kind of what the critics are saying! You are basically restating Dave Mattins' point while claiming that it is disagreeing with him. Personally, though, I don't take such a dim view of the social sciences.

Maybe the point of research papers is irrelevance; maybe the consumers of it have been conditioned to ignore it. Today's NYT column by Ross Douthat is a lament about Republican primary voters: Douthat is befuddled that the primary voters don't seem to care what Trump says or what policies he supports. Where have the ideological voters gone? But haven't Republican voters been conditioned to ignore what Republican candidates say, other than the dog whistles the candidates want them to hear, since what they say on the campaign trail and what they actually do once elected have little in common? After all, do the candidates make a show of the promise to cut rich people's taxes as the first and most important policy goal? Of course, Douthat doesn't mention dog whistles and tax cuts for rich people. Research papers, like campaigns, are about irrelevance, the most important goal for both unmentioned, the research papers and campaigns mere distractions from the possibly unpopular goal.

Wow, you REALLY wanted to talk about that, didn't you?

What if the researchers' goal is to facilitate the transfer of rich people's assets to offshore accounts where they won't be subject to U.S. tax, or to cut government regulation of fossil fuel emissions, or to facilitate the adoption of voting restrictions most likely affecting African Americans and Hispanic voters, or to cut unemployment benefits. Would the researchers announce that's their goal? No, they would announce that their policy goal is determined by wherever their research blindly takes them. The researchers' problem is that consumers of research papers, like the Republican primary voters Douthat laments, have been conditioned not to believe it. I'm a lawyer, and lawyers are taught never to ask a witness a question if the lawyer doesn't know what the answer will be. That's understood, because lawyers are advocates. Advocacy has spread to the social sciences, the consequence of which is a loss of credibility outside the circle of ideologically aligned academics. Is that a fair assessment? Is it a fair assessment of Republican candidates by Republican primary voters? Credibility is the real issue here, and unfortunately, once lost, credibility is difficult to regain.

The ideological voters haven't gone anywhere, they just never existed. At least not to the extent that the NYT and WSJ editorial board believe they existed. These guys spending hours poring over policy nuance are wasting their time - what was Obama's nuanced policy about health care reform in 2008? Who knows, because it wasn't even within spitting distance of whatever eventually got passed.

It's a good idea... people might start caring once the decades-long shift to publishing all data & code with a paper is complete.

LOL. Yeah...nothing to make lay people "care" more than...publishing SAS code :)


Peer review by Wiki is the answer IMO.

Wikipedia works amazingly well, even in spite of some in-group bias. And it costs nothing.

I can imagine some sort of scientific community forming around maintenance of a knowledge wiki about some field, with different branches and positions represented, linking down to original research. All original research pages would have a talk page where peer review would effectively take place. With the wiki becoming essentially the "center" of the community, effectively the journal of record, the star contributors would gain prestige in the community just by doing a lot of peer reviews. In fact, some people might not even do any original research and just spend all their time reviewing and critiquing and still be considered experts. It would be a radical reorganization of the way science works, but perhaps we need something to shake up the centuries-old tenure and mentoring system, which goes back to the guild system of medieval Europe.

"Wikipedia works amazingly well"

No...it works amazingly poorly. It only works after editors step in to stop people from erasing everything they don't like, and the editors themselves are fat 15 year old neck beards in some basement somewhere.

You do realize, of course, that there are...hundreds...of journals of this sort, i.e. anything goes format. Guess who publishes there? Lots of Australian "academics" for some reason.

Really Australia, get your s**t together! Every other week I get some e-mail invitation to some conference I've never heard of, headed by someone I've never heard of at some Australian university I never knew existed. These phony conferences are always held in Bangkok, for some strange unexplainable reason.

"Really Australia, get your s**t together! Every other week I get some e-mail invitation to some conference I’ve never heard of, headed by someone I’ve never heard of at some Australian university I never knew existed. These phony conferences are always held in Bangkok, for some strange unexplainable reason."

I just want to chime in here: "Yeah, Australia, get your s**t together!"

But then again, we shouldn't be so quick to dismiss the Australian school of economics. While not in the mainstream, they have contributed greatly to the literature. Not that I can think of any off hand, but I'm on my third beer.

"but perhaps we need something to shake up the centuries old tenure and mentoring system, which goes back the guild system of medieval Europe"

Strangely, it never crosses your mind that things last so long because they...work?

I find this curious as the editor of JEBO who published the original version of the Titanic paper by Frey and Torgler, which they later self-plagiarized in three other journals. They got their hands slapped for it publicly when they stupidly did so in the JEP, which everybody reads, so they got caught. My big problem with this proposal is that it sets up a bizarre kind of Rube Goldberg mechanism with its refereeing of referee reports (and the danger that a handful of professional referees will take over everything), while leaving out the crucial part of who is making the decisions. That is the editors, who seem to be left out of this mechanism. Do they make decisions, or is it done by some committee of the refereed referees in this weird market?

This is even more ironic in that Torgler's old partner in crime, Bruno Frey, soon after 2000 published a paper in Public Choice that he later published word-for-word identically in a journal in Europe (I forget the exact title of that one, but could dig it up if necessary) that proposed essentially the opposite of this one. Frey argued (incorrectly) that it is editors who get all the rewards or losses for the success or failure of a journal, and so all power should be in their hands. They should simply make decisions and then only use referees to improve papers that the editors have already decided to publish. He made some similar arguments about the in-grown toenail aspect of the refereeing system (which editors should be the ones to correct on their own, if they know so much), but he spent quite a bit of time really getting all hot and bothered about "prostitute" authors who change their papers so as to please the obviously evil and corrupt referees. Really, and he published this paper twice and got away with it.

"Really, and he published this paper twice and got away with it"

He was just trying to make a point :p

How do you "self plagiarize"? Are you not allowed to say the same thing twice?

Nathan W,

Not unless you admit you are doing so by citing the other place(s) that you have already said what you are saying now. As it was, in the case of the European journal that essentially reprinted Frey's paper from Public Choice, the editor was fully aware of what he was doing and later publicly defended Frey during the whole Titanic episode, where Frey and Torgler failed to cite the other versions of their paper, the first of which I published in JEBO, as I have already said. Failing to cite other versions is the real red flag on self-plagiarism.

What is generally thought to be the case for refereed journal articles is that unless they are explicitly review articles discussing a whole bunch of previously published work, which are a small minority of published papers, they should contain something new in them. Now it is OK to have a paper repeat stuff that one has already published that is necessary for developing and presenting the new point, which may involve some theoretical wiggle or new data or a new econometric technique or a new discussion of possible policy applications, etc. But in those necessary repetitions of earlier work where that earlier work appeared and when should be cited. This is proper and ethical academic conduct. Otherwise one is engaging in self-plagiarism, which is not as bad as plagiarizing others, but is not currently viewed as ethical or acceptable within the economics journal publishing profession.

I have written at some length on broader problems related to plagiarism in academic publishing in an article entitled "Tales from the Editor's Crypt: Dealing with True, Uncertain, and False Accusations of Plagiarism," which appeared as a chapter in the 2014 MIT Press book Secrets of Economics Editors: Experience of Journal Editors, edited by Michael Szenberg and Lall Ramrattan. However, the version that appeared there was heavily redacted to avoid possible lawsuits, but you can read the original unredacted version on my website at http://cob.jmu.edu/rosserjb . There is a small section on the problem of self-plagiarism, not the main focus of the paper, and also not the part that suffered from all the lawsuit-avoiding redacting.
