Open-source peer review

January 17, 2006 at 6:50 am in Science

[With] open-source reviewing…the journal posts a submitted paper online and allows not just assigned reviewers but anyone to critique it. After a few weeks, the author revises, the editors accept or reject and the journal posts all, including the editors’ rationale…

Open, collaborative review may seem a scary departure. But scientists might find it salutary. It stands to maintain rigor, turn review processes into productive forums and make publication less a proprietary claim to knowledge than the spark of a fruitful exchange. And if collaborative review can’t prevent fraud, it seems certain to discourage it, since shady scientists would have to tell their stretchers in public. Hwang’s fabrications, as it happens, were first uncovered in Web exchanges among scientists who found his data suspicious. Might that have happened faster if such examination were built into the publishing process? "Never underestimate competitors," Delamothe says, for they are motivated. Science – and science – might have dodged quite a headache by opening Hwang’s work to wider prepublication scrutiny.
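As a purely illustrative reading of the excerpt, the open-review record could be modeled as a small data structure: the paper, critiques from anyone (not just assigned reviewers), the revision, and the editors' decision with its rationale, all posted together. The class and field names below are invented for this sketch and do not correspond to any journal's actual system.

```python
# A minimal sketch of the open-review record the excerpt describes. All names
# here are hypothetical; this is not any journal's real workflow or API.
from dataclasses import dataclass, field


@dataclass
class OpenReview:
    paper: str
    critiques: list[tuple[str, str]] = field(default_factory=list)  # (commenter, critique)
    revision: str | None = None
    decision: str | None = None
    rationale: str | None = None

    def critique(self, who: str, text: str) -> None:
        # Not just assigned reviewers: anyone may add a critique.
        self.critiques.append((who, text))

    def publish_record(self) -> dict:
        """The journal posts everything, including the editors' rationale."""
        return {"paper": self.paper, "critiques": self.critiques,
                "revision": self.revision, "decision": self.decision,
                "rationale": self.rationale}


# Example: a paper collects public critiques, is revised, and the decision
# plus rationale are posted alongside it.
record = OpenReview("Hypothetical stem-cell paper")
record.critique("assigned reviewer", "The methods section needs controls.")
record.critique("anyone on the web", "Figure 2 appears to duplicate Figure 4.")
record.revision = "v2 with added controls and corrected figures"
record.decision = "reject"
record.rationale = "Concerns about the figures were not resolved."
print(record.publish_record())
```

The point of the sketch is only that the entire exchange, not just the accepted paper, becomes part of the public record.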

Here is a bit more.  What might be some arguments against this practice?

1. It is too easily manipulated by your friends, or perhaps by your enemies.

2. The resulting morass of comments must be interpreted.  We are back to editorial  discretion, but it is better to have some referees rather than none.

3. The purpose of journals is not to always make the right decision, but rather to certify the quality of outstanding work to more general audiences.  By blurring the evaluation process, open source reviewing would make journals as a whole less reliable.

4. Don’t we already have this option?  I could post a paper on this blog, open up the comments, and receive a call from the AER, asking for a submission.  I guess my answering machine isn’t working.

5. The current system allows for editorial manipulation through the choice of referees.  This is good.  An innovator needs only to convince a single editor, not a jackal-like pack of seething commentators [hey guys, that's you!].

What is the goal of publishing anyway?  To assign "just outcomes"?  To make sure that the one percent of worthwhile papers find a prestigious outlet?  To provide incentives for those papers to be written in the first place?  To increase the prestige of science as a whole?  Since I don’t understand why on-line publishing hasn’t already taken over, this scheme is hard to evaluate.  Comments are open….

james alford January 17, 2006 at 8:11 am

“What is the goal of publishing anyway?”

My goodness! Are you serious? While I’m sure you’ll take it as a badge of honor, only an economist could ask such a silly question…

DK January 17, 2006 at 8:40 am

I am a skeptic, and I think this idea misunderstands open source, science, and publishing alike. I agree with Tyler’s posted issues, especially #3. I would add:

1. “Open-source” commenting already happens: most worthwhile papers are already circulated among colleagues as working papers, posted to web archives (as in physics), and presented as conference papers with question-and-answer sessions. If this process is already happening, why get journal editors involved? Are trusted, hierarchical authorities superior at managing open comment processes? For example, would you rather read blogs managed by the CBS editorial staff or by independent bloggers?

2. Journals do have three benefits. The first is that they provide a quality certification to impress MSM reporters and tenure committees, both of which are — like journals — more comfortable with hierarchical authorities of supposedly objective and anonymous evaluators. People who are impressed by that are unlikely to be impressed by open-source processes. (In other words, if traditional journals are obsolete, maybe the MSM and the tenure committee are obsolete, too.)

3. Also, it can be dangerous for the publishing academic to give reporters and tenure evaluators a chance to read the online record of their back-and-forth with open reviewers. Why don’t more academics write blogs? Many of them are afraid of showing their flaws in public.

4. The second benefit of journals is that anonymous reviewers _can_ be really great! Anonymous journal review is serious, difficult work, much harder and more time-consuming than commenting or blogging, and it earns reviewers goodwill with the editors and an opportunity to get their own editorial positions later. I have had several anonymous reviewers who gave better and more careful advice than my thesis advisor. Yes, I have had others who were embarrassments. The embarrassments will still exist in an “open” process. But will the commenters put in the time and attention of the good anonymous reviewers?

5. The third benefit of journals is the filtering function, which Tyler has already addressed. I believe that this filtering function can largely be transferred to truly open networks of colleagues (the paper-passing circuit) and to academic blogs, but again, I am not sure that a hybrid helps.

Macneil January 17, 2006 at 9:28 am

I think the more people who comment on a paper, the less likely the author is to read all of the comments. When reviewing a paper, you want the full attention of the author, and you have some degree of authority to make that easier. Reviewing a paper takes the same amount of time no matter how many other people are reading it, so effectively you are being asked to put in the same amount of effort for something of lesser value.

On the other hand, this could just mean the reviewers will spend less time, thus resulting in worse reviews.

I think the social element is that people review in order to get to know the PC (program committee) better, so that they can either be on the PC or chair it the next time around. Such activity is good for making tenure cases. Helping out lower-prestige researchers who probably won’t take your advice anyway will probably hurt your tenure (or promotion) case, in the sense that your time could have been spent on something that would have provided a better result.

Ivan Kirigin January 17, 2006 at 10:29 am

The solution to problems with comments: open-source peer-reviewed comments :-D

Add a ranking system for comments, where reviewers who agree with or appreciate a comment can make it more prominent. Flames and trolls can be suppressed as well.

Take Slashdot as an example: most people there say stupid things, but few comments ranked at 5 are very bad.
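As a rough sketch of the moderation scheme described here, assuming a Slashdot-like score range of -1 to 5 and an arbitrary visibility threshold — the `Comment` class and the `moderate`/`visible` helpers are invented for illustration, not any real platform's API:

```python
# Illustrative comment moderation: reviewers nudge scores up or down, and
# low-scored or heavily flagged comments are hidden. Names and thresholds
# here are assumptions, not a real system.
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str
    score: int = 1   # comments start at a neutral score
    flags: int = 0   # flame/troll reports

    def moderate(self, delta: int) -> None:
        """Nudge a comment up (+1) or down (-1); scores stay in [-1, 5]."""
        self.score = max(-1, min(5, self.score + delta))


def visible(comments: list[Comment], threshold: int = 3) -> list[Comment]:
    """Keep comments at or above the threshold, best first; heavily
    flagged comments are suppressed regardless of score."""
    kept = [c for c in comments if c.score >= threshold and c.flags < 3]
    return sorted(kept, key=lambda c: c.score, reverse=True)


# Example: two substantive critiques get promoted, a flame gets buried.
thread = [Comment("ref1", "Equation 3 ignores selection bias."),
          Comment("ref2", "The replication data are missing."),
          Comment("troll", "This whole field is garbage.")]
for _ in range(3):
    thread[0].moderate(+1)
for _ in range(2):
    thread[1].moderate(+1)
thread[2].flags = 3

for c in visible(thread):
    print(c.score, c.author, c.text)
```

Whether journal readers would actually put in the moderation effort is, of course, the open question raised elsewhere in this thread.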

Stretch January 17, 2006 at 1:48 pm

Having worked for a scientific journal (Nature Biotechnology), I believe that the peer review system works pretty well as is. The problems with the system that the article mentions are unlikely to be resolved this way.

“But its anonymity allows reviewers to do sloppy work, steal ideas or delay competitors’ publication by asking for elaborate revisions (it happens) without fearing exposure.”

I certainly don’t see how open-source reviewing is going to prevent the stealing of ideas or the willful delay of publication. If anything, it seems like it would exacerbate the situation. As for sloppy work, generally the editor takes care of such things. Also, the anonymity of referees does help to “ensure candid evaluations and elevate merit over personal connections”, at least if the editors do their job properly.

Revealing the identity of referees post-acceptance, and even publishing their remarks, seems like a fine idea. But the article compares open-source reviewing with Wikipedia while failing to note that there is no time frame on a Wikipedia article and no comparable competition on the site.

Referees are also busy people and it takes time to write a conscientious critique. Often they have to be hounded. It’s possible that the quality of reviews could even go down in a “post a comment” environment.

The process as it stands can take quite a while: the editors need to sift through every submission and determine which papers are appropriate for consideration, find willing reviewers and get them access to the manuscript, get the reviews back (10 days at Nat Biotech, but that “deadline” was frequently missed), have the author make revisions, go through a second round of confirmation reviews, and then finalize and actually edit the manuscript. Open-source reviewing could significantly speed up the process, but it also might slow it down if there are too many minor or extraneous comments that need to be answered or waded through. Plus, without direction from the editors, it’s possible that there won’t be sufficient critique in the time allotted (or perhaps ever).
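To make the sequence of stages concrete, here is a rough sketch of the pipeline described above, written as a simple state machine. The stage names and helper function are invented for illustration; only the 10-day review window comes from the comment (a deadline that, as noted, is frequently missed).

```python
# A rough, hypothetical model of the editorial pipeline sketched in the
# comment above. Stage names and structure are assumptions for illustration.
from datetime import timedelta
from enum import Enum, auto


class Stage(Enum):
    SUBMITTED = auto()            # editors sift submissions for suitability
    REVIEWERS_ASSIGNED = auto()   # willing referees found, manuscript shared
    UNDER_REVIEW = auto()         # nominal 10-day window for reports
    REVISION = auto()             # author responds to referee reports
    CONFIRMATION_REVIEW = auto()  # second round to check the revisions
    EDITING = auto()              # final editing before publication
    PUBLISHED = auto()


REVIEW_WINDOW = timedelta(days=10)  # the Nat Biotech target cited above

PIPELINE = [Stage.SUBMITTED, Stage.REVIEWERS_ASSIGNED, Stage.UNDER_REVIEW,
            Stage.REVISION, Stage.CONFIRMATION_REVIEW, Stage.EDITING,
            Stage.PUBLISHED]


def next_stage(current: Stage) -> Stage:
    """Advance a manuscript one step; the last stage is terminal."""
    i = PIPELINE.index(current)
    return PIPELINE[min(i + 1, len(PIPELINE) - 1)]


# Walk one manuscript through the whole pipeline, printing each stage.
stage = Stage.SUBMITTED
while stage is not Stage.PUBLISHED:
    print(stage.name)
    stage = next_stage(stage)
print(stage.name)
```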

It could certainly help with exposing inaccurate or even fraudulent data more quickly, but I honestly doubt whether it would make that big a difference, even if it might stop a Hwang incident. The reason those errors were caught was the publication and the subsequent interest; I’m not convinced that enough people would have been exposed to the paper beforehand to actually prevent it. After all, if the claim that most articles are “flat-out wrong” is true, it remains to be seen whether open-source reviewing will really change that. I suspect that most of the incorrect data can only be revealed as such after an attempt to actually duplicate an experiment, not beforehand by looking at data sets.

Still, it seems to have enjoyed limited success so far and merits keeping an eye on. But it seems premature to view it as anything other than an experiment right now. The cost and time it would take to move entirely to an open-source reviewing platform are enormous, so I doubt we’ll see a large-scale shift until its efficacy is better known.

albatross January 19, 2006 at 10:00 am

I don’t know about other people, but I read things a lot differently when I’m reviewing a paper for publication than when I’m reading it for content, as I might off a preprint server. When I’m reviewing it, I spend a lot of time looking at whether the equations and descriptions match and make sense, whether any claimed experiments are consistent with descriptions, whether relevant prior work was cited, etc. It’s a really different process.

I’m concerned that open-source reviews might amount to people reading the paper for content but not doing the in-depth checking they need to do. Worse, I think it might be hard to figure out from a set of such reviews whether anyone did that checking. (Admittedly, a lot of reviews you get in the current system are pretty useless.)

James Eglin March 30, 2006 at 7:30 pm

I’m a fan of peer review.
