Referees get worse as they age

They titled the piece "Older but not Wiser."  Here is the summary result:

Michael Callaham, editor-in-chief of the Annals of Emergency Medicine in San Francisco, California, analysed the scores that editors at the journal had given more than 1,400 reviewers between 1994 and 2008. The journal routinely has its editors rate reviews on a scale of one to five, with one being unsatisfactory and five being exceptional. Ratings are based on whether the review contains constructive, professional comments on study design, writing and interpretation of results, providing useful context for the editor in deciding whether to accept the paper.

The average score stayed at roughly 3.6 throughout the entire period. The most surprising result, however, was how individual reviewers' scores changed over time: 93% of them went down, which was balanced by fresh young reviewers coming on board and keeping the average score up. The average decline was 0.04 points per year.

"I was hoping some would get better, and I could home in on them. But there weren't enough to study," says Callaham. Less than 1% improved at any significant rate, and even then it would take 25 years for the improvement to become valuable to the journal, he says.
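The seeming paradox here (93% of individual reviewers decline, yet the pool average holds at roughly 3.6) is a composition effect: retiring reviewers are replaced by fresh ones who start higher. A toy simulation makes this concrete; the starting score and career length below are illustrative assumptions, not figures from the study, while the 0.04-points-per-year decline is the reported number.

```python
# Toy illustration (not the study's actual model): every reviewer's
# rating declines by 0.04 points per year, yet the pool average is
# flat because departing reviewers are replaced by fresh ones.
START_SCORE = 3.8        # assumed score of a fresh reviewer
DECLINE_PER_YEAR = 0.04  # reported average decline
CAREER_YEARS = 10        # assumed reviewing tenure before replacement
POOL_SIZE = 100
YEARS = 15

# Stagger reviewer "ages" so the pool starts in steady state.
ages = [i % CAREER_YEARS for i in range(POOL_SIZE)]

averages = []
for _ in range(YEARS):
    scores = [START_SCORE - DECLINE_PER_YEAR * age for age in ages]
    averages.append(sum(scores) / len(scores))
    # Everyone ages one year; reviewers at the end of their tenure
    # are replaced by fresh reviewers (age 0).
    ages = [(age + 1) % CAREER_YEARS for age in ages]

# Each individual's score falls every year, but the average is
# pinned at 3.8 - 0.04 * 4.5 = 3.62, close to the reported ~3.6.
print(round(averages[0], 2), round(averages[-1], 2))  # → 3.62 3.62
```

The steady-state average depends only on the entry score and the age distribution of the pool, which is why the journal-wide mean can mask near-universal individual decline.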

I thank Michelle Dawson for the pointer; I wonder when the editor who ran the study, Callaham, thinks he should resign.  He's totally gray.


"constructive"? Surely the best reviews are destructive, since far too much rubbish gets published.

Not entirely surprising, considering the incentives.

Also, were reviewer identities hidden from the editor? Potential is a huge factor in everything, especially academia. Would editors have an incentive to look favorably on up-and-comers?

I've gotten the most helpful advice (well, the only really helpful advice) from the oldest member of my committee, now an emeritus. He's not concerned with advancing his own career or appearing brilliant; he just likes the work. I wish I could think of smarter questions to ask him more often.


Editors know who reviewers are. They select them.

As an editor for the last eight years, I have not been at it long enough to have a proper panel sample for seeing how individuals' performance changes as they age; my observations are more cross-sectional. Given that, I largely agree with Robert Bloomfield. The reports of younger referees tend to be longer than those of older ones, but it is a mixed bag as to which is better. Some of those longer reports are excellent; others focus on minor items and miss the boat on what is really important in a paper. Likewise, some reports by older reviewers are indeed sloppy, useless, or just knee-jerk, while others are deeply insightful, cutting right to the chase.

I do not buy the argument that opportunity costs are necessarily higher for senior people. Time is very precious for untenured academics, who need to get publications out as fast as they can. Their incentive is that they may feel they need to write good, or at least thorough and lengthy, reports in order to curry favor with editors and so get their own papers published in those journals. This is certainly operative, and the shorter reports by at least some more senior reviewers may simply reflect a perceived lower need to suck up to editors.

I think there may also be a selection bias. The senior academics who get asked to review more are more likely to be those who have been more successful and more prominent. But the most prominent also get swamped by more requests, which does relate to opportunity costs. So the most successful senior people will often decline requests to review, which needs to be taken into account here, and this group will also be more likely to give the shorter reports, though given their wide knowledge, those are also more likely to be the bluntly-to-the-point ones that go straight for what is really crucial in a paper.
