How to read a vita

Ugo, a loyal MR reader, asks:

If you were on a tenure committee, how
would you evaluate an assistant professor who, among other things, has
two papers in a top journal, with the second paper showing that the result
of his/her previous paper is wrong?

(a) Consider this situation as having
two publications in a top journal (the rationale for this is that you want
to give incentives to seek the truth; the two papers contributed to
our understanding of the problem, and moreover the author showed he or she is able
to publish in top journals).

(b) Consider this situation as having
one publication in a top journal (same as above, but you recognize that
the contribution is less than two papers with a true result).

(c) Give zero value to the two papers
(because the results cancel each other).

(d) Give negative value to the two papers
(because people wasted time on a wrong result).

The best way to read a vita is to think of it in terms of a portfolio.  If all a person had on his vita was a single paper and then its repudiation, I would not think much of the combination.  If the person is producing a stream of papers, as a whole pointing toward greater knowledge and fleshing out a coherent research program, I would view the revisions and repudiations as a sign of intellectual strength.

Most questions about how to read vitas can be clarified by this portfolio approach.  For instance, I am often asked how much a piece in Journal X is worth.  The correct response is to ask whether that publication complements a broader research program or not, and then to ask how valuable that research program will be.


I suppose a lot depends on what happened between the papers.

If some significant time has elapsed, the community has accepted the first paper as valuable, with some citations etc., and no-one else has spotted the flaw, then the second paper is likely to be valuable in itself: it shows the author is still the person who understands the subject best, even better now than before.

Yes, yes and yes.

One of the most pernicious habits that HR departments have fallen into is scoring resumes: scanning the resume for buzzwords and ranking them by the number found. It is obvious to everyone but the offending HR departments that this is stupid and dysfunctional.

Inside academe, though, the same mentality has taken hold. CVs are reduced to a number: give points for this, deduct points for that. The biggest number wins. We are sometimes a little shame-faced about this: "Deans can't read, but they can count."

But we are not judging a round of competitive resume building. The point of a resume, a CV, is to provide a basis for extrapolating future behaviour. We use the past to predict the future. We want to know if this person will pursue a productive research program, and look to what they've done in the past to help answer that question.

It's a course of life, not a score from life.

Why not just mention Emily Oster by name?

What truly annoys me about your reader's question is that nowhere does the question "Is this work valid and important?" seem to come in.

Who gives a @#$# whether Journal A or Journal B published the work in a given year? Shouldn't the question be whether the work is any good? I'm pretty sure that the snarky peer review process isn't infallible.

I think it depends a lot on the context. What was wrong with the first paper? Did the candidate do empirical research, then come up with a new and innovative superior method, test the same phenomenon, and come up with a different result? That sounds like a significant contribution to me. Or, alternatively, novel and very nonobvious criticisms of the methodology of the first paper emerged, and the second paper was a response to them. In that case, I think the situation should be treated no differently than if the papers had reached the same result.

Additionally, I think the candidate would deserve bonus points for clearly demonstrated intellectual honesty.

At the other extreme, if the second paper is just admitting an obvious flaw (divide by zero error) in the first paper and does not present any additional results, then things look different.

"Great Zamfir":

I get that. And I'm not saying that the question-asker should ignore whether or which journals agreed to publish the work. And maybe the question-asker didn't include an independent analysis of the work's merit in his (a), (b), (c), (d) scenarios because that's already part of the analysis.

But I know people well enough to know that most of the people who ask these kinds of questions are lazy.

The right thing to do if you're on a tenure committee and can't form your own opinion about the quality of the author's work without resorting to these kinds of considerations is to admit to your fellow committee members that you're not the right person to judge.

The fact that someone is asking this question demonstrates that very few people are courageous enough to do that. It's academia -- who wants to admit that they don't understand something? Being recognized as the smartest person in the world is one of the biggest incentives that people have for going into academia.

So in the end I'm arguing that there should be some correction for the incentive-caused biases of EGO. Although I'm quite sure that the person who asked this question is a nice person; I'm not at all sure that he or she is aware of how fear of embarrassment may be negatively influencing his or her ability to evaluate tenure candidates.

This is less true at the tenure level, where publications tend to be the focus,
partly because of time lags involved, but at higher levels, certainly when one gets
to the professorial chair level, it is citations which count.


Raw citation counts are certainly highly imperfect, as are raw counts of publications.
We all know that things get weighted in all sorts of ways, by rankings of journals that
pubs appear in, rankings of journals that citations appear in, and so forth. It is, however,
a fact that at higher levels citations measured somehow become more important than publications
measured somehow. For better or worse, they, or some weighted version of them, are viewed not
unreasonably as a measure of the impact of a publication.

Needless to say, just as important work does not always get published in highly ranked journals,
likewise important work does not always get cited. And, there is certainly a tendency for work
published in prominent journals to get cited more partly because it is more likely to be read more.
What is interesting is when a paper in an obscure journal gets cited a lot. That usually is a sign
that it has something going for it.

BTW, I note that when Greg Mankiw has commented on likely Nobel winners, he usually lists the most
cited economists who have not gotten it as the likely candidates, although in some cases (I shall not
indulge in naming names) I do not think some of those folks deserve it. I am always amused when
the actual recipients are viewed as surprises and have not been highly valued on the prediction markets,
or even listed on the prediction markets at all, as was the case this last time around.

Of course it must be noted, in terms of the original formulation of this post, that this
whole business of citations is irrelevant. Citations are almost never listed in vitae, so
they are not part of "how to read a vita" but rather extra searching that someone reading a vita
might do.


Dan Klein has an interesting piece on the "blackbox" nature of the SSCI and citations in the 2004 Econ Journal Watch. It goes through things carefully, objectively, and insightfully.

Hello, I am the guy who asked the original question. Mine was a genuine question and was not meant to criticize or judge anybody in particular. I liked Tyler’s answer and also several of the comments, especially those of Great Zamfir and Elliot Reed.

However, I am a bit shocked by the vehemence of some of the comments and by the fact that some readers seem to imply that publishing in a top journal does not (or should not?) matter much.

ajb correctly points out that there is no perfect correspondence between a publication record in top journals and tenure in top departments, but even without having done the calculations, I am ready to bet that the correlation between the two is fairly high. In fact, most of my friends who are in academia (even those who do have tenure) think that publishing their papers in the top 4/5 journals is almost a question of life and death. As a consequence, a considerable amount of time and effort is wasted after the paper has been submitted (the revise-and-resubmit process is long and often adds little value to the paper; see the nice JPE paper by Glenn Ellison). It is hard to model this as rational behavior if these publications had no value.




Certainly there is a correlation between being hired
and, more to the point, getting tenure and promotion,
in a highly ranked department, and publishing in highly
ranked journals. This is unsurprising. The issue is
whether or not this correlation is perhaps stronger
than it should be, or, to be more practical,
whether second or third tier institutions should be attempting
to emulate the top ranked schools in this way rather than
differentiating what they do by searching out more unique niches.
To segue back to the infamous Notre Dame case, Robert
Solow of MIT at the time the events at Notre Dame were
going down (2002-2003), remarked that "what we do not need
is another third rate imitation of the MIT economics department."
Of course, the administrators there are trying to make it at
least a second rate imitation...


RE: your comments on Notre Dame: As Emerson said, "A foolish consistency is the hobgoblin of
little minds." Surely Notre Dame's egoistic dean has such a mind, for looking at WHERE things are published as opposed to content (and for thinking that books are no more significant than articles).

But even closer to home for me is my university, where "national rankings" are pursued as fatuously, apparently from your description, as at Notre Dame. But many B-schools are into smoke and mirrors, so why should I complain? Still, there is something to what the good book says about "fighting the good fight," and so I continue to strive and complain (as you do too, it seems).
