At least pre-Covid, most of the faculty would get together and rate the graduate students (I am not sure how it has operated for the last two years, though I suspect the same, only over Zoom). Some but not all of the students would be designated as “should work at a top school.” If you were not so rated, your chance of being hired at a top school was slim. Other schools, of course, would know not to pursue the top candidates, and would shoot lower, though some foolhardy places might try to lure them anyway. But basically if you were hiring at a high level, you would call the placement officer at a top school, and they would tier the candidates, based on where you were calling from, and recommend accordingly.
Of course this process has very little transparency and not much in the way of appeal, or even competition, or for that matter accountability to outside parties. Might it also be a factor behind a lot of the academic conformism we witness? You go through the early part of your career knowing that you are auditioning for a committee. Can any voice wreck you? Or is it majority rule? You will never know!
Unlike a lot of the whiners, I am not saying this system is necessarily bad — I am genuinely unsure, in part because of the lack of transparency, not to mention that the relevant alternative is possibly something worse yet. In any case, I find it striking how little discussion this method has received. It allocates most of the best jobs in the economics profession, and it does not obviously satisfy many of the ideals that at least some of us pay lip service to. John List, now tenured at the University of Chicago, received his Ph.D. from the University of Wyoming, not a top school, but that kind of climb up the academic ladder is extremely rare in economics.
Most lesser-ranked schools, including GMU, do not rank their candidates collectively in the same manner. There is no collective ranking; rather, individual faculty, or perhaps small working groups, recommend their favored candidates. In part they are competing against the other recommending faculty in their own department. There is no “secret, collusive meeting,” and so you might think there is an incentive to over-recommend and to deplete the collective credibility of the department. The market, however, understands that and takes it into account, and in that sense the credibility of the department “starts off depleted” to begin with. The recommendations then have to be somewhat exaggerated simply to “break even” in the resulting signal-jamming equilibrium.
Lower-ranked schools don’t have the option of sending most of their Ph.D. students to Tier 1 research universities, so the notion of a uniform ranking probably doesn’t make sense there. And if you have only three graduating Ph.D. students in a year, and two of them are returning home to Asia, as is the case in many of the lower-ranked programs, does it really make sense to rank them? Furthermore, the lower-ranked schools may have a higher variance of faculty quality, which would make a consensus ranking of the graduates harder to achieve. In contrast, almost all of the faculty at Harvard have a pretty good sense of “what it takes” to succeed at MIT or Princeton as a junior faculty member.
Which method is better? Is the current state of affairs, with a split system, optimal? Is the current system due to break down in some way? Why don’t the economists at top schools talk about this much? Is the whole thing just a plain, flat outrage? Here is one of the few discussions I can find, and yes it does affirm that the practice occurs, though it overstates its universality.
What do you all think?