Facts about economics papers

The average length of a published economics paper has more than tripled over the past four decades, and some academics are sick of wading through them. At this year’s American Economic Association conference, Massachusetts Institute of Technology professor David Autor compared a 94-page working paper about the minimum wage to “being bludgeoned to death with a Nerf bat” and started a Twitter hashtag, #ThePaperIsTooDamnedLong.

…Between 1970 and 2017, the average length of papers published in five top-ranked economics journals swelled from 16 pages to 50 pages, according to an analysis by University of California, Berkeley economists Stefano DellaVigna and David Card.

Longer papers can include more-robust statistical analysis, engage in multifaceted arguments or address complex topics. Some economists speculate paper inflation is also the product of the laborious peer-review process, in which other economists act as referees and read drafts, then demand any number of additions before publication…

Economists also tend to write defensively, including redundant material even in early versions of papers to head off possible quibbles that might come up during the review process, said Samuel Bazzi, an economics professor at Boston University.

That is from Ben Leubsdorf at the WSJ.  One question is whether longer papers are better from a scientific point of view.  A second and more important question is whether long papers are better for attracting genius talent to the economics profession.

For the pointer I thank the excellent Samir Varma.


Paper journals have a limited number of pages, but working papers distributed electronically have no such limitation. Maybe there should be two versions of a paper -- a more concise one for publication, and a more complete version that presents almost everything the authors looked at.

Not a bad suggestion. Indeed, a way to shorten the published versions is to have them simply refer to the various added robustness tests by citing the unpublished WP version, with a quick summary of the outcomes. "Robustness check A showed no change in outcome as did Robustness checks B and C. See WP for details of the tests."

Yeah, that could work. I would bet a lot of the page inflation is robustness & specification checks, which in theory I absolutely support (in practice I know a lot of it is mining for specific results in support of publication bias).

Although it's always nice to have all of the details from those checks as it helps to judge the credibility of the checks.

Applied papers could be a lot shorter if authors were willing to publish their data.

That would make papers longer rather than shorter. What is useful is authors making data available to those who request it from them.

Computers have not only improved distribution, but also the reading experience. You can't ctrl+f a paper journal.

Hilarious - Prof. Tabarrok now has a safe space in which to post, but Prof. Cowen is still living in those earlier, freer times, before the Internet hordes arrived at the doorstep of one of America's most consciously styled Internet personalities - https://www.mercatus.org/bridge/commentary/my-personal-moonshot

What would you do with all the free time you would have on your hands if Cowen also closed the comments? My personal opinion is that Cowen would have to ask Homeland Security to keep an eye out for you.

Well, considering that I live in Germany, what Homeland Security does is completely irrelevant to me, but sure, Prof. Cowen can certainly do as he wishes.

Like closing commenting, as soon as he figures it out. Considering that it is Prof. Tabarrok that first registered mruniversity back in 2001 (https://www.whois.com/whois/mruniversity.com) one can assume that Prof. Cowen will be instructed by his esteemed colleague at moon shot mission control - after all, marginalrevolution.com was first registered in 2003.

Intriguingly, much whois information about marginalrevolution.com now enjoys 'Data Protected' status, where in the past it was open for everyone to see. But then, that is in keeping with more modern standards, where the right for money to both buy and be speech as anonymously as possible is considered of extreme importance for those pursuing the goal of the rich getting richer.

Moon shots involve a lot of dedicated effort and money - but really, who cares about those details? They aren't twitter friendly at all.

What would you do with your life if you couldn't post your cuck comments?

Obviously I'd have more time to watch my dear wife getting the old slap and tickle from a big black buck. *wink*

Does she respect Howard Stern’s penis?

Aww the first parody I've seen of msgkings. What a coincidence! You're not fooling anyone.

Let me ask you: do you respect Howard Stern’s penis?

Actually I've been hit many times like this, "Anon". This idiot doesn't bother me why should he bother you?

Do you respect Howard Stern’s penis?

I disagree.

But do you respect Howard Stern’s penis?

"Hilarious - Prof. Tabarrok now has a safe space in which to post"

This is strange. I would say that Tabarrok had been going for quality over quantity, and sometimes succeeding.

What's going on? Were there some attacks on him that got too personal?

There are definitely a few commenters who just come here to rag on the blog hosts. I doubt AT cares too much but maybe it's an experiment to see how commenting changes with only TC posts open to it?

He failed to show sufficient respect for Howard Stern’s penis.

"One question is whether longer papers are better from a scientific point of view. "

Since the article points out that the Deaton-Case article was published in a scientific journal rather than an economics journal, I'd say, "No."

I also wondered when I read it if part of the problem is an approach built around print publishing. Couldn't these papers be made shorter by using links to prior studies?

Good God, man, if a genius took up economics he might make the rest of you unemployable. Imagine trying to make a living from teaching Aristotle's physics after Newton. (Though I suppose people did - it's no more absurd than making a living teaching Marx's economics, or Freud's psychiatry.)

>....attracting genius talent to the economics profession.

Why bother? To what end?

I can't tell whether "published paper" includes "papers" that are comments, replies and rejoinders. If the averages that Tyler discusses DO include them, then the comparisons between the 1970s and now are worthless because of the implosion of Comments, Replies and Rejoinders since then: https://www.bsu.edu/-/media/www/departmentalcontent/economics/archivemedia/facultypapers/coelho2005ejwv2n2.pdf?la=en

You are correct that there has been an unfortunate implosion of comments and replies, but I do not think it comes close enough at all in quantity to explain the substantial increase in measured paper length in leading journals. I think that is due to the various reasons that have been suggested.

Not to mention Hazel Meade’s very hairy vagina.

Cowen: "A second and more important question is whether long papers are better for attracting genius talent to the economics profession." Is Cowen's implication that long papers will attract genius talent or deter genius talent? I could make an argument both ways: genius talent enjoys showing off their genius talent, or genius talent doesn't have the patience to deal with lesser talent. Here's my question: does genius talent have the patience to tolerate comments/suggestions/criticisms from lesser talent, or does genius talent just ignore lesser talent? My bet is that Cowen just ignores lesser talent.

Some readers may recall when The Incidental Economist was open to comments. That was back during the debate over ACA. The hosts actually took comments seriously, oftentimes engaging with commenters. It must have been exhausting for Aaron Carroll and Austin Frakt (and sometime contributor Nicholas Bagley). Many comments were critical of ACA. Dr. Carroll (he is a medical doctor) in particular seemed to take offense, but not Frakt (he is an economist) or Bagley (he is a law professor). I suppose with medical doctors things are black and white, whereas for economists and lawyers they are shades of gray.

"If I had more time, I'd have written a shorter letter," is a quote attributed, possibly correctly, to Cicero, Blaise Pascal, Benjamin Franklin and Mark Twain. Heck, I've written it myself.

It can take a lot of effort to organize and express your ideas optimally. Or elegantly, as the mathematicians would say.

This David Autor fellow makes a lot of sense.

God forbid that an academic would have to read a long paper.

Instead, I would measure the amount of math symbols per square inch over time.

It's not the writing that contributes to the page length, it's the math and tables.

And graphs, all of which, in the old days, required typesetters, extra fonts, a typist with access to math software and layout artists when such content was published in journals.

It's easier today, and there are lower transaction costs, to type math symbols and create tables and graphs.


Sorry, but you are wrong. The average amount of math in econ papers has been declining for some period of time now, even if it may not seem like it. What has expanded especially have been all the defensive econometric supporting tests, which, yes, do involve math but not as much as pure theory, which is often mostly math. Tables and figures usually help to shorten papers as they provide useful summaries. Authors can say, "See Fig. 7 and Table 6 for main outcomes."

It's a testable hypothesis, which will require defensive econometric mathematical supporting tests and perhaps some theory as well, supported by tables and charts, with access to excel formatted data and the disclosure statements that this research was supported by the Association of Mathematical Mindlessness.

Perspicacious parsimony precedes
Insidious Insights.

It has already been done, both more informally as well as more rigorously and econometrically. The peak of equations per page was all the way back in 1980. A recent study shows some increase in equations per paper, but that increase is more than offset by the increase in the average number of pages per paper, which is the issue at hand.
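
The arithmetic behind that offset is easy to check. A minimal sketch, using purely hypothetical figures (these numbers are illustrative, not taken from the studies discussed in the thread):

```python
# Illustrative only: the figures below are hypothetical, not data from
# the cited studies. The point is purely arithmetic: if pages per paper
# grow faster than equations per paper, equations per *page* fall even
# while equations per *paper* rise.

def equations_per_page(equations_per_paper, pages_per_paper):
    return equations_per_paper / pages_per_paper

# Hypothetical "1980" vs. "recent" values:
old_density = equations_per_page(20, 16)  # 20 equations over 16 pages
new_density = equations_per_page(30, 50)  # 30 equations over 50 pages

# More equations per paper (30 > 20), yet lower density per page.
assert new_density < old_density
```

So a headline of "papers have gotten mathier" can be true per paper and false per page at the same time, which is exactly the distinction drawn above.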

Well, it should be something one could easily do through Lexis or Thomson Reuters using a filter to capture the space devoted to charts, graphs, tables and mathematical symbols. Would be interested in the source.

Card et al have been writing papers on page length and the effect of page limits, but I did not see anything on the density of math equations, graphs, or charts over time, or identified at all as reasons for length, so I look forward to your reference. Admittedly I skimmed the material, but I didn't see math, charts, etc. compared over time or discussed. Another way to look at this would be: when AER set page limits, did that increase or decrease math density, reduce charts and tables, etc.?

The less formal source is work by David Colander, some of this showing up in reports on econ grad education as well as in certain other places. I am especially aware of this having coauthored with him a lot. A recent more formal paper was in 2012 by Espinosa, Rondon, and Romero, still unpublished. The funny thing is that the publicity about it was math increasing, but as I already noted they were measuring equations per paper, which have risen, not equations per page, which have fallen.

Here is a summary of the paper in an article in Quartz claiming that econ papers have gotten a whole lot mathier:


My own observation is that page length would increase with more math as you would have to define terms, etc. and often equations sit on top of each other, rather than being strung along like words in a sentence.

Let x = word at the nth position in the sentence
Let y = k plus whatever,
Yada yada yada.

Espinosa's paper claims: "We give statistical evidence on the increasing trend of number of equations and econometric outputs per article, showing that for each of these variables there have been four structural breaks and three of them have been increasing ones. Therefore, we provide concrete measures of mathematization in Economics. Furthermore, we found that the use and training in mathematics has a positive correlation with the probability of winning a Nobel Prize in certain cases. It also appears that being an empirical researcher as measured by the average number of econometrics outputs has a negative correlation with someone's academic career success."

You can find it here: https://ideas.repec.org/p/pra/mprapa/41341.html


Note that Quartz failed to note the increase in the average number of pages per paper, so it missed that Espinosa et al have seriously misrepresented their findings: yes, more equations per paper, but fewer per page. What they do show is a big increase in the number of econometric tests, which fits with Colander's observation of a shift from theory to empirics, as well as probably the explosion of all these overdone robustness tests, even as some are useful.


It can be the case that both items are true: that increases in math material in econ papers AND showing robustness tests increase page length.

The nice thing about economics papers is that I can read them and know EXACTLY what they did. Maybe they are too long, but it's better to err on the side of being too clear and overly long than the reverse.

Longer papers are an attempt at signaling. Look at Piketty's book. Mostly because it was so dense and long, people could say "he did his homework," and it would be believed as fact, because who would dare read the entire thing? ("Could you write a book that long?") Long books and long papers are a lot like glasses: they make people look smarter. But hopefully that's changing.

I would suggest that whatever proportion of the increase in length is because of economists taking greater care to make sure their econometric analysis is sound and robust is fully justified.

Yes, Jestak, and they can report the general results of the tests without presenting them in gory detail, unless they contradict the main results or need some special explanation, with readers who really want to see the gory presentation of largely confirming tests sent to a Working Paper version of the paper made available easily and publicly. It is laziness and unwillingness of editors to override demanding referees that leads to all the multiple revisions and on and on, stupid and useless lengthening. This problem may be worse at top journals, which have become bureaucracies with sharply competing interests and very low acceptance rates, so that papers end up jumping through way too many hoops to get accepted. As a longtime editor, I agree completely with Autor on this.

Regression tables, charts, literature reviews, better estimation methods. One couldn't say the economics up to 1970 prepared the ground for good economic policy in that decade. Indeed it looks like a crisis of economists, a series of macro and micro errors around prices.

I have made several comments on this thread, but as a longtime editor I shall add one more, noting again that I am in full agreement with Autor on this. Once upon a time not all that long ago there was an unofficial norm that submitted papers should not exceed 30 pages double-spaced, unless they were review essays or for some special situation. I think that is still a good norm, and most of these papers exceeding that should and could be shortened without undue negative consequences.

Sounds like an admission against interest:

An editor admitting that the editorial pen was wielded too lightly against the author, who was often requested to further support the argument with more data and explanation.

If it were a theological, rather than an economic, journal, you could ask the reader to take it on faith -- rather than demand additional support or clarification.

You don't get it, do you? What is going on is all these robustness checks, many of which may be worth doing. The problem is the reporting of each and every one of them in gory detail, which is a total waste of time and space when they largely agree with each other. Their results can be briefly summarized and the reader can then be sent to some working paper to see the wasteful discussion of the gory details. Of course, if the robustness checks raise issues, then those need to be discussed. But at those top journals, failed robustness checks will probably lead to rejection of the paper. If you are reading a paper in a top journal, you should assume going in that it holds up pretty well on these checks. It is insecurity and stupidity by editors to waste everybody's time having all that unnecessary garbage in the papers themselves.

I would note that the collapse of independent and strong editors is a more general problem in academia, certainly in economics. Everybody knows that great journals, or ones that moved up a lot, were often led by strong editors willing to override referees and board members to publish important papers that some did not like. Partly because of increasing specialization, there are fewer and fewer such editors around, who need to have a broad knowledge of what is going on in the profession and what is important as well as what is just a boring waste of time, of which there is more and more of in top journals in my view.

Just finished reading a paper on trust and the use of blockchain technology to supplant intermediaries whose reputation ensured the trust.

Perhaps, if the editors were trusted to have seen the background robustness checks without having to publish them, one would have shorter papers -- you don't have to trust me, you can trust the editor or prior reviewers who have seen the supporting documents and checks.

Could it be there is less trust of editors or reviewers, and that is why the reader is exposed to what you call all the gory detail?

As I already noted, there are fewer "great editors" around, although I think this is not a matter of demands from readers, but more the result of bureaucracies within leading journals and their dynamics, where the editors have lost a lot of authority, as I already described. I am speaking of this from the perspective of a long time editor.

Over the recent past a number of studies have suggested that the individual type of empirical robustness testing via statistics has a rather poor track record. One of the reasons is that the statistics would actually be more robust (IIRC even more appropriately applied) when applied to multiple experiments rather than the one any given paper produces.

Taking that to heart seems to suggest shorter papers articulating the underlying theory or idea with perhaps a "starter" experiment and resulting statistics. Then the discussion in the community turns to the repeated experiments to produce the more viable data set for producing more meaningful statistics related to the underlying idea presented in the paper.

This appears to be consistent with what is generally attempted in the academic journals, but seems to be short-circuited (thereby producing these bloated submissions) by the process. The cynic in me says it's driven more by an underlying politics and competing ideologies across schools, but perhaps it's a purely technical outcome, as seems to be suggested.

Card - DellaVigna say the papers are longer, but are written by more authors, and are better. In other words, 2 short old papers => 1 new one. It would be good if Card - DellaVigna said what made them longer. Probably literature reviews?

In the medical journals, papers generally have word/page limits. For example, JAMA has a limit of 3000 words and 5 figures/tables. Readers wanting more detail are referred to the online appendix, which may be 100 pages.

My own journal, "Neurology," switched this year to articles appearing as a one page short format with full paper appearing online. http://www.neurology.org/Short_Form

Solve for the equilibrium.

Your journal sucks out the ass.

Comments for this post are closed