Emi Nakamura of Berkeley wins the John Bates Clark award

Here is the announcement, here is the opening paragraph of the summary:

Emi Nakamura is an empirical macroeconomist who has greatly increased our understanding of price-setting by firms and the effects of monetary and fiscal policies. Emi’s distinctive approach is notable for its creativity in suggesting new sources of data to address long-standing questions in macroeconomics. The datasets she uses are more disaggregated, or higher-frequency, or extending over a longer historical period, than the postwar, quarterly, aggregate time series that have been the basis for most prior work on these topics in empirical macroeconomics. Her work has required painstaking analysis of data sources not previously exploited, and at the same time displays a sophisticated understanding of the alternative theoretical models that the data can be used to distinguish.

Congratulations!  And there is much more of interest at the link.  And here are previous (and numerous) MR posts on her work.

Comments

Truth. Over-aggregation of data has been one of my gravest concerns.

Data reduction was necessary in the old, small-data world.

"Emi and Jón instead obtained access to the actual micro data used by the BLS, which has all the price observations collected by the BLS and for the period from 1988 to 2005."

As we transition to big data (and one hopes public access) it will be interesting to see how those old reductions hold up.
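
To make concrete what an aggregate series hides, here is a minimal sketch (Python, with made-up item-level prices; nothing here is BLS data or their schema) contrasting a single published-style index with the micro facts — how often individual prices change and by how much — that only disaggregated data can reveal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy item-level price panel: most prices do not change in a given month,
# but the ones that do move by a lot. (Illustrative numbers, not BLS data.)
n_items, n_months = 1000, 24
changes_now = rng.random((n_items, n_months)) < 0.10        # ~10% of prices adjust each month
log_changes = np.where(changes_now,
                       rng.normal(0.0, 0.08, (n_items, n_months)),
                       0.0)
log_prices = np.log(10.0) + np.cumsum(log_changes, axis=1)

# Aggregate view: one index, which moves smoothly and hides everything else.
index = np.exp(log_prices).mean(axis=0)
print("index, month-over-month % change:", np.round(100 * np.diff(np.log(index)), 2)[:6])

# Micro view: how often do individual prices change, and by how much?
print(f"share of prices changing per month: {changes_now.mean():.1%}")
print(f"median absolute size of a change:   {np.median(np.abs(log_changes[changes_now])):.1%}")
```

Both views come from the same simulated process; the point is only that the frequency and size of individual price changes are invisible in the index, which is why access to the underlying micro data mattered.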

Up front: I'm an old, retired crank [redundant?].

Is Big Data the solution?

My limited short-term memory is only "so" good. I think I remember, a week or two ago, a post and hundreds of comments on the recent (2008 to 2015) pomp and circumstances that began with subprime mortgages imploding trillions of $$$ of mortgage-backed securities, credit default derivatives, etc., and how (in one aspect, I think) over-aggregation gulled Fed bank regulators and economists, credit rating agencies, et al. into thinking of them as AAA. Also (long-term memory), approximately 30 years ago I worked on other, similar over-aggregation cases during the S&L crisis. Of course, my experiences were micro-micro, with my ass in the grass, as it were, not sitting in an ivory tower.

I'm old too. If I were to claim expertise it would more be in computers than economics.

From my perspective, "big data" was a name invented to celebrate a massive increase in working capacity (both machines and methods). Maybe like a lot of things the first uses were good, then the hype got excessive, then people started to doubt the whole thing.

But what I think this shows is that if you can use this huge new working capacity effectively, you can discover things missed with smaller working models.

Will economics trend toward big data because it always should have been (but was limited by time, money, and slow computers)?

Interesting question.

It is just incremental progress: data resolution gets higher, so the data are now, on paper at least, more precise. So they want to hype it up by calling it "big data". It is just data.

I agree, but I'm referring to this 2013 sentiment:

Pushback against 'big data' begins

Look at this funny quote from Taleb:

"Big data means anyone can find fake statistical relationships, since the spurious rises to the surface. This is because in large data sets, large deviations are vastly more attributable to variance (or noise) than to information (or signal)."

"Big data means anyone can find fake statistical relationships" .. ah, those were the days.

Big Data of course is a solution, but like anything, quality varies. Garbage in, garbage out. To prevent that, they need to make all code and data publicly available for scrutiny. We don't want a repeat of Reinhart and Rogoff.

From the abstract, Nakamura et al. The Elusive Costs of Inflation: Price Dispersion during the U.S. Great Inflation (2017):

A key policy question is: How high an inflation rate should central banks target? This depends crucially on the costs of inflation. An important concern is that high inflation will lead to inefficient price dispersion. Workhorse New Keynesian models imply that this cost of inflation is very large. An increase in steady state inflation from 0% to 10% yields a welfare loss that is an order of magnitude greater than the welfare loss from business cycle fluctuations in output in these models. We assess this prediction empirically using a new data set on price behavior during the Great Inflation of the late 1970’s and early 1980’s in the United States. If price dispersion increases rapidly with inflation, we should see the absolute size of price changes increasing with inflation: price changes should become larger as prices drift further from their optimal level at higher inflation rates. We find no evidence that the absolute size of price changes rose during the Great Inflation. This suggests that the standard New Keynesian analysis of the welfare costs of inflation is wrong and its implications for the optimal inflation rate need to be reassessed. We also find that (non-sale) prices have not become more flexible over the past 40 years.

https://eml.berkeley.edu/~enakamura/papers/costsinflation.pdf
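
The test described in the abstract is simple enough to mock up. Here is a minimal sketch (Python; the panel, column names, and numbers are invented placeholders, not the paper's data) of the comparison: is the average absolute size of non-sale price changes larger in high-inflation years than in low-inflation years?

```python
import numpy as np
import pandas as pd

# Hypothetical item-month panel of non-sale price changes (log points) with the
# inflation rate of the corresponding year attached. Placeholder data only.
rng = np.random.default_rng(2)
year = np.repeat(np.arange(1977, 1989), 500)
inflation_pct = np.where(year <= 1981, 12.0, 4.0)        # crude high/low-inflation split
abs_log_change = np.abs(rng.normal(0.0, 0.09, size=year.size))

panel = pd.DataFrame({"year": year,
                      "inflation_pct": inflation_pct,
                      "abs_log_change": abs_log_change})

# The prediction the paper tests: under the workhorse New Keynesian models, the
# absolute size of price changes should be clearly larger when inflation is high.
means = (panel.assign(regime=np.where(panel["inflation_pct"] > 8.0,
                                      "high inflation", "low inflation"))
              .groupby("regime")["abs_log_change"].mean())
print(means)
```

In the paper's actual micro data the two averages come out essentially the same, which is the "no evidence" finding; the placeholder above draws both regimes from one distribution, so it only illustrates the shape of the comparison, not the result.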

Your listing of previous Nakamura articles includes articles by non-Emi Nakamuras!

Wow, I am getting old. I thought it was bad enough that I know the father of a presidential candidate but not the candidate (Don Harris, dad of Kamala Harris), but now I know the mother (Alice Orcutt Nakamura) and knew the late grandfather (Guy Orcutt) of a John Bates Clark Award winner, without knowing the actual winner. But then, this is an award for young people after all....

Data will rule. Good thing too.

Bob Hall had a beer for lunch to celebrate. He said she is the first hardcore macroeconomist to win the Clark.
