Thomas Sargent, Nobel Laureate

Most of all, this is a prize about expectations, macroeconomics, and the theory and empirics of policy.  Let’s start with Sargent, noting that I will be updating throughout.

Sargent has made major contributions to macroeconomics, the theory of expectations, fiscal policy, economic history, and dynamic learning, among other areas.  He is a very worthy Laureate and an extraordinarily deep and productive scholar.  Here is Wikipedia on Sargent.  Here is his home page, rich with information.  Here is the explanation for both laureates from Sweden.  Here is a Thomas Sargent lecture on YouTube.

He now teaches at NYU, and is a fellow at Hoover, though he spent much of his career at the University of Minnesota.  Sargent is one of the fathers of “fresh water” macro, though his actual views are far more sophisticated than critics of his approach might let on.  He has done significant work on learning and bounded rationality, for instance.  This is very much a “non-Keynesian” prize.

I think of Sargent as a “foundationalist” economist who always insists on a model and who takes the results of that model seriously.  In general he would be placed in the “market-oriented” camp, though it is a mistake to view his work through the lens of politics.

Sargent was first known for his work on rational expectations in the 1970s.  He wrote a seminal paper, with Neil Wallace, on when rational expectations will mean that monetary policy does not matter.  You will find that article explained here, and the paper here.  Expected monetary growth will not do much for output because it does not fool people and thus its nominal effects wash away.

One of his most important (and depressing) papers is Sargent, Thomas J. and Neil Wallace (1981). “Some Unpleasant Monetarist Arithmetic”. Federal Reserve Bank of Minneapolis Quarterly Review 5 (3): 1–17.  The main idea of this paper is that good monetary policy requires good fiscal policy.  Otherwise the fight against inflation will not be credible.  This is probably his most important paper.
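The core mechanism can be sketched with the consolidated government budget constraint; the notation here is a standard textbook rendering, not the paper's own:

```latex
% Period-t consolidated government budget constraint: the real deficit,
% inclusive of interest on outstanding debt, must be financed either by
% issuing new bonds or by printing money (seigniorage).
D_t + r B_{t-1} \;=\; \left(B_t - B_{t-1}\right) \;+\; \frac{M_t - M_{t-1}}{P_t}
```

If fiscal policy fixes the path of deficits $D_t$ and the public's willingness to hold bonds $B_t$ is bounded, then tighter money today (slower growth of $M_t$) only forces faster debt accumulation, and hence more seigniorage and inflation later: the "unpleasant arithmetic."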

He followed up this paper with Sargent, Thomas J. (1983). “The Ends of Four Big Inflations” in: Inflation: Causes and Effects, ed. by Robert E. Hall, University of Chicago Press, for the NBER, 1983, p. 41–97.  This is a masterful work of economic history, showing that monetary stabilizations from hyperinflation first required some fiscal policy successes.  I view this as his second most important paper, following up on and illustrating “unpleasant monetarist arithmetic.”

These two papers inspired work from other researchers on a “fiscal theory of the price level,” integrating monetary and fiscal theories.  In Sargent’s view the quantity theory is a special case of a more general theory of asset-backed monies, and for fiat monies the relevant backing cannot be determined without referring to the fiscal stance of the money-issuing government.

His Dynamic Macroeconomic Theory has been an important Ph.D. text for macro.

Sargent also has important work on computational learning, such as Sargent, Thomas J. and Albert Marcet (1989). “Convergence of Least Squares Learning in Environments with Hidden State Variables and Private Information”. Journal of Political Economy 97 (6): 251. doi:10.1086/261603.  A short summary of his work on learning can be found here; I will admit I have never grasped the intuitive kernel behind this work.  I have not read Sargent’s work on neural networks; you will find some of it here.  It may someday be seen as path-breaking, but so far it has influenced only specialists in that particular area.  It is considered to be of high quality technically.  Here is his piece, with Marimon and McGrattan, on how “artificially intelligent” traders might converge upon a monetary medium of exchange; think of this as a modern and more technical extension of Carl Menger.
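The flavor of this learning literature can be conveyed with a toy example (my own illustration, not the Marcet-Sargent model): agents forecast a price with a decreasing-gain least-squares rule in a self-referential model, and the belief converges to the rational-expectations fixed point.

```python
import numpy as np

# Toy self-referential model: p_t = mu + alpha * E_{t-1}[p_t] + eps_t.
# Agents forecast with the running mean theta (recursive least squares
# on a constant), updated with decreasing gain 1/t.
rng = np.random.default_rng(0)
mu, alpha, sigma = 1.0, 0.5, 0.1
theta = 0.0                      # initial belief about E[p]
for t in range(1, 5001):
    p = mu + alpha * theta + sigma * rng.standard_normal()
    theta += (p - theta) / t     # least-squares (mean) update

# The rational-expectations fixed point is mu / (1 - alpha) = 2;
# the learned belief ends up close to it.
print(theta)
```

The point of the convergence results is that this kind of adaptive process, under stability conditions (here, alpha < 1), homes in on the rational-expectations equilibrium rather than wandering off.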

Here is an old paper with Sims, co-laureate, on how to do macro econometrics with a minimum of theoretical assumptions; this reflected a broad move away from structural models and toward “theory-less” approaches such as vector autoregression (VAR).  Here is his introductory paper on how to understand the VAR method.  Sargent’s worry had been that structural models estimate parameters, but those parameters will shift with policy choices, so in essence the economist will be using an “out of date” model.  VAR models are an attempt to do without structural estimation as much as possible, though critics might suggest this enterprise was not entirely successful.
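A minimal sketch of the reduced-form VAR idea, using a simulated two-variable VAR(1) estimated by per-equation least squares; the variables and coefficients are illustrative, not from any Sargent-Sims application:

```python
import numpy as np

# Simulate a two-variable VAR(1): y_t = c + A @ y_{t-1} + e_t
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])
c_true = np.array([1.0, 0.5])
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = c_true + A_true @ y[t - 1] + 0.1 * rng.standard_normal(2)

# Reduced-form estimation: regress y_t on a constant and y_{t-1}.
# No structural (theoretical) restrictions are imposed -- that is the
# "theory-less" point of the VAR approach.
X = np.column_stack([np.ones(T - 1), y[:-1]])   # regressors
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # (3, 2): const + lag coefs
c_hat, A_hat = B[0], B[1:].T
print(np.round(A_hat, 2))
```

With enough data the lag matrix is recovered accurately; the structural debates are then about how (or whether) to map these reduced-form estimates back into economically interpretable shocks.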

Here is Sargent’s take on the history of the Fed; basically the Fed first had an OK model, then forgot it for a while (the 1970s), then relearned it again.  In July 2010 he penned a defense of the Greenspan-era FOMC, based on the view that they were tackling worst case scenarios.  Here is Sargent’s paper, with Tim Cogley, on what the Fed should do when it does not know the true model.

Circa 2010, in an interview, Sargent defends the relevance of freshwater macro during the recent financial crisis.  While my view is not exactly his, it is a good corrective to a lot of what you read in the economics blogosphere.  This is the single most readable link in this entire post and the best introduction to Sargent on policy and method for non-economists.  The last few pages of the interview have a good discussion of how the euro was an “artificial gold standard,” how it was based on an understanding of the “unpleasant monetarist arithmetic” point, and how breaking the fiscal rules has led to the possible collapse of the euro.  Recommended.

He has a very interesting 1973 paper on when the price level path will be determinate, again with Neil Wallace.  Here is his old paper on whether Keynesian economics is a dead end.  Here is his appreciation of Milton Friedman’s macroeconomics.  Here is his recent paper on whether financial regulation is needed, in a context of efficiency vs. stability.  Sargent has toyed with free banking ideas over the decades, casting them in the context of “the real bills doctrine.”  Here is a recent paper on determinants of the debt-GDP ratio.

He is not primarily known for his work on unemployment, but he has a lot of good papers in the area, many of them listed here.  Here he uses layoff taxes and unemployment compensation to explain the behavior of unemployment in Europe over the decades.

His work on “catastrophe,” with Cogley and others, suggests that the equity premium changes with historical memory.

With Velde, Sargent wrote a detailed and excellent book on the history of small change; why was small change scarce for so many centuries?  Hint: the answer involves Gresham’s Law.  There is an MR discussion of this book here.  This book illustrates just how deep Sargent’s learning and erudition runs.

Here are his new papers; Sargent remains very active.

Overall: Sargent really is one of the smartest, deepest, and most scholarly of all contemporary economists.  The word “impressive” resonates.  He has enough contributions for 1.6 Nobel Prizes, maybe more.  He has influenced the thought of all good macroeconomists.  The economic history is dedicated and path-breaking.  If I had to come up with a criticism, I find that some of his papers have an excess of rigor and don’t leave the reader with a clear intuitive result.  I am not as enamored of foundations as he is.  Still, that is being picky and this is a very, very good choice for the prize.  I would have considered a co-award with Neil Wallace, however, since two of Sargent’s most important papers, the 1975 JPE piece and “unpleasant monetarist arithmetic,” were written with Wallace.

Probably I won’t be updating this post any more!


This is the last section of a 1998 Sims paper on the ISLM model:

4. Conclusion

• Keynesian reasoning ought to be essentially forward looking and to emphasize expectational factors in savings and investment decisions. Traditional ISLM hides and inhibits development of this aspect of Keynesian modeling.

• ISLM ignores connections between monetary and fiscal policy that are enforced by the government budget constraint. In many policy contexts, this is a major gap.

• It remains to be seen whether there is a way to capture these aspects of Keynesian modeling in a package as neat and non-technical as ISLM, but that should not be an excuse for continuing to make ISLM the core of our teaching and informal policy discussion.

And this is the abstract:

Abstract. ISLM inhibits attention to expectations in macroeconomics, going against the spirit of Keynes’s own approach. This can lead to mistaken policy conclusions and to unnecessarily weak responses to classical critiques of Keynesian modeling. A coherent Keynesian approach, accounting for endogenous expectations, implies very strong effects of monetary and fiscal policy and leads to greater attention to the role of the government budget constraint in making the effects of monetary policy conditional on prevailing fiscal responses, and vice versa.

Here's an interview with Sargent on macroeconomics and the crisis.

He's clearly of the view that macro doesn't have to change dramatically in response to the lessons of the crisis.

This is one of those obvious and long-overdue prizes. Well deserved by both Sims and Sargent.

Rational expectations won Sargent and Lucas the Nobel prize. But some of the arguments used to support their theories won't stand the test of time, especially the claim that consumption is independent of income. This 30-year mistake was first published in the Journal of Political Economy in 1981 (Lucas, editor), and then in Rational Expectations and Econometric Practice (Lucas and Sargent, editors). By applying the same math found in Sargent's macroeconomics textbook, one can easily derive the proof that the change in savings is a function of income growth, which goes much further in explaining the U.S. and other economies than their original conclusion. In fact, it helps explain why positive growth can lead to negative savings. And in particular, to understand the effect of trade on income and therefore on savings you must reject some of their supporting arguments. See my paper at

It should be recognized that Sargent abandoned simple-minded ratex nearly 20 years ago. The remnant of it in his view is that while in reality people are using adaptive learning processes, those tend to converge on rational expectations. Sargent and Sims are both more sophisticated than those who continue to mindlessly churn out models assuming ratex and declaring that these reflect what is really going on in the macroeconomy.

Yes, see Alex's post.

Sargent is a giant, finally a well deserved prize. If anyone has pushed the boundaries of macro regularly, it was him. And I still think that his book 'Robustness' marks the future path of quantitative macro. It has been neglected so far, but the idea of robust control as a way of incorporating uncertainty about the underlying model is definitely a must for the future of monetary policy.

Quoting Sims paper on the ISLM just shows how outdated these macro bloggers are.

Well, we can move to the frontier. This is the conclusion of Sims's survey of "Rational Inattention and Monetary Economics", chapter 4 of the Handbook of Monetary Economics, Vol. 3 (2011):

Rational inattention has cast a critical light on much existing financial and macroeconomic modeling, suggesting that the now-standard technical apparatus of rational expectations could easily give misleading conclusions. At the same time, formally incorporating rational inattention into macroeconomic and financial models is an immense technical challenge. While the modest progress to date on these technical challenges may be discouraging, we might take comfort in the fact that rational expectations itself was seen as imposing immense technical challenges at the outset, so that it took decades for it to become a regular part of policy modeling.

AS BACKGROUND to understand the context in which Sims has been analyzing the idea of "rational inattention" readers may take a look at his 2002 paper on "Implications of Rational Inattention" whose introduction says:

Keynes’s seminal idea was to trace out the equilibrium implications of the hypothesis that markets did not function the way a seamless model of continuously optimizing agents, interacting in continuously clearing markets would suggest. His formal device, price “stickiness”, is still controversial, but those critics of it who fault it for being inconsistent with the assumption of continuously optimizing agents interacting in continuously clearing markets miss the point. This is its appeal, not its weakness.

The influential competitors to Keynes’s idea are those that provide us with some other description of the nature of the deviations from the seamless model that might account for important aspects of macroeconomic fluctuations. Lucas’s 1973 classic “International Evidence...” paper uses the idea that agents may face a signal-extraction problem in distinguishing movements in the aggregate level of prices and wages from movements in the specific prices they encounter in transactions. Much of subsequent rational expectations macroeconomic modeling has relied on the more tractable device of assuming an “information delay”, so that some kinds of aggregate data are observable to some agents only with a delay, though without error after the delay. The modern sticky-price literature provides stories about individual behavior that explain price stickiness and provide a handle for thinking about what determines dynamic behavior of prices.

Most recently, theories that postulate deviations from the assumption of rational, computationally unconstrained agents have drawn attention. One branch of such thinking is in the behavioral economics literature (Laibson, 1997; Benabou and Tirole, 2001; Gul and Pesendorfer, 2001, e.g.), another in the learning literature (Sargent, 1993; Evans and Honkapohja, 2001, e.g.), another in the robust control literature (Giannoni, 1999; Hansen and Sargent, 2001; Onatski and Stock, 1999, e.g.). This paper suggests yet another direction for deviation from the seamless model, based on the idea that individual people have limited capacity for processing information.

That people have limited information-processing capacity should not be controversial. It accords with ordinary experience, as do the basic ideas of the behavioral, learning, and robust control literatures. The limited information-processing capacity idea is particularly appealing, though, for two reasons. It accounts for a wide range of observations with a relatively simple single mechanism. And, by exploiting ideas from the engineering theory of coding, it arrives at predictions that do not depend on the details of how information is processed.

In this paper we work out formally the implications of adding information-processing constraints to the kind of dynamic programming problem that is used to model behavior in many current macroeconomic models. It turns out that doing so alters the behavior implied by these models in ways that seem to accord, along several dimensions, with observed macroeconomic behavior. It also suggests changes in the way we model the effects of a change in policy “rule” and in the way we construct welfare criteria in assessing macroeconomic policy. These aspects of the results are discussed in detail in later sections of the paper, which can be read independently of the mathematical detail in earlier sections.


I hope you agree that long ago Sims, Sargent and other macroeconomists moved well beyond ISLM models and even rational expectations. Unfortunately policy debates still rely on ad hoc corrections to ISLM models. Earlier today, when Sims was interviewed as part of the Prize announcement, he was asked about the relevance of his work for the ongoing policy debate. He politely answered that it would take too long to explain his work and to relate it to the ongoing policy debate. Indeed, the work of scholars at the frontier of macroeconomics is not something one would recommend bloggers, particularly journalists and other mercenaries of fraudulent clowns, to discuss. Let them entertain themselves discussing about what Keynes said and what Dernburg & McDougall's ISLM model implies for policy making.

The question in the literature has been settled for ages. To use papers from the literature to talk about the problem of why policy makers keep falling back on ISLM is missing the point. Seriously, when was the last ISLM paper published in a serious journal? I definitely agree with you about the frontier of macro, and regarding journalists (not sure who you mean by mercenaries; perhaps I'm out of context here, but it seems you have someone particular in mind). But between ISLM and frontier macro there is a huge gap: monetary authorities understandably do not yet use frontier macro, but they are much better supported by modern macro theory. For example, the ECB has long used a mix of empirical and theoretical models for both forecasting and policy advice. That's where the policy debate could be, IMO. Frontier stuff should provide some caveats but should not be taken as a base for policy until we are sure it's relevant (by which time it will no longer be frontier).

Also, rational inattention models do not deviate from rational expectations. Rational expectations is about how agents use the information they have at hand, NOT about perfect information. Agents who are rationally inattentive form rational expectations about the future, conditional on their information set. The only difference is that the information set is made endogenous, by the assumption of limited capacity to gather information. So perhaps you are misguided when you oppose the two.

The big surprise is that Hansen wasn't included. He worked under Sims as a grad student, developed the best econometric advance in the past 30 years (GMM), and works closely with Sargent on robustness in macro models.

That's clearly because Lars Hansen will win on his own!

Here is one student's view on Sargent, by the way...

I know it's not relevant to the prize, but I feel the need to add that Sargent is also an excellent teacher. He taught a fantastic advanced macro course for undergrads a couple years ago--one of the best courses I've ever had the privilege of taking.

I think your comment is very relevant. This is a man that has spent his life in academia, and has shared his knowledge with several generations of students. Yes he is an extraordinary economist, but he is also a great teacher.

I am very pleased to find one of the two professors I had in the mid-eighties as a recipient of this esteemed award. Professors Sargent and Wallace had a profound effect on me as an undergraduate at the University of Minnesota. They were part of an Economics department that, in the 1980s, was a very exciting place. They taught me economic theory, but more importantly they sparked my passion for learning. I had spent a few years studying engineering and found professors in that field very unapproachable by undergraduates. I took a class from Neil Wallace on rational expectations, and I was hooked and quickly changed my major to Economics. Thomas Sargent, Neil Wallace, and the rest of the Economics department were truly interested in teaching and sharing their passion for economic theory. I found these professors very accessible; their doors were always open to all students, undergraduate or graduate, something I found very rare at the time at Minnesota.

They lit a passion for learning in me that still burns today. I graduated from Minnesota with a Bachelor of Science degree in Economics. While I did not continue my graduate studies in Economics, I did take the knowledge, methods, and passion I gained from these men and applied it to my graduate studies in Manufacturing Systems.

Congratulations Professor Sargent.

I did not become an economist, but I did take Money II with Sargent at Chicago back in 1993.

Sargent swept into the classroom and unpacked some notes onto the lectern. He said, "OK, let's get started," turned, and began writing differential equations on the blackboard. This occupied most of the next six weeks and then one day he said, "Now that you have some intuition for [these models], let's talk about economics." What followed was utterly simple: when the price [of money] goes up, people demand less. But this is indeed what falls out (perhaps in a sort of extended tautology) of the representative-agent models in _Dynamic Macroeconomic Theory_.

I remember Sargent as genuinely sweet. He engaged other theorists as voices in a conversation rather than as opponents to be disparaged or dismissed. He appeared to be completely free of contempt, even if he was arguing against something he thought was totally wrongheaded.

I'm (much more) cynical now, but Sargent seemed for all the world to be a genuinely good person working in good faith with the assumption that others were too. The lesson that this is a productive way to work and an excellent way to live stuck long after the equations faded.

At the end of the quarter he received an extended ovation, in which I clapped wholeheartedly. It was the most sincere expression of public appreciation I've yet witnessed.

Sargent was generous and rigorous. I wanted to be like him.

It's also interesting to see this celebrated as a non-Keynesian prize when Sargent and Sims seem to have been given this mostly for their contributions to structural VARs...which are the preferred tool of modern Keynesians.

That said, Sargent's contributions were so much more vast than that, it's almost strange he received the award without much mention of his contributions to economic theory.

The Nobel committee's mistakes continue: choosing proponents of unrealistic mathematical economics and avoiding non-American and heterodox economists.

Err, just recently Chris Pissarides (2010) is from Cyprus, and Elinor Ostrom (2009) is a political scientist.

The YouTube link seems broken?

Nobel prize to a neoclassical libertarian. No surprise there.

TC boldly declares that this was a 'non Keynesian prize.' Before making such a pronouncement, he should have waited to see:
