Month: October 2011

Christopher Sims, Nobel Laureate

Here is Sims’s home page, lots of content.  Here is his Wikipedia page.  Here is Sims on scholar.google.com.  Here is a video of Sims speaking.  Sims is currently at Princeton but most closely associated with the University of Minnesota.  Basically this is a prize in praise of Minnesota macro, fresh water macro of course, and lots of econometrics.  Think of Sims as an economist who found the traditional Keynesian methods “just not good enough” and who worked hard to improve them.  He brought a lot more rigor into empirical macro and he helped define a school of thought at the University of Minnesota.  His influence will endure.  Some of his results raised the status of the “real shocks” approach to business cycles, although I think of Sims’s work as more defined by a method than by any set of conclusions.

I think of Sims as having three major contributions: vector autoregression as a macroeconomic method, impulse response functions, and deep examinations of money-income causality.  Via Tim Harford, here are PowerPoint slides on the first two, a first-rate presentation.  If you know some math, this is the place to go on Sims.

Here are Jim Hamilton’s mathematical notes on impulse response functions.  The impulse response function has helped economists sort out the differences between expected and unexpected shocks, and it has become a regular part of the macroeconomic toolkit.  The Swedes give a simple — perhaps too simple — exposition of impulse response functions.  Wikipedia has a simple introduction:

In signal processing, the impulse response, or impulse response function (IRF), of a dynamic system is its output when presented with a brief input signal, called an impulse. More generally, an impulse response refers to the reaction of any dynamic system in response to some external change. In both cases, the impulse response describes the reaction of the system as a function of time (or possibly as a function of some other independent variable that parameterizes the dynamic behavior of the system).

Here is one good brief survey of VAR techniques.  Here is another: tough stuff!  Here is one of Sims’s seminal papers related to VAR techniques.  Basically this stuff is saying we don’t know as much as we might like to think we do, most of all about macroeconomics.  The suggestion is that empirical work should proceed with extreme caution and that we should see what we can scoop out of the data in a robust fashion.
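
For readers who want to see what this looks like in practice, here is a minimal, hedged sketch of the VAR-plus-impulse-response workflow using Python’s statsmodels library; the simulated data, variable names, lag length, and ordering are all illustrative assumptions, not Sims’s actual specification.

```python
# A minimal sketch: fit a small reduced-form VAR and trace out orthogonalized
# impulse responses. The data are simulated stand-ins; in practice these would be
# quarterly macro series such as output growth, inflation, and a short rate.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((200, 3)),
                    columns=["output_growth", "inflation", "interest_rate"])

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")  # let an information criterion pick the lag length
print(results.summary())

# Impulse responses over 12 periods; orth=True uses a Cholesky ordering that
# follows the column order of the DataFrame.
irf = results.irf(12)
irf.plot(orth=True)
```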

He has done serious work on extending concepts of Granger causality; in this context the question is whether money causes output or output causes money.  Sims’s empirical techniques helped bring people to the conclusion that it was often output causing money, and in the 1980s this was a revelation of sorts (though not a new idea to economics).
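
To illustrate the money-income exercise, here is a hedged sketch of a Granger-causality check run in both directions; the series names, the simulated data, and the lag length are placeholders rather than anything taken from Sims’s papers.

```python
# A sketch of the money-income causality exercise: test each direction with a
# Granger-causality test. Series names, data, and lag length are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.standard_normal((200, 2)), columns=["output", "money"])

# Does money help predict output, once output's own lags are included?
grangercausalitytests(df[["output", "money"]], maxlag=4)

# And the reverse: does output help predict money?
grangercausalitytests(df[["money", "output"]], maxlag=4)
```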

Here are his files on the topic of rational inattention, which draws on Shannon’s communication theory; it is not what he is best known for, but he has made contributions in that area as well.  In this paper he tries to show how rational inattention can give rise to partially Keynesian results.  With Sargent, he also has contributions to the fiscal theory of the price level.

Here is a 2007 interview with Sims, quite accessible.  He says that monetary policy doesn’t matter as much as you think.  He does favor explicit monetary targets, and he worries about the fiscal foundations of the euro.  Here is a more technical interview, on statistics, Bayesian reasoning, and GMM; it’s Sims putting some of the math into words, sort of.

Overall: Sims is one of the most important figures in macro econometrics in the last thirty years, if not the most important.  He clearly deserves a Nobel Prize.

From the comments, on Sims and IS-LM

This is from E. Barandiaran and it relates to recent controversies in the blogosphere:

This is the last section of a 1998 Sims paper on the ISLM model:

4. Conclusion

• Keynesian reasoning ought to be essentially forward looking and to emphasize expectational factors in savings and investment decisions. Traditional ISLM hides and inhibits development of this aspect of Keynesian modeling.

• ISLM ignores connections between monetary and fiscal policy that are enforced by the government budget constraint. In many policy contexts, this is a major gap.

• It remains to be seen whether there is a way to capture these aspects of Keynesian modeling in a package as neat and non-technical as ISLM, but that should not be an excuse for continuing to make ISLM the core of our teaching and informal policy discussion.

And this is the abstract:

Abstract. ISLM inhibits attention to expectations in macroeconomics, going against the spirit of Keynes’s own approach. This can lead to mistaken policy conclusions and to unnecessarily weak responses to classical critiques of Keynesian modeling. A coherent Keynesian approach, accounting for endogenous expectations, implies very strong effects of monetary and fiscal policy and leads to greater attention to the role of the government budget constraint in making the effects of monetary policy conditional on prevailing fiscal responses, and vice versa.

http://sims.princeton.edu/yftp/Bergamo/Bergamo.pdf

Nobel for Sargent and Sims

The Nobel in Economics goes to Thomas Sargent and Christopher Sims, for empirical macroeconomics.

Let’s go back to the Lucas Critique of 1976. Lucas looked at the large econometric models of the 1970s, models that contained hundreds of variables relating economic aggregates like income, consumption, unemployment and so forth. Lucas then asked whether these models could be used to predict the impact of new policies. One could certainly take the regression coefficients from these models and forecast but Lucas argued that such a method was invalid because the regression coefficients themselves would change with new policies.

If you wanted to understand the effects of a new policy you had to go deeper: you had to model the decision rules of individuals based on deep, invariant or “structural” factors, such as how people value labor and leisure, that would not change as policy changed, and you had to include in your macro model another deep factor, expectations.

The Nobel for Christopher Sims and Thomas Sargent is for work each did in their quite different ways to develop ideas and techniques to address the Lucas Critique. Sargent’s (1973, 1976) early work showed how models incorporating rational expectations could be tested empirically. In many of these early models, Sargent showed that including rational expectations in a model could lead to invariance results: nominal shocks caused by changes in the money supply, for example, wouldn’t matter.

Sargent’s name thus became connected with rational expectations and new-classical invariance results. Sargent himself, however, has long moved past rational expectations models towards models that incorporate learning. What will people do when they don’t know the true model of the economy? How will they update their model of the economy based on observations? In these learning models the goal is to look for a self-confirming equilibrium. The interesting thing about a self-confirming equilibrium is that people’s expectations and learning can converge on a false model of the economy! Sargent has thus evolved in a very different direction than one might have imagined in 1976.

Sargent is also a very good economic historian, having written important pieces on monetary history (and also here on America) that combine history with theory.

Sims was also unsatisfied with the standard econometric models of the 1970s. In response, he developed vector autoregressions (VARs). In its simplest form a VAR is just a regression of a variable on its past values and the past values of other related variables. It’s easy to run a VAR on unemployment, inflation, and output, for example. Such a VAR doesn’t tell you much about structural parameters, but surprisingly even very simple VARs have quite good forecasting ability relative to the macro models of the 1970s; this was another reason why those models declined in importance.
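
To make “just a regression on lags” concrete, here is a bare-bones sketch of a single VAR equation estimated by ordinary least squares; the data are simulated and the variable names and lag length are arbitrary choices, not a real specification.

```python
# One equation of a reduced-form VAR, estimated by OLS: regress unemployment on
# its own lags and on lags of inflation and output. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((200, 3)),
                  columns=["unemployment", "inflation", "output"])

p = 2  # number of lags
lags = pd.concat([df.shift(k).add_suffix(f"_lag{k}") for k in range(1, p + 1)], axis=1)

y = df["unemployment"].iloc[p:]
X = sm.add_constant(lags.iloc[p:])
print(sm.OLS(y, X).fit().summary())
```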

Sims, however, took the models a step further by showing that you could identify fundamental shocks in these models by making assumptions about the dynamics or ordering of the shocks. Interest rates respond to government spending, for example, before government spending responds to interest rates. Note that these ordering assumptions tend to be quite neutral with respect to different economic models, so VARs could be used to test different theories and could also be used by practitioners of many different stripes. Thus VAR models caught on very quickly and have come to dominate macroeconomic modeling.
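
Mechanically, the ordering assumption amounts to taking a Cholesky factor of the reduced-form residual covariance; here is a small numpy sketch with a made-up covariance matrix, purely for illustration.

```python
# Identification by ordering, mechanically: the Cholesky factor of the reduced-form
# residual covariance is lower triangular, so the first variable responds to no other
# shock within the period, the second responds only to the first, and so on.
# The covariance matrix below is invented for illustration.
import numpy as np

sigma_u = np.array([[1.00, 0.30, 0.10],
                    [0.30, 0.80, 0.20],
                    [0.10, 0.20, 0.50]])

P = np.linalg.cholesky(sigma_u)  # contemporaneous impact matrix under the ordering
print(P)

# Structural shocks would be recovered as e_t = inv(P) @ u_t; by construction
# they are mutually uncorrelated with unit variance.
```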

VAR models can also be identified in other ways: instead of relying on an ordering, one can impose what economic theory predicts about long-run relationships. For example, a monetary shock should affect the price level but not the output level in the long run. More generally, modern macro models are dynamic models (they make predictions about how variables evolve over time), so relating a VAR to a model, thus creating a structural or identified VAR, has been the natural way to examine the data and to test modern models.
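
Here is a similarly hedged sketch of long-run identification (in the spirit of Blanchard and Quah) for a toy bivariate system in output growth and inflation; the coefficient and covariance matrices are invented, not estimated.

```python
# A sketch of long-run identification for a toy bivariate VAR(1) in output growth
# and inflation. All numbers are invented for illustration.
import numpy as np

A1 = np.array([[0.4, 0.1],
               [0.2, 0.5]])        # reduced-form lag matrix
sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.6]])   # reduced-form residual covariance

psi1 = np.linalg.inv(np.eye(2) - A1)   # cumulative (long-run) response matrix
long_run_cov = psi1 @ sigma_u @ psi1.T

# Pick the impact matrix B so that the long-run response psi1 @ B is lower
# triangular: the second ("nominal") shock then has no cumulative effect on
# output growth, i.e. no long-run effect on the level of output.
B = np.linalg.inv(psi1) @ np.linalg.cholesky(long_run_cov)

print(psi1 @ B)  # lower triangular by construction
print(B @ B.T)   # equals sigma_u, so B is a valid factorization of the residuals
```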

With identification in hand one can then use these models to plot impulse response functions. How does a shock to oil prices work its way through the economy? When does GDP begin to fall and by how much? How long does it take the economy to recover? What about a shock to monetary policy? Sims (1992), for example, looks at monetary shocks in five modern economies. Understanding these dynamics has played an important role in recent debates over the importance of money, government spending and real shocks.
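
And here is a toy picture of what an impulse response function is: feed a one-time shock into a made-up, stable one-lag VAR and track how it propagates and dies out over time. Nothing below is estimated; the coefficient matrix and the shock are placeholders.

```python
# A toy impulse response: feed a one-time shock into a made-up, stable one-lag VAR
# and track how it propagates and dies out. Nothing here is estimated.
import numpy as np
import matplotlib.pyplot as plt

A1 = np.array([[0.5, 0.1, 0.0],
               [0.2, 0.4, 0.1],
               [0.0, 0.3, 0.6]])        # one-lag coefficient matrix
shock = np.array([0.0, 0.0, 1.0])       # a one-time unit shock to the third variable

horizons = 20
responses = np.zeros((horizons, 3))
responses[0] = shock
for h in range(1, horizons):
    responses[h] = A1 @ responses[h - 1]

plt.plot(responses)
plt.legend(["variable 1", "variable 2", "variable 3"])
plt.title("Responses to a one-time shock to variable 3")
plt.show()
```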

Thomas Sargent, Nobel Laureate

Most of all, this is a prize about expectations, macroeconomics, and the theory and empirics of policy.  Let’s start with Sargent, noting that I will be updating throughout.

Sargent has made major contributions to macroeconomics, the theory of expectations, fiscal policy, economic history, and dynamic learning, among other areas.  He is a very worthy Laureate and an extraordinarily deep and productive scholar.  Here is Wikipedia on Sargent.  Here is his home page, rich with information. Here is Sargent on scholar.google.com.  Here is the explanation for both laureates from Sweden.  Here is a Thomas Sargent lecture on YouTube.

He now teaches at NYU and is a fellow at Hoover, though he spent much of his career at the University of Minnesota.  Sargent is one of the fathers of “fresh water” macro, though his actual views are far more sophisticated than the critics of his approach might let on.  He has done significant work on learning and bounded rationality, for instance.  This is very much a “non-Keynesian” prize.

I think of Sargent as a “foundationalist” economist who always insists on a model and who takes the results of that model seriously.  In general he would be placed in the “market-oriented” camp, though it is a mistake to view his work through the lens of politics.

Sargent was first known for his work on rational expectations in the 1970s.  He wrote a seminal paper, with Neil Wallace, on when rational expectations will mean that monetary policy does not matter.  You will find that article explained here, and the paper here.  Expected monetary growth will not do much for output because it does not fool people and thus its nominal effects wash away.
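
The logic can be sketched in a couple of lines. This is a stylized textbook rendering of the policy-ineffectiveness idea, not the Sargent-Wallace model itself: suppose output deviates from its natural rate only when the price level surprises people,

$$ y_t = \bar{y} + b\,(p_t - E_{t-1}p_t) + \varepsilon_t, \qquad b > 0. $$

If the price level moves with the money supply and the money growth rule is known to agents, then $E_{t-1}p_t$ already incorporates the rule’s systematic part, so the surprise term depends only on the unpredictable component of policy; anticipated money growth drops out of output.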

One of his most important (and depressing) papers is Sargent, Thomas J. and Neil Wallace (1981). “Some Unpleasant Monetarist Arithmetic“. Federal Reserve Bank of Minneapolis Quarterly Review 5 (3): 1–17.  The main idea of this paper is that good monetary policy requires good fiscal policy.  Otherwise the fight against inflation will not be credible.  This is probably his most important paper.
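
In stylized form, the arithmetic runs through the consolidated government budget constraint. This is a textbook rendering of the logic, not the paper’s exact setup:

$$ \underbrace{G_t - T_t + rB_{t-1}}_{\text{deficit, incl. interest}} \;=\; \underbrace{(B_t - B_{t-1})}_{\text{new bonds}} + \underbrace{(M_t - M_{t-1})}_{\text{money creation}}. $$

If fiscal policy fixes the path of primary deficits and the public’s willingness to hold bonds is bounded, then slower money growth today forces more bond issuance now and, eventually, faster money growth later to service the larger debt; tight money without fiscal backing only postpones inflation.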

He followed up this paper with Sargent, Thomas J. (1983). “The Ends of Four Big Inflations” in: Inflation: Causes and Effects, ed. by Robert E. Hall, University of Chicago Press, for the NBER, 1983, pp. 41–97.  This is a masterful work of economic history, showing that monetary stabilizations from hyperinflation first required some fiscal policy successes.  I view this as his second most important paper, following up on and illustrating “unpleasant monetarist arithmetic.”

These two papers inspired work from other researchers on a “fiscal theory of the price level,” integrating monetary and fiscal theories.  In Sargent’s view the quantity theory is a special case of a more general theory of asset-backed monies, and for fiat monies the relevant backing cannot be determined without referring to the fiscal stance of the money-issuing government.

His Dynamic Macroeconomic Theory has been an important Ph.D. text for macro.

Sargent also has important work on computational learning, such as Sargent, Thomas J. and Albert Marcet (1989). “Convergence of Least Squares Learning in Environments with Hidden State Variables and Private Information”. Journal of Political Economy 97 (6): 251. doi:10.1086/261603.  A short summary of his work on learning can be found here; I will admit I have never grasped the intuitive kernel behind this work.  I have not read Sargent’s work on neural networks; you will find some of it here.  It may someday be seen as path-breaking, but so far it has influenced only specialists in that particular area.  It is considered to be of high quality technically.  Here is his piece, with Marimon and McGrattan, on how “artificially intelligent” traders might converge upon a monetary medium of exchange; think of this as a modern and more technical extension of Carl Menger.

Here is an old paper with Sims, co-laureate, on how to do macro econometrics with a minimum of theoretical assumptions; this reflected a broad move away from structural models and toward “theory-less” approaches such as vector autoregression.  Here is his introductory paper on how to understand the VAR method.  Sargent’s worry had been that structural models estimate parameters, but those parameters vary with policy choices, so in essence the economist ends up using an “out of date” model.  VAR models are an attempt to do without structural estimation as much as possible, though critics might suggest this enterprise was not entirely successful.

Here is Sargent’s take on the history of the Fed; basically the Fed first had an OK model, then forgot it for a while (the 1970s), then relearned it.  In July 2010 he penned a defense of the Greenspan-era FOMC, based on the view that they were tackling worst-case scenarios.  Here is Sargent’s paper, with Tim Cogley, on what the Fed should do when it does not know the true model.

Circa 2010, in an interview, Sargent defends the relevance of freshwater macro during the recent financial crisis.  While my view is not exactly his, it is a good corrective to a lot of what you read in the economics blogosphere.  This is the single most readable link in this entire post and the best introduction to Sargent on policy and method for non-economists.  The last few pages of the interview have a good discussion of how the euro was an “artificial gold standard,” how it was based on an understanding of the “unpleasant monetarist arithmetic” point, and how breaking the fiscal rules has led to the possible collapse of the euro.  Recommended.

He has a very interesting 1973 paper on when the price level path will be determinate, again with Neil Wallace.  Here is his old paper on whether Keynesian economics is a dead end.  Here is his appreciation of Milton Friedman’s macroeconomics.  Here is his recent paper on whether financial regulation is needed, in a context of efficiency vs. stability.  Sargent has toyed with free banking ideas over the decades, casting them in the context of “the real bills doctrine.”  Here is a recent paper on determinants of the debt-to-GDP ratio.

He is not primarily known for his work on unemployment, but he has a lot of good papers in the area, many of which are listed here.  Here he uses layoff taxes and unemployment compensation to explain the behavior of unemployment in Europe over the decades.

His work on “catastrophe,” with Cogley and others, suggests that the equity premium changes with historical memory.

With Velde, Sargent wrote a detailed and excellent book on the history of small change; why was small change scarce for so many centuries?  Hint: the answer involves Gresham’s Law.  There is an MR discussion of this book here.  This book illustrates just how deep Sargent’s learning and erudition runs.

Here are his new papers; Sargent remains very active.

Overall: Sargent really is one of the smartest, deepest, and most scholarly of all contemporary economists.  The word “impressive” resonates.  He has enough contributions for 1.6 Nobel Prizes, maybe more.  He has influenced the thought of all good macroeconomists.  The economic history is dedicated and path-breaking.  If I had to come up with a criticism, I find that some of his papers have an excess of rigor and don’t leave the reader with a clear intuitive result.  I am not as enamored of foundations as he is.  Still, that is being picky and this is a very very good choice for the prize.  I would have considered a co-award with Neil Wallace, however, since two of Sargent’s most important papers, the 1975 JPE paper and “unpleasant monetarist arithmetic,” were written with Wallace.

Probably I won’t be updating this post any more!

John Ralston Saul on the decline of political speech

Via www.bookforum.com, from an interesting interview:

GB Who are the best speakers in the world today, politically?

JRS Long silence. The reason for which there is a ‘long silence’ is that, with the gradual bureaucratization of politics, we have ended up with – through the 1970s, 1980s and 1990s – politicians increasingly reading speeches written for them by somebody else; that is, politicians being made to feel that they were not the real political leaders, but rather – in a sense – heads of a large bureaucracy. The result has been that politicians may think that they have a responsibility to speak in a solid and measured way – with the consequence that they not only became boring and bad speakers, but sound artificial and are not listened to. Modern speech writers started adding in ‘rhetoric,’ which sounded artificial, and led to people listening even less to political speeches. This also came with a rise in populism; that is, we saw the revival of populist speaking – with populist politicians winning power here and there – meaning that the speech writers started putting populist rhetoric in as a gloss on top of the boring managerial material that they had been producing. So what we now have are sensible, elected leaders giving speeches that, at one level, are boring, solid stuff and, at another level, cheap rhetoric.

…Many political leaders think that it is dangerous to speak well. In fact, they are looking to bore people – and we feel that. As a result, when we stand up and say real things, people are quite shocked. And that is because they are always working on this level of measurement. If we take someone like a Trudeau or an FDR, or an LBJ, or a de Gaulle – someone like that – they knew that speeches are not about who will like them and dislike them. Speeches are actually about whether people will respect you because you have spoken to them in a way that they take to be honest – as if they are treated in a way that is intelligent. Trudeau was often boring, but his secret was that, even when he was being insulting, he was talking to you as if you were as smart as he was.

Not a CLASS Act

When President Obama’s health care proposal was being debated we were repeatedly told that “The president’s plan represents an important step toward long-term fiscal sustainability.” Indeed, a key turning point in the bill’s progress was when the CBO scored it as reducing the deficit by $130 billion over 10 years, making the bill’s proponents positively giddy, as Peter Suderman put it at the time. Of course, many critics claimed that the cost savings were gimmicks, but their objections were overruled.

One of the budget savings that the critics claimed was a gimmick was that a new long-term care insurance program, the Community Living Assistance Services and Supports program, or CLASS for short, was counted as reducing the deficit. How can a spending program reduce the deficit? Well, the enrollees had to pay in for at least five years before collecting benefits, so over the first 10 years the program was estimated to reduce the deficit by some $70-80 billion. Indeed, these “savings” from the CLASS act were a big chunk of the $130 billion in 10-year deficit reduction for the health care bill.

The critics of the plan, however, were quite wrong for it wasn’t a gimmick, it was a gimmick-squared, a phantom gimmick, a zombie gimmick:

They’re calling it the zombie in the budget.

It’s a long-term care plan the Obama administration has put on hold, fearing it could go bust if actually implemented. Yet while the program exists on paper, monthly premiums the government may never collect count as reducing federal deficits.

Real or not, that’s $80 billion over the next 10 years….

“It’s a gimmick that produces phantom savings,” said Robert Bixby, executive director of the Concord Coalition, a nonpartisan group that advocates deficit control.

“That money should have never been counted as deficit reduction because it was supposed to be set aside to pay for benefits,” Bixby added. “The fact that they’re not actually doing anything with the program sort of compounds the gimmick.”

Moreover, there were many people inside the administration who thought that the program could not possibly work and who said so at the time. Here is Rick Foster, Chief Actuary of HHS’ Centers for Medicare and Medicaid Services, on an earlier (2009) draft of the proposal:

The program is intended to be “actuarially sound,” but at first glance this goal may be impossible. Due to the limited scope of the insurance coverage, the voluntary CLASS plan would probably not attract many participants other than individuals who already meet the criteria to qualify as beneficiaries. While the 5-year “vesting period” would allow the fund to accumulate a modest level of assets, all such assets could be used just to meet benefit payments due in the first few months of the 6th year. (italics added)

So we have phantom savings from a zombie program and many people knew at the time that the program was a recipe for disaster.

Now some people may argue that I am biased, that I am just another free market economist who doesn’t want to see a new government program implemented no matter what, but let me be clear, this isn’t CLASS warfare, this is math.

Hat tip: Andrew S.

More Russ Roberts on TGS

Russ has written a reply to my response; read his whole piece.  Using his numbering, I will put a few follow-up responses under the fold…

1. Male median wage data (down since 1969) suggest divorce is not the main issue; in any case divorce is an economic and psychological catastrophe for many people, and defending living standards by invoking the effects of divorce in the data strikes me as actually more pessimistic than my view.  I suspect Russ’s own cultural values are in accord with this perspective.  Russ’s postulated effect also does not explain 1998-2011 median wage stagnation very well.

3. The key question is the net bias of statistics, not the bias for consumer durables alone.  Our real economic performance on a lot of services — a huge and growing part of the economy — is extremely weak.  As durables get cheaper, the biases in measuring their quality become less important.

4. I don’t see that Russ has made an actual counter to my argument here.

6. In successful periods growth shows up in the major mainstream economic statistics, including the median.  If it doesn’t, at the very least we should conclude that growth is considerably slower than usual.

9. There is no measured median income progress since 1997 and very little since 1973; that’s not just a cyclical phenomenon.  The supposedly good years of the noughties now look like a bubble, not the reality.

On panel data, I read the Pew Report which Russ cites.  Over a period of more than thirty years, only 63 percent of children had incomes exceeding those of their parents, and that comparison includes some pre-TGS, quite high-growth years.  I don’t find that number impressive at all.  In any case the key question is a comparative one, and while the study has not been done, it is highly likely one would find much stronger cross-generational measures of progress for earlier generations.

On reconciling the per capita GDP and median stories, the concept of rent-seeking — most of all through the service sectors and finance and government — will suffice.  I know that Russ already agrees with the finance side of this story, maybe the government side too, and who knows, perhaps education and medicine as well?

Overeducation in the UK

Chevalier and Lindley have a new paper:

During the early Nineties the proportion of UK graduates doubled over a very short period of time. This paper investigates the effect of the expansion on early labour market attainment, focusing on over-education. We define over-education by combining occupation codes and a self-reported measure for the appropriateness of the match between qualification and the job. We therefore define three groups of graduates: matched, apparently over-educated and genuinely over-educated; to compare pre- and post-expansion cohorts of graduates. We find the proportion of over-educated graduates has doubled, even though over-education wage penalties have remained stable. This suggests that the labour market accommodated most of the large expansion of university graduates. Apparently over-educated graduates are mostly indistinguishable from matched graduates, while genuinely over-educated graduates principally lack non-academic skills such as management and leadership. Additionally, genuine over-education increases unemployment by three months but has no impact on the number of jobs held. Individual unobserved heterogeneity differs between the three groups of graduates but controlling for it does not alter these conclusions.

For the pointer I thank Alan Mattich, a loyal MR reader.

Which countries target their welfare spending most effectively?

Australia has the most “target efficient” system of social security benefits of any OECD country. For each dollar of spending on benefits our system reduces income inequality by about 50 per cent more than the United States, Denmark or Norway, twice as much as Korea, two and a half times as much as Japan or Italy, and three times as much as France.

Other countries that are similar to Australia in this regard include New Zealand, the United Kingdom and Ireland, and also Denmark and Finland. In fact, nearly all of the high-spending Scandinavian welfare states target to the poor more than does the United States.

The full story is here, and if you’re wondering I too am confused by the double invocation of Denmark.  Hat tip goes to www.bookforum.com.

On related themes, see the new Lane Kenworthy book Progress for the Poor.