Results for “nobel”

The 2014 Nobel Laureate in economics is Jean Tirole

A theory prize!  A rigor prize!  I would say it is about principal-agent theory and the increasing mathematization of formal propositions as a way of understanding economics.  He has been a leading figure in formalizing propositions in many distinct areas of microeconomics, most of all industrial organization but also finance and financial regulation and behavioral economics and even some public choice too.  He is a broader economist than many of his fans realize.

Tirole is a Frenchman, he teaches at Toulouse, and his key papers start in the 1980s.  In industrial organization, you can think of him as extending the earlier work of Ronald Coase and Oliver Williamson with regard to opportunism and recontracting, but applying more sophisticated and more mathematical forms of game theory.  Tirole also has been a central figure in procurement theory and optimal contracts when there is asymmetric information about costs.  The idea of mechanism design runs throughout his papers in many different guises.  Many of his papers show “it’s complicated,” rather than presenting easily summarizable, intuitive solutions which make for good blog posts.  That is one reason why his ideas do not show up so often in blogs and the popular press, but they nonetheless have been extremely influential in the economics profession.  He has shown a remarkable breadth and depth over the course of the last thirty or so years.

He had been heralded as a possible pick for some years now, so this award should not be considered a surprise at all.  You will note that the Swedes mention Jean-Jacques Laffont, who died a decade ago and who co-authored many of the key papers in this area with Tirole.  Such a mention is a nod implying that Laffont, had he lived, would have shared in the prize.

Here is Tirole’s home page.  Here is Tirole on Wikipedia.  Here is a short biography.  Here is Tirole on scholar.google.com.  Here is the press release.  Here is background from the Swedes.  Here is the 54-page document on why he won, one of the best places to start.  Here is the Twitter commentary.

One idea of Tirole’s I use frequently has to do with renegotiability.  Let’s say a regulator and a monopolist agree to a scheme of regulation and provision, creating some surplus for both parties.  As time passes, will each side of that bargain stick with the original agreement?  A simple example here is the defense contractor.  After a procurement contract is written, sometimes the supplier has the incentive to conduct a hold-up, to report that costs are higher than expected, and to ask for more money in return for timely fulfillment of the contract.  Of course this is a contract breach, but if no other supplier can step in and do the job, it may be optimal for the government to give in to these demands to some degree.  The question then is: how should the contract best be designed in advance, so as to prevent this problem from popping up later on?  Or should the renegotiation simply be allowed?  Anyone wishing to tackle these questions likely would start with the papers of Tirole on this topic.  For one thing, these papers help explain why a second-best optimal contract may offer some rents to agents and appear to give the agent “too good a deal.”

Some of his key papers focus on asymmetric information about costs.  Say a firm knows its costs and the regulator can only guess.  Ideally the regulator would like to make the firm price at marginal cost, but the firm will pretend marginal cost is higher than it really is.  The regulator and the firm thus play a game.  Tirole figured out with rigor which principles govern how this game works and what a second-best regulatory solution might look like.  With Laffont, here is his key paper in that area.  David Baron made contributions to this area as well.  Again, there is a potential argument for an “agent rent,” to limit the incentive of the agent to lie too much about costs, for fear of losing that rent if the cooperative relationship breaks down.

Tirole, writing sometimes with Rey, produced some important papers on vertical agreements and how they can be used to extend market power: for instance, when can buying up parts of a supply chain help extend monopoly power?  His paper with Oliver Hart figures out some of the conditions under which vertical acquisitions can help foreclose a market.  With Rey, Tirole surveys the literature on vertical relations and foreclosure.

This early paper from 1984, with Drew Fudenberg, laid out the conditions under which firms should overinvest in capacity to deter competitive entry, and when firms should instead look “lean and mean” for entry deterrence.  The underlying analysis has shaped many a business school discussion.

I am a fan of this 1996 paper on how we can think of firms as credible ways of carrying reputations in a collective sense.  For instance the existence of a firm called “Google” transmits real information about the qualities of the people you deal with when you are transacting with members of the Google firm.  This was an important addition to the usual Coasean vision of thinking of a firm in terms of economizing on transaction costs.

He has written some key papers on financial intermediation, collateral, and the agency problems associated with lending, here is one well-cited paper by him and Holmstrom. Here is a non-gated version (pdf).  A key argument is that a decline in the value of the collateral in a lending relationship can lower efficiency and also output, and this can help explain some features of business cycles.  This 1997 paper was well ahead of its time and it remains one of Tirole’s most widely cited works.  Arguably it is relevant for recent financial crises.

He has a 1994 book with Mathias Dewatripont on the prudential regulation of banks and how to apply the proper incentives to make sure banks do not take too much risk at public expense.  Obviously this also has since become a much more important topic.  How many of you know his 1996 paper with Rochet on “Interbank Lending and Systemic Risk”?  They show the contradictions which can plague a “too big to fail” policy and the attempts of central banks to maintain a “creative ambiguity” about what kinds of bailouts will occur, using rigorous game theory of course.

With Rochet, he has a well-known paper on platform competition, laying out the basics of how these “two-sided” markets work.  Think of internet or payment portals which must get both sides of the market on board.  What are the efficiency properties of such markets and what are the game-theoretic issues?  In this setting, how do for-profits compare to non-profits, and competition to monopoly?  Rochet and Tirole laid out some of the basics here, and here is their survey piece on the field as a whole.  Alex’s post above has much more on these points, and Joshua Gans covers this area too; here is Vox.

In public choice economics, he and Laffont have an important paper on when regulatory capture is actually likely to occur.  I have yet to see the insights of this paper incorporated into the rest of the literature adequately.  His paper on the internal organization of government considers the relative appropriateness of high- vs. low-powered incentives as applied to government employees, among other matters.  His 1999 paper with Mathias Dewatripont, “Advocates,” shows in game-theoretic terms why something like the Anglo-American system of competing lawyers might make sense as the best way of discovering information and adjudicating the truth.  This paper shows how career concerns affect bureaucratic incentives and what is the optimal degree of specialization within a government bureaucracy.

He has thought very deeply about the nature of liquidity and what is the optimal degree of liquidity in a securities market.  There can be some side benefits to illiquidity, namely that it forces parties to stay committed to an economic relationship.  This must be weighed against the more obvious benefits of liquidity, which include having better benchmarks for measuring managerial performance, namely stock price (see this paper with Holmstrom).  This kind of analysis can be applied to the question of whether the shares of a firm should stay privately traded or be put on a public exchange.  This 1998 paper, with Holmstrom, is a key forerunner of the current view that the global economy does not have enough in the way of safe assets.

Here is his paper on vertical structure and collusion in bureaucracies (pdf).  Here is his very useful survey article, with Holmstrom, on the theory of the firm.

His textbook on Industrial Organization is a model of clarity and remains a landmark in the field, even though it came out almost thirty years ago.

He has written a book on telecommunications regulation (with Laffont) although I have never read that material.

In finance he wrote this key 1985 paper, deriving the conditions under which you can have an asset bubble in a market with rational expectations.  The problem of course is that the price of the asset tends to keep rising, relative to the size of the economy as a whole, and eventually it becomes impossible to keep on buying the asset.  This has to mean an eventual crash, unless the growth rate of the economy exceeds the general rate of return on assets.  This paper helped us think through some issues which recently have resurfaced with the work of Thomas Piketty.  His earlier 1982 paper on speculation is also relevant to this topic.  Most economists think of Tirole in terms of game theory and industrial organization, but his contributions to finance are significant as well.
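
To make the arithmetic behind that crash logic explicit, here is a minimal sketch of the rational-bubble condition in standard textbook notation (a rendering of the idea, not necessarily Tirole's exact formulation).  The price decomposes into a fundamental and a bubble term that must earn the market return:

\[
p_t = \underbrace{\sum_{s=1}^{\infty} \frac{E_t[d_{t+s}]}{(1+r)^s}}_{\text{fundamental value}} + b_t,
\qquad
E_t[b_{t+1}] = (1+r)\,b_t .
\]

If the economy grows at rate \(g\), the bubble-to-output ratio evolves as

\[
\frac{b_{t+1}}{Y_{t+1}} = \frac{1+r}{1+g}\cdot\frac{b_t}{Y_t},
\]

which explodes whenever \(r > g\).  A rational bubble can survive indefinitely only if the economy grows at least as fast as the rate of return, which is exactly the condition flagged in the paragraph above.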

Just to show his breadth, here is his paper with Roland Benabou on incentives and when they undermine the intrinsic desire to do a good job.  For instance if you pay kids to get good grades, will that backfire and kill off their own reasons for wanting to do well?  Alex covers that paper in more detail.  This other paper with Benabou, “Self-Confidence and Personal Motivation,” is a great deal of fun.  It analyzes the benefits of overconfidence, namely greater motivation, and shows how to weigh those benefits against the possible costs, namely making more mistakes.  It shows Tirole dipping a foot into the waters of behavioral economics and again reflects his versatility in terms of fields.  I like this sentence from the abstract: “On the supply side, we develop a model of self-deception through endogenous memory that reconciles the motivated and rational features of human cognition.”  Again with Benabou, here is his paper on willpower and personal rules, very much in the vein of Thomas Schelling.

Here is Tirole on intellectual property and health in developing countries, with plenty on policy.

It’s an excellent and well-deserved pick.  One point is that some other economists, such as Oliver Hart and Bengt Holmstrom, may be disappointed they were not joint picks; this would have been the time to give them the prize too, so it seems their chances have gone down.

Overall I think of Tirole as in the tradition of French theorists starting with Cournot in 1838 (!) and Jules Dupuit in the 1840s, economics coming from a perspective with lots of math and maybe even some engineering.  I don’t know anything specific about his politics, but to my eye he reads very much like a French technocrat in terms of approach and orientation.

Jean Tirole is renowned as an excellent teacher and a very nice person.

The Nobel Prize in Chemistry

Eric Betzig, one of today’s winners of the Nobel Prize in Chemistry, is a team leader at Janelia Farms, the stunning Howard Hughes Medical Institute campus located nearby in Ashburn, VA. I’ve been out to the labs at Janelia a number of times for public talks and seen how Betzig’s work creating much higher resolution microscopes has impacted research in chemistry, biology and brain science. The new microscopes can be used to look at the dynamic operation of live cells. Check out some of the “movies” produced by these techniques. Be sure to scroll down and click right to see the movie of chromosome separation (no, it’s not an animation!).

Betzig has had a very unusual career. After working at Bell Labs for six years he quit science to work in manufacturing, optimizing machines in his father’s factory. After ten years of that he wanted to get back into science, but he hadn’t had any publications for a decade so he knew that he couldn’t just ask for a job. Instead, he spent long hours at his cottage thinking of the ideas that would bring him job offers and eventually the Nobel.

Hat tip: Monique van Hoek.

Addendum: Here is Derek Lowe with more on the techniques.

Who will win the next Nobel Prize in economics?

Jon Hilsenrath says Bernanke deserves one, I agree.  I would gladly see a Bernanke-Woodford-Svensson prize, perhaps working in Mark Gertler too.

But for this year’s pick, due October 13, I am predicting William J. Baumol, possibly with William G. Bowen, for work on the cost-disease.  As you probably know, this hypothesis suggested that the costs of education and health care would continue to rise in relative terms, thereby creating significant economic problems.  Not a bad prediction for 1966, and of course it has become a truly important issue.

One problem is that the initial Baumol and Bowen hypothesis focused on the performing arts, rather than health care and education.  A lot of live performance is pretty robust, although not always European high culture, and furthermore the internet has proven a much closer substitute in the minds of consumers than many people had expected.  So the cost-disease argument, in the area where it was originally formulated, hasn’t panned out but rather has evolved into a kind of merit good demand — “I wish more people were paying for Mozart rather than for sports and live music in bars.”

A second problem is whether it should be Baumol or Baumol and Bowen.  Bowen was co-author on the major and initial work, but Baumol has numerous other contributions, including contestability, operations research and economics, entrepreneurship, externalities and Pigouvian taxes, portfolio theory, and even the older literatures on money demand and sales maximization for business firms.  One can well imagine Baumol paired with one or two other people, perhaps from industrial organization, with the cost-disease as one but not the only reason for the prize.  Or if they give it to him and Bowen, it looks more like an “economics of education” prize, with a mention of health care tacked on.

So yes, that’s my pick.  Keep in mind, people: in the past I have never, ever gotten the timing of the pick right.  Not once.  But Baumol is now ninety-two, so I think this will be his year.  Of course the Bayesian will note that last year he was ninety-one.

Banksy Comments on the Nobel Prize?

Mashable: Street artist Banksy set up a stall in New York’s Central Park Saturday, selling his original pieces — worth tens of thousands of dollars each — for $60.

The event was documented on video and posted on Banksy’s website. It took several hours for the first artwork to be sold, to a lady who managed to negotiate a 50% discount for two small canvases. There were only two more buyers, and by 6 p.m. the stall was closed with total earnings of $420.

For comparison, in 2007 Banksy’s work “Space Girl & Bird” was purchased for $578,000, and in 2008 his canvas “Keep it Spotless” was sold for $1,870,000.

What would Fama, Shiller and Hansen say about these asset prices?

Maximizing revenue for non-reproducible art is a matching process: the artist must find the handful of buyers in the world willing to pay the most (see An Economic Theory of Avant-Garde and Popular Art), so perhaps one can explain this as a failure of marketing.

An alternative explanation is that modern art is a bubble: people buy only because they expect to sell to others–take away this expectation and the art doesn’t sell. (Fashions and fads can help the latter explanation along, but there still needs to be an expectation of a future sucker buyer.)

Or perhaps Banksy is commenting on an earlier Nobel winner.

Lars Peter Hansen Nobelist

Hansen’s work is the most technical and the most difficult to explain to a layperson. The brief version is that in 1982 Hansen developed the Generalized Method of Moments, a new and elegant way to estimate many economic models that requires fewer assumptions and is often more powerful than other methods.

Here is the basic idea in a nutshell. The method of moments is an old and intuitive technique for estimating the parameters of a data generating process. A moment is an expectation of the form E(X^r), where r is a positive integer. For example if r=1 then the first moment, E(X), is just the mean (you may also know that the first and second moments together define the variance: Var(X) = E(X^2) - E(X)^2). If the true mean in the data generating process is M then we can write a moment condition, E(X)-M=0. Now the method of moments says to estimate M we should solve that condition by replacing E(X) with the sample mean. In other words, a good estimator for the unknown population mean is the sample mean, i.e. the mean in the data that you have. Pretty obvious so far.
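
To make that concrete, here is a minimal method-of-moments sketch in Python; the sample is simulated purely for illustration:

import numpy as np

# Simulated data, invented for illustration: true mean 3.0, true variance 4.0.
rng = np.random.default_rng(seed=42)
data = rng.normal(loc=3.0, scale=2.0, size=1_000)

# Moment condition E(X) - M = 0: estimate M by replacing E(X) with the sample mean.
m_hat = data.mean()

# The variance follows from the first two moments: Var(X) = E(X^2) - E(X)^2.
var_hat = (data ** 2).mean() - m_hat ** 2

print(m_hat, var_hat)  # close to the true values 3.0 and 4.0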

Now let’s start to generalize. First, there are many moments other than the mean and variance. Indeed, economic theory often provides moment conditions that may be written E(f(X,M))=0, where M now stands for a parameter that is not necessarily the mean and f can be a non-linear function. For example, rational expectations models often provide conditions of the form E(f(X))-M=0, i.e. that forecasts should equal true values on average, and macro models imply that various differences, such as changes in consumption levels, should not be correlated with past information, and so forth. Indeed, finance and macroeconomic theory provide a surplus of moment conditions, and many of these different conditions imply something about the same parameter. Now, and this is key, when we have more moment conditions than parameters we can’t choose the parameters to make all the moment conditions true, i.e. we can’t make all those moment conditions equal to zero. So what to do?

What Hansen did with the generalized method of moments is show that when we have more moment conditions than parameters we can best estimate those parameters by giving more weight to the conditions that we have better information about. In other words, if we have two conditions and we can’t force both of them to zero by a choice of parameter, then choose the parameter such that the moment condition we know the most about (least variance) is closer to zero than the one we know less about. Again, the idea is intuitive, but Hansen showed how to make these choices precisely and then proved that when the parameters are chosen in this way they have good statistical properties such as consistency (they get closer to the true values as the sample size increases). Importantly, estimating a model using these moment conditions does not require untenable assumptions on the entire distribution of returns. Hansen then also showed, as with Hansen and Hodrick (1980) and Hansen and Singleton (1982, 1983), how these methods could be applied to a large class of macro and finance models, including asset pricing; the latter links Hansen with the work of Fama and Shiller, as does the important bound discovered by Hansen and Jagannathan (1991).
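
Here is a deliberately tiny Python sketch of that weighting idea: one unknown parameter, two moment conditions of unequal precision, and a weight matrix built from the inverse variances of the sample moments. It conveys the flavor of GMM rather than Hansen's general setup, which handles nonlinear conditions, optimal weighting, and serially correlated data; all the numbers are invented:

import numpy as np
from scipy.optimize import minimize_scalar

# Invented data: two noisy measurement series of the same unknown quantity mu.
rng = np.random.default_rng(seed=0)
mu_true = 2.0
x = mu_true + rng.normal(0.0, 1.0, size=500)  # precise condition: E(x) - mu = 0
y = mu_true + rng.normal(0.0, 5.0, size=500)  # noisy condition:  E(y) - mu = 0

def gmm_objective(mu):
    # Two sample moment conditions, one parameter: both cannot be zero at once.
    g = np.array([x.mean() - mu, y.mean() - mu])
    # Weight each condition by the inverse variance of its sample moment, so the
    # condition we have better information about is pulled closer to zero.
    W = np.diag([len(x) / x.var(ddof=1), len(y) / y.var(ddof=1)])
    return g @ W @ g

mu_hat = minimize_scalar(gmm_objective, bounds=(0.0, 4.0), method="bounded").x
print(mu_hat)  # lies between the two sample means, but much closer to x.mean()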

Lars Peter Hansen, Nobel Laureate

This is a prize given for time series econometrics, in particular how to deal with imperfect data and changing variances in the variables being estimated.  Can you say “Generalized Method of Moments” (GMM)?  Hansen teaches in the economics department of the University of Chicago.

For years now journalists have asked me if Hansen might win, and if so, how they might explain his work to the general reading public.  Good luck with that one.

Here is Hansen on scholar.google.com.  Here is Hansen’s home page.  Here is Hansen on Wikipedia.  Hansen developed GMM in 1982, and here are some lecture notes on GMM:

Unlike maximum likelihood estimation (MLE), GMM does not require complete knowledge of the distribution of the data. Only specified moments derived from an underlying model are needed for GMM estimation. In some cases in which the distribution of the data is known, MLE can be computationally very burdensome whereas GMM can be computationally very easy. The log-normal stochastic volatility model is one example. In models for which there are more moment conditions than model parameters, GMM estimation provides a straightforward way to test the specification of the proposed model. This is an important feature that is unique to GMM estimation.

Here is a highly technical piece on how GMM is useful for testing macroeconomic propositions.  This is a slightly more intuitive treatment than most sources.

If you read this piece with Hodrick, you will see that Hansen’s work is instrumental for testing the advanced versions of the propositions of Fama and Shiller. In this critical sense, the three prizes are quite tightly unified.  And see this paper too with Singleton.  Here’s the most concrete sentence you are going to squeeze out of me on this one: if you want to do serious analysis of whether changing risk premia can help rationalize observed asset price movements, Hansen’s contributions will prove essential.

Robert Shiller Nobelist

Robert Shiller is best known for warning about the internet stock market bubble and later the housing bubble. What is most impressive to me, however, is this: most people who think that markets can be inefficient are anti-market, but Shiller’s solution to market problems is more markets! The housing market, for example, has traditionally had two problems. Since each house is unique, there was no market index of housing prices, so people couldn’t easily see bubbles, and if they could see them on the ground there was no easy way to short the market (to try to profit from the bubble in a way that would moderate the bubble). Moreover, because there haven’t been good housing indexes, a very large share of the average person’s wealth has been tied up in an asset that can fluctuate substantially in price. Most house buyers, in other words, are putting all their eggs in one basket and crossing their fingers that the basket doesn’t go bust. In recent years, that has been a very unfortunate bet.

Shiller’s solution to the problems in the housing market has been to make the market better: he created, with Case and Weiss, the Case-Shiller Index. For the first time, it is possible to see housing prices in real time and compare them with averages over time, and it is possible to buy options and futures on the index, which helps with forecasting. Moreover, it’s possible that in the future insurance products can be built on local versions of the index–thus you could insure yourself against big declines in the price of housing in your neighborhood.

Shiller’s housing index is also a window into how macro markets could be used to create livelihood insurance, a type of private unemployment insurance. Moral hazard and adverse selection make it difficult to protect any single individual from unemployment, but indexes of the unemployment rate of dentists or construction workers could be used to provide some insurance for workers in these fields when conditions in their entire industry are poor.

I featured one of Shiller’s biggest ideas in Entrepreneurial Economics: markets in GDP. A GDP market would allow shares of GDP to be bought and sold; add to this Hansonian prediction markets and you are a long way towards an ideal way to evaluate the effects of major policies. Moreover, a GDP market would allow the creation of many insurance products. We are all less diversified than is ideal. It would be optimal to trade some shares in US GDP for shares in World GDP, which is more stable. We can do this if we create Shiller GDP markets throughout the world.

Shiller’s book Macro Markets is truly visionary and I hope the Nobel brings a lot of attention to these ideas.

Robert Shiller, Nobel Laureate

Robert Shiller spent much of his career at Yale University.  He is famous for his analysis of speculative bubbles and price overreaction to new information, first in stock markets and later in real estate markets.  He has been a leading candidate for a Nobel Prize for some time now.

Here is Shiller’s home page.  Here is Shiller on Wikipedia.  Here are short columns by Shiller on Project Syndicate.  He also writes regularly for the Sunday New York Times, and some of his columns are here.  Here is a 2005 David Leonhardt profile of Shiller.

Shiller’s most famous piece is from 1981, “Do Stock Prices Move Too Much to be Justified by Subsequent Changes in Dividends?”  Here Shiller developed the very important “variance bounds test” for scrutinizing the rationality of stock prices.  The intuition runs like this.  Let’s say you were trying to forecast the results of Miami Heat vs. San Antonio Spurs.  The results of the actual games would show higher variance than your forecast, which would reflect your single best guess of which team is better.  In reality sometimes Miami will win, sometimes the Spurs will win, by a little, by a lot, etc.  That is a basic principle of forecasting rationality: actual outcomes should show higher variance than forecasts.  But now consider stocks.  According to Shiller, the “actual results” — namely the realized returns — are the dividends.  The forecasts of those dividends are stock prices.  Yet dividends hardly vary, while stock prices move around a great deal.  It would appear that stock prices violate this variance bounds test, because the forecast has a higher variance than the actual outcomes.  (For some push back on this, see the papers by Allan Kleidon from the 1980s.  For non-stationary time series, for instance, a variance bounds test may not hold because the population variance, as opposed to the sample variance, is infinite and thus undefined.  Another point is that stock prices may move because the stock is an option on the real assets of the firm, which have changing value, whether or not dividends ever vary, and of course dividends are consciously smoothed.)
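
A quick simulation makes the variance-bounds intuition concrete: when the forecast is a rational (conditional-expectation) forecast, its variance cannot exceed the variance of the outcome.  The setup below is invented purely for illustration:

import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

signal = rng.normal(0.0, 1.0, size=n)   # what the forecaster observes
noise = rng.normal(0.0, 1.0, size=n)    # what nobody can predict
outcome = signal + noise                # the realized result
forecast = signal                       # rational forecast: E[outcome | signal]

print(forecast.var())  # about 1.0: forecasts vary less...
print(outcome.var())   # about 2.0: ...than the outcomes they predict

Shiller’s finding is that the stock market flips this inequality: prices (the forecasts) bounce around far more than the dividend streams (the outcomes) they are supposed to predict.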

Shiller also extended his variance bounds test to the term structure of interest rates, for instance in his work with John Campbell.  Think of long rates as being forecasts of future short rates.  According to a variance bounds test, the long rates should be less volatile, but in market data we generally observe the opposite.  This again raises the possibility that markets are overreacting to new information rather than estimating values rationally.

You will note that some of Shiller’s models imply systematic returns to betting against the market and expecting some long-run mean reversion.  If the market is overreacting to new information right now, over some longer time horizon it will have to return closer to fundamental values.  So if you, as an investor, have enough patience, you should (on average) bet against short-term market movements.  Of course this hypothesis has received a good deal of empirical testing and perhaps there is some (slight) long run mean reversion, although it is not clear how much those gains can be captured after transactions costs are paid.  In any case, there may be some excess returns to buying right after prices have fallen, contrary to what a strict interpretation of efficient markets theory would suggest.

Shiller’s 1984 piece, “Stock Prices and Social Dynamics,” with Ben Friedman, started laying out a “trends” and “fads” approach to stock and also housing prices.  This integration of psychology and economics might help explain why markets appear to overreact to short-term information.

One intriguing side of Shiller is his advocacy of derivatives and prediction markets to help individuals better hedge risk.  On that see Shiller’s book Finance and the Good Society.  Shiller for instance would like to see explicit futures or forward markets in gdp, and individuals could hedge with those markets to bet against bad business prospects.  One also can imagine laborers insuring their future income by transacting in indicators of economic health.  Shiller has raised the idea of using housing price indices to help hedge against home price risk, whether for future sellers or future buyers.  This aspect of Shiller’s thought has perhaps disappointed some of his fans who have wanted him to take a more critical attitude toward finance.  Shiller instead thinks that a properly reconstituted financial sector could bring the world very significant gains, typically through superior risk hedging.  It remains a general puzzle why so many of these markets continue not to exist, and when they are sometimes started up, they fail to attract sufficient liquidity.

Shiller’s greatest practical contribution is the Case-Shiller housing price index, described by Wikipedia like this:

The Standard & Poor’s Case–Shiller Home Price Indices are repeat-sales house price indices for the United States. There are multiple Case–Shiller home price indices: A national home price index, a 20-city composite index, a 10-city composite index, and twenty individual metro area indices.

This index has become a staple of real-world financial analysis and discussion, and it is reported in the financial press on a regular basis.
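
For readers curious about the mechanics, here is a stripped-down repeat-sales calculation in Python, in the Bailey-Muth-Nourse style that the Case-Shiller methodology builds on; the real index adds interval weighting and other refinements, and the four sale pairs below are invented:

import numpy as np

# Each record: (first sale period, second sale period, log of the price ratio).
sales = [(0, 2, 0.10), (1, 3, 0.12), (0, 3, 0.20), (2, 3, 0.05)]
T = 4  # number of periods; period 0 serves as the base

# Design matrix: -1 in the first-sale period column, +1 in the second-sale
# column, with the base period's column dropped for identification.
X = np.zeros((len(sales), T - 1))
y = np.array([r for _, _, r in sales])
for i, (t1, t2, _) in enumerate(sales):
    if t1 > 0:
        X[i, t1 - 1] = -1.0
    X[i, t2 - 1] = 1.0

# Least squares recovers the cumulative log index relative to the base period.
log_index, *_ = np.linalg.lstsq(X, y, rcond=None)
index = np.exp(np.concatenate(([0.0], log_index)))  # base period normalized to 1
print(index)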

Shiller is also famous for having predicted the housing price bubble which played an important role in America’s Great Recession.  Here is one of his early pieces on that issue.  One of his research innovations was the common sense idea of simply asking home buyers what kind of future price gains they were expecting and then analyzing whether those expectations were realistic (hint: they weren’t).

Here is an on-line course with Shiller, on finance.

Eugene Fama Nobelist

As an undergraduate Fama worked for a stock forecasting service, where he was tasked with coming up with rules to make money in the market. Time and time again he would discover profitable rules, only to find that they didn’t work in new data or out of sample. In graduate school he started talking to Merton Miller, Lester Telser, and Benoît Mandelbrot and finally hit on the idea that in an efficient market price changes would not be forecastable. The rest is history.

Fama’s dissertation and famous 1970 review article, Efficient Capital Markets: A Review of Theory and Empirical Work, made efficient markets a touchstone for modern economists and finance theorists, but practitioners hated and still hate the idea. Nevertheless, test after test showed that very few mutual fund managers beat the market, and those that beat the market this year are not more likely to beat the market the next year. Chance and perhaps a few, very rare, geniuses explain the data. Eventually, hundreds of billions of dollars began to flow into index funds, and today index funds manage over $7 trillion worth of assets worldwide, making Fama the 7 trillion dollar man. Fama’s ideas have made an enormous contribution to how people invest, saving them billions in fees that would have generated beautiful homes for fortunate mutual fund managers but less than nothing for their customers.

The no free lunch principle is the most robust of the findings of the early Fama/efficient markets school. Other early findings, such as the non-forecastability of returns, have been revised. The initial finding was that returns were not forecastable, and that is true for short durations, but it is now clear that returns can be forecastable over longer horizons! In particular, variables such as the dividend/price ratio can predict stock return variation years in advance! (Robert Shiller pioneered many of these kinds of studies, as did Campbell and Cochrane.) Fama, however, contrary to how he is sometimes represented, did not reject these findings. Indeed, the less well known part of the story is that Fama, working with French (e.g. Fama and French (1988a, 1988b, 1993)), has been among the pioneers in documenting and explaining these findings. What Fama’s later work has shown is that many of the anomalies, such as time-varying returns and the higher return to so-called value firms, are real, but they are not anomalies: they are better explained as variations in risk premia tied to changes in the business cycle.

The CAPM (for which Markowitz and Sharpe won the Nobel) suggested that the only source of true (priced) risk was risk that varied with the market return. That is one important source of risk but it’s not the only one; other types of macro risk which appear to vary with the business cycle are also priced, and they are correlated with markers like the dividend/price ratio and the prospects for value and small-cap stocks. Thus, Fama showed that many of the seeming anomalies (not all, however!) of the early efficient market tests can be better explained by a market model that incorporates more sources of risk. All of this work has really been a tour de force. It’s not often that the same person creates the theory and then participates in the first revolution overturning (some of) that theory.

Fama also pioneered the event study! Fama, Fisher, Jensen, and Roll (1969) studied something a bit prosaic, stock splits, but the methodology, looking at how the stock market reacts to unexpected events, has since been used to study what happens when Senators die unexpectedly (firms they support fall), what happens in close elections, which company’s part was responsible for the Challenger space shuttle crash, and many other events.
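
The mechanics of an event study are simple enough to sketch: estimate a market model for the stock over a pre-event window, then measure how far realized returns deviate from that benchmark around the event. Everything below, including the returns and the event, is invented for illustration:

import numpy as np

rng = np.random.default_rng(seed=3)

# Invented daily returns: 120 estimation days plus an 11-day event window.
market = rng.normal(0.0005, 0.01, size=131)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.005, size=131)
stock[125] += 0.04  # a hypothetical surprise, say a stock split announcement

# Market model, fit on the estimation window: stock = alpha + beta * market.
beta, alpha = np.polyfit(market[:120], stock[:120], deg=1)

# Abnormal returns in the event window are deviations from the benchmark.
abnormal = stock[120:] - (alpha + beta * market[120:])
car = abnormal.cumsum()  # cumulative abnormal return around the event
print(car)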

Eugene Fama, Nobel Prize Laureate

Fama has been deserving of this prize for some time now.  He won basically for his empirical work on asset prices.

Here is Fama on Wikipedia.  Here is Fama on scholar.google.com.

Fama teaches at the Booth School of Business at the University of Chicago and he is very much in the classic mold of a Chicago School economist.  I feared that with the financial crisis Fama would be too unpopular a pick, because many people misinterpret “efficient markets theory,” so this was a prize which valued the economic impact of the research over the trendiness.  That said, giving the prize to Shiller as well is a nod in the direction of behavioral finance and inefficient markets theory, so the committee covered all the bases.

Fama’s first key piece dates from 1965, and then through the 1970s he laid down the foundations for tests of efficient markets theory.  The bottom line to these early tests is that publicly available information does not predict excess returns.  That does not mean market prices have to reflect fundamental values of stocks (though Fama sometimes flirts with that claim too), but it does suggest that stock-picking, in the absence of information, will not on average outperform diversification plus random stock purchases.  The variables which earlier theorists thought might predict excess returns turn out not to.  Here is a good survey article, with Burt Malkiel, on where the efficient markets hypothesis stood circa 1970.

The remarkable thing about Fama — and a point which critics often neglect — is that Fama, working with Ken French, also has provided some of the best evidence against the efficient markets hypothesis.  See for instance this 1992 piece.  And see this piece too, with French, from 1993.  Contrary to the earlier Fama work, it turns out there are some empirical predictors of excess returns, if you look hard enough for them.  The most notable of these are firm size and the ratio of book to market value.  To put that more concretely, small firms, when they sink in market valuation relative to their book values, appear to yield excess returns in future periods.  In this regard Fama’s later work is closely in tune with some of the later research by Robert Shiller.  One possible way of reading this empirical result (which, I believe, Fama has never endorsed) is that the share prices of small firms show some mean-reversion upwards after they are hit by bad news; that could result for instance from imperfect liquidity in those share markets, combined with some measure of contagious investor sentiment.

The 1992 and 1993 pieces with French are landmarks in empirical finance and they set off a much longer literature trying to find predictors of excess returns on stocks.  Note that Fama holds open the possibility that real rates of return are changing in the economy, or risk premia are changing, and thus he does not automatically identify these empirical results with market inefficiency, as he had defined that concept in his articles from the 1960s.

Perhaps surprisingly, many of Fama’s best cited pieces are on organizations, control, and property rights, in the vein of Ronald Coase.  These pieces are not what won him the prize, but he provided an early and very insightful typology and analysis of different organizational forms, such as limited liability, non-profits, publicly vs. privately-held firms, and so on.  With Jensen he wrote a seminal piece on the separation of ownership and control, and why transactions costs might give rise to such organizational forms.  I am a big fan of his 1980 piece “Agency Problems and the Theory of the Firm.”  One contribution of this article is to show how the market for managers can boost firm efficiency, even when takeovers and other ways of disciplining managers are working imperfectly.  Also see his “Agency Problems and Residual Claims,” again with Jensen.  Fama’s work on organizational forms tends to suggest that observed market structures work relatively well, for reasons related to the work of Ronald Coase and transactions costs.

Fama has many other contributions.  He wrote an insightful piece asking what is special about banks.  It could be that deposit-taking banks have special information about the businesses they are dealing with.  This was a very important piece about inflation and stock returns.  Finally, one of Fama’s 1980 papers suggested that control of currency would suffice to pin down the price level and also control aggregate demand.  This piece continues to exert a strong influence on our own Scott Sumner, who argues that currency will suffice to achieve a nominal gdp target, even when credit markets are not operating very well (I don’t agree with that claim, by the way).

It is remarkable how many top-flight, empirically important, and conceptually sharp pieces Fama contributed from 1965 through the mid-1990s, pretty much a thirty-year period.  And even after that he hardly fell apart in terms of research.  This is a very well deserved Nobel Prize.

Here is a fun New Yorker interview with Fama.  Here is Fama’s home page.  He is also an avid golfer.

Alice Munro wins the Nobel Prize in literature

She is one of my favorite authors.  If you are looking for one place to start, try Hateship, Friendship, Courtship, Loveship, Marriage: Stories, but all of her books are worth reading.  As writers go, she falls into the “behavioral” camp, so to speak.  Here is a story on Alice Munro retiring, as of earlier this year.  Here is an associated slide show.  I liked this article about her decision to retire.  She was 37 when her first collection of stories was published; her first story in The New Yorker was published in 1977, but she had been sending stories to the magazine as early as the 1950s.  Here is a good interview with Munro.