Results for “nobel”
562 found

Josh Angrist’s Nobel Prize Lecture

The Nobel Prize lectures were online this year, which gave Josh Angrist and MRU an opportunity to produce a Nobel Prize lecture unlike any before! Josh gives a commanding yet down-to-earth talk with lots of graphics, animations, and even a few guitar riffs! Indeed, Josh’s Nobel Prize lecture includes a clip from his MRU videos. Future Nobel laureates take note!

I’m also very happy that Josh focused much of his lecture on his very important work on charter schools. Watch for the stunning graph showing how Boston charter schools close the black-white achievement gap.

Josh’s work with MRU has really paid off on camera! Congrats Josh!

David Card gives a more traditional but very good lecture. Guido Imbens’s lecture is excellent; it nicely complements Josh’s and also includes some great graphics. Nobel lectures will never be the same.

The First Nobel Prize for Marginal Revolution University!

The Nobel Prize in economics this year goes to David Card, Joshua Angrist and Guido Imbens. I describe their contributions in greater detail in A Nobel Prize for the Credibility Revolution.

It’s also fun to note that Joshua Angrist mostly teaches at MIT but he also teaches a course on Mastering Econometrics at Marginal Revolution University so this is our first Nobel Prize! Here is Master Joshua on instrumental variables.

A Nobel Prize for the Credibility Revolution

The Nobel Prize goes to David Card, Joshua Angrist and Guido Imbens. If you seek their monuments, look around you. Almost all of the empirical work in economics that you read in the popular press (and plenty that doesn’t make the popular press) comes from analyzing natural experiments using techniques such as difference in differences, instrumental variables and regression discontinuity. The techniques are powerful, but the ideas behind them are also understandable by the person in the street, which has given economists a tremendous advantage when talking with the public. Take, for example, the famous minimum wage study of Card and Krueger (1994). The study is well known because of its paradoxical finding that New Jersey’s increase in the minimum wage in 1992 didn’t reduce employment at fast food restaurants and may even have increased employment. But what really made the paper great was the clarity of the methods that Card and Krueger used to study the problem.

The obvious way to estimate the effect of the minimum wage is to look at the difference in employment in fast food restaurants before and after the law went into effect. But other things are changing through time, so circa 1992 the standard approach was to “control for” other variables by also including in the statistical analysis factors such as the state of the economy. Include enough control variables, so the reasoning went, and you would uncover the true effect of the minimum wage. Card and Krueger did something different: they turned to a control group.

Pennsylvania didn’t raise its minimum wage in 1992, but it’s close to New Jersey, so Card and Krueger reasoned that whatever other factors were affecting New Jersey fast food restaurants would very likely also influence Pennsylvania fast food restaurants. The state of the economy, for example, would likely have a similar effect on demand for fast food in NJ as in PA, as would, say, the weather. In fact, the argument extends to just about any other factor that one might imagine, including demographics, changes in tastes and changes in supply costs. The standard approach circa 1992 of “controlling for” other variables requires, at the very least, that we know what other variables are important. But by using a control group, we don’t need to know what the other variables are, only that, whatever they are, they are likely to influence NJ and PA fast food restaurants similarly. Put differently, NJ and PA are similar, so what happened in PA is a good estimate of what would have happened in NJ had NJ not passed the minimum wage.

Thus Card and Krueger estimated the effect of the minimum wage in New Jersey by calculating the difference in employment in NJ before and after the law and then subtracting the difference in employment in PA before and after the law. Hence the term difference in differences. By subtracting the PA difference (i.e., what would have happened in NJ if the law had not been passed) from the NJ difference (what actually happened) we are left with the effect of the minimum wage. Brilliant!
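The arithmetic of the estimator is simple enough to fit in a few lines. Here is a minimal sketch in Python; the employment numbers are hypothetical stand-ins, not Card and Krueger’s actual data:

```python
def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Difference-in-differences: the treated group's change minus
    the control group's change over the same period."""
    return (treat_after - treat_before) - (control_after - control_before)

# Hypothetical average full-time-equivalent employment per restaurant:
# NJ (treated: minimum wage raised) and PA (control).
effect = diff_in_diff(treat_before=20.4, treat_after=21.0,
                      control_before=23.3, control_after=21.2)
print(round(effect, 1))  # -> 2.7
```

Note that the estimate can be positive even though NJ employment barely moved, because employment in the control state fell over the same period.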

Yet by today’s standards, obvious! Indeed, it’s hard to understand that circa 1992 the idea of difference in differences was not common, despite the fact that the method was actually pioneered by the physician John Snow in his identification of the causes of cholera in the 1840s and 1850s! What seems obvious today was not so obvious to generations of economists who used other, less credible, techniques even when there was no technical barrier to using better methods.

Furthermore, it’s less appreciated but not less important that Card and Krueger went beyond the NJ-PA comparison. Maybe PA isn’t a good control for NJ. Ok, let’s try another control. Some fast food restaurants in NJ were paying more than the minimum wage even before the minimum wage went into effect. Since these restaurants were always paying more than the minimum wage the minimum wage law shouldn’t influence employment at these restaurants. But these high-wage fast-food restaurants should be influenced by other factors influencing the demand for and cost of fast food such as the state of the economy, input prices, demographics and so forth. Thus, Card and Krueger also calculated the effect of the minimum wage by subtracting the difference in employment in high wage restaurants (uninfluenced by the law) from the difference in employment in low-wage restaurants. Their results were similar to the NJ-PA comparison.

The importance of Card and Krueger (1994) was not the result (which continues to be debated) but that Card and Krueger revealed to economists that there were natural experiments with plausible treatment and control groups all around us, if only we had the creativity to see them. The last thirty years of empirical economics have been the result of economists opening their eyes to the natural experiments all around them.

Angrist and Krueger’s (1991) paper Does Compulsory School Attendance Affect Schooling and Earnings? is one of the most beautiful in all of economics. It begins with a seemingly absurd strategy, and yet in the light of a few pictures it convinces the reader that the strategy isn’t absurd but brilliant.

The problem is a classic one, how to estimate the effect of schooling on earnings? People with more schooling earn more but is this because of the schooling or is it because people who get more schooling have more ability? Angrist and Krueger’s strategy is to use the correlation between a student’s quarter of birth and their years of education to estimate the effect of schooling on earnings. What?! What could a student’s quarter of birth possibly have to do with how much education a student receives? Is this some weird kind of economic astrology?

Angrist and Krueger exploit two quirks of US education. The first quirk is that a child born in late December can start first grade earlier than a child, nearly the same age, who is born in early January. The second quirk is that for many decades an individual could quit school at age 16. Put these two quirks together and what you get is that people born in the fourth quarter are a little bit more likely to have a little bit more education than similar students born in the first quarter. Scott Cunningham’s excellent textbook on causal inference, The Mixtape, has a nice diagram:

Putting it all together, what this means is that the random factor of quarter of birth is correlated with (months of) education. Who would think of such a thing? Not me. I’d scoff that you could pick up such a small effect in the data. But here come the pictures! Picture One (from a review paper, Angrist and Krueger 2001) shows quarter of birth and total education. What you see is that years of education are going up over time as it becomes more common for everyone to stay in school beyond age 16. But notice the sawtooth pattern. People who were born in the first quarter of the year get a little bit less education than people born in the fourth quarter! The difference is small, .1 or so of a year, but it’s clear the difference is there.

Ok, now for the payoff.  Since quarter of birth is random it’s as if someone randomly assigned some students to get more education than other students—thus Angrist and Krueger are uncovering a random experiment in natural data. The next step then is to look and see how earnings vary with quarter of birth. Here’s the picture.

Crazy! But there it is plain as day. People who were born in the first quarter have slightly less education than people born in the fourth quarter (figure one) and people born in the first quarter have slightly lower earnings than people born in the fourth quarter (figure two). The effect on earnings is small, about 1%, but recall that quarter of birth only changes education by about .1 of a year so dividing the former by the latter gives an estimate that implies an extra year of education increases earnings by a healthy 10%.
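That last division is the Wald instrumental-variables estimator: the instrument’s effect on the outcome divided by its effect on the treatment. A minimal sketch, using only the rough magnitudes quoted above:

```python
def wald_estimate(outcome_effect, treatment_effect):
    """IV (Wald) estimator: the instrument's effect on the outcome
    divided by its effect on the treatment."""
    return outcome_effect / treatment_effect

# Quarter of birth shifts log earnings by roughly 0.01 (1%) and
# schooling by roughly 0.1 years, so the implied return per year is:
return_to_schooling = wald_estimate(0.01, 0.1)
print(round(return_to_schooling, 2))  # -> 0.1, i.e. about 10% per year
```

The small reduced-form effect gets scaled up precisely because the instrument moves education only a little.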

Lots more could be said here. Can we be sure that quarter of birth is random? It seems random but other researchers have found correlations between quarter of birth and schizophrenia, autism and IQ perhaps due to sunlight or food-availability effects. These effects are very small but remember so is the influence of quarter of birth on earnings so a small effect can still bias the results. Is quarter of birth as random as a random number generator? Maybe not! Such is the progress of science.

As with Card and Krueger, the innovation in this paper was not the result but the method. Open your eyes, be creative, uncover the natural experiments that abound–this was the lesson of the credibility revolution.

Guido Imbens of Stanford (he grew up in the Netherlands) has been involved less in clever empirical studies and more in developing the theoretical framework. The key papers are Angrist and Imbens (1994), Identification and Estimation of Local Average Treatment Effects, and Angrist, Imbens and Rubin, Identification of Causal Effects Using Instrumental Variables, which answer the question: when we use an instrumental variable, what exactly is it that we are measuring? In a study of the flu, for example, some doctors were randomly reminded/encouraged to offer their patients the flu shot.  We can use the randomization as an instrumental variable to measure the effect of the flu shot. But note: some patients will always get a flu shot (say the elderly), and some patients will never get a flu shot (say the young). So what we are really measuring is not the effect of the flu shot on everyone (the average treatment effect) but rather the effect on the subset of patients who got the flu shot because their doctor was encouraged. That latter effect is known as the local average treatment effect: the treatment effect for those who are influenced by the instrument (the random encouragement), which is not necessarily the same as the effect of the flu shot on people who were not influenced by the instrument.
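A tiny simulation makes the distinction concrete. Everything here is invented for illustration (the patient types, effect sizes, and outcome scale), but it shows how the Wald ratio recovers the effect on compliers rather than the population-average effect:

```python
import random

random.seed(42)

# Hypothetical encouragement design: always-takers get the shot
# regardless, never-takers never do, compliers get it only if their
# doctor is randomly encouraged (z = 1). Effects are heterogeneous.
TRUE_COMPLIER_EFFECT = -0.3   # invented effect of the shot on compliers
ALWAYS_TAKER_EFFECT = -0.5    # invented, different effect on always-takers

n = 100_000
sum_y = [0.0, 0.0]
sum_d = [0.0, 0.0]
count = [0, 0]
for _ in range(n):
    z = random.randint(0, 1)                       # random encouragement
    kind = random.choice(["always", "never", "complier"])
    d = 1 if kind == "always" or (kind == "complier" and z == 1) else 0
    effect = ALWAYS_TAKER_EFFECT if kind == "always" else TRUE_COMPLIER_EFFECT
    y = 1.0 + effect * d + random.gauss(0, 0.1)    # outcome, e.g. sick days
    sum_y[z] += y
    sum_d[z] += d
    count[z] += 1

# Wald ratio: (effect of encouragement on outcome) /
#             (effect of encouragement on take-up)
wald = ((sum_y[1] / count[1] - sum_y[0] / count[0]) /
        (sum_d[1] / count[1] - sum_d[0] / count[0]))
print(round(wald, 1))  # close to the complier effect, not the average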

By the way, Imbens is married to Susan Athey, herself a potential Nobel Prize winner. Imbens-Athey have many joint papers bringing causal inference and machine learning together. The Akerlof-Yellen of the new generation. Talk about assortative matching. Angrist, by the way, was the best man at the wedding!

A very worthy trio.

The Ig Nobel Prizes

The Ig Nobel Prize in Economics this year went to Pavlo Blavatskyy for Obesity of politicians and corruption in post-Soviet countries:

We collected 299 frontal face images of 2017 cabinet ministers from 15 post-Soviet states (Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan). For each image, the minister’s body-mass index is estimated using a computer vision algorithm. The median estimated body-mass index of cabinet ministers is highly correlated with conventional measures of corruption (Transparency International Corruption Perceptions Index, World Bank worldwide governance indicator Control of Corruption, Index of Public Integrity). This result suggests that physical characteristics of politicians such as their body-mass index can be used as proxy variables for political corruption when the latter are not available, for instance at a very local level.

The Transportation prize went to researchers led by Cornell University’s Robin W. Radcliffe for determining that it is safe to transport an airborne rhinoceros upside down.

Other prizes here.

You may laugh but don’t forget that the great Andre Geim won an Ig Nobel prize in 2000 for levitating a frog and then won a Nobel prize in 2010 for graphene. I consider this one of the greatest accomplishments in all of science.

Photo Credit: Journal of Wildlife Diseases.

The Nobel Prize: Milgrom and Wilson

The 2020 Nobel Prize in Economics goes to Paul Milgrom and Robert Wilson for auction theory and the improvement of auction designs. The Nobel Committee has a popular introduction and good scientific overview of auction theory. Billions of dollars of spectrum and other natural resources have been allocated using auctions designed by Milgrom and Wilson and their co-authors.

The money won’t mean much to these winners, who have made plenty of money advising firms about how to bid in the auctions that they designed. Milgrom’s firm Auctionomics advertises its service and Milgrom notes:

Milgrom has advised bidders in radio spectrum auctions, power auctions, and bankruptcy auctions. One advisee, Comcast and its consortium, SpectrumCo, followed the advice of Milgrom’s team in FCC Auction 66 to achieve the most exceptional performance in US spectrum auction history. SpectrumCo saved nearly $1.2 billion on its spectrum license purchases compared to the prices paid by other large bidders – such as T-Mobile and Verizon – for comparable spectrum acquired at the same time in the same auction. SpectrumCo’s tactics included a $750 million jump bid – the largest in the history of US spectrum auctions and a move that prompted the FCC to change the auction rules.

You can figure that Milgrom got a percentage of those savings! Milgrom also advised Yahoo and Google, among other tech firms, on their advertising auctions.

My post Mechanism Design for Grandma written for the Hurwicz, Maskin and Myerson Nobel, has some background on auctions.

Auction theory and auction practice arose together–this is not a case of theory being rediscovered decades later by practitioners but of the demands by practitioners leading to new theory and new theory leading to new institutions. The Nobel committee notes:

In the early 1990s, an explosion of the demand for mobile communication made the U.S. federal government decide to use an auction for allocating radio-spectrum licenses among telecommunication firms. Previously, the U.S. Federal Communications Commission (FCC) had only been allowed to rely either on administrative procedures—commonly referred to as “beauty contests”—or on lotteries. These methods had notably failed in a number of complex settings, at the expense of both taxpayers and end-users…The obvious alternative is to adopt an auction to assign licenses. In fact, as early as in the 1950s, the 1991 Laureate Ronald H. Coase argued that the basic principle should be to allocate objects, such as broadcasting licenses, to the firms who will make the most efficient use of them, and the best way to identify these firms is to assign the objects through a competitive price mechanism (Coase, 1959).

…Following the FCC policy shift, multi-object auctions turned from an esoteric topic at the fringe of microeconomic theory to a hot research topic almost overnight.

…For the 1994 FCC auction, the final version of the newly designed auction was the Simultaneous Multiple Round Auction (SMRA)…[which] raised some $20 billion for the U.S. federal government, twice the forecasted amount. This outcome attracted considerable media attention and led other governments to set up their own auctions. The U.K. 3G spectrum auction that concluded in 2000 raised about $34 billion for the British government (Binmore and Klemperer, 2002). The SMRA auction format became the dominant design for spectrum sales worldwide, and versions of it have been used in Canada, Finland, Germany, India, Norway, Poland, Spain, Sweden, the U.K., and the U.S. These auctions have generated hundreds of billions of dollars for governments worldwide.

Perhaps the most impressive culmination of this work was the 2017 incentive auction, which “simultaneously” bought licenses from over-the-air broadcast television stations and resold them to modern cellular phone bidders, while respecting constraints so that over-the-air frequencies could be repackaged in ways such that they would not interfere with one another. The auction bought licenses for $10 billion and sold them for $20 billion, generating $10 billion in profit and an even larger increase in consumer surplus.

The first is a reverse auction that determines a price at which the remaining over-the-air broadcasters voluntarily relinquish their existing spectrum-usage rights. The second is a forward auction of the freed-up spectrum. In 2017, the reverse auction removed 14 channels from broadcast use, at a cost of $10.1 billion. The forward auction sold 70 MHz of wireless internet licenses for $19.8 billion, and created 14 MHz of surplus spectrum. The two stages of the incentive auction thus generated just below $10 billion to U.S. taxpayers, freed up considerable spectrum for future use, and presumably raised the expected surpluses of sellers as well as buyers.

These auctions also brought home that economics is now tied to computer science. The complexity of the allocation process was so high that new algorithms had to be devised. In particular, repackaging of the frequencies involved solving hundreds of thousands of graph-coloring problems, an NP-hard problem. Computer scientist Kevin Leyton-Brown was brought in to design and optimize the necessary algorithms. At the same time, Milgrom and Segal had to prove that their auction could be characterized in such a way that it could be solved in reasonable time by known algorithms.
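To see why repacking is a coloring problem: stations are nodes, an edge joins any two stations that would interfere on a shared channel, and channels are the colors. The sketch below uses a simple greedy heuristic on an invented four-station graph; the actual FCC repacking relied on sophisticated SAT-based solvers, not greedy coloring:

```python
def greedy_color(adjacency):
    """Assign each node the smallest channel (color) not already used
    by a colored neighbor, visiting high-degree nodes first."""
    colors = {}
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {colors[nbr] for nbr in adjacency[node] if nbr in colors}
        colors[node] = next(c for c in range(len(adjacency) + 1)
                            if c not in used)
    return colors

# Toy interference graph: A, B, C mutually interfere; D interferes with A.
stations = {"A": {"B", "C", "D"}, "B": {"A", "C"},
            "C": {"A", "B"}, "D": {"A"}}
channels = greedy_color(stations)
# The A-B-C triangle forces three distinct channels; D can reuse one.
```

Greedy coloring gives a valid assignment but not necessarily the fewest channels, which is exactly why the real problem, deciding whether the remaining stations fit in the shrunken band, required heavy-duty solvers.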

Computer scientist Tim Roughgarden has an excellent video lecture on how implementing the incentive auction required a combination of cutting-edge economics and computer science. More generally, mechanism design in the real world, including auction design, Uber’s supply and demand mechanism, blockchains like bitcoin and many other examples, requires both economists and computer scientists to devise institutions and algorithms that incentivize socially beneficial behavior and that can also be solved in real time for real populations.

See Tyler’s post on Milgrom and Tyler’s post on Wilson for much more, going well beyond their contributions to auction theory.

Paul Milgrom, Nobel Laureate

Most of all this is a game theory prize and an economics of information prize.  Much of the work has had applications to auctions and finance.  Basically Milgrom was the most important theorist of the 1980s, during the high point of economic theory and its influence.

Here is Milgrom’s (very useful and detailed) Wikipedia page.  Most of his career he has been associated with Stanford University, with one stint at Yale for a few years.  A very good choice and widely anticipated, in the best sense of that term.  Here is his YouTube presence.  Here is his home page.

Milgrom, working with Nancy Stokey, developed what is called the “no trade” theorem, namely the conditions under which market participants will not wish to trade with each other.  Obviously if someone wants to trade with you, you have to wonder — what does he/she know that I do not?  Under most reasonable assumptions, it is hard to generate a high level of trading volume, and that has remained a puzzle in theories of finance and asset pricing.  People are still working on this problem, and of course it relates to work by Nobel Laureate Robert Aumann on when people should rationally disagree with each other.

Building on this no-trade result, Milgrom wrote a seminal piece with Lawrence Glosten on the bid-ask spread.  What determines the bid-ask spread in securities markets?  It is the risk that the person you are trading with might know more than you do.  You will trade with them only when the price is somewhat more advantageous to you, so markets with higher degrees of asymmetric information will have higher bid-ask spreads.  This is Milgrom’s most widely cited paper, and it is personally my favorite piece of his; it had a real impact on me when I read it.  You can see that the themes of common knowledge and asymmetric information, so important for the auctions work, already are rampant.
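In the simplest textbook version of the Glosten-Milgrom model the logic can be computed directly: the asset is worth 1 or 0 with equal probability, a fraction pi of traders know the value, and the rest buy or sell at random. The market maker quotes conditional expectations, and the spread works out to exactly pi. A sketch (the parameter value is arbitrary):

```python
def glosten_milgrom_quotes(pi):
    """Simplest Glosten-Milgrom setup: value is 1 or 0 with equal
    probability; informed traders (fraction pi) buy iff the value is 1,
    noise traders buy or sell with probability 1/2 each.
    Ask = E[value | buy] = (1 + pi) / 2, Bid = E[value | sell]."""
    ask = (1 + pi) / 2
    bid = (1 - pi) / 2
    return bid, ask

bid, ask = glosten_milgrom_quotes(0.2)
# spread = ask - bid = pi: more informed trading, wider spread
```

With no informed traders (pi = 0) both quotes collapse to the unconditional expectation 0.5 and the spread vanishes, which is the asymmetric-information point in miniature.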

Alex will tell you more about auctions, but Milgrom working with Wilson has designed some auctions in a significant way, see Wikipedia:

Milgrom and his thesis advisor Robert B. Wilson designed the auction protocol the FCC uses to determine which phone company gets what cellular frequencies. Milgrom also led the team that designed the 2016-17 incentive auction, which was a two-sided auction to reallocate radio frequencies from TV broadcast to wireless broadband uses.

Here is Milgrom’s 277-page book on putting auction theory to practical use.  Here is his highly readable JEP survey article on auctions and bidding, for an introduction to Milgrom’s prize maybe start there?

Here is Milgrom’s main theoretical piece on auctions, dating from Econometrica 1982 and co-authored with Robert J. Weber.  It compared the revenue properties of different auctions and showed that, with risk-neutral bidders and affiliated values, the ascending English auction yields a higher expected price than the second-price auction, which in turn beats the first-price auction.  Also returning to the theme of imperfect information and the bid-ask spread, it showed that an expert appraisal would make bidders more eager to bid and thus raise the expected price.  I think of Milgrom’s work as having very consistent strands.
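For reference, the second-price (Vickrey) rule itself takes only a few lines; the bidders and bids below are invented:

```python
def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid -- the rule
    that makes truthful bidding a dominant strategy."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

winner, price = second_price_auction({"ann": 120, "bob": 100, "cal": 90})
# ann wins and pays 100, bob's bid, not her own
```

Because your bid affects only whether you win and never what you pay when you win, shading below your true value can only cost you the object.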

With Bengt Holmstrom, also a Nobel winner, Milgrom wrote on principal-agent theory with multiple tasks, basically trying to explain why explicit workplace incentives and bonuses are not used more widely.  Simple linear incentives can be optimal because they do not distort the allocation of effort across tasks so much, and it turned out that the multi-task principal agent problem was quite different from the single-task problem.

People used to think that John Roberts would be a co-winner, based on the famous Milgrom and Roberts paper on entry deterrence.  Basically incumbent monopolists can signal their cost advantage by making costly choices and thereby scare away potential entrants.  And the incumbent wishes to be tough with early entrants to signal to later entrants that they had better stay away.  In essence, this paper was viewed as a major rebuttal to the Chicago School economists, who had argued that predatory behavior from incumbents typically was costly, irrational, and would not persist.

The absence of Roberts’s name on this award indicates a nudge in the direction of auction design and away from game theory a bit — the Nobel Committee just loves mechanism design!

That said, it is worth noting that the work of Milgrom and co-authors intellectually dominated the 1980s and can be identified with the peak of influence of game theory at that period of time.  (Since then empirical economics has become more prominent in relative terms.)

Milgrom and Roberts also published a once-famous paper on supermodular games in 1990.  I’ve never read it, but I think it has something to do with bounding the set of strategies in complex settings using general principles.  This was in turn an attempt to make game theory more general.  I am not sure it succeeded.

Milgrom and Roberts also produced a well-known paper finding the possible equilibria in a signaling model of advertising.

Milgrom and Roberts also wrote a series of papers on rent-seeking and “influence activities” within firms.  It always seemed to me this was his underrated work and it deserved more attention.  Among other things, this work shows how hard it is to limit internal rent-seeking by financial incentives (which in fact can make the problem worse), and you will see this relates to Milgrom’s broader work on multi-task principal-agent problems.

Milgrom also has a famous paper with Kreps, Wilson, and Roberts, so maybe Kreps isn’t going to win either.  They show how a multi-period prisoner’s dilemma might sustain cooperating rather than “Finking” if there is asymmetric information about types and behavior.  This paper increased estimates of the stability of tit-for-tat strategies, if only because with uncertainty you might end up in a highly rewarding loop of ongoing cooperation.  This combination of authors is referred to as the “Gang of Four,” given their common interests at the time and some common ties to Stanford.  You will note it is really Milgrom (and co-authors) who put Stanford economics on the map, following on the Kenneth Arrow era (when Stanford was not quite yet a truly top department).

Not what he is famous for, but here is Milgrom’s paper with Roberts trying to rationalize some of the key features of modern manufacturing.  If nothing else, this shows the breadth of his interests and how he tries to apply game theory generally.  One question they consider is why modern manufacturing has moved so strongly in the direction of greater flexibility.

Milgrom also has a 1990 piece with North and Weingast on the medieval merchant guilds and the economics of reputation, showing his more applied side.  In essence the Law Merchant served as a multilateral reputation mechanism and enforced cooperation.  Here is a 1994 follow-up.  This work paved the way for later work by Avner Greif on related themes.

Another undervalued Milgrom piece is with Sharon Oster (mother of Emily Oster), or try this link for it.  Here is the abstract:

The Invisibility Hypothesis holds that the job skills of disadvantaged workers are not easily discovered by potential new employers, but that promotion enhances visibility and alleviates this problem. Then, at a competitive labor market equilibrium, firms profit by hiding talented disadvantaged workers in low-level jobs. Consequently, those workers are paid less on average and promoted less often than others with the same education and ability. As a result of the inefficient and discriminatory wage and promotion policies, disadvantaged workers experience lower returns to investments in human capital than other workers.

With multiple, prestigious co-authors he has written in favor of prediction markets.

He was the doctoral advisor of Susan Athey, and in Alex’s post you can read about his auction advising and the companies he has started.

His wife, Eva Meyersson Milgrom, is herself a renowned social scientist and sociologist, and he met her in 1996 while seated next to her at a Nobel Prize dinner in Stockholm.  Here is one of his papers with her (and Ravi Singh), on whether firms should share control with outsiders.  Here is the story of their courtship.

Robert B. Wilson, Nobel Laureate

Here is his home page.  He has been at Stanford Business School since 1964 and was born in Geneva, Nebraska.  Here is his personal website.  Here is his Wikipedia page.  He has a doctorate in business administration from Harvard, but actually no economics Ph.D. (bravo!)  Here is the Nobel designation.

Most of all Wilson is an economic theorist, doing much of his most influential work in or around the 1980s.  He is a little hard to google (no, he did not work with Philip Glass), but here are his best-cited papers.  To be clear, he won mainly for his work in auction theory and practice, covered by Alex here.  But here is some information about the rest of his highly illustrious career.

He and David Kreps wrote a very famous paper about deterrence.  Basically an incumbent wishes to develop a reputation for being tough with potential entrants, so as to keep them out of the market.  This was one of the most influential papers of the 1980s, and it also helped to revive some of the potential intellectual case for antitrust activism.  Here is Wilson’s survey article on strategic approaches to entry deterrence.

Wilson has a famous paper with Kreps, Milgrom, and Roberts.  They show how a multi-period prisoner’s dilemma might sustain cooperating rather than “Finking” if there is asymmetric information about types and behavior.  This paper increased estimates of the stability of tit-for-tat strategies, if only because with uncertainty you might end up in a highly rewarding loop of ongoing cooperation.  This combination of authors is referred to as the “Gang of Four,” given their common interests at the time and some common ties to Stanford.

His 1982 piece with David Kreps on “sequential equilibria” was oh so influential on game theory; here is the abstract:

We propose a new criterion for equilibria of extensive games, in the spirit of Selten’s perfectness criteria. This criterion requires that players’ strategies be sequentially rational: Every decision must be part of an optimal strategy for the remainder of the game. This entails specification of players’ beliefs concerning how the game has evolved for each information set, including information sets off the equilibrium path. The properties of sequential equilibria are developed; in particular, we study the topological structure of the set of sequential equilibria. The connections with Selten’s trembling-hand perfect equilibria are given.

Here is a more readable exposition of the idea.  This was part of a major effort to figure out how people actually would play in games, and which kinds of solution concepts economists should put into their models.  I don’t think the matter ever was settled, and arguably it has been superseded by behavioral and computational and evolutionary approaches, but Wilson was part of the peak period of applying pure theory to this problem and this might have been the most important theory piece in that whole tradition.

From Wikipedia:

Wilson’s paper “The Theory of Syndicates,” which was published in Econometrica in 1968, influenced a whole generation of students from economics, finance, and accounting. The paper poses a fundamental question: Under what conditions does the expected utility representation describe the behavior of a group of individuals who choose lotteries and share risk in a Pareto-optimal way?

Link here, this was a contribution to social choice theory and fed into Oliver Hart’s later work on when shareholder unanimity for a corporation would hold.  It also connects to the later Milgrom work, some of it with Wilson, on when people will agree about the value of assets.

Here is Wilson’s book on non-linear pricing: “What do phone rates, frequent flyer programs, and railroad tariffs all have in common? They are all examples of nonlinear pricing. Pricing is nonlinear when it is not strictly proportional to the quantity purchased. The Electric Power Research Institute has commissioned Robert Wilson to review the various facets of nonlinear pricing.”  Yes, he is a business school guy.  Here is his survey article on electric power pricing, a whole separate direction of his research.

Here is his 1989 law review article about Pennzoil vs. Texaco, with Robert H. Mnookin.

Wilson also did a piece with Gul and Sonnenschein, laying out the different implications of various game-theoretic conjectures for the Coase conjecture, namely the claim that a durable goods monopolist will end up having to sell at competitive prices, due to the patience of consumers and their unwillingness to buy at higher prices.

Wilson was the dissertation advisor of Alvin E. Roth, Nobel Laureate, and here the two interview each other, recommended.  Excerpt:

Wilson: As an MBA student in 1960, I wrote a class report on how to bid in an auction that got a failing grade because it was not “managerial.”

And here is an Alvin Roth blog post on the prize and the intellectual lineage.

The bottom line?  If you are a theorist, Stockholm is telling you to build up some practical applications  — at the very least pull something out of your closet and sell it on eBay!  A lot of people thought Roberts and maybe Kreps would be in on this Prize, but they are not.  The selections themselves are clearly deserving and have been “in play” for many years in the Nobel discussions.  But again, we see the committee drawing clear and distinct lines.

Let’s see what they do next year!

Are Nobel Prizes worth less these days?

It would seem so, now there are lots of them, here is one part of my Bloomberg column:

The Nobel Peace Prize this year went to the World Food Programme, part of the United Nations. Yet the Center for Global Development, a leading and highly respected think tank, ranked the winner dead last out of 40 groups as measured for effectiveness. Another study, by economists William Easterly and Tobias Pfutze in 2008, was also less than enthusiastic about the World Food Programme.

The most striking feature of the award is not that the Nobel committee might have gotten it wrong. Rather, it is that nobody seems to care. The issue has popped up on Twitter, but it is hardly a major controversy.

I also noted that the Nobel Laureates I follow on Twitter, in the aggregate, seem more temperamental than the 20-year-olds (and younger) that I follow.  Hail Martin Gurri!

And this:

The internet diminishes the impact of the prize in yet another way. Take Paul Romer, a highly deserving laureate in economics in 2018. To his credit, many of Romer’s ideas, such as charter cities, had been debated actively on the internet, in blogs and on Twitter and Medium, for at least a decade. Just about everyone who follows such things expected that Romer would win a Nobel Prize, and when he did it felt anticlimactic. In similar fashion, the choice of labor economist David Card (possibly with co-authors) also will feel anticlimactic when it comes, as it likely will.

Card with co-authors, by the way, is my prediction for tomorrow.

A Fine Theorem on RCTs and the new Nobel Laureates

In this vein, randomized trials tend to have very small sample sizes compared to observational studies. When this is combined with high “leverage” of outlier observations when multiple treatment arms are evaluated, particularly for heterogeneous effects, randomized trials often predict poorly out of sample even when unbiased (see Alwyn Young in the QJE on this point). Observational studies allow larger sample sizes, and hence often predict better even when they are biased. The theoretical assumptions of a structural model permit parameters to be estimated even more tightly, as we use a priori theory to effectively restrict the nature of economic effects.

We have thus far assumed the randomized trial is unbiased, but that is often suspect as well. Even if I randomly assign treatment, I have not necessarily randomly assigned spillovers in a balanced way, nor have I restricted untreated agents from rebalancing their effort or resources. A PhD student of ours on the market this year, Carlos Inoue, examined the effect of random allocation of a new coronary intervention in Brazilian hospitals. Following the arrival of this technology, good doctors moved to hospitals with the “randomized” technology. The estimated effect is therefore nothing like what would have been found had all hospitals adopted the intervention. This issue can be stated simply: randomizing treatment does not in practice hold all relevant covariates constant, and if your response is just “control for the covariates you worry about”, then we are back to the old setting of observational studies where we need a priori arguments about what these covariates are if we are to talk about the effects of a policy.
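The bias-variance point in the quoted passage can be sketched in a few lines of simulation. All numbers here (true effect, bias, noise, sample sizes) are assumed for illustration: an unbiased small-sample experiment can still have higher mean squared error than a biased large-sample observational study.

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0
BIAS = 0.3           # assumed confounding bias in the observational estimate
SIGMA = 2.0          # outcome noise
N_RCT, N_OBS = 50, 2000
REPS = 500

def mean(xs):
    return sum(xs) / len(xs)

def estimate(n, bias):
    # difference in means between a treated and a control group of size n
    treated = [TRUE_EFFECT + bias + random.gauss(0, SIGMA) for _ in range(n)]
    control = [random.gauss(0, SIGMA) for _ in range(n)]
    return mean(treated) - mean(control)

mse_rct = mean([(estimate(N_RCT, 0.0) - TRUE_EFFECT) ** 2 for _ in range(REPS)])
mse_obs = mean([(estimate(N_OBS, BIAS) - TRUE_EFFECT) ** 2 for _ in range(REPS)])

print(f"RCT MSE (unbiased, n={N_RCT}):           {mse_rct:.3f}")
print(f"Observational MSE (biased, n={N_OBS}): {mse_obs:.3f}")
```

With these parameters the biased-but-large observational design wins on mean squared error, which is the tradeoff the post describes.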

There is much more of interest in the post, very high quality as you might expect given the source.

Michael Kremer, Nobel laureate

To Alex’s excellent treatment I will add a short discussion of Kremer’s work on deworming (with co-authors, most of all Edward Miguel), here is one summary treatment:

Intestinal helminths—including hookworm, roundworm, whipworm, and schistosomiasis—infect more than one-quarter of the world’s population. Studies in which medical treatment is randomized at the individual level potentially doubly underestimate the benefits of treatment, missing externality benefits to the comparison group from reduced disease transmission, and therefore also underestimating benefits for the treatment group. We evaluate a Kenyan project in which school-based mass treatment with deworming drugs was randomly phased into schools, rather than to individuals, allowing estimation of overall program effects. The program reduced school absenteeism in treatment schools by one-quarter, and was far cheaper than alternative ways of boosting school participation. Deworming substantially improved health and school participation among untreated children in both treatment schools and neighboring schools, and these externalities are large enough to justify fully subsidizing treatment. Yet we do not find evidence that deworming improved academic test scores.
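The externality logic in the abstract can be illustrated with a toy calculation (the parameters below are made up, not the paper’s estimates): when treatment also cuts transmission to untreated schoolmates, randomizing at the individual level shrinks the treatment-control contrast and understates the effect, while randomizing whole schools captures it.

```python
# Assumed parameters (illustrative only, not from Miguel and Kremer)
p0 = 0.40      # baseline infection probability with no one treated
direct = 0.50  # proportional reduction in own infection risk from deworming
spill = 0.40   # proportional reduction in transmission when everyone nearby is treated

def infection_prob(treated, frac_treated_in_school):
    p = p0 * (1 - spill * frac_treated_in_school)  # less transmission around you
    return p * (1 - direct) if treated else p

# Individual-level randomization: half of each school treated,
# so controls also benefit from reduced transmission.
est_individual = infection_prob(False, 0.5) - infection_prob(True, 0.5)

# School-level (cluster) randomization: whole schools treated or untreated.
est_cluster = infection_prob(False, 0.0) - infection_prob(True, 1.0)

print(f"individual-level estimate: {est_individual:.3f}")
print(f"school-level estimate:     {est_cluster:.3f}")
```

The cluster estimate is larger because it includes both the direct effect and the externality, which is why the Kenyan study randomized the phase-in by school.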

If you do not today have a worm, there is some chance you have Michael Kremer to thank!

With Blanchard, Kremer also has an excellent and these days somewhat neglected piece on central planning and complexity:

Under central planning, many firms relied on a single supplier for critical inputs. Transition has led to decentralized bargaining between suppliers and buyers. Under incomplete contracts or asymmetric information, bargaining may inefficiently break down, and if chains of production link many specialized producers, output will decline sharply. Mechanisms that mitigate these problems in the West, such as reputation, can only play a limited role in transition. The empirical evidence suggests that output has fallen farthest for the goods with the most complex production process, and that disorganization has been more important in the former Soviet Union than in Central Europe.

Kremer with co-authors also did excellent work on the benefits of school vouchers in Colombia.  And here is Kremer’s work on teacher incentives — incentives matter!  His early piece on wage inequality with Maskin, from 1996, was way ahead of its time.  And don’t forget his piece on peer effects and alcohol use: many college students think the others are drinking more than in fact they are, and publicizing the lower actual level of drinking can diminish alcohol abuse problems.  The Hajj has an impact on the views of its participants, and “… these results suggest that students become more empathetic with the social groups to which their roommates belong.” link here.

And don’t forget his famous paper titled “Elephants.”  Under some assumptions, the government should buy up a large stock of ivory tusks and dump them on the market strategically, to ruin the returns of elephant speculators at just the right time.  No one before had worked through the question of how to stop speculation in such forbidden and undesirable commodities.

Michael Kremer has produced a truly amazing set of papers.

The Nobel Prize in Economic Science Goes to Banerjee, Duflo, and Kremer

The Nobel Prize goes to Abhijit Banerjee, Esther Duflo and Michael Kremer (links to home pages) for field experiments in development economics. Esther Duflo was a John Bates Clark Medal winner, a MacArthur “genius” award winner, and is now the second woman to win the economics Nobel and by far the youngest person to ever win the economics Nobel (Arrow was the previous youngest winner!). Duflo and Banerjee are married so these are also the first spouses to win the economics Nobel although not the first spouses to win Nobel prizes–there was even one member of a Nobel prize winning spouse-couple who won the Nobel prize in economics. Can you name the spouses?

Michael Kremer wrote two of my favorite papers ever. The first is Patent Buyouts which you can find in my book Entrepreneurial Economics: Bright Ideas from the Dismal Science. The idea of a patent buyout is for the government to buy a patent and rip it up, opening the idea to the public domain. How much should the government pay? To decide this they can hold an auction. Anyone can bid in the auction but the winner receives the patent only, say, 10% of the time; the other 90% of the time the patent is bought by the government at the market price. The value of this procedure is that 90% of the time we get all the incentive properties of the patent without any of the monopoly costs. Thus, we eliminate the innovation tradeoff. Indeed, the government can even top the market price up by, say, 15% in order to increase the incentive to innovate. You might think the patent buyout idea is unrealistic. But in fact, Kremer went on to pioneer an important version of the idea, the Advance Market Commitment for Vaccines, which was used to guarantee a market for the pneumococcal vaccine which has now been given to some 143 million children. Bill Gates was involved with governments in supporting the project.
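A back-of-the-envelope sketch of the buyout arithmetic, with made-up numbers (these are not from Kremer’s paper):

```python
# Illustrative patent buyout: the auction reveals the patent's private
# (monopoly) value; the government buys at a marked-up price 90% of the time.
private_value = 100.0   # patent's value under monopoly, revealed by the auction
markup = 0.15           # government top-up over the auction price
p_keep = 0.10           # probability the auction winner actually keeps the patent

buyout_price = private_value * (1 + markup)

# Expected payment to the inventor: auction price if the winner keeps the
# patent, marked-up price when the government buys it out.
expected_reward = p_keep * private_value + (1 - p_keep) * buyout_price

print(expected_reward)                    # 113.5
print(expected_reward > private_value)    # True: incentive is stronger than a patent
```

The inventor’s expected reward exceeds the patent’s monopoly value, while 90% of the time the idea lands in the public domain and the monopoly deadweight loss is avoided.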

My second Kremer paper is Population Growth and Technological Change: One Million B.C. to 1990. An economist examining one million years of the economy! I like to say that there are two views of humanity, people are stomachs or people are brains. In the people are stomachs view, more people means more eaters, more takers, less for everyone else. In the people are brains view, more people means more brains, more ideas, more for everyone else. The people are brains view is my view and Paul Romer’s view (ideas are nonrivalrous). Kremer tests the two views. He shows that over the long run economic growth increased with population growth. People are brains.
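The “people are brains” mechanism in Kremer’s model can be sketched in a few lines; the parameters below are illustrative, not estimated. Technology growth is proportional to population, and a Malthusian economy converts technology gains into more people, so population growth accelerates as population rises:

```python
# Toy version of Kremer's mechanism. Assumptions: Malthusian population
# tracks technology as P = A**(1/(1-alpha)); ideas arrive in proportion
# to the number of people.
g = 0.1       # ideas produced per person per period (assumed)
alpha = 0.5   # labor share, pinning down the Malthusian population level

A, pops = 1.0, []
for _ in range(8):
    P = A ** (1 / (1 - alpha))
    pops.append(P)
    A *= 1 + g * P   # more people generate more ideas

growth_rates = [pops[i + 1] / pops[i] - 1 for i in range(len(pops) - 1)]
print([round(r, 3) for r in growth_rates])  # growth rate rises with population
```

The growth rate of population increases period after period, which is the long-run pattern Kremer documents from one million B.C. onward.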

Oh, and can I add a third Kremer paper? The O-Ring Model of Development is a great and deep paper. (MRU video on the O-ring model).
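The core of the O-ring production function is easy to sketch: output is the product of task success probabilities, so skills are complements and total output is higher when high-skill workers are matched together. The skill levels here are illustrative:

```python
# O-ring production: each worker performs one task; skill q is the
# probability the task is done without error, and one error ruins output.
def output(*skills):
    total = 1.0
    for q in skills:
        total *= q
    return total

high, low = 0.9, 0.5
sorted_matching = output(high, high) + output(low, low)   # 0.81 + 0.25
mixed_matching = output(high, low) + output(low, high)    # 0.45 + 0.45
print(sorted_matching, mixed_matching)
```

Positive assortative matching beats mixing, which is why, in the model, firms and countries segregate sharply by skill.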

The work for which the Nobel was given is for field experiments in development economics. Kremer began this area of research with randomized trials of educational policies in Kenya. Duflo and Banerjee then deepened and broadened the use of field experiments and in 2003 established the Poverty Action Lab which has been the nexus for field experiments in development economics carried on by hundreds of researchers around the world.

Much has been learned in field experiments about what does and also doesn’t work. In Incentives Work, Duflo, Hanna and Ryan created a successful program to monitor and reduce teacher absenteeism in India, a problem that Michael Kremer had shown in Missing in Action was very serious with some 30% of teachers not showing up on a typical day. But when they tried to institute a similar program for nurses in Putting a Band-Aid on A Corpse the program was soon undermined by local politicians and “Eighteen months after its inception, the program had become completely ineffective.” Similarly, Banerjee, Duflo, Glennerster and Kinnan find that Microfinance is ok but no miracle (sorry fellow laureate Muhammad Yunus). A frustrating lesson has been the context dependent nature of results and the difficulty of finding external validity. (Lant Pritchett in a critique of the “randomistas” argues that real development is based on macro-policy rather than micro-experiment. See also Bill Easterly on the success of the Washington Consensus.)

Duflo, Kremer and Robinson study How High Are Rates of Return to Fertilizer? Evidence from Field Experiments in Kenya. This is an especially interesting piece of research because they find that rates of return are very high but that farmers don’t use much fertilizer. Why not? The reasons seem to have much more to do with behavioral biases than with rationality. Some interventions help:

Our findings suggest that simple interventions that affect neither the cost of, nor the payoff to, fertilizer can substantially increase fertilizer use. In particular, offering farmers the option to buy fertilizer (at the full market price, but with free delivery) immediately after the harvest leads to an increase of at least 33 percent in the proportion of farmers using fertilizer, an effect comparable to that of a 50 percent reduction in the price of fertilizer (in contrast, there is no impact on fertilizer adoption of offering free delivery at the time fertilizer is actually needed for top dressing). This finding seems inconsistent with the idea that low adoption is due to low returns or credit constraints, and suggests there may be a role for non–fully rational behavior in explaining production decisions.

This is reminiscent of people in developed countries who don’t adjust their retirement savings rates to take advantage of employer matches. (A connection to Thaler’s work).

Duflo and Banerjee have conducted many of their field experiments in India and have looked at not just conventional questions of development economics but also at politics. In 1993, India introduced a constitutional rule that said that each state had to reserve a third of all positions as chair of village councils for women. In a series of papers, Duflo studies this natural experiment which involved randomization of villages with women chairs. In Women as Policy Makers (with Chattopadhyay) she finds that female politicians change the allocation of resources towards infrastructure of relevance to women. In Powerful Women (Beaman et al.) she finds that having once had a female village leader increases the prospects of future female leaders, i.e. exposure reduces bias.

Before Banerjee became a randomista, he was a theorist. His A Simple Model of Herd Behavior is also a favorite. The essence of the model can be explained in a simple example (from the paper). Suppose there are two restaurants A and B. The prior probability is that A is slightly more likely to be a better restaurant than B but in fact B is the better restaurant. People arrive at the restaurants in sequence and as they do they get a signal of which restaurant is better and they also see what choice the person in front of them made. Suppose the first person in line gets a signal that the better restaurant is A (contrary to fact). They choose A. The second person then gets a signal that the better restaurant is B. The second person in line also sees that the first person chose A, so they now know one signal is for A and one is for B; the prior favors A, so the weight of the evidence is for A and the second person also chooses restaurant A. The next person in line also gets the B signal but for the same reasons they also choose A. In fact, everyone chooses A even if 99 out of 100 signals are B. We get a herd. The sequential information structure means that the information is wasted. Thus, how information is distributed can make a huge difference to what happens. A lot of lessons here for tweeting and Facebook!
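The restaurant example can be replayed mechanically. This is a minimal sketch of the model: one private signal per diner, the prior acting only as a tie-breaker, and rational observers who know that a choice forced by the herd reveals nothing about the chooser’s own signal.

```python
# Banerjee-style information cascade, replaying the restaurant example.
def run_cascade(signals):
    inferred = 0   # net signal count (+1 per A, -1 per B) revealed by past choices
    choices = []
    for s in signals:
        own = 1 if s == "A" else -1
        # weigh inferred signals plus your own; ties go to A (the prior)
        choice = "A" if inferred + own >= 0 else "B"
        # observers can back out the signal only if the choice depends on it
        informative = (inferred + 1 >= 0) != (inferred - 1 >= 0)
        if informative:
            inferred += own
        choices.append(choice)
    return choices

# First diner happens to get an A signal; everyone after gets a B signal.
print(run_cascade(["A"] + ["B"] * 9))  # everyone herds on A
```

After the first A choice, the second diner’s action is uninformative (A regardless of signal), so no later diner ever learns anything new: the B signals are wasted, exactly as in the paper.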

Banerjee is also the author of some original and key pieces on Indian economic history, most notably History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India (with Iyer).

Duflo’s TED Talk. Previous Duflo posts; Kremer posts; Banerjee posts on MR.

Before last year’s Nobel announcement Tyler wrote:

I’ve never once gotten it right, at least not for exact timing, so my apologies to anyone I pick (sorry Bill Baumol!). Nonetheless this year I am in for Esther Duflo and Abhijit Banerjee, possibly with Michael Kremer, for randomized control trials in development economics.

As Tyler predicted, he was wrong and also right. Thus, this year’s win is well-timed and well-deserved. Congratulations to all.