
Nobel Prizes: Al Roth and Lloyd Shapley

Great choices. Al Roth for matching and the design of new types of markets. Lloyd Shapley for fundamental contributions to game theory and mathematical economics, including the Gale-Shapley algorithm, which is a cornerstone of the matching methods Al Roth pioneered. I am especially pleased about this because of Roth’s great work on improving kidney allocation. Here is Roth’s blog, Market Design, and here he is giving a talk at Google. Here is what I wrote in 2010 about Roth:

Roth has applied heavy-duty theory to the very practical problems of matching doctors to residency programs, children to schools, economists to departments and kidneys to patients in a way that is stable, incentive-compatible, and maximizes the gains from exchange.  In my view, Roth is the most influential economist working today. Influential among other economists?  Yes.  But what I really mean is influential in the world.

Previous posts on MR about Roth (also here). Roth’s papers.

More soon.

Noble Matching

In honor of the Nobel prizes to Al Roth and Lloyd Shapley, here is a primer on matching theory. Matching is a fundamental property of many markets and social institutions. Jobs are matched to workers, husbands to wives, doctors to hospitals, kidneys to patients.

The field of matching may be said to start with the Gale-Shapley deferred acceptance algorithm. Here is how it works, applied to men and women and marriage (n.b. the algorithm can also work for gay marriage but it’s a little easier to explain and implement with men and women). Each man proposes to his first-ranked choice. Each woman keeps her top-ranked suitor but defers accepting the proposal. Each woman also rejects her lower-ranked suitors. Each rejected man proposes to his second-ranked choice. Each woman again rejects any lower-ranked suitors, which may include previous suitors who have now become lower-ranked. The process repeats until no further proposals are made; each woman then accepts her top-ranked suitor and the matches are made.
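The procedure above is short enough to sketch in code. Here is a minimal Python illustration (the function name and data layout are my own, not from any standard library):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Men propose; each woman tentatively holds her best suitor so far
    (deferred acceptance).  men_prefs / women_prefs map each person to a
    preference-ordered list of the other side.  Returns {woman: man}.
    """
    # rank[w][m] = position of m in w's list (lower index = more preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                      # men with no tentative match
    next_choice = {m: 0 for m in men_prefs}     # next woman each man proposes to
    held = {}                                   # woman -> tentatively held man

    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        current = held.get(w)
        if current is None:
            held[w] = m                         # w holds her first suitor
        elif rank[w][m] < rank[w][current]:
            held[w] = m                         # w trades up; old suitor rejected
            free.append(current)
        else:
            free.append(m)                      # w rejects m
    return held

# Example: both men rank X first; X prefers B, so B gets X and A ends up with Y.
match = deferred_acceptance({'A': ['X', 'Y'], 'B': ['X', 'Y']},
                            {'X': ['B', 'A'], 'Y': ['A', 'B']})
# match == {'X': 'B', 'Y': 'A'}
```

The "deferral" is the key design choice: no woman commits until the proposals stop, which is what rules out a man and a woman later preferring each other to their assigned partners.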

A similar process works when proposal receivers may accept more than one suitor, not that useful for marriage in most of the United States but very useful when students are applying to schools and each school accepts many students.

Now what is good about this algorithm? First, Gale and Shapley proved that the algorithm converges to a solution for a very wide range of preferences. Second, the algorithm is stable in the sense that there is no man and no woman who would rather be matched to each other than to their current match. There are, of course, men who would prefer to marry other women and there are women who would prefer to marry other men, but no mutually preferable match is possible. Thus, the algorithm produces a stable match.

The application to men and women is somewhat fanciful (although Match.com should clearly adopt this idea!), but the application to students and schools is very real. Gale and Shapley concluded their paper by writing:

It is our opinion, however, that some of the ideas introduced here might usefully be applied to certain phases of the admissions problem.

Indeed, this is exactly what has happened. Students in New York and in Boston are now matched to schools using versions of this algorithm. Even before Gale and Shapley the algorithm had been used, without much theorizing, by doctors allocating residents to hospitals, and since Gale-Shapley and Roth the idea has been used much more extensively all over the world. The algorithm, by the way, has been picked up and extended by computer scientists, notably including Knuth.

I said above that the men propose to the women–this matters because when the women propose to the men you also get a stable match but it may be a somewhat different match and in general it is better to be the one proposing. Matching becomes more difficult when, as in modern times, both men and women may propose. Fortunately, in many problems, such as with students and schools, the proposers and receivers can be fixed.

Another question is whether the algorithm can be strategically manipulated. In an impossibility theorem with much the same flavor as Arrow’s Theorem and the Gibbard-Satterthwaite theorem, Roth, and Roth and Sotomayor, proved that there is always some possibility for manipulation, but the Gale-Shapley algorithm can be said to minimize the opportunity for strategic manipulation; in particular, for the proposers (the men, or say the students applying to schools) it is a dominant strategy to reveal one’s true preferences.

The importance of a stable matching algorithm can be seen in what happens when such algorithms are not used. In trying to allocate residents to hospitals, for example, what typically happens when a stable algorithm is not used is unraveling and chaos. Unraveling occurs when offers are made earlier and earlier in an attempt to get a jump on the competition. Prior to the currently used National Residency Matching Program, for example, hospitals were making offers to residents up to two years in advance! All kinds of chaos arose as hospitals made exploding offers (accept now or the offer explodes!). Such offers would inevitably lead to recriminations and backing out of the offers as better matches were sought.

What Roth has done is extend the Gale-Shapley algorithm to more complicated matches and to actually design such algorithms to solve real problems. In the 1970s, for example, the medical residency algorithm began to run into trouble because of a new development, the dual career couple. How to match couples, both doctors, to hospitals in the same city? By the 1990s assortative matching in the marriage market was beginning to derail matching in the doctor-hospital market! Roth was called in to solve the problem and moved from being a theorist to a market designer. Roth and Peranson designed the matching algorithm that is now used by orthodontists, psychologists, pharmacists, radiologists, pediatric surgeons, and many other medical specialties in the United States.

Most famously, Roth has worked on improving kidney allocation. I first wrote about this in 2004 (see also these posts):

Your spouse is dying of kidney disease. You want to give her one of your kidneys but tests show that it is incompatible with her immune system. Utter anguish and frustration. Is there anything that you can do? Today the answer is yes. Transplant centers are now helping to arrange kidney swaps. You give to the spouse of another donor who gives to your spouse. Pareto would be proud. Even a few three-way swaps have been conducted.

But why stop at three? What about an n-way swap? Let’s add in the possibility of an exchange that raises your spouse on the queue for a cadaveric kidney. And let us also recognize that even if your kidney is compatible with your spouse’s there may be a better match. Is there an allocation system that makes all donors and spouses better off (or at least no worse off) and that maximizes the number of beneficial swaps? In an important paper (Warning! Very technical. Requires NBER subscription.) Alvin Roth and co-authors describe just such a mechanism and show that it could save many lives. Who says efficiency is a pedestrian virtue?
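The n-way idea can be made concrete with a toy sketch. Think of each incompatible donor-patient pair as a node in a directed graph, with an edge from pair i to pair j when i’s donor can give to j’s patient; any directed cycle is a feasible swap in which every patient on the cycle receives a kidney. The Python below is my own illustration of that graph picture, emphatically not the actual Roth et al. mechanism, which must also handle incentives, priorities, and the cadaveric queue:

```python
def find_swap_cycle(compatible):
    """Find one exchange cycle in a directed compatibility graph.

    compatible[i] = set of pairs whose patient can receive pair i's donor's
    kidney.  A cycle i1 -> i2 -> ... -> ik -> i1 means every patient on it
    gets a kidney.  Greedy sketch: follow edges until a node repeats.
    """
    for start in compatible:
        path, seen = [start], {start}
        node = start
        while compatible.get(node):
            node = next(iter(compatible[node]))   # follow an arbitrary edge
            if node in seen:
                return path[path.index(node):]    # cycle found
            path.append(node)
            seen.add(node)
    return None                                   # no swap reachable this way

# Three pairs whose kidneys "rotate": A's donor -> B's patient, and so on.
cycle = find_swap_cycle({'A': {'B'}, 'B': {'C'}, 'C': {'A'}})
# cycle contains all of A, B, C: a three-way swap.
```

A real mechanism would search over all cycle covers to maximize the number of transplants (and cap cycle length, since all surgeries in a cycle must happen simultaneously); this sketch only shows why cycles are the right object.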

Since that time we have seen many such swaps including this record of 60 people and 30 kidneys. Truly a noble match.

Minor editing Oct. 23.

2012 Nobel Laureates in economics

Alvin Roth and Lloyd Shapley!

Great picks.  Both have done work on matching theory, bargaining theory, allocation theory, and market design. Here is Roth’s blog; he often reads MR, by the way, and sometimes sends us links.  I now need to repack and travel, my apologies, but Alex is likely to have more to say.  Alex in particular has many excellent past posts on Roth.  Here is an excellent overview of the contributions of Shapley.  Here is Wikipedia on Shapley.  Here is a Forbes profile of Roth.  Here is the Swedish information.

I think of this as a prize about how theory can be turned into usable results, how trade and matching can be made more efficient in concrete ways, how trade is a coordination game, and the intimate connection between issues of trade and issues of distribution.

Richly deserved by both men.

Cognitive Democracy: Condorcet with Competence

We usually think of democracy as a way of aggregating diverse preferences but we can also imagine that we share similar preferences and that what we disagree about is the best way to achieve those preferences. From this perspective, democracy can be thought of as a tool for information aggregation. Using simple probability theory, Condorcet showed in 1785 that even when each individual voter has only a slightly better than chance probability of choosing the better of two options, the probability that majority rule chooses the better outcome quickly goes to 1 as the number of voters increases (the wisdom of the crowds).

A number of writers at Crooked Timber have been discussing Knight and Johnson’s The Priority of Democracy, one strand of which involves such a cognitive defense of democracy. Cosma Shalizi, for example, writes:

Democratic debate is a tool for cognition, for harnessing the dispersed knowledge of the citizens and their diversity of perspectives and insights.

But does a cognitive defense of democracy lead to universal suffrage? Or does it suggest what Melissa Schwartzberg calls “epistocracy”, rule by the educated? (See also Henry Farrell’s comments). The wisdom of the crowds breaks down when the crowd’s errors are systematically biased rather than random. As Peter Boettke notes, Bryan Caplan makes a strong case in The Myth of the Rational Voter that better educated voters are less systematically biased than the average voter and more likely to agree with experts on questions of fact.

When voters are not equally competent some remarkable mathematical results show that the best cognitive democracy is not universal suffrage and one-person, one-vote but a specific form of weighted voting.

Begin with a simple example. Suppose there is one correct decision and there are three voters, each trying to reach the correct decision, with competence levels of {.55, .55, .55}, where the competence levels are just the probabilities that each voter chooses the correct decision. The best a dictator could do in choosing the correct decision is .55, but if we use majority rule the probability of reaching the correct decision is 0.57475, higher than that of any individual voter. (We reach the correct decision if all three voters reach the correct decision, which has probability .55^3, or if two voters reach the correct decision and one does not; as this can happen in three ways, the probability of the latter is 3*.55*.55*(1-.55), for a grand total of .57475.) Moreover, if we were to increase the number of voters to 100, the probability of majority rule reaching the correct decision goes to 84%, far above that of any dictator. This is the essence of Condorcet’s theorem.
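These numbers are easy to check with the binomial distribution. A small Python sketch (my own code; for an even number of voters I break exact ties with a fair coin flip):

```python
from math import comb

def majority_competence(p, n):
    """Probability that simple majority among n independent voters, each
    correct with probability p, picks the correct option."""
    win = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:  # split the probability of an exact tie
        win += 0.5 * comb(n, n // 2) * (p * (1 - p))**(n // 2)
    return win

# majority_competence(0.55, 3) gives 0.57475, matching the hand calculation;
# majority_competence(0.55, 100) comes out at roughly 0.84.
```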

Now let’s assume that the voters have competences of {.55, .60, .70}. Majority rule, using the same reasoning as before, gets us a democratic competence level of .673. Not bad, but notice that this is less than the competence level of the highest-competence individual. The ideal voting system in this case would weight voter three enough so that she determines the outcome, thus giving democracy a competence level of .7.

More generally, if the voter competence levels are {p1, p2, p3} then the cognitively most efficient voting scheme gives each voter a weight of log(pi/(1-pi)). The result is remarkable for being such a simple formula of the voter’s own competence level (note that an individual’s weight is not a function of the competence levels of the other voters). The result was shown first in this context by Nitzan and Paroush; Nobel-prize winner Lloyd Shapley and Bernard Grofman also made important contributions, and see Grofman, Owen, and Feld for some related results.
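With only three voters the claim can be verified by brute force over all patterns of who is right. A Python sketch (my own illustration) comparing one-person-one-vote with the log-odds weights for competences {.55, .60, .70}:

```python
from itertools import product
from math import log

def weighted_competence(ps, weights):
    """P(the weighted vote picks the correct option), enumerating every
    pattern of which voters are right; weight ties split by coin flip."""
    half, total = sum(weights) / 2, 0.0
    for pattern in product([True, False], repeat=len(ps)):
        prob = 1.0
        for right, p in zip(pattern, ps):
            prob *= p if right else (1 - p)
        w_right = sum(w for right, w in zip(pattern, weights) if right)
        if w_right > half:
            total += prob
        elif w_right == half:
            total += 0.5 * prob
    return total

ps = [0.55, 0.60, 0.70]
log_odds = [log(p / (1 - p)) for p in ps]
# weighted_competence(ps, [1, 1, 1]) ≈ 0.673  (simple majority)
# weighted_competence(ps, log_odds) ≈ 0.7    (voter 3's log-odds weight exceeds
#                                             the other two combined, so she decides)
```

Here the log-odds weights happen to make voter three decisive; with more voters, or more equal competences, the optimal weighted rule generally beats both simple majority and any single voter.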

Democracies make many decisions which are information based (Does Iraq have weapons of mass destruction? Will an invasion make the US safer? Do phthalates cause significant health risks?). We might also use this method for many committee decisions. Which scientific approach is deserving of greater funding? Which marketing plan should we adopt? Is surgery the best option? In these decisions, weighting votes by a measure of competence, which can be estimated from past decisions, may lead to significant improvements in outcomes.

Voters have diverse preferences, not just competences, but we could combine cognitive and preference-aggregation theories of democracy by using high-competence voters from different demographic categories to estimate what people would think about issues if only they were better informed. In this way we can distinguish differences due to knowledge from those due to preferences, and we could upweight the competent while maintaining demographic balance, thus creating a cognitive democracy based on enlightened preferences.

Robert Aumann, one of two new Nobel Laureates

Here is one summary of Aumann’s work on game theory:

Robert J. Aumann has been one of the leading figures in the mathematical surge that has characterized Neo-Walrasian economics and game theory in the past forty years. Aumann entered into economics via cooperative game theory –

In Neo-Walrasian theory, Robert Aumann is perhaps best known for his theory of core equivalence in a "continuum" economy. Aumann introduced measure theory into the analysis of economies with an infinite number of agents – formalizing the "perfectly competitive" scenario. In his classical 1964 paper, Aumann proved the equivalence of the Edgeworthian core and Walrasian equilibrium allocations when there are an uncountably infinite number of agents – thereby providing the limit case for future work on core convergence. In order to prove this result was not vacuous, Aumann went on to prove the existence of equilibrium (1966) in this "perfectly competitive" scenario. On his way, he contributed to mathematics itself by providing a definition of the "integral" of a correspondence (1965), which was previously absent.

Previously, Aumann (1962) had swung Ockham’s razor and helped remove the axiom of completeness of preferences from the Walrasian theory of choice. In another classical paper with F.J. Anscombe in 1964, Aumann formalized the notion of "subjective probability", a concept that had been earlier forwarded by Leonard Savage, that profoundly changed the theory of choice under uncertainty.

His contributions to game theory have perhaps been no less path-breaking. Aumann entered game theory in 1959 to carefully distinguish between infinitely and finitely repeated games. With Bezalel Peleg in 1960, Aumann formalized the notion of a coalitional game without transferable utility (NTU) – one of the organizing beacons of his later research. With Michael Maschler (1963), he introduced the concept of a "bargaining set". In 1974, Aumann went on to identify "correlated equilibrium" in Bayesian games. In 1975, Aumann went on to prove a convergence theorem for the Shapley value. In 1976, he formally defined the concept of "Common Knowledge". Also in 1976, in an unpublished paper with Lloyd Shapley, Aumann provided the perfect folk theorem using the limit of means criterion.

Here is a strange but fascinating interview with Aumann.  It covers John Nash, religion, and the Cold War, among other matters.  Here is his home page, with links to articles.

My favorite Aumann paper is his 1976 piece on agreeing to disagree.  He proved the startling result that if two rational, truth-seeking people have common "priors," in a Bayesian sense rigorously definable, then those two individuals should not disagree once they exchange opinions.  Imagine I think there are 200 balls in the urn, but Robin Hanson thinks there are 300 balls in the urn.  Once Robin tells me his estimate, and I tell him mine, we should converge upon a common opinion.  In essence his opinion serves as a "sufficient statistic" for all of his evidence.  (This analysis also led Aumann to clarify the important game-theoretic concept of "common knowledge.")  Yet people disagree all the time.  Does this mean that priors are rarely common?  That we are rarely rational truth-seekers?  A bit of both?  Robin Hanson is doing much work on this topic.  Here is my paper with Robin, in which we argue that if you disagree with your "epistemic peers," you are probably not a truth-seeker.

Congratulations to both Aumann and Schelling, comments are open…