Here is his home page. He has been at Stanford Business School since 1964, and was born in Geneva, Nebraska. Here is his personal website. Here is his Wikipedia page. He has a doctorate in business administration from Harvard, but no economics Ph.D. (bravo!). Here is the Nobel designation.
Most of all Wilson is an economic theorist, doing much of his most influential work in or around the 1980s. He is a little hard to google (no, he did not work with Philip Glass), but here are his best-cited papers. To be clear, he won mainly for his work in auction theory and practice, covered by Alex here. But here is some information about the rest of his highly illustrious career.
He and David Kreps wrote a very famous paper about deterrence. Basically an incumbent wishes to develop a reputation for being tough with potential entrants, so as to keep them out of the market. This was one of the most influential papers of the 1980s, and it also helped to revive some of the potential intellectual case for antitrust activism. Here is Wilson’s survey article on strategic approaches to entry deterrence.
Wilson has a famous paper with Kreps, Milgrom, and Roberts. They show how a multi-period prisoner’s dilemma might sustain cooperating rather than “Finking” if there is asymmetric information about types and behavior. This paper increased estimates of the stability of tit-for-tat strategies, if only because with uncertainty you might end up in a highly rewarding loop of ongoing cooperation. This combination of authors is referred to as the “Gang of Four,” given their common interests at the time and some common ties to Stanford.
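The basic logic of that "rewarding loop" can be seen in a minimal sketch of the repeated game (standard textbook payoffs assumed here, not the paper's):

```python
# Minimal repeated prisoner's dilemma, with a conventional payoff
# scheme (my assumption, not the Gang of Four's model):
# both cooperate -> 3 each; both fink -> 1 each;
# unilateral fink -> 5 for the finker, 0 for the cooperator.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "F"): (0, 5),
    ("F", "C"): (5, 0),
    ("F", "F"): (1, 1),
}

def play(strategy_a, strategy_b, rounds):
    """Run two strategies against each other; return cumulative payoffs."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees the rival's past moves
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

def tit_for_tat(rival_history):
    """Cooperate first, then mirror the rival's last move."""
    return "C" if not rival_history else rival_history[-1]

def always_fink(rival_history):
    return "F"

# Two tit-for-tat players lock into the rewarding cooperative loop...
print(play(tit_for_tat, tit_for_tat, 10))   # (30, 30)
# ...while an always-finker grabs a one-round gain and then stagnates.
print(play(always_fink, tit_for_tat, 10))   # (14, 9)
```

The Gang of Four's contribution was to show why, with asymmetric information about types, even fully rational players may rationally mimic the tit-for-tat pattern for most of a long but finite game.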
His 1982 piece with David Kreps on “sequential equilibria” was oh so influential on game theory; here is the abstract:
We propose a new criterion for equilibria of extensive games, in the spirit of Selten’s perfectness criteria. This criterion requires that players’ strategies be sequentially rational: Every decision must be part of an optimal strategy for the remainder of the game. This entails specification of players’ beliefs concerning how the game has evolved for each information set, including information sets off the equilibrium path. The properties of sequential equilibria are developed; in particular, we study the topological structure of the set of sequential equilibria. The connections with Selten’s trembling-hand perfect equilibria are given.
Here is a more readable exposition of the idea. This was part of a major effort to figure out how people actually would play in games, and which kinds of solution concepts economists should put into their models. I don’t think the matter ever was settled, and arguably it has been superseded by behavioral and computational and evolutionary approaches, but Wilson was part of the peak period of applying pure theory to this problem and this might have been the most important theory piece in that whole tradition.
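For readers who want the formal statement, the two Kreps-Wilson requirements can be sketched compactly (the notation here is my paraphrase, not the paper's):

```latex
% An assessment is a pair (\beta, \mu): a behavior-strategy profile
% \beta together with a belief system \mu assigning to each
% information set h a distribution over the nodes in h.
\textbf{Sequential rationality.} At every information set $h$ of player $i$,
\[
  \beta_i \in \arg\max_{\beta_i'} \;
  \mathbb{E}\!\left[\, u_i \mid h,\; \mu(h),\; \beta_i',\; \beta_{-i} \,\right].
\]
\textbf{Consistency.} There exist totally mixed profiles $\beta^n \to \beta$
whose Bayes-rule beliefs $\mu^n$ converge to $\mu$:
\[
  (\beta, \mu) \;=\; \lim_{n \to \infty} (\beta^n, \mu^n),
\]
so beliefs are pinned down even at information sets off the equilibrium path.
```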
Wilson’s paper “The Theory of Syndicates” (JSTOR 1909607), published in Econometrica in 1968, influenced a whole generation of students in economics, finance, and accounting. The paper poses a fundamental question: Under what conditions does the expected utility representation describe the behavior of a group of individuals who choose lotteries and share risk in a Pareto-optimal way?
Link here; this was a contribution to social choice theory, and it fed into Oliver Hart’s later work on when shareholder unanimity for a corporation would hold. It also connects to the later Milgrom work, some of it with Wilson, on when people will agree about the value of assets.
Here is Wilson’s book on non-linear pricing: “What do phone rates, frequent flyer programs, and railroad tariffs all have in common? They are all examples of nonlinear pricing. Pricing is nonlinear when it is not strictly proportional to the quantity purchased. The Electric Power Research Institute has commissioned Robert Wilson to review the various facets of nonlinear pricing.” Yes, he is a business school guy. Here is his survey article on electric power pricing, a whole separate direction of his research.
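Since the defining property is just that outlay is not proportional to quantity, a two-block tariff makes it concrete (the numbers here are mine, purely illustrative, not Wilson's):

```python
# Toy declining-block tariff (illustrative numbers, not Wilson's):
# the first 100 units cost 10 cents each, every unit beyond costs 6.
def outlay_cents(q, cutoff=100, high=10, low=6):
    if q <= cutoff:
        return q * high
    return cutoff * high + (q - cutoff) * low

# "Nonlinear" means the bill is not proportional to quantity:
# doubling the purchase less than doubles the outlay.
print(outlay_cents(100))            # 1000
print(outlay_cents(200))            # 1600 (not 2000: a quantity discount)
print(outlay_cents(200) / 200)      # 8.0 cents average vs 10 at q = 100
```

Frequent flyer programs, phone rate plans, and railroad tariffs all embed schedules of this general shape.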
Here is his 1989 law review article about Pennzoil vs. Texaco, with Robert H. Mnookin.
Wilson also did a piece with Gul and Sonnenschein, laying out the different implications of various game-theoretic conjectures for the Coase conjecture, namely the claim that a durable goods monopolist will end up having to sell at competitive prices, due to the patience of consumers and their unwillingness to buy at higher prices.
Wilson was the dissertation advisor of Alvin E. Roth, Nobel Laureate, and here the two interview each other, recommended. Excerpt:
Wilson: As an MBA student in 1960, I wrote a class report on how to bid in an auction that got a failing grade because it was not “managerial.”
And here is an Alvin Roth blog post on the prize and the intellectual lineage.
The bottom line? If you are a theorist, Stockholm is telling you to build up some practical applications — at the very least pull something out of your closet and sell it on eBay! A lot of people thought Roberts and maybe Kreps would be in on this Prize, but they are not. The selections themselves are clearly deserving and have been “in play” for many years in the Nobel discussions. But again, we see the committee drawing clear and distinct lines.
Let’s see what they do next year!
It would seem so; now there are lots of them. Here is one part of my Bloomberg column:
The Nobel Peace Prize this year went to the World Food Programme, part of the United Nations. Yet the Center for Global Development, a leading and highly respected think tank, ranked the winner dead last out of 40 groups as measured for effectiveness. Another study, by economists William Easterly and Tobias Pfutze in 2008, was also less than enthusiastic about the World Food Programme.
The most striking feature of the award is not that the Nobel committee might have gotten it wrong. Rather, it is that nobody seems to care. The issue has popped up on Twitter, but it is hardly a major controversy.
I also noted that the Nobel Laureates I follow on Twitter, in the aggregate, seem more temperamental than the 20-year-olds (and younger) that I follow. Hail Martin Gurri!
The internet diminishes the impact of the prize in yet another way. Take Paul Romer, a highly deserving laureate in economics in 2018. To his credit, many of Romer’s ideas, such as charter cities, had been debated actively on the internet, in blogs and on Twitter and Medium, for at least a decade. Just about everyone who follows such things expected that Romer would win a Nobel Prize, and when he did it felt anticlimactic. In similar fashion, the choice of labor economist David Card (possibly with co-authors) also will feel anticlimactic when it comes, as it likely will.
Card with co-authors, by the way, is my prediction for tomorrow.
Hawtrey came from a family long associated with Eton, where he was educated himself, before coming up to Trinity in 1898. In 1901 he was 19th Wrangler; in 1903 he briefly entered the Admiralty, before going to the Treasury, where he found his vocation as an economist and remained for forty-one years. He was a very faithful Apostle, attending every annual dinner until 1954, when he was prevented from going by ill health. He was devoted to Moore, whose impassioned singing of Die Beiden Grenadiere made him realize how horrible war was for the soldiers who actually did the fighting: this constituted an epiphany for Hawtrey, and reinforced his life-long Liberalism. Moore was so much the most important influence on the life and career of Sir Ralph Hawtrey that he spent his last years working on a systematic philosophical treatise (inspired also by Robin Mayor), which was to have been a summa of his twenty-odd books and the hundreds of letters he published in The Times. He was married to the famous pianist Titi d’Aranyi.
That is from Paul Levy’s book Moore: G.E. Moore and the Cambridge Apostles. Here is more on Titi, also known as Hortense, who studied with Bartok and received numerous letters from him. And here is Scott Sumner on Hawtrey, one of the great monetary economists.
This paper studies the heterogeneous impacts of the US-China trade war through linkages in global value chains. Building a two-stage, multi-country, multi-sector general equilibrium model, the paper discusses how import tariffs affect domestic producers through internal linkages within industries and external linkages across industries. The model validates that import tariffs on Chinese upstream intermediate goods negatively affect US downstream exports, output, and employment. Effects are strong in the US industries that rely heavily on the targeted Chinese intermediate goods. In addition, the paper differentiates the impacts of the two rounds of the trade war by comparing tariffs on intermediate goods and consumption goods. It estimates that the trade war increases US CPI by 0.09% in the first round and 0.22% in the second round. Finally, the paper studies the welfare effects of the trade war, estimating that it costs China $35.2 billion, or 0.29% of GDP, costs the US $15.6 billion, or 0.08% of GDP, and benefits Vietnam by $402.8 million, or 0.18% of GDP.
That is by Yang Zhou of the University of Minnesota, via the excellent Kevin Lewis. Those numbers should not come as a surprise; they indicate that both countries are worse off, but they also show that a lot of the bargaining power does in fact reside with the United States.
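As a quick arithmetic check (my computation, not from the paper), the dollar figures and GDP shares can be combined to back out the implied GDP levels, and they land near actual 2017-era GDPs:

```python
# Back out the GDP each welfare figure implies: implied GDP = loss / share.
losses = {                          # (dollar cost in billions, share of GDP)
    "China":   (35.2,    0.0029),
    "US":      (15.6,    0.0008),
    "Vietnam": (-0.4028, -0.0018),  # a gain, so both entries are negative
}
for country, (billions, share) in losses.items():
    print(country, round(billions / share / 1000, 2), "trillion USD")
# China ~12.1T, the US ~19.5T, Vietnam ~0.22T: all in the right
# neighborhood for GDP around 2017-2018, so the percentages are
# internally consistent with the dollar amounts.
```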
I do not feel qualified to have an opinion here, but this piece, by Benjamin Y. Hayden and Yael Niv, seems of some interest:
Much of traditional neuroeconomics proceeds from the hypothesis that value is reified in the brain, that is, that there are neurons or brain regions whose responses serve the discrete purpose of encoding value. This hypothesis is supported by the finding that the activity of many neurons and brain regions covaries with subjective value as estimated in specific tasks. Here we consider an alternative: that value is not represented in the brain. This idea is motivated by close consideration of the economic concept of value, which places important epistemic constraints on our ability to identify its neural basis. It is also motivated by the behavioral economics literature, especially work on heuristics. Finally, it is buoyed by recent neural and behavioral findings regarding how animals and humans learn to choose between options. In light of our hypothesis, we critically reevaluate putative neural evidence for the representation of value, and explore an alternative: that brains directly learn action policies. We delineate how this alternative can provide a robust account of behavior that concords with existing empirical data.
Via Benjamin Lyons.
This started in the late 1980s, and was led by GMU economist Don Lavoie, who earlier had been a computer programmer. Here is one bit from Don’s extensive essay, co-authored with Howard Baetjer and William Tulloh:
The market for scholarly ideas is now badly compartmentalized, due to the nature of our institutions for dispersing information. One important aspect of the limitations on information dispersal is the one-way nature of references in scholarly literature. Suppose Professor Mistaken writes a persuasive but deeply flawed article. Suppose few see the flaws, while so many are persuaded that a large supportive literature results. Anyone encountering a part of this literature will see references to Mistaken’s original article. References thus go upstream towards original articles. But it may be that Mistaken’s article also provokes a devastating refutation by Professor Clearsighted. This refutation may be of great interest to those who read Mistaken’s original article, but with our present technology of publishing ideas on paper, there is no way for Mistaken’s readers to be alerted to the debunking provided by Clearsighted. The supportive literature following Mistaken will cite Mistaken but either ignore Professor Clearsighted or minimize her refutations.
In a hypertext system such as that being developed at Xanadu, original work may be linked downstream to subsequent articles and comments. In our example, for instance, Professor Clearsighted can link her comments directly to Mistaken’s original article, so that readers of Mistaken’s article may learn of the existence of the refutation, and be able, at the touch of a button, to see it or an abstract of it. The refutation by Clearsighted may similarly and easily be linked to Mistaken’s rejoinder, and indeed to the whole literature consequent on his original article. Scholars investigating this area of thought in a hypertext system would in the first place know that a controversy exists, and in the second place be able to see both (or more) sides of it with ease. The improved cross-referencing of, and access to, all sides of an issue should foster an improved evolution of knowledge.
A potential problem with this system of multidirectional linking is that the user may get buried underneath worthless “refutations” by crackpots. The Xanadu system will include provisions for filtering systems whereby users may choose their own criteria for the kinds of cross-references to be brought to their attention. These devices would seem to overcome the possible problem of having charlatans clutter the system with nonsense. In the first place, one would have to pay a fee for each item published on the system. In the second place, most users would choose to filter out comments that others had adjudged valueless and comments by individuals with poor reputations. In other words, though anyone could publish at will on a hypertext system, if one develops a bad reputation, very few will ever see his work.
Miller and Drexler envision the evolution of what they call agoric open systems: extensive networks of computer resources interacting according to market signals. Within vast computational networks, the complexity of resource allocation problems would grow without limit. Not only would a price system be indispensable to the efficient allocation of resources within such networks, but it would also facilitate the discovery of new knowledge and the development of new resources. Such open systems, free of the encumbrances of central planners, would most likely evolve swiftly and in unexpected ways. Given secure property rights and price information to indicate profit opportunities, entrepreneurs could be expected to develop and market new software and information services quite rapidly.
Secure property rights are essential. Owners of computational resources, such as agents containing algorithms, need to be able to sell the services of their agents without having the algorithm itself be copyable. The challenge here is to develop secure operating systems. Suppose, for example, that a researcher at George Mason University wanted to purchase the use of a proprietary data set from Alpha Data Corporation and massage that data with proprietary algorithms marketed by Beta Statistical Services, on a superfast computer owned by Gamma Processing Services. The operating system needs to assure that Alpha cannot steal Beta’s algorithms, that Beta cannot steal Alpha’s data set, and that neither Gamma nor the George Mason researcher can steal either. These firms would thus under-produce their services if they feared that their products could be easily copied by any who used them.
In their articles, Miller and Drexler propose a number of ways in which this problem might be overcome. In independent work, part of the problem apparently has already been overcome. Norm Hardy, senior scientist of Key Logic Corporation, whom we met at Xanadu, has developed an operating system called KeyKOS, which accomplishes what many suspected to be impossible: it assures by technical means (itself an important patented invention) the integrity of computational resources in an open, interconnected system. To return to the above example, the system in effect would create a virtual black box in Gamma’s computer, in which Alpha’s data and Beta’s algorithms are combined. The box is inaccessible to anyone, and it self-destructs once the desired results have been forwarded to the George Mason researcher.
There is really quite a bit more at the link, noting that at the time Don had assembled a group of about ten people working on these ideas. As for the hyperlinks, I recall thinking at the time something like: “People don’t value reading so much, so making reading better with hyperlinks won’t have a huge marginal value!”
A further Covid-19 India Prize goes to award-winning journalist Barkha Dutt for her reporting on the Covid pandemic and related crises in India.
Because of the Covid lockdown (March-June 2020), Indian news reporting and broadcasting faced severe disruptions, especially in March-April 2020. For the first 50 days, as television networks remained studio-bound, Dutt and her small team traveled across India to report from the ground, producing over 250 ground reports. All the videos and reports are available on the MoJo YouTube channel.
One of the world’s most severe lockdowns unleashed a massive internal migration from the cities to the villages in India. Dutt’s team was one of the first to shed light on the erroneous state policies concerning economic migrants in India during the lockdown, often reporting while walking alongside the migrants. Her sustained coverage eventually led other stations and newspapers to report similar stories, and prompted a policy response from the government.
Another Covid-19 India Prize goes to award-winning data journalist Rukmini S, for The Moving Curve podcast, covering the data issues in India. She is currently an independent journalist writing for Mint, The Print, India Today (where she is tracking the pandemic daily) and India Spend (where she is tracking Covid mortality), and writes occasionally for The Guardian, SCMP and The Hindu.
She distills all the information, data, and her daily insights into a 5-7-minute audio update in the form of a free podcast, now at 92 episodes. The episodes range from getting to the heart of India’s death statistics, to interviewing a rural doctor about what it’s like waiting for Covid to hit, to attempting to cut through India’s public/private healthcare binary, and they have had significant influence on many state governments. The Moving Curve podcast is produced by a team of two: Rukmini S and sound engineer Anand Krishnamoorthi. The podcast is available on the major platforms as well as on medium.
I am very happy to see this new and urgently needed study. They have heeded the stricture to show their work. The authors are Donald A. Berry, Scott Berry, Peter Hale, Leah Isakov, Andrew W. Lo, Kien Wei Siah, and Chi Heem Wong, and here is the abstract:
We compare and contrast the expected duration and number of infections and deaths averted among several designs for clinical trials of COVID-19 vaccine candidates, including traditional randomized clinical trials and adaptive and human challenge trials. Using epidemiological models calibrated to the current pandemic, we simulate the time course of each clinical trial design for 504 unique combinations of parameters, allowing us to determine which trial design is most effective for a given scenario. A human challenge trial provides maximal net benefits—averting an additional 1.1M infections and 8,000 deaths in the U.S. compared to the next best clinical trial design—if its set-up time is short or the pandemic spreads slowly. In most of the other cases, an adaptive trial provides greater net benefits.
And what is an adaptive trial, you may be wondering?
An adaptive version of the traditional vaccine efficacy RCT design (ARCT) is based on group sequential methods. Instead of a fixed study duration with a single final analysis at the end, we allow for early stopping for efficacy via periodic interim analyses of accumulating trial data…While this reduces the expected duration of the trial, we note that adaptive trials typically require more complex study protocols which can be operationally challenging to implement for test sites unfamiliar with this framework. In our simulations, we assume a maximum of six interim analyses spaced 30 days apart, with the first analysis performed when the first 10,000 subjects have been monitored for at least 30 days.
That means of course you might cut the trial short. Kudos to the authors for producing one of the most important papers of this year.
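The mechanics of early stopping can be sketched in a toy simulation (the thresholds and infection rates here are mine and far cruder than the paper's calibrated epidemiological models):

```python
import random
random.seed(1)

# Toy group-sequential trial: enroll in waves, compare arms at each
# interim analysis, and stop early if the vaccine arm's infection
# rate is convincingly lower (illustrative z threshold, not O'Brien-
# Fleming boundaries or anything from the paper).
def run_trial(true_efficacy=0.7, base_rate=0.02, wave=10_000,
              max_analyses=6, z_stop=2.5):
    inf_v = inf_p = n = 0
    z = 0.0
    for analysis in range(1, max_analyses + 1):
        n += wave // 2                       # cumulative per-arm enrollment
        inf_p += sum(random.random() < base_rate for _ in range(wave // 2))
        inf_v += sum(random.random() < base_rate * (1 - true_efficacy)
                     for _ in range(wave // 2))
        # crude normal approximation for the difference in infection rates
        p_pool = (inf_v + inf_p) / (2 * n)
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        z = (inf_p / n - inf_v / n) / se if se > 0 else 0.0
        if z > z_stop:                       # early stopping for efficacy
            return analysis, z
    return max_analyses, z

analysis, z = run_trial()
print(f"stopped after interim analysis {analysis} (z = {z:.1f})")
```

With a genuinely effective vaccine and a fast-spreading pathogen, the trial tends to stop at an early interim analysis, which is exactly the source of the expected-duration savings the authors simulate.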
Here is a new study by Valerie Michelman, Joseph Price, and Seth D. Zimmerman:
This paper studies social success at elite universities: who achieves it, how much it matters for students’ careers, and whether policies that increase interaction between rich and poor students can integrate the social groups that define it. Our setting is Harvard University in the 1920s and 1930s, where students compete for membership in exclusive social organizations known as final clubs. We combine within-family and room-randomization research designs with new archival and Census records documenting students’ college lives and career outcomes. We find that students from prestigious private high schools perform better socially but worse academically than others. This is important because academic success does not predict earnings, but social success does: members of selective final clubs earn 32% more than other students, and are more likely to work in finance and to join country clubs as adults, both characteristic of the era’s elite. The social success premium persists after conditioning on high school, legacy status, and even family. Leveraging a scaled residential integration policy, we show that random assignment to high-status peers raises rates of final club membership, but that overall effects are driven entirely by large gains for private school students. Residential assignment matters for long-run outcomes: more than 25 years later, a 50-percentile shift in residential peer group status raises the rate at which private school students work in finance by 37.1% and their membership in adult social clubs by 23.0%. We conclude that the social success premium in the elite labor market is large, and that its distribution depends on social interactions, but that the inequitable distribution of access to high-status social groups resists even vigorous attempts to promote cross-group cohesion.
You can think of this as another attempt to explain the relatively high returns to education, without postulating that students learn so much, and without emphasizing signaling so much. Going to Harvard is in fact winning access to a very valuable set of networks (which in turn is signaling as well, to be clear).
For the pointer I thank Tyler Ransom.
In this issue:
Five cities, five stories? Robert Kaestner explores the heterogeneity of results across Baltimore, Boston, Chicago, Los Angeles, and New York in the work of Raj Chetty, Nathaniel Hendren, and Lawrence Katz, arguing that it is misleading to suggest that moving before the age of 13 to lower-poverty neighborhoods promises better outcomes. Chetty, Hendren, and Katz respond.
The AEA: Republicans need not apply: Mitchell Langbert investigates the American Economic Association, using voter-registration data and political-contribution data to show that the AEA officers, editors, authors, and other players are quite thoroughgoingly Democratic.
The AER: How much space is given to articles on gender, race and ethnicity, and inequality?: Jeremy Horpedahl and Arnold Kling track the trends 1991–2020 for the American Economic Review and Papers & Proceedings.
Lockdowns and covid hospitalizations: John Spry criticizes a JAMA research letter by Soumya Sen, Pinar Karaca-Mandic, and Archelle Georgiou about the effectiveness of stay-at-home orders, for eliding available placebo comparisons. Sen, Karaca-Mandic, and Georgiou reply.
Reading, writing, and Adam Smith: Scott Drylie uses Smith’s final words on school financing to review interpretations of Smith on schooling.
Carl Menger: The Errors of Historicism in German Economics: The first English translation of Menger’s 1884 reply to Gustav Schmoller is provided by Karen Horn and Stefan Kolev, whose Foreword analyzes the not-so-amicable Methodenstreit.
Data alteration: Ron Michener rejoins to Farley Grubb, explaining why he thinks that Grubb had no grounds for altering John McCusker’s data series and thereby generating outliers on which his results depend. (Professor Grubb received Professor Michener’s comment too late to allow for concurrent reply but will reply in the next issue of this journal.)
Frictionless note: With the approval and gratitude of Jeffrey Bergstrand, Nico Stoeckmann corrects the constant in the equation for a special, frictionless case of Bergstrand’s gravity equation for international trade.
Liberalism in Brazil: Lucas Berlanza provides a historical and modern guide to the fortunes of liberal ideas and trends in Brazil, extending the Classical Liberalism in Econ, by Country series to 20 articles.
Readworthy 2050: Nine correspondents respond to the question: What 21st-Century Works Will Merit a Close Reading in 2050?
Karen Horn and Stefan Kolev on Menger vs. Schmoller: The translators discuss Menger’s 1884 The Errors of Historicism in German Economics and the broader Methodenstreit.
How persistent are economic gaps across ethnicities? The convergence of ethnic gaps through the third generation of immigrants is difficult to measure because few datasets include grandparental birthplace. I overcome this limitation with a new three-generational dataset that links immigrant grandfathers in 1880 to their grandsons in 1940. I find that the persistence of ethnic gaps in occupational income is 2.5 times stronger than predicted by a standard grandfather-grandson elasticity. While part of the discrepancy is due to measurement error attenuating the grandfather-grandson elasticity, mechanisms related to geography also partially explain the stronger persistence of ethnic occupational differentials.
That is the abstract of a piece by Zachary Ward, from American Economic Journal: Applied Economics. In a number of regards this paper goes well beyond the previous literature. Here is another interesting sentence:
…I find that 51 percent of initial ethnic gaps in occupational income remained after three generations.
The author also notes:
Rather than argue for an ethnic-specific causal mechanism, I instead point to measurement error and geography as key reasons for the stronger persistence of ethnic differentials across three generations.
I am not so convinced, as where you choose to live is endogenous to your expected labor market quality. I am somewhat more persuaded by this point:
…the ethnic mean provides more information about the father’s true occupational status.
In other words, what appears to be an influence of ethnicity might instead be a transmission channel through the background of one’s own father.
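The measurement-error point can be illustrated with a small simulation (all parameters here are mine, purely illustrative): noise in an individual father's recorded status attenuates the father-son elasticity, but group means average that noise away, so group-level persistence looks stronger.

```python
import random
random.seed(2)

def slope(xs, ys):
    """OLS slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

beta, noise_sd = 0.5, 1.0            # true persistence; measurement noise
fathers, sons, groups = [], [], []
for g in range(50):                  # 50 stylized "ethnic" groups
    g_effect = random.gauss(0, 1)
    for _ in range(200):
        f_true = g_effect + random.gauss(0, 1)
        son = beta * f_true + random.gauss(0, 0.5)
        fathers.append(f_true + random.gauss(0, noise_sd))  # measured with error
        sons.append(son)
        groups.append(g)

# Individual-level slope is attenuated toward zero by the noise...
print(round(slope(fathers, sons), 2))

# ...but slopes on group means recover nearly the full persistence,
# because averaging 200 fathers per group washes the noise out.
gf = [sum(f for f, gg in zip(fathers, groups) if gg == g) / 200 for g in range(50)]
gs = [sum(s for s, gg in zip(sons, groups) if gg == g) / 200 for g in range(50)]
print(round(slope(gf, gs), 2))
```

This is the sense in which the ethnic mean carries more information about a father's true status than his individually recorded occupation does.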
At times the author seems naive, at other times Straussian, or maybe just afraid? To be clear, I am myself an extreme culturalist, and that is not a Straussian remark. This is in any case a major contribution to a contentious debate.
Ho hum, nothing to see here…
Central banks sometimes evaluate their own policies. To assess the inherent conflict of interest, we compare the research findings of central bank researchers and academic economists regarding the macroeconomic effects of quantitative easing (QE). We find that central bank papers report larger effects of QE on output and inflation. Central bankers are also more likely to report significant effects of QE on output and to use more positive language in the abstract. Central bankers who report larger QE effects on output experience more favorable career outcomes. A survey of central banks reveals substantial involvement of bank management in research production.
That is a new NBER working paper by Brian Fabo, Martina Jančoková, Elisabeth Kempf, and Ľuboš Pástor. Here is very good commentary and analysis from John Cochrane.
Yes, in short. Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò and Sergio Pastorello cover this topic in the latest issue of the American Economic Review:
Increasingly, algorithms are supplanting human decision-makers in pricing goods and services. To analyze the possible consequences, we study experimentally the behavior of algorithms powered by Artificial Intelligence (Q-learning) in a workhorse oligopoly model of repeated price competition. We find that the algorithms consistently learn to charge supracompetitive prices, without communicating with one another. The high prices are sustained by collusive strategies with a finite phase of punishment followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand, changes in the number of players, and various forms of uncertainty.
Here is the paper.
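A stripped-down sketch of the experimental setup (my toy Bertrand-style environment, much simpler than the paper's logit-demand oligopoly): two Q-learners repeatedly pick prices from a grid, each conditioning on last period's price pair.

```python
import itertools, random
random.seed(0)

# Price grid and toy Bertrand profits: marginal cost is 1.0, the
# undercutter takes the whole market, ties split it (my simplification).
PRICES = [1.0, 1.5, 2.0]

def profit(p_own, p_rival):
    if p_own < p_rival:
        demand = 1.0
    elif p_own == p_rival:
        demand = 0.5
    else:
        demand = 0.0
    return (p_own - 1.0) * demand

STATES = list(itertools.product(PRICES, PRICES))  # last period's price pair

def train(episodes=20_000, alpha=0.1, gamma=0.95, eps=0.1):
    # One Q-table per firm: Q[state][price] -> estimated value.
    Q = [{s: {p: 0.0 for p in PRICES} for s in STATES} for _ in range(2)]
    state = (random.choice(PRICES), random.choice(PRICES))
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if random.random() < eps:                       # explore
                acts.append(random.choice(PRICES))
            else:                                           # exploit
                acts.append(max(Q[i][state], key=Q[i][state].get))
        nxt = (acts[0], acts[1])
        for i in range(2):                                  # Q-learning update
            r = profit(acts[i], acts[1 - i])
            best_next = max(Q[i][nxt].values())
            Q[i][state][acts[i]] += alpha * (r + gamma * best_next
                                             - Q[i][state][acts[i]])
        state = nxt
    return Q, state

Q, last = train()
print("greedy prices at the final state:",
      [max(Q[i][last], key=Q[i][last].get) for i in range(2)])
```

The paper's striking finding is that, in richer versions of this environment, such learners settle on prices above the competitive level and sustain them with punishment-and-return strategies, despite never communicating.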
The most productive part of medical care is treatment for cardiovascular disease, both acute conditions and risk factors. Productivity estimates for acute cardiovascular diseases are $89,000 in aggregate — 79% of the total increase [in health care productivity from 1999 to 2012].
There has been very little progress over that same period in treating mental illness, arthritis, and musculoskeletal conditions. How about this:
Despite a vast increase in the number of people treated with drugs for mental illness, the population’s mental health showed essentially no change over time.
Overall medical care was increasing in productivity over that period by about 0.7% a year, still Great Stagnation territory, as they say.
That is all from a new paper by David M. Cutler, Kaushik Ghosh, Kassandra Messer, Trivellore Raghunathan, Allison B. Rosen, and Susan T. Stewart.
That is the topic of my latest Bloomberg column, here is one excerpt:
Now consider another of my favorite pastimes, watching professional basketball. I have been following the NBA bubble with great interest. The Miami Heat are now favored to reach the NBA Finals, even though they were only the fifth-ranked team in the East at the end of the regular season. What happened? They have played with grit and determination, and their entire active roster showed up in first-rate physical shape. That’s not easy to do after a five-month layoff, as it required tremendous discipline.
In contrast, the Los Angeles Clippers were among the favorites to win the NBA title. They were recently eliminated by the Denver Nuggets, a very good team but not previously a top contender. In the final quarter of the last game of the series, the victorious Nuggets played with energy and verve, while the Clippers seemed to be gasping for air. After their defeat, some of the Clippers admitted that inferior conditioning was part of their problem.
So “staying in shape during a five-month layoff” is now a critical skill for a basketball player. But this doesn’t necessarily mean the Clippers need to revamp their roster. Maybe they should just wait for a return to normal times.
Might these changes in quality affect your choices beyond work — such as your decisions about friends, family relations, romance, and much more? Should you buy a dog, knowing you probably won’t be homebound two years from now? How about dating? On a first date, presumably, looks should matter less and social carefulness more. But again, for how long? It would be very strange, and probably unwise, to form a lasting relationship based on how well your romantic interest wears a mask.
Sadly the world has entered a new paralysis, most of all because no one knows when things will return to normal, or what might become normal, or what might remain strange. When this pandemic ends, one thing we can all look forward to is making better plans.
Recommended, at least until the pandemic is over.