
Quadratic Voting in the Field

The Democratic caucus in the Colorado state legislature wanted to get its members’ feedback on the bills most important to them. That’s hard to do because each member has an incentive to claim that their pet bill is by far the most important one. Thus, Chris Hansen, the chair of the House Appropriations Committee, who also happens to have a PhD in economics, decided to use a modified form of quadratic voting. Each voter was given 100 tokens, and the price of x votes for a policy was x^2, so you could buy 10 votes on your favorite policy for 100 tokens, but you could instead buy 5 votes on each of your four favorite policies (5^2+5^2+5^2+5^2=100).
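A quick check of the arithmetic, as a minimal Python sketch (illustrative only, not the caucus’s actual tool):

```python
# Quadratic pricing rule described above: casting v votes on a single bill
# costs v**2 tokens, and an allocation is feasible if its total cost fits
# within the 100-token budget.

BUDGET = 100

def cost(allocation):
    """Total token cost of a dict mapping bill -> number of votes."""
    return sum(v ** 2 for v in allocation.values())

def is_affordable(allocation, budget=BUDGET):
    return cost(allocation) <= budget

print(cost({"pet bill": 10}))                    # 100: ten votes exhaust the budget
print(cost({"A": 5, "B": 5, "C": 5, "D": 5}))    # 100: five votes on each of four bills
print(is_affordable({"A": 9, "B": 3}))           # True: 81 + 9 = 90 tokens
```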

Wired: So in mid-April, the representatives voted. Sure, each one could have put ten tokens on their pet project. But consider the alternatives: nine votes on one bill (cost: 81 tokens), then three votes on another (cost: nine tokens). Or five votes each (25 tokens) on four different bills!

In Colorado at least, it worked, kind of. “There was a pretty clear signal on which items, which bills, were the most important for the caucus to fund,” Hansen says. The winner was Senate Bill 85, the Equal Pay for Equal Work Act, with 60 votes. “And then there’s kind of a long tail,” Hansen says. “The difference was much more clear with quadratic voting.”

Square Dancing Bees and Quadratic Voting

It’s well known that bees dance to convey where useful resources are located, but how do bees convey the quality of a resource, and what makes this information credible? Rory Sutherland and Glen Weyl argue that the bees have hit upon a key idea, quadratic dancing, or, as I like to put it, square dancing.

Seeley’s research shows that the time they spend on dances grows not linearly but quadratically in proportion to the attractiveness of the site they encountered. Twice as good a site leads to four times as much wiggling, three times as good a site leads to nine times as lengthy a dance, and so forth.

Quadratic dancing has some useful properties which can be duplicated in humans with quadratic voting.

Under Quadratic Voting (QV), by contrast, individuals have a vote budget that they can spread around different issues that matter to them in proportion to the value those issues hold for them. And just as with Seeley’s bees, it becomes increasingly costly proportionately to acquire the next unit of influence on one issue. This approach highlights not only frequency of preferences but also intensity of preferences, by forcing individuals to decide how they will divide their influence across issues, while penalising the single-issue fanatic’s fussiness of putting all one’s weight on a single issue. It encourages individuals to distribute their points in precise proportion to how much each issue matters to them.

They offer a useful application:

Consider a firm that wants to learn whether customers care about particular product attributes: colour, quality, price, and so on. Rather than simply ask people what they care about — which leads to notoriously inaccurate results, often where people affect strong views just to maximise their individual influence — a business, or a public service, could supply customers with budgets of credits which they then used to vote, in quadratic fashion, for the attributes they want. This forces the group of respondents, like the swarm of bees, to allocate more resources to the options they care most about.
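To make the aggregation concrete, here is a minimal sketch (the attribute names, budgets, and numbers are invented for illustration): each respondent spends a budget of credits across attributes, credits buy votes at a quadratic price, and the firm sums the implied votes per attribute.

```python
import math

# Hypothetical QV survey of product attributes (names and numbers invented).
# Spending c credits on an attribute buys sqrt(c) votes, so influence on any
# one attribute gets more expensive the more of it a respondent wants.

responses = [
    {"colour": 4, "quality": 64, "price": 32},    # cares most about quality
    {"colour": 49, "quality": 25, "price": 25},   # cares most about colour
    {"colour": 1, "quality": 36, "price": 49},    # cares most about price
]

def tally(responses):
    totals = {}
    for response in responses:
        for attribute, credits in response.items():
            totals[attribute] = totals.get(attribute, 0.0) + math.sqrt(credits)
    return totals

for attribute, votes in sorted(tally(responses).items(), key=lambda kv: -kv[1]):
    print(f"{attribute}: {votes:.1f} votes")
```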

Weyl’s paper with Eric Posner is a good introduction to quadratic voting and here are previous MR posts on quadratic voting.

My thoughts on quadratic voting and politics as education

That is the new paper by Lalley and Weyl.  Here is the abstract:

While the one-person-one-vote rule often leads to the tyranny of the majority, alternatives proposed by economists have been complex and fragile. By contrast, we argue that a simple mechanism, Quadratic Voting (QV), is robustly very efficient. Voters making a binary decision purchase votes from a clearinghouse paying the square of the number of votes purchased. If individuals take the chance of a marginal vote being pivotal as given, like a market price, QV is the unique pricing rule that is always efficient. In an independent private values environment, any type-symmetric Bayes-Nash equilibrium converges towards this efficient limiting outcome as the population grows large, with inefficiency decaying as 1/N. We use approximate calculations, which match our theorems in this case, to illustrate the robustness of QV, in contrast to existing mechanisms. We discuss applications in both (near-term) commercial and (long-term) social contexts.

Eric Posner has a good summary.  I would put it this way.  Simple vote trading won’t work, because buying a single vote is too cheap and thus a liquid buyer could accumulate too much political power.  No single vote seller internalizes the threshold effect which arises when a vote buyer approaches the purchase of an operative majority.  Paying the square of the number of votes purchased internalizes this externality by an externally imposed pricing rule, as is demonstrated by the authors.  This is a new idea, which is rare in economic theory, so it should be saluted as such, especially since it is accompanied by outstanding execution.
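A back-of-the-envelope way to see the efficiency claim (my stylized restatement, not the paper’s formal argument): a voter who takes the pivot probability of a marginal vote as given buys votes until marginal cost equals marginal benefit, which makes vote purchases proportional to the intensity of preference, so the vote total approximates a sum of utilities.

```latex
% Stylized first-order condition for a QV voter (illustration, not the paper's proof).
% A voter valuing the outcome at u, taking the pivot probability p as given,
% buys v votes at total cost v^2:
\max_{v}\; p\,u\,v - v^{2}
\quad\Longrightarrow\quad
p\,u = 2v
\quad\Longrightarrow\quad
v = \tfrac{1}{2}\,p\,u .
% Votes are proportional to u, so (up to the common factor p/2) the net vote total
% aggregates utilities, and the side with more votes is the utilitarian-efficient one.
```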

The authors give gay marriage as an example where a minority group with more intense preferences — to allow it — could buy up the votes to make it happen, paying quadratic prices along the way.

My reservation about this and other voting schemes (such as demand revelation mechanisms) is that our notions of formal efficiency are too narrow to make good judgments about political processes through social choice theory.  The actual goal is not to take current preferences and translate them into the right outcomes in some Coasean or Arrovian sense.  Rather the goal is to encourage better and more reasonable preferences and also to shape a durable consensus for future belief in the polity.

(It is interesting to read the authors’ criticisms of Vickrey-Clarke-Groves mechanisms on p.30, which are real but which I do not think represent the most significant problem with those mechanisms, namely that they perform poorly at generating enough social consensus for broadly democratic outcomes to proceed and become accepted by most citizens.  One neat but also repugnant feature of democratic elections is how they can serve as forums for deciding, through the readily grasped medium of one persona vs. another, which social values will be elevated and which lowered.  “Who won?” and “why did he win?” have to be fairly simple for this to be accomplished.)

I would gladly have gay marriage legal throughout the United States.  But overall, like David Hume, I am more fearful of the intense preferences of minorities than not.  I do not wish to encourage such preferences, all things considered.  If minority groups know they have the possibility of buying up votes as a path to power, paying the quadratic price along the way, we are sending intense preference groups a message that they have a new way forward.  In the longer run I fear that will fray democracy by strengthening the hand of such groups, and boosting their recruiting and fundraising.  Was there any chance the authors would use the anti-abortion movement as their opening example?

If we look at the highly successful democracies of the Nordic countries, I see subtle social mechanisms which discourage extremism and encourage conformity.  The United States has more extremism, and more intense minority preferences, and arguably that makes us more innovative more generally and may even make us more innovative politically in a good way.  (Consider, say, environmentalism or the earlier and more correct versions of supply-side economics, both innovations with small starts.)  But extremism makes us more innovative in bad ways too, and I would not wish to inject more American nutty extremism into Nordic politics.  Perhaps the resulting innovativeness is worthwhile only in a small number of fairly large countries which can introduce new ideas using increasing returns to scale?

By elevating persuasion over trading in politics (at some margins, at least), we encourage centrist and majoritarian groups.  We encourage groups which think they can persuade others to accept their points of view.  This may not work well in every society but it does seem to work well in many.  It may require some sense of persuadability, rather than all voting being based on ethnic politics, as it would have been in, say, a democratic Singapore in the early years of that country.

In any case the relevant question is what kinds of preference formation, and which kinds of groups, we should allow voting mechanisms to encourage.  Think of it as “politics as education.”  When it comes to that question, I don’t yet know if quadratic voting is a good idea, but I don’t see any particular reason why it should be.

Addendum: On Twitter Glen Weyl cites this paper, with Posner, which discusses some of these issues further.

What is the meta-rational thing to do here?

…the LessWrong community…just released our first book set, “A Map that Reflects the Territory: Essays from the LessWrong Community”. It’s a collection of essays from 2018 by Scott Alexander, Eliezer Yudkowsky, and over twenty other writers on LessWrong. It’s a 5-book set, and we actually used quadratic voting to determine what went into the books and what didn’t.

We’re now offering the books on pre-order for $29. It turns out the demand is much higher than I expected; we only planned to print 500 sets, but we already sold that many in the first 48 hours.

That is from my email…

Glen Weyl update and interview

Here is one excerpt:

I’ve moved on from being a researcher. I’m an advisor to Microsoft’s senior leaders about geopolitics and macroeconomics. So, my whole outlook has changed quite a bit as a result of that.

And:

In Taiwan, we’ve come to work extremely closely with Audrey Tang, their digital minister who’s just a remarkable person and, honestly, a much more interesting subject than me. She has been using quadratic voting for administering national hackathons—where people get together and try to create technological solutions to social problems.

Audrey has used quadratic voting to score those competitions and she’s also used another idea that we’re very into, called ‘data coalitions’ or ‘data cooperatives’—they’re sort of data labour unions—to organize those services. Taiwan’s response to Covid was, to a large extent, driven by these civic technology developments and they were the most successful country in the world. They had the lowest infection and death rate and the smallest impact on their economy. A lot of that was related to their harnessing of these civic technology approaches.

Here is the Five Books link, interesting throughout.

Tuesday assorted links

1. Vitalik on quadratic voting.

2. The erupting NZ volcano is privately owned.

3. New Keller Scholl and Robin Hanson paper on whether there was an automation revolution.

4. Marriage Story is an excellent film on many levels, including but not only L.A. vs. NYC; furthermore, it offers running commentary on Bergman’s Scenes from a Marriage (my favorite movie ever?) and the Bergman/Ullmann story itself.

5. Was there a consistent Axial Age? (no)

6. Secret ballot for me but not for thee?

*Radical Markets*

The authors are Eric A. Posner and E. Glen Weyl, and the subtitle is Uprooting Capitalism and Democracy for a Just Society.

“Suppose the entire city of Rio is perpetually up for auction.”  To be clear, I don’t agree with these proposals.  But if you want a book that is smart, clearly written, dedicated to Bill Vickrey, and sees its premises through to their logical conclusions, I am happy to recommend this one.  Think of it as a bunch of social choice and incentive mechanisms, based on market-like ideas, though not markets in the sense of a traditional medieval fair.

The authors call for perpetually open auctions, quadratic voting, a kind of apprenticeship system for the private sponsorship of immigrants, a ban on mutual fund diversification within sectors (to preserve competition by limiting joint ownership), and creating more explicit markets in personal data.  If nothing else, it will force you to clarify what you actually like about markets, or don’t, and what you actually like about economics, or don’t.

Most of all, I differ from the authors in seeing a larger gap between models and the real world than they do, and thinking we need a greater variety of kinds of evidence before making very radical changes.  But at the very least, it is worth thinking through why we do not handle life as a second price auction.

Should we move to self-assessed property taxation?

Eric Posner and Glen Weyl recommend a version of this idea in their recent paper “Property is Only Another Name for Monopoly.”

The core proposal is that you announce how much each piece of your property is worth, and you are then taxed as a percentage of that value (say 2.5%).  At the same time, you have to sell your property for that same value if someone bids for it, thereby lowering or eliminating the incentive to under-report true values.  If you think this through, you can see it minimizes holdout problems.
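To see how the forced-sale rule bears down on holdout pricing, here is a toy calculation of my own (the use value, the uniform bid distribution, and the tax rates are assumptions, not the authors’ calibration): an owner declares a price, pays the tax on it, and must sell whenever a bid exceeds it; as the tax rate rises, the payoff-maximizing declaration falls from the monopoly-style holdout price toward the owner’s true use value.

```python
import numpy as np

# Toy model of the self-assessment scheme (stylized illustration, invented numbers).
# An owner with private use value W declares a price v, pays tax tau * v, and
# must sell at v whenever a buyer's bid exceeds v.  Bids ~ uniform on [0, 100].

W = 60.0                                    # owner's true use value (assumed)
rng = np.random.default_rng(0)
bids = rng.uniform(0, 100, size=500_000)

def expected_payoff(v, tau):
    sold = bids > v
    # If forced to sell, the owner receives v; otherwise she keeps the use value W.
    return np.where(sold, v, W).mean() - tau * v

grid = np.linspace(0, 100, 401)
for tau in (0.0, 0.025, 0.10, 0.40):
    best = max(grid, key=lambda v: expected_payoff(v, tau))
    print(f"tax rate {tau:.3f}: payoff-maximizing declaration ~ {best:.1f}")

# With no tax the owner posts a holdout price well above W; raising the tax
# pushes the declared price down toward the true use value.
```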

I think of the proposal as trying to force “willingness to be paid” people to live at “willingness to pay” valuations.  Microfoundations as to why WTBP and WTP so diverge would be useful!

In the meantime, my main worry concerns complementarity.  Say I own eighty pieces of property, and together they constitute a life plan.  The value of any one piece of property depends on the others.  For instance, if I lived in a more distant house, the car would be of higher value.  The ping pong table would be worth less in Minnesota, and having a good slow cooker enhances the refrigerator.  Don’t get me started on the CDs, but of course they boost the value of the stereo system and for that matter all the books.  I’ll leave aside purely “replaceable” commodities that can be replenished at will, and with no loss of value, through a click on Amazon (Posner and Weyl in any case think those replaceables should be taxed at much lower rates).

So how do I announce the value of any single piece of that property, knowing I might have to end up selling its complements?

In essence, I have to calculate how much the rest of the economy values each piece of my property, for me to know how much any single piece is worth.  That recreates a version of the socialist calculation problem, not for the planner, but for every single taxpayer.  And you can’t rely on the status quo ex ante as a readily available default, because that status quo can be purchased away from you.

The authors do consider related issues on pp.76-78 and 89-90.  For instance, they allow individuals to announce valuations for entire bundles when complementarity is strong.  You choose the bundle: “My house and all its items for three million tokens.”

But your human capital and your personal plans are non-marketable, non-transferable assets that can’t be put in this bundle.  So the incentive is to assemble highly idiosyncratic assets that no one else can quite fit together, and so no one else will wish to buy from you, and then you can announce a low valuation.

If that strategy works, the tax system doesn’t yield enough revenue and furthermore you’ve had to distort your consumption patterns.  If that strategy doesn’t work, someone might buy your life’s belongings/plans from you anyway, leaving you without your beloved customized snowmobile, your assiduously assembled music collection, and what about all those shoes you thought fit only you?

Ex ante, individuals are forced to assume huge, non-diversifiable risk, namely that someone will snatch away their whole “commodity life” from them.  So many of us, even if we could bear the asset loss, just don’t have the time to rebuild that formerly perfect mesh of plans and possessions, the one that took decades to create (think about risk-aversion in terms of time).  Furthermore, what if a wealthy villain or personal enemy wished to threaten to denude you in this manner?  Or what if you simply make a big mistake reporting the value of your bundle?  Isn’t this much much harder than just doing your income taxes?

To protect against these risks, ex ante, people will value their wealth bundles at quite high levels, and the result will be that wealth taxation will be too high.  Since I don’t favor most forms of wealth taxation in the first place, why push for a method that also will tax people on the risk of losing most of their carefully assembled personal wealth and plans?  Is “planning plus complementarity” really something we wish to tax so hard?

Don’t forget that the “planning plus complementarity” process as a whole tends to elevate the value of assets, not reduce it.  Posner and Weyl boast that their scheme lowers the value of assets (p.88: “Under our system, the prices of assets would be only a quarter to a half of their current level.”).  Lower asset values may boost turnover, but is it not prima facie evidence that the value of aggregate wealth has gone down?  (I am not convinced, by the way, that once lower rates of income taxation are taken into account, asset prices would in fact be lower in their system.)  Why is that good?

So I wish to announce a high valuation for keeping the current system in lieu of this reform.  My personal plans depend on it.

Addendum: I consider several of Glen’s ideas too much along the lines of what Hayek labeled “rationalist constructivism.”  Here is my earlier post on quadratic voting.

Second addendum: You might instead prefer this method for only a limited set of issues, such as eminent domain.  But then you have to end up taxing wealth values, if only for credibility and future reporting incentives, even when efficiency may dictate simply transferring the resources with compensation.  There just aren’t that many situations where a wealth tax is what you optimally should be seeking to do.  And keep in mind, so often the real preference revelation problem is not for the homeowners, but whether the government really needs your asset or wealth!  Or maybe they are just taking it because they can.

Assorted links

1. The Strange Reason Murakami Fans Gather At A Hokkaido Sheep Farm Every Year.  And the campaign to create koala mittens.

2. North Korean defectors review The Interview.

3. Children and play in the Holocaust.

4. Japan’s birth rate problem is worse than thought.

5. Robin Hanson on vote trading and quadratic voting.

6. The best films of the decade so far?

7. A new breed of economist is helping firms crack markets.

8. More on Texas vs. California.

Monday assorted links

1. “I became a statistician because I was put in prison.”

2. Profile of Isabel Sawhill on marriage.

3. Drones vs. mosquitos.

4. A Rust Belt theory of low cost high culture.

5. Reddit thread on which American customs seem outrageous or pointless.

6. How to politely (?) end a conversation, the science thereof.

7. Malcolm Gladwell reviews America’s Bitter Pill, on ACA.

8. Eric Posner responds on quadratic voting.

9. Jeff Sachs on Krugman and austerity.

An email from Glen Weyl

I won’t do an extra indent, but this is all Glen, noting I added a link to the post of mine he referred to:

“Tyler, I hope all is well for you.  I am writing to try to somewhat more coherently respond to our various exchanges, partly at the encouragement of Mark Lutter, whom I copy.  As I understand it (but please correct me if I am wrong), you have two specific objections to the COST and QV and one general objection to the project of the book.  If I have missed other things, please point me to them.  Let me very briefly respond to these points:

  • On the COST: you wrote (I cannot actually find the post at this point…not sure where it went) that human capital investments that are complementary with assets may be discouraged or otherwise prejudiced by the COST.  However, as we explain in the paper with Anthony (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2744810), it has been known since at least Rogerson’s paper (https://academic.oup.com/restud/article-abstract/59/4/777/1542650) that VCG leads to first-best investment, including in human capital, as long as those investments are privately valued; we show that this property extends to the COST.  You seem to focus on examples in your post where those human capital investments are privately valued.  I thus do not see an economic efficiency objection to the COST on these grounds.
  • On QV: you write (https://marginalrevolution.com/marginalrevolution/2015/01/my-thoughts-on-quadratic-voting-and-politics-as-education.html) that democracy is about far more than decision-making; it is about what people learn and are induced to learn through the democratic process.  This is a deep and critical point and central to what Sen has called the “constitutive” role of democracy.  And this objection has been lodged not just by you but, for example, by Danielle Allen.  It is one I greatly respect and have struggled with.  A fundamental problem here is that no one to my knowledge has managed to model this information acquisition process in a formal model in a way that allows comparison across systems; I have tried, but things get very messy very quickly.  Nonetheless, I have not been able to understand the informal arguments that suggest this would be systematically worse under QV and there are several suggestive arguments that it would be better.  For example, under QV people have the ability to specialize in certain areas of issue or candidate expertise, which in turn should allow for deeper education and for advertising campaigns targeted at those who actually know and care about an issue, rather than those who are hardly paying attention.  Perhaps some would argue that this is a bug, not a feature, because we want every citizen to be informed about every issue; but this seems to be as implausible as suggesting that the division of labor degrades our ability to perform a variety of household tasks.  For more on these, Eric and I have written several articles that discuss: https://www.vanderbiltlawreview.org/2015/03/voting-squared-quadratic-voting-in-democratic-politics/ [and] https://www.springerprofessional.de/en/public-choice-1-2-2017/12454020.
  • There is a broader Burkean argument that you seem to be making, namely that these institutions are extremely different than those we have historically used and may well have very bad, unintended consequences.  Here, I don’t think we disagree, but I think nonetheless there are at least two reasons I don’t see this as greatly diminishing the value of the ideas.  First, all novel improvements, whether to technology or social institutions, must confront this objection.  And they should confront it, I think, by experimenting at small scales and gradually scaling up and/or course-correcting as we learn about them.  The questions are then a) does the innovation have enough promise to be worth experimenting with, b) is it so risky to experiment with at small scales even that this vitiates a) and c) does it seem like these experiments will teach us something about broader scale applicability.  It seems to me that designs that so clearly address failures that economic theory we both accept says are very large, that are not just worked out in a narrow model but that we have studied from a range of not just economic but sociological and philosophical perspectives and which have caught the imagination of a broad set of entrepreneurs, activists and artists who are really interested in such experiments satisfies a).  I don’t see any objections you or others have raised as raising significant concerns on b).  And on c), it seems to me we will learn quite a lot from even relatively modest experiments (and already have) about the objections I hear most frequently (such as those related to collusion for QV or instability for the COST) that at very least will allow us to incrementally improve and scale a bit larger. So, it seems to me, the strong interest in experimenting with these ideas should be encouraged.

Finally, it seems to me that even if you remain convinced that there are unsurmountable practical difficulties with these mechanisms, that they play an important role in illustrating some pretty sharp divergences between what basic allocative efficiency calls for (and what the marginal revolutionaries like Jevons and Walras were quite explicit about their theories implying) and the outcomes we would expect to arise in the classic libertarian world.  I think the liberal radicalism mechanism makes this sharpest.  This seems instructive even if there is no way to remedy the limitations in these mechanisms because it suggests that the ideal toward which we should be steering societies using mechanisms that are not so dangerous are quite different than the ideal envisioned in standard libertarian theory.  For example, the ideal would seem to involve a much greater role for a range of collective organizations at different community scales with some ability to receive tax-based support than standard libertarian theory would allow.

I am interested in your thoughts on these matters and continuing the exchange.  Sorry for the length of this email, but I felt that I owed you a single, coherent and fairly detailed response.”

TC again: For further detail, I refer you to Glen’s book with Eric Posner.  For background, here are my earlier posts on their work.

What Is “Price Theory”? (A Guest Post by Glen Weyl)

When I was last living in Chicago, in the spring of 2014, a regular visitor to the department of the University of Chicago and the editor of the Journal of Economic Literature, Steven Durlauf, asked me if I would be interested in writing something for the journal. For many years I had promised Gary Becker that I would write something to help clarify the meaning and role of price theory to my generation of economists, especially those with limited exposure to the Chicago environment, which did so much to shape my approach to economics. With Gary’s passing later that spring, I decided to use this opportunity to follow through on that promise. More than a year later, I have posted the result on SSRN.

I have an unusual relationship to “price theory”. As far as I know I am the only economist under 40, with the possible exception of my students, who openly identifies myself as focusing my research on price theory. As a result I am constantly asked what the phrase means. Usually colleagues will follow up with their own proposed definitions. My wife even remembers finding me at our wedding reception in a heated debate not about the meaning of marriage, but of price theory.

The most common definition, which emphasizes the connection to Chicago and to models of price-taking in partial equilibrium, doesn’t describe the work of the many prominent economists today who are closely identified with price theory but who are not at Chicago and study a range of different models. It also falls short of describing work by those like Paul Samuelson who were thought of as working on price theory in their time even by rivals like Milton Friedman. Worst of all it consigns price theory to a particular historical period in economic thought and place, making it less relevant to the future of economics.

I therefore have spent many years searching for a definition that I believe works and in the process have drawn on many sources, especially many conversations with Gary Becker and Kevin Murphy on the topic as well as the philosophy of physics and the methodological ideas of Raj Chetty, Peter Diamond and Jim Heckman among others. This process eventually brought me to my own definition of price theory as analysis that reduces rich (e.g. high-dimensional heterogeneity, many individuals) and often incompletely specified models into ‘prices’ sufficient to characterize approximate solutions to simple (e.g. one-dimensional policy) allocative problems. This approach contrasts both with work that tries to completely solve simple models (e.g. game theory) and empirical work that takes measurement of facts as prior to theory. Unlike other definitions, I argue that mine does a good job connecting the use of price theory across a range of fields of microeconomics from international trade to market design, being consistent across history and suggesting productive directions for future research on the topic.

To illustrate my definition I highlight four distinctive characteristics of price theory that follow from this basic philosophy. First, diagrams in price theory are usually used to illustrate simple solutions to rich models, such as the supply and demand diagram, rather than primitives such as indifference curves or statistical relationships. Second, problem sets in price theory tend to ask students to address some allocative or policy question in a loosely-defined model (does the minimum wage always raise employment under monopsony?), rather than solving out completely a simple model or investigating data. Third, measurement in price theory focuses on simple statistics sufficient to answer allocative questions of interest rather than estimating a complete structural model or building inductively from data. Raj Chetty has described these metrics, often prices or elasticities of some sort, as “sufficient statistics”. Finally, price theory tends to have close connections to thermodynamics and sociology, fields that seek simple summaries of complex systems, rather than more deductive (mathematics), individual-focused (psychology) or inductive (clinical epidemiology and history) fields.

I trace the history of price theory from the early nineteenth century to the late twentieth, when price theory became segregated at Chicago, set against the dominant currents in the rest of the profession. For a quarter century following 1980, most of the profession either focused on more complete and fully-solved models (game theory, general equilibrium theory, mechanism design, etc.) or on causal identification. Price theory therefore survived almost exclusively at Chicago, which prided itself on its distinctive approach, even as the rest of the profession migrated away from it.

This situation could not last, however, because price theory is powerfully complementary with the other traditions. One example is work on optimal redistributive taxation. During the 1980’s and 1990’s large empirical literatures developed on the efficiency losses created by income taxation (the elasticity of labor supply) and on wage inequality. At the same time a rich theory literature developed on very simple models of optimal redistributive income taxation. Yet these two literatures were largely disconnected until the work of Emmanuel Saez and other price theorists showed how measurements by empiricists were closely related to the sufficient statistics that characterize some basic properties of optimal income taxation, such as the best linear income tax or the optimal tax rate on top earners.
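To give one concrete example of the kind of sufficient statistic involved (my illustration; the specific formula comes from the Saez-style optimal-tax literature rather than from this essay): the revenue-maximizing marginal tax rate on top earners can be written in terms of just two estimable quantities.

```latex
% Revenue-maximizing top marginal tax rate in the Saez sufficient-statistics tradition
% (included as an illustration of the "sufficient statistics" point).
\tau^{*} = \frac{1}{1 + a\,e}
% e: elasticity of taxable income with respect to the net-of-tax rate at the top;
% a: Pareto parameter of the upper tail of the income distribution.
% Both can be estimated without solving a full structural model.
```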

Yet this was not the end of the story; these price theoretic summaries stimulated empiricists to measure quantities (such as top income inequality and the elasticity of taxable income) more closely connected to the theory, and theorists to propose new mechanisms through which taxes impact efficiency that are not summarized correctly by these formulas. This has created a rich and highly productive dialog between price theoretic summaries, empirical measurement of these summaries, and more simplistic models that suggest new mechanisms left out of these summaries.

A similar process has occurred in many other fields of microeconomics in the last decade, through the work of, among others, five of the last seven winners of the John Bates Clark medal. Liran Einav and Amy Finkelstein have led this process for the economics of asymmetric information and insurance markets; Raj Chetty for behavioral economics and optimal social insurance; Matt Gentzkow for strategic communication; Costas Arkolakis, Arnaud Costinot and Andrés Rodriguez-Clare in international trade; and Jeremy Bulow and Jon Levin for auction and market design. This important work has shown what a central and complementary tool price theory is in tying together work throughout microeconomics.

Yet the formal tools underlying these price theoretic approximations and summaries have been much less fully developed than have been analytic tools in other areas of economics. When does adding up “consumer surplus” across individuals lead to accurate measurements of social welfare? How much error is created by assumptions of price-taking in the new contexts, like college admissions or voting, to which they are being applied? I highlight some exciting areas for further development of such approximation tools complementary to the burgeoning price theory literature.

Given the broad sweep of this piece, it will likely touch on the interests of many readers of this blog, especially those with a Chicago connection. Your comments are therefore very welcome. If you have any, please email me at [email protected].