solve for equilibrium

What does equilibrium look like for the book business?

Adam Davidson offers some interesting remarks.  My take is a little more radical.  I expect two or three major publishers, with stacked names (“Penguin Random House”), and they will be owned by Google, Apple, Amazon, and possibly Facebook, or their successors, which perhaps would make it “Apple Penguin Random House.”  Those companies have lots of cash, amazing marketing penetration, potential synergies with marketing content they own, and very strong desires to remain focal in the eyes of their customer base.  They could buy up a major publisher without running solvency risk.  For instance, Amazon’s revenues are about twelve times those of a merged Penguin Random House, and arguably that gap will grow.

There is no hurry, as the tech companies are waiting to buy the content companies, including the booksellers, on the cheap.  Furthermore, the acquirers don’t see it as their mission to make the previous business models of those content companies work.  They will wait.

Did I mention that the tech companies will own some on-line education too?  EduTexts embedded in iPads will be a bigger deal than they are today, and other forms of on-line or App-based content will be given away for free, or cheaply, to sell texts and learning materials through electronic delivery.

Much of the book market will be a loss leader to support the focality of massively profitable web portals and EduTexts and related offerings.

There is this funny thing called antitrust law, but I think these companies are popular enough, and associated closely enough with cool products — and sometimes cheap products — to get away with this.

The equilibrium (with apologies to Daniel Klein)

On September 5, the first Sleeping Beauty in Polataiko’s exhibition awoke to a kiss from another woman. Both of them were surprised. Polataiko shot photos of them laughing and looking at each other. Then he posted the images to his Facebook profile, where he has been live-blogging the entire event. Now the Sleeping Beauty must wed her “prince,” thus queering the historically heteronormative fairy tale. Gay marriage is not allowed in Ukraine, however, so these two women will have to wed in a European country that does allow for same-sex marriage.

Here is more.  I believe that none of you had solved for this equilibrium.  For the pointer I thank Eapen.

The political economy of Kansas fiscal policy (from the comments)

MR commentator Patrick L. has a go at it:

OK I’ll bite.

In nominal terms, between 2002 and 2012 state receipts grew 50%. Inflation in this period was 28%, and probably significantly lower for Kansas, while population growth has only been about 10% since 2000. Even the “low” 2014 receipts are $1.5 billion more in revenue than when Sebelius first took office and the government started rapidly growing. In the past 15 years expenditures have grown over 50%, exceeding $6 billion today. The shortfall is $300 million, or about 5%. While the growth of the Kansas government in the past 15 years is smaller than that of other governments in the country, it still explains the shortfall. We can justify this increase by saying that education and health costs are rising faster than everything else, but that is not a revenue problem. Tax rates have to rise because education and health costs are growing faster than our economies. That says nothing at all about the optimum size of taxation for state governments with regard to growth, jobs, or even revenue. The tax and spending levels Brownback chose would have been adequate ten, maybe even five years ago. With a bit of luck, he could have ignored the shortfall because of variance, which for receipts can be a few hundred million a year.
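That arithmetic is easy to check. Here is a quick back-of-the-envelope sketch in Python (the 50%, 28%, and 10% growth figures and the $6 billion / $300 million levels are the ones quoted above; everything else is simple arithmetic):

```python
# Back-of-the-envelope check of the Kansas figures quoted above.
nominal_growth = 0.50   # growth in state receipts, 2002-2012
inflation = 0.28        # price-level growth over the same period
pop_growth = 0.10       # population growth since ~2000

# Real, per-capita growth in receipts over the decade.
real_per_capita = (1 + nominal_growth) / ((1 + inflation) * (1 + pop_growth)) - 1
print(f"Real per-capita receipts growth: {real_per_capita:.1%}")   # about 6.5%

# The shortfall as a share of ~$6 billion in spending.
print(f"Shortfall share of spending: {300e6 / 6e9:.0%}")           # about 5%
```

So receipts did outpace inflation and population, but only modestly, which is consistent with the claim that the shortfall is small relative to the budget.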

Republicans should be wise enough not to depend on luck, and they should be wiser in predicting how trend lines go. Cutting the size of government was never a serious option.

I haven’t looked at the votes in depth, but it looks like a classic case of the urban/rural split that typically troubles the state’s politics. Just under half the state’s population lives around Kansas City or Wichita, which are both five times the size of the next largest city. These places have as many votes as the rest of Kansas combined, but their needs are radically different.

Rural Kansas has two unique problems. First, there’s the problem of population collapse, which all farm states are seeing. What few children are born move out when they come of age, and new people are not moving in. Fixed costs like “We need at least one school building” or “We need at least one teacher per grade” start to add up for small towns of 1000 or less. Those are the obvious problems, not to mention any number of federal or state concerns dealing with food, medical, or disability services that have to be met. As a matter of geography, 98% of the state is rural, and I think I heard that 25% of the state lives in towns of less than 2500 – with over 400 municipal governments servicing less than 1000 people each, it’s probably the highest per capita in the country (This is FIVE times the national average).

This is a non-trivial and growing problem related to scaling government services, one that has been an issue of intense legal debate in the state. The Wichita School District’s scale is such that it can use its buses to deliver free or low-cost lunches to children in the summer. Small cities don’t have buses. Is that fair? How should taxes be structured to compensate? The only politically viable solution to this problem has been to spend more money. If all the small towns could magically consolidate into a super smallville, taxes would (back of the envelope) be 10-15% lower.

Government services to low population areas are subsidized by high population areas, and it costs much more to deliver the same services to small towns. The US Postal Service paid for delivery to small towns across the country by charging monopolistic prices on first-class letter mail in cities (which costs almost nothing to deliver). NPR’s national budget mostly goes to setting up stations in small towns. The small towns in Kansas are both relatively and in many cases actually getting smaller, older, and poorer. They are costing more and delivering less.

The other problem is that some rural areas are *growing*, but they’re growing because of immigration attracted to the agriculture and food packaging industries, which is not the same as growing from a resource boom that can be taxed heavily to compensate. Liberal, KS is the largest per capita immigrant community in the United States. While this influx of people is necessary for the health of these places, the new population makes more expensive demands on government services and pays less in taxes. Some of these small towns are the same ones that a decade ago were collapsing. Services and infrastructure might have been allowed to lapse or been removed, and now rapidly need to be replaced. That’s expensive! In the long run this problem might replace the first problem, but for now it’s the worst of both worlds.

The economy of the small cities is based largely around food production, which mostly can’t move, and food packaging, which probably can’t for logistical reasons. These places are poorer, getting relatively poorer per capita, and demanding more in services both directly (immigration / aging) and through scale issues. Their populations are either getting very old or very Hispanic, or both.

In contrast, Kansas City is a stable metropolis whose economy depends on manufacturing and is built around a centralized national hub for trains. It also has some finance and telecom sprinkled in, though those guys can probably go anywhere. Wichita is a moderately growing city based around aircraft manufacturing. When state taxes can’t provide enough government services, local taxes for these areas easily rise to compensate. Their economic concerns are how to stop businesses from going across the border to Omaha, Oklahoma City, Tulsa, Springfield, or Kansas City, MO – places which are functionally identical and just as close. Given their dependence on manufacturing, they also have to consider movement across international borders to China and Mexico. Their demography is much closer to the national averages than to the extremes. They are large enough that they can take advantage of scaling for government services, without being so large that there are decreasing returns. I don’t have figures, but I’d guess incomes in the urban areas to be between 150 and 200% of those in the rural areas, which are themselves typically around 2/3rds the national average. This is an industry effect: a farmer in Kansas City or an aeronautical engineer in Greensburg, KS would not make much money. The cities are richer, but they’re richer because they have industries that are becoming increasingly easier to move.

On a political level, cities normally become more liberal and poorer as you go deeper into the core – a leftover of 19th-century industrialization competing against 20th-century transportation. Deep urban cores produce deep blue constituencies that act as checks on conservative suburban rings. In some states this manifests itself as a coalition between the poor rural areas and the poor urban areas against richer suburban areas, allowing normal American class politics to balance itself. Cities produce political equilibrium: the richer and denser a city becomes, the more liberal it gets, which pushes more money and voters to suburbia, diluting its power. In short, declining rural power (D) and rising urban power (D) offset each other, but rising urban power (D) enhances suburban power (R), and so at a state level you get a balance.

The problem is that the inner core of Kansas City is in Missouri, so Kansas only gets the rich (Republican) suburban ring and a tiny blue part. Typical Democratic concerns like maintaining a progressive tax structure can’t really find a foundation. Wichita also has an urban core that provides some Democratic representation, but the city isn’t constrained geographically by anything (no ocean, mountain, or lake, and transportation goes around, not through, the city), which means that concentration, an ingredient for populist politics, is lessened. The city spreads, and the poor can easily move up the class structure by moving further and further out. Wichita has half the population density of Syracuse and two thirds that of Madison, two similarly sized metropolitan areas. I haven’t done a county-level comparison, but I suspect that Sedgwick has half the density of the ‘average American county with half a million people’ in it. There are other places in America like that, but guess how they vote.

Nor is either city a big university city, like Madison or Boston. The two big universities in the state are in the small towns of Lawrence and Manhattan, which are quite separate from the rest of the state. Urban centers are places of “Commanding Heights” industries, like health and education, that can’t easily move, but Wichita and Kansas City are based around manufacturing.

The political outcomes are not that surprising at all. There is nothing ‘the matter with Kansas’. The power structure easily shifts between slim majorities formed from predominantly suburban populations, who are wealthier and whose jobs are most likely to move, and slim majorities formed from the small urban cores and rural parts of the state.

There’s no possible political coalition that you could form that would pass a constitutional amendment allowing a floating balanced budget over a 10-year period. Nor are the populist pressures strong enough to push against regressive taxation. You have ‘fiscal hawks’ in the rural areas who never vote for cuts, and suburban conservatives who never vote for taxes. When the storm gets too bad, they vote a nice moderate Democrat in to raise taxes and crack down hard on whatever (non-manufacturing, non-agricultural) big business they can put pressure on – obviously something that can’t move easily, like health insurance.

In summary, this really is an issue of urban vs. rural politics. Unlike in other cities, the kinds of industries around Kansas City and Wichita can move. The jobs in the rural areas can’t. The rural areas require more per capita government services, and the urban areas have more money. They both have half the vote. Solve for equilibrium.

As for the deal:

It’s mostly a 0.4% sales tax increase, which is less than some of the more fanciful projects done by local governments in the past 15 years, which have included sports arenas, loans to movie theaters, and waterfront improvement. A half-cent increase in sales tax does move the state into the top 10 for the country, but the overall tax burden is still quite low. The real problem is that city/county sales taxes are a function of distance from Wichita, and the inverse of population. The smaller your city, and the farther you are from Wichita, the more the county depends on sales taxes. In places like Junction City, this could put the sales tax close to 10%! The real disparity is going to be at the border towns: after the change there will be a 0.7% difference between KC, KS and KC, MO, though I bet the Missouri side will raise taxes to compensate. After the increase, there’s a 1.5% difference between Pittsburg, KS and Joplin, MO – big enough that I could see some people considering driving for purchases of more than $300 (biweekly grocery shopping for a large family?), especially if retailers on the Missouri side are not dumb. As a general rule, the money and the shopping are on the Kansas side of the border, so stuff isn’t going to transition immediately, but I expect some Laffer curve effects here for local governments, and I would hope they’ll respond by dropping taxes to compensate.
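The border arithmetic in that last point is easy to make concrete (a minimal sketch; the 0.7% and 1.5% gaps are the ones quoted above, and the $300 basket is the commenter’s grocery example):

```python
# Dollar savings from crossing the border, at the quoted post-increase gaps.
gaps = {"KC, KS vs KC, MO": 0.007, "Pittsburg, KS vs Joplin, MO": 0.015}
basket = 300  # dollars: the biweekly grocery run mentioned above

for border, gap in gaps.items():
    print(f"{border}: save ${basket * gap:.2f} on a ${basket} basket")
```

A few dollars per trip is a thin margin, consistent with the commenter’s point that the effect matters mostly for larger purchases or for retailers right at the border.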

This is probably WHY such a deal was able to pass. Most of the damage falls on the poor and rural parts of Kansas, which is where most of the balanced-budget hawks are. The rich living near Kansas City will have the easiest time dodging the increase and will avoid it more often. A regressive tax, but an efficient one.

As for the other parts of the deal, $90 million in itemized deductions are being removed. I don’t actually think this will amount to much, since there aren’t many itemized state deductions left. What remains are things like adoption, historical preservation, or disabled access. I don’t see much money coming in this way, and the state will almost certainly reverse itself the first chance it gets (As it did the last time it got rid of the adoption credit).

Whew!

Mongol

Matt Yglesias offers a good review of this excellent movie, which chronicles the early life of Genghis Khan, or one vision thereof.  There are at least two increasing returns to scale mechanisms in this movie.  First, leadership is focal, which tends to bind groups together and make concentrated rule possible.  Winning battles makes you focal and winning larger battles makes you focal across larger groups.  Second, if you walk or ride alone in the countryside, you will be snatched or plundered.  That causes people to live in settlements and also larger cities.  Put those mechanisms together, solve for equilibrium, and eventually one guy rules a very large kingdom and you get some semblance of free trade.  Sooner or later, that is.  The movie brings you only part of the way there and I believe a sequel is in the works.

Abhijit Banerjee reminiscences

Abhijit and I were in the same first year class at Harvard, and I have two especially strong memories of him from that time.

First, he was always willing to help out those who were not as advanced in the class work as he was.  Furthermore, that was literally everyone else.  He was very generous with his time.

Second, when it came to the first-year macro final (I don’t mean the comprehensive exams), Andy Abel wrote a problem with dynamic programming, which was Andy’s main research area at the time.  Abhijit showed that the supposed correct answer was in fact wrong, that the equilibrium upon testing was degenerate, and he re-solved the problem correctly, finding (if I recall correctly) multiple equilibria, more than Abel himself had seen, and Abel wrote the problem.  Abhijit got an A+ (Abel, to his credit, was not shy about reporting this).

One of my favorite Abhijit papers is “On Frequent Flyer Programs and other Loyalty-Inducing Economic Arrangements,” with Larry Summers.  I believe it was published in the QJE in 1987, but somehow the JSTOR link does not show up in Google searches.  This was one of the first papers to show how consumer loyalty programs could segment the market and have collusive effects.

Another favorite Abhijit paper of mine is his job market paper, “The Economics of Rumours,” later published in ReStud 1993.  Have you ever wondered “if this rumor is true, why haven’t I heard it before?”  Abhijit works through the logic of the model on that one, in a scintillating performance.  It turns out this paper is now highly relevant for analyzing information transmission through social media.

Abhijit is the clearest case I know of a brilliant theorist who decided the future was with empirical work — he was right.  Nonetheless his early theory papers are still worthy of attention.  When Abhijit went on the job market, his letter writers suggested he might someday win a Nobel Prize, so strong were his talents.  They were right, but I suspect they had no idea for what the prize in fact would turn out to be.

Brexit update (POTMR)

Boris Johnson is planning to force a new Brexit deal through parliament in just 10 days — including holding late-night and weekend sittings — in a further sign of Downing Street’s determination to negotiate an orderly exit from the EU. According to Number 10 officials, Mr Johnson’s team has drawn up detailed plans under which the prime minister would secure a deal with the EU at a Brussels summit on October 17-18, before pushing the new withdrawal deal through parliament at breakneck speed.

The pound rose 1.1 per cent against the US dollar to $1.247 on Friday amid growing optimism that Mr Johnson has now decisively shifted away from the prospect of a no-deal exit and is focused on a compromise largely based on Theresa May’s withdrawal agreement.

That is new from the FT, here is part of my Brexit post from August 29:

I would sooner think that Boris Johnson wishes to see through a relabeled version of the Theresa May deal, perhaps with an extra concession from the EU tacked on.  His dramatic precommitment raises the costs to the Tories of not supporting such a deal, and it also may induce slight additional EU concessions.  The narrower time window forces the recalcitrants who would not sign the May deal to get their act together and fall into line, more or less now.

Uncertainty is high, but the smart money says the Parliamentary suspension is more of a stage play, and a move toward an actual deal, than a leap to authoritarian government.

This remains very much an open question, but if you “solve for the equilibrium,” that is indeed what you get.

Tuesday assorted links

1. Solve for the equilibrium price of real estate.

2. Kanye + Star Wars vs. NIMBY.

3. Reputation markets in everything: “The Wall Street Journal’s Erich Schwartzel recently wrote a story revealing that none of these tough-guy actors likes it very much when the characters they play get pummeled on screen. One of them even negotiated limits on how much his character can get beat up. Another has his sister, a producer, count how many times his character gets punched, to make sure he gives as good as he gets.

Today, Erich joins us to talk about the lengths these actors have gone to preserve their ever-so-fragile reputations for macho toughness. And the incentives they have for doing so.”

4. A Straussian take on Kenyan rebellion (song, The Rivingtons, 1962).  This was the recording that prompted Dave Marsh to describe Bob Dylan’s “The Times They Are a Changin'” as a “dull diatribe.”

5. A new piece on Harriet Martineau.

6. Stephen Williamson on the Fed.

Highly decentralized solar geoengineering

Nonstate actors appear to have increasing power, in part due to new technologies that alter actors’ capacities and incentives. Although solar geoengineering is typically conceived of as centralized and state-deployed, we explore highly decentralized solar geoengineering. Done perhaps through numerous small high-altitude balloons, it could be provided by nonstate actors such as environmentally motivated nongovernmental organizations or individuals. Conceivably tolerated or even covertly sponsored by states, highly decentralized solar geoengineering could move presumed action from the state arena to that of direct intervention by nonstate actors, which could, in turn, disrupt international politics and pose novel challenges for technology and environmental policy. We conclude that this method appears technically possible, economically feasible, and potentially politically disruptive. Decentralization could, in principle, make control by states difficult, perhaps even rendering such control prohibitively costly and complex.

That is from Jesse L. Reynolds & Gernot Wagner, and injecting fine aerosols into the air, as if to mimic some features of volcanic eruptions, seems to be one of the major possible approaches.  I am not able to judge the scientific merits of their claims, but it has long seemed to me evident that some version of this idea would prove possible.

Solve for the equilibrium!  What is it?  Too much enthusiasm for correction and thus disastrous climate cooling?  Preemptive government regulation?  It requires government subsidy?  It becomes controlled by concerned philanthropists?  It starts a climate war between America/Vietnam and Russia/Greenland?  Goldilocks?  I wonder if we will get to find out.

Via the excellent Kevin Lewis.

Han Kuo-yu has won the Kuomintang primary in Taiwan

So yes, Taiwan does have the weirdest politics in the world right now.  Here is a reprise from my Bloomberg column last week:

Another candidate vying for the KMT nomination is Han Kuo-yu, mayor of Kaohsiung and a blunt-speaking outsider populist. He has called for closer ties with China and is believed to be China’s favored candidate, calling China and Taiwan “two individuals madly in love.” It is believed that Chinese cyber-operatives have been working to promote his candidacy. He might be more interesting yet.

What if Han wins the general election and calls for “peaceful reunification” of the two Chinas, based on “one country, two systems”?  Solve for the equilibrium!  I see the following options:

1. They go ahead with the deal, and voila, one China!

2. The system as a whole knows in advance if this is going to happen, and if it will, another candidate runs in the general election, splitting the KMT-friendly vote, and Han never wins.

2b. Han just doesn’t win anyway, even though his margin in the primary was considerable and larger than expected.

3. The current president Tsai Ing-wen learns from Taiwanese intelligence that there are Chinese agents in the KMT, and she suspends the general election and declares a kind of lukewarm martial law.

4. Han calls for reunification and is deposed by his own military, or a civil war within the government ensues.

5. Han foresees 2-4 and never calls for reunification in the first place.

Well people, which of these would it be?  Here is general background (NYT) on the new primary results.

Biblical Adverse Selection

And Jesus said, Behold, two men went forth each to buy a new car.

And the car of the first man was good and served its owner well; but the second man’s was like unto a lemon, and worked not.

But in time both men grew tired of their cars, and wished to be rid of them. Thus the two men went down unto the market, to sell their cars.

The first spoke to the crowd that had gathered there, saying honestly, My car is good, and you should pay well for it;

But the second man went alongside him, and bearing false witness, said also, My car is good, and you should pay well for it.

Then the crowd looked between the cars, and said unto them, How can we know which of ye telleth the truth, and which wisheth falsely to pass on his lemon?

And they resolved themselves not to pay for either car as if it were good, but to pay a little less than this price.

Now the man with a good car, hearing this, took his car away from the market, saying to the crowd, If ye will not pay full price for my good car, then I wish not to sell it to you;

But the man with a bad car said, I will sell you my car for this price; for he knew that his car was bad and was worth less than this price.

But as the first man left, the crowd returned to the second man and said, If thy car is good, why then dost thou not leave to keep the car, when we will pay less than it is worth? Thy car must be a lemon, and we will pay only the price of a lemon.

The second man was upset that his deception had been uncovered; but he could not gainsay the conclusion of the market, and so he sold his car for just the price of a lemon.

And the crowd reasoned, If any man cometh now to sell his car unto us, that car must be a lemon; since we will pay only the price of a lemon.

And Lo, the market reached its Nash equilibrium.

From username42 on Reddit. Hat tip: Michael Lane.

Your challenge: Explain an economics principle the King James Way.
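And for those who prefer numbers to scripture, here is a minimal sketch of the same unraveling in Python (a toy model, not from the original post: buyers offer the expected value of the cars still for sale, and sellers withdraw whenever the offer is below their car’s true worth):

```python
# Toy Akerlof "lemons" unraveling, mirroring the parable above.
good_value, lemon_value = 10_000, 2_000
for_sale = [good_value, lemon_value]   # the two men's cars

while True:
    offer = sum(for_sale) / len(for_sale)          # crowd pays the average
    staying = [v for v in for_sale if v <= offer]  # who accepts this price?
    if staying == for_sale:                        # no one else leaves:
        break                                      # Nash equilibrium reached
    for_sale = staying                             # the good car departs

print(f"Equilibrium price: ${offer:,.0f}")  # only the lemon trades, at $2,000
```

The good car exits in the first round, the crowd updates, and the price settles at the lemon price, just as in the parable.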

Bitcoin is Less Secure than Most People Think

I spent part of the holidays poring over Eric Budish’s important paper, The Economic Limits of Bitcoin and the BlockChain. Using a few equilibrium conditions and some simulations, Budish shows that Bitcoin is vulnerable to a double spending attack.

In a double spending attack, the attacker sells, say, bitcoin for dollars. The bitcoin transfer is registered on the blockchain and then, perhaps after some escrow period, the dollars are received by the attacker. As soon as the bitcoin transfer is registered in a block–call this block 1–the attacker starts to mine his own blocks which do not include the bitcoin transfer. Suppose there is no escrow period; then the best case for the attacker is that they mine two blocks 1′ and 2′ before the honest nodes mine block 2. In this case, the attacker’s chain–0,1′,2′–is the longest chain, so miners will add to this chain and not to the 0,1… chain, which becomes orphaned. The attacker’s chain does not include the bitcoin transfer, so the attacker still has the bitcoins and they have the dollars! Also, remember, even though it is called a double-spend attack it’s actually an n-spend attack, so the gains from attack could be very large. But what happens if the honest nodes mine a new block before the attacker mines 2′? Then the honest chain is 0,1,2, but the attacker still has block 1′ mined, and after some time they will have 2′; then they have another chance. If the attacker can mine 3′ before the honest nodes mine block 3, then the new longest chain becomes 0,1′,2′,3′ and the honest nodes start mining on this chain rather than on 0,1,2. It can take time for the attacker to produce the longest chain, but if the attacker has more computational power than the honest nodes, even just a little more, then with probability 1 the attacker will eventually produce the longest chain.

As an example, Budish shows that if the attacker has just 5% more computational power than the honest nodes, then on average it takes 26.5 blocks (a little over 4 hours) for the attacker to have the longest chain. (Most of the time it takes far fewer blocks, but occasionally it takes hundreds of blocks for the attacker to produce the longest chain.) The attack will always be successful eventually; the key question is: what is the cost of the attack?
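The race is just a biased random walk, so it is straightforward to simulate. Below is a minimal sketch (a simplification, not Budish’s exact model: there is no escrow period and the attack succeeds the moment the attacker’s chain is strictly longer, so the numbers will not match his 26.5-block average). The attacker starts one block behind, and each new block goes to him with probability equal to his share of total computational power:

```python
import random

def blocks_until_attack_succeeds(advantage=0.05, rng=random):
    """Total blocks mined (by both sides) until the attacker's chain is
    strictly longer, when the attacker has `advantage` more computational
    power than the honest nodes combined."""
    p_attacker = (1 + advantage) / (2 + advantage)  # attacker's hash share
    lead = -1   # attacker starts one block behind (block 1 is honest)
    blocks = 0
    while lead < 1:                 # needs a strictly longer chain
        lead += 1 if rng.random() < p_attacker else -1
        blocks += 1
    return blocks

random.seed(0)
trials = [blocks_until_attack_succeeds(0.05) for _ in range(10_000)]
print(f"Mean blocks until overtaken: {sum(trials) / len(trials):.1f}")
print(f"Worst case in this sample: {max(trials)} blocks")
```

The mean is finite only because the attacker’s majority guarantees eventual success; with less than half the computational power the expected waiting time is infinite, which is the sense in which an honest majority secures the chain.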

The net cost of a double-spend attack is low because attackers also earn block rewards. For example, in the case above it might take 26 blocks for the attacker to substitute its longer chain for the honest chain, but when it does so it earns 26 block rewards. The rewards were enough to cover the costs of the honest miners, so they are more or less enough to cover the costs of the attacker. The key point is that attacking is the same thing as mining. Budish assumes that attackers add to the computational power of the network, which pushes returns down (for both the attacker and, interestingly, the honest nodes), but if we assume that the attacker starts out as honest–a Manchurian Candidate attack–then there is essentially zero cost to attacking.

It’s often said that Bitcoin creates security with math. That’s only partially true. The security behind avoiding the double spend attack is not cryptographic but economic, it’s really just the cost of coordinating to achieve a majority of the computational power. Satoshi assumed ‘one-CPU, one-vote’ which made it plausible that it would be costly to coordinate millions of miners. In the centralized ASIC world, coordination is much less costly. Consider, for example, that the top 4 mining pools today account for nearly 50% of the total computational power of the network. An attack would simply mean that these miners agree to mine slightly different blocks than they otherwise would.

Aside from the cost of coordination, a small group of large miners might not want to run a double spending attack because if Bitcoin is destroyed it will reduce the value of their capital investments in mining equipment (Budish analyzes several scenarios in this context). Call that the Too Big to Cheat argument. Sound familiar? The Too Big to Cheat argument, however, is a poor foundation for Bitcoin as a store of value because the more common it is to hold billions in Bitcoin, the greater the value of an attack. Moreover, we are in especially dangerous territory today because bitcoin’s recent fall in price means that there is currently an overhang of computing power which has made some mining unprofitable, so miners may feel this is a good time to get out.

The Too Big to Cheat argument suggests that coins are vulnerable to centralized computational power that can be easily repurposed. The tricky part is that the efficiencies created by specialization–as for example in application-specific integrated circuits–tend to lead to centralization but by definition make repurposing more difficult.  CPUs, in contrast, tend to lead to decentralization but are easily repurposed. It’s hard to know where safety lies. But what we can say is that any alt-coin that uses a proof of work algorithm that can be solved using ASICs is especially vulnerable, because miners could run a double spend attack on that coin and then shift over to mining bitcoin if the value of that coin is destroyed.

What can help? Ironically, traditional law and governance might help. A double spend attack would be clear in the data and, at least in general terms, so would the identity of the attackers. An attack involving dollars and transfers from banks would be potentially prosecutable, greatly raising the cost of an attack. Governance might help as well. Would a majority of miners (not including the attacker) be willing to fork Bitcoin to avoid the attack, much as was done with The DAO? Even the possibility of a hard fork would reduce the expected value of an attack. More generally, all of these mechanisms are a way of enforcing some stake loss or capital loss on dishonest miners. In theory, therefore, proof of stake should be less vulnerable to 51% attacks, but proof of stake is much more complicated to make incentive-compatible than proof of work.

All of this is a far cry from money without the state. Trust doesn’t have the solidity of math but we are learning that it is more robust.

Hat tip to Joshua Gans and especially to Eric Budish for extensive conversation on these issues.

Addendum: See here for more on the Ethereum Classic double spend attack.

Christmas assorted links

1. 99 good news stories from 2018.  p.s. not all of them are good, though most of them are.  But prices going to zero for normal market goods and services usually is a mistake.

2. The seasonal business cycle in camel rentals.

3. David Brooks’s Sidney Awards, part I (NYT).

4. Should credit card companies be required to monitor or limit weapons purchases? (NYT, I say no and view this as a dangerous trend).

5. Should the EU enforce content regulations on streaming services?  (I say no and view this as a dangerous trend).

6. Solve for the equilibrium.

The Liberal Radicalism Mechanism for Producing Public Goods

The mechanism for producing public goods in Buterin, Hitzig, and Weyl’s Liberal Radicalism is quite amazing and a quantum leap in public-goods mechanism design not seen since the Vickrey-Clarke-Groves mechanism of the 1970s. In this post, I want to illustrate the mechanism using a very simple example. Let’s imagine that there are two individuals and a public good available in quantity g. The two individuals value the public good according to U1(g) = 70 g – (g^2)/2 and U2(g) = 40 g – g^2. Those utility functions mean that the public good has diminishing utility for each individual, as shown by the figure at right. The public good can be produced at MC = 80.

Now let’s solve for the private and socially optimal public good provision in the ordinary way. For the private optimum, each individual will want to set the MB of contributing to g equal to the marginal cost. Taking the derivative of the utility functions we get MB1 = 70 – g and MB2 = 40 – 2g (users of Syverson, Levitt & Goolsbee may recognize this public good problem). Notice that for both individuals MB < MC even at g = 0, so without coordination private provision doesn’t even get off the ground.

What’s the socially optimal level of provision? Since g is a public good, we sum the two marginal benefit curves and set the sum equal to the MC, namely 110 – 3g = 80, which solves to g = 10. The situation is illustrated in the figure at left.
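Both calculations are easy to verify symbolically. Here is a short sketch using sympy (the utility functions and MC = 80 are the ones from the text above):

```python
import sympy as sp

g = sp.symbols('g', positive=True)
U1 = 70*g - g**2/2
U2 = 40*g - g**2
MC = 80

MB1, MB2 = sp.diff(U1, g), sp.diff(U2, g)   # 70 - g and 40 - 2g

# Private provision: each MB is below MC even at g = 0.
print(MB1.subs(g, 0), MB2.subs(g, 0))       # 70, 40 -- both < 80

# Social optimum: sum the MBs and set equal to MC.
print(sp.solve(sp.Eq(MB1 + MB2, MC), g))    # [10]
```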

We were able to compute the optimal level of the public good because we knew each individual’s utility function. In the real world each individual’s utility function is private information. Thus, to reach the social optimum we must solve two problems: the information problem and the free rider problem. The information problem is that no one knows the optimal quantity of the public good. The free rider problem is that no one is willing to pay for the public good. These two problems are related but they are not the same. My Dominant Assurance Contract, for example, works on solving the free rider problem, assuming we know the optimal quantity of the public good (e.g. we can usually calculate how big a bridge or dam we need). The LR mechanism, in contrast, solves the information problem, but it requires that a third party such as the government or a private benefactor “top up” private contributions in a special way.

The topping up function is the key to the LR mechanism. In this two person, one public good example the topping up function is:

g = (sqrt(c1) + sqrt(c2))^2

where c1 is the amount that individual one chooses to contribute to the public good and c2 is the amount that individual two chooses to contribute to the public good. In other words, the public benefactor says “you decide how much to contribute and I will top up to amount g” (it can be shown that g > c1 + c2).

Now let’s solve for the private optimum using the mechanism. To do so, return to the utility functions U1(g) = 70 g – (g^2)/2 and U2(g) = 40 g – g^2, but substitute for g with the topping up function, and then take the derivative of U1 with respect to c1 and set it equal to the marginal cost of the public good, and similarly for U2. Notice that we are implicitly assuming that the government can use lump sum taxation to fund any difference between g and c1 + c2, or that projects are fairly small with respect to total government funding, so that it makes sense for individuals to ignore any effect of their choices on the benefactor’s purse. These assumptions seem fairly innocuous; Buterin, Hitzig, and Weyl discuss them at greater length.

Notice that we are solving for the optimal contributions to the public good exactly as before–each individual is maximizing their own selfish utility–only now taking into account the top-up function. Taking the derivatives and setting equal to the MC produces two equations with two unknowns which we need to solve simultaneously:

(70 – g) * sqrt(g)/sqrt(c1) = 80

(40 – 2g) * sqrt(g)/sqrt(c2) = 80, where g = (sqrt(c1) + sqrt(c2))^2

These equations are solved at c1 = 45/8 and c2 = 5/8. Recall that the privately optimal contributions without the top-up function were 0 and 0, so we have certainly improved over that. But wait, there’s more! How much g is produced when the contribution levels are c1 = 45/8 and c2 = 5/8? Substituting these values for c1 and c2 into the top-up function we find that g = 10, the socially optimal amount!

In equilibrium, individual 1 contributes 45/8 to the public good, individual 2 contributes 5/8, and the remainder, 15/4, is contributed by the government. But recall that the government had no idea going in what the optimal amount of the public good was. The government used the contribution levels under the top-up mechanism as a signal to decide how much of the public good to produce, and almost magically the top-up function is such that citizens will voluntarily contribute exactly the amount that correctly signals how much society as a whole values the public good. Amazing!
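To check that all of this hangs together, here is a short numerical verification (a sketch; the top-up function g = (sqrt(c1) + sqrt(c2))^2 used below is a reconstruction of the paper’s formula for this two-person case, and the marginal-benefit expressions follow from the chain rule):

```python
from math import sqrt, isclose

def g(c1, c2):
    """Reconstructed LR top-up function for two contributors."""
    return (sqrt(c1) + sqrt(c2))**2

c1, c2, MC = 45/8, 5/8, 80
G = g(c1, c2)

# Each individual's marginal benefit of his own contribution:
# dU_i/dc_i = U_i'(g) * dg/dc_i, where dg/dc_i = sqrt(g)/sqrt(c_i).
mb1 = (70 - G) * sqrt(G) / sqrt(c1)
mb2 = (40 - 2*G) * sqrt(G) / sqrt(c2)

print(G)              # 10.0 -- the socially optimal quantity
print(mb1, mb2)       # both 80.0 = MC, so (c1, c2) is a Nash equilibrium
print(G - (c1 + c2))  # 3.75 = 15/4, the government's top-up

assert isclose(G, 10) and isclose(mb1, MC) and isclose(mb2, MC)
```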

Naturally there are a few issues. The optimal solution is a Nash equilibrium which may not be easy to find as everyone must take into account everyone else’s actions to reach equilibrium (an iterative process may help). The mechanism is also potentially vulnerable to collusion. We need to test this mechanism in the lab and in the field. Nevertheless, this is a notable contribution to the theory of public goods and to applied mechanism design.

Hat tip: Discussion with Tyler, Robin, Andrew, Ank and Garett Jones who also has notes on the mechanism.

Talking to your doctor isn’t about medicine

On average, patients get about 11 seconds to explain the reasons for their visit before they are interrupted by their doctors. Also, only one in three doctors provides their patients with adequate opportunity to describe their situation…

In just over one third of the time (36 per cent), patients were able to put their agendas first. But patients who did get the chance to list their ailments were still interrupted seven out of every ten times, on average within 11 seconds of them starting to speak. In this study, patients who were not interrupted completed their opening statements within about six seconds.

Here is the story, here is the underlying research.  Via the excellent Charles Klingman.

Now solve for the telemedicine equilibrium.