solve for equilibrium
So yes, Taiwan does have the weirdest politics in the world right now. Here is a reprise from my Bloomberg column last week:
Another candidate vying for the KMT nomination is Han Kuo-yu, mayor of Kaohsiung and a blunt-speaking outsider populist. He has called for closer ties with China, describing China and Taiwan as “two individuals madly in love,” and is believed to be China’s favored candidate. Chinese cyber-operatives are believed to have been working to promote his candidacy. He might be more interesting yet.
What if Han wins the general election and calls for “peaceful reunification” of the two Chinas, based on “one country, two systems”? Solve for the equilibrium! I see the following options:
1. They go ahead with the deal, and voila, one China!
2. The system as a whole knows in advance whether this is going to happen, and if it will, another candidate runs in the general election, splitting the KMT-friendly vote, so Han never wins.
2b. Han just doesn’t win anyway, even though his margin in the primary was considerable and larger than expected.
3. The current president, Tsai Ing-wen, learns from Taiwanese intelligence that there are Chinese agents in the KMT, and she suspends the general election and declares a kind of lukewarm martial law.
4. Han calls for reunification and is deposed by his own military, or a civil war within the government ensues.
5. Han foresees 2-4 and never calls for reunification in the first place.
Well people, which of these would it be? Here is general background (NYT) on the new primary results.
And Jesus said, Behold, two men went forth each to buy a new car.
And the car of the first man was good and served its owner well; but the second man’s was like unto a lemon, and worked not.
But in time both men grew tired of their cars, and wished to be rid of them. Thus the two men went down unto the market, to sell their cars.
The first spoke to the crowd that had gathered there, saying honestly, My car is good, and you should pay well for it;
But the second man went alongside him, and bearing false witness, said also, My car is good, and you should pay well for it.
Then the crowd looked between the cars, and said unto them, How can we know which of ye telleth the truth, and which wisheth falsely to pass on his lemon?
And they resolved themselves not to pay for either car as if it were good, but to pay a little less than this price.
Now the man with a good car, hearing this, took his car away from the market, saying to the crowd, If ye will not pay full price for my good car, then I wish not to sell it to you;
But the man with a bad car said, I will sell you my car for this price; for he knew that his car was bad and was worth less than this price.
But as the first man left, the crowd returned to the second man and said, If thy car is good, why then dost thou not leave to keep the car, when we will pay less than it is worth? Thy car must be a lemon, and we will pay only the price of a lemon.
The second man was upset that his deception had been uncovered; but he could not gainsay the conclusion of the market, and so he sold his car for just the price of a lemon.
And the crowd reasoned, If any man cometh now to sell his car unto us, that car must be a lemon; since we will pay only the price of a lemon.
And Lo, the market reached its Nash equilibrium.
Your challenge: Explain an economics principle the King James Way.
I spent part of the holidays poring over Eric Budish’s important paper, The Economic Limits of Bitcoin and the Blockchain. Using a few equilibrium conditions and some simulations, Budish shows that Bitcoin is vulnerable to a double-spend attack.
In a double-spend attack, the attacker sells, say, bitcoin for dollars. The bitcoin transfer is registered on the blockchain and then, perhaps after some escrow period, the dollars are received by the attacker. As soon as the bitcoin transfer is registered in a block (call this block 1), the attacker starts to mine his own blocks, which do not include the bitcoin transfer. Suppose there is no escrow period. Then the best case for the attacker is to mine two blocks, 1′ and 2′, before the honest nodes mine block 2. In this case, the attacker’s chain (0, 1′, 2′) is the longest chain, so miners will add to this chain and not to the 0, 1… chain, which becomes orphaned. The attacker’s chain does not include the bitcoin transfer, so the attacker still has the bitcoins, and they have the dollars! Also remember that even though it is called a double-spend attack, it is actually an n-spend attack, so the gains from an attack could be very large.

But what happens if the honest nodes mine a new block before the attacker mines 2′? Then the honest chain is 0, 1, 2, but the attacker still has block 1′ mined, and after some time they will have 2′; then they have another chance. If the attacker can mine 3′ before the honest nodes mine block 3, then the new longest chain becomes 0, 1′, 2′, 3′, and the honest nodes start mining on this chain rather than on 0, 1, 2. It can take time for the attacker to produce the longest chain, but if the attacker has more computational power than the honest nodes, even just a little more, then with probability 1 the attacker will end up producing the longest chain.
As an example, Budish shows that if the attacker has just 5% more computational power than the honest nodes, then on average it takes 26.5 blocks (a little over 4 hours) for the attacker to have the longest chain. (Most of the time it takes far fewer blocks, but occasionally it takes hundreds of blocks for the attacker to produce the longest chain.) The attack will always be successful eventually; the key question is: what is the cost of the attack?
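The race between the attacker’s chain and the honest chain can be sketched as a biased random walk: each new block goes to the attacker with probability equal to their share of total hash power, and the attack succeeds once the attacker’s chain is strictly longer. The simulation below is a minimal illustration of that race, not Budish’s exact model (he works with escrow periods and a richer timing structure), so its average will not match his 26.5-block figure; the function name and parameters are mine.

```python
import random

def blocks_until_attack_succeeds(attacker_share, rng, max_blocks=10_000_000):
    """Blocks mined (by anyone) until the attacker's private chain
    is strictly longer than the honest chain.

    Each new block belongs to the attacker with probability
    attacker_share, their fraction of total hash power.
    """
    lead = 0  # attacker chain length minus honest chain length
    for t in range(1, max_blocks + 1):
        lead += 1 if rng.random() < attacker_share else -1
        if lead >= 1:
            return t  # attacker now has the longest chain
    return None  # give up (essentially never happens when share > 0.5)

# Attacker with 5% more power than the honest nodes: share = 1.05 / 2.05.
rng = random.Random(42)
share = 1.05 / 2.05
runs = [blocks_until_attack_succeeds(share, rng) for _ in range(2000)]
print("average blocks until success:", sum(runs) / len(runs))
```

Because the attacker’s edge is tiny, the distribution has a long tail: most races end within a few blocks, while a few drag on for hundreds, which matches the qualitative pattern Budish describes.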
The net cost of a double-spend attack is low because attackers also earn block rewards. For example, in the case above it might take 26 blocks for the attacker to substitute its longer chain for the honest chain, but when it does so it earns 26 block rewards. The rewards were enough to cover the costs of the honest miners, so they are more or less enough to cover the costs of the attacker. The key point is that attacking is the same thing as mining. Budish assumes that attackers add to the computational power of the network, which pushes returns down (for both the attacker and, interestingly, the honest nodes), but if we assume that the attacker starts out as honest (a Manchurian Candidate attack) then there is essentially zero cost to attacking.
It’s often said that Bitcoin creates security with math. That’s only partially true. The security behind avoiding the double-spend attack is not cryptographic but economic; it’s really just the cost of coordinating to achieve a majority of the computational power. Satoshi assumed ‘one-CPU, one-vote,’ which made it plausible that it would be costly to coordinate millions of miners. In the centralized ASIC world, coordination is much less costly. Consider, for example, that the top 4 mining pools today account for nearly 50% of the total computational power of the network. An attack would simply mean that these miners agree to mine slightly different blocks than they otherwise would.
Aside from the cost of coordination, a small group of large miners might not want to run a double-spend attack because if Bitcoin is destroyed it will reduce the value of their capital investments in mining equipment (Budish analyzes several scenarios in this context). Call that the Too Big to Cheat argument. Sound familiar? The Too Big to Cheat argument, however, is a poor foundation for Bitcoin as a store of value, because the more common it is to hold billions in Bitcoin, the greater the value of an attack. Moreover, we are in especially dangerous territory today: bitcoin’s recent fall in price means that there is currently an overhang of computing power that has made some mining unprofitable, so miners may feel this is a good time to get out.
The Too Big to Cheat argument suggests that coins are vulnerable to centralized computational power that is easily repurposed. The tricky part is that the efficiencies created by specialization, as for example in application-specific integrated circuits, tend to lead to centralization but by definition make repurposing more difficult. CPUs, in contrast, tend to lead to decentralization but are easily repurposed. It’s hard to know where safety lies. But what we can say is that any alt-coin that uses a proof-of-work algorithm that can be solved using ASICs is especially vulnerable, because miners could run a double-spend attack on that coin and then shift over to mining bitcoin if the value of that coin is destroyed.
What can help? Ironically, traditional law and governance might. A double-spend attack would be clear in the data and, at least in general terms, so would the attackers. An attack involving dollars and transfers from banks would be potentially prosecutable, greatly raising the cost of an attack. Governance might help as well. Would a majority of miners (not including the attacker) be willing to fork Bitcoin to avoid the attack, much as was done with The DAO? Even the possibility of a hard fork would reduce the expected value of an attack. More generally, all of these mechanisms are ways of enforcing some stake loss or capital loss on dishonest miners. In theory, therefore, proof of stake should be less vulnerable to 51% attacks, but proof of stake is much more complicated to make incentive-compatible than proof of work.
All of this is a far cry from money without the state. Trust doesn’t have the solidity of math but we are learning that it is more robust.
Hat tip to Joshua Gans and especially to Eric Budish for extensive conversation on these issues.
Addendum: See here for more on the Ethereum Classic double spend attack.
1. 99 good news stories from 2018. p.s. not all of them are good, though most of them are. But prices going to zero for normal market goods and services usually is a mistake.
3. David Brooks’s Sidney Awards, part I (NYT).
4. Should credit card companies be required to monitor or limit weapons purchases? (NYT, I say no and view this as a dangerous trend).
5. Should the EU enforce content regulations on streaming services? (I say no and view this as a dangerous trend).
The mechanism for producing public goods in Buterin, Hitzig, and Weyl’s Liberal Radicalism is quite amazing, a quantum leap in public-goods mechanism design not seen since the Vickrey-Clarke-Groves mechanism of the 1970s. In this post, I want to illustrate the mechanism using a very simple example. Let’s imagine that there are two individuals and a public good available in quantity g. The two individuals value the public good according to U1(g)=70 g – (g^2)/2 and U2(g)=40 g – g^2. Those utility functions mean that the public good has diminishing utility for each individual, as shown by the figure at right. The public good can be produced at MC=80.
Now let’s solve for the private and socially optimal public good provision in the ordinary way. For the private optimum, each individual will want to set the MB of contributing to g equal to the marginal cost. Taking the derivative of the utility functions we get MB1 = 70 – g and MB2 = 40 – 2g (users of Syverson, Levitt & Goolsbee may recognize this public good problem). Notice that for both individuals MB < MC even at g = 0, so without coordination, private provision doesn’t even get off the ground.
What’s the socially optimal level of provision? Since g is a public good, we sum the two marginal benefit curves and set the sum equal to the MC: 110 – 3g = 80, which solves to g = 10. The situation is illustrated in the figure at left.
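The arithmetic above can be checked in a few lines (a sketch using only the numbers from this post):

```python
# Marginal benefits from the two utility functions, and the marginal cost.
MC = 80
def mb1(g): return 70 - g        # from U1(g) = 70g - g^2/2
def mb2(g): return 40 - 2 * g    # from U2(g) = 40g - g^2

# Private provision: each MB is below MC even at g = 0, so neither
# individual contributes anything on their own.
assert mb1(0) < MC and mb2(0) < MC

# Social optimum: vertically sum the MB curves (g is a public good)
# and set the sum equal to MC: 110 - 3g = 80  =>  g = 10.
g_star = (110 - MC) / 3
assert mb1(g_star) + mb2(g_star) == MC
print(g_star)  # 10.0
```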
We were able to compute the optimal level of the public good because we knew each individual’s utility function. In the real world, each individual’s utility function is private information. Thus, to reach the social optimum we must solve two problems: the information problem and the free rider problem. The information problem is that no one knows the optimal quantity of the public good. The free rider problem is that no one is willing to pay for the public good. These two problems are related, but they are not the same. My Dominant Assurance Contract, for example, works on solving the free rider problem, assuming we know the optimal quantity of the public good (e.g., we can usually calculate how big a bridge or dam we need). The LR mechanism, in contrast, solves the information problem, but it requires that a third party such as the government or a private benefactor “tops up” private contributions in a special way.
The topping-up function is the key to the LR mechanism. In this two-person, one-public-good example the topping-up function is:

g = (√c1 + √c2)^2

where c1 is the amount that individual one chooses to contribute to the public good and c2 is the amount that individual two chooses to contribute to the public good. In other words, the public benefactor says “you decide how much to contribute and I will top up to amount g” (it can be shown that g > c1 + c2).
Now let’s solve for the private optimum using the mechanism. To do so, return to the utility functions U1(g)=70 g – (g^2)/2 and U2(g)=40 g – g^2, but substitute the topping-up function for g, then take the derivative of U1 with respect to c1 and set it equal to the marginal cost of the public good, and similarly for U2. Notice that we are implicitly assuming that the government can use lump sum taxation to fund any difference between g and c1+c2, or that projects are fairly small with respect to total government funding, so that it makes sense for individuals to ignore any effect of their choices on the benefactor’s purse. These assumptions seem fairly innocuous; Buterin, Hitzig, and Weyl discuss them at greater length.
Notice that we are solving for the optimal contributions to the public good exactly as before (each individual is maximizing their own selfish utility), only now taking into account the top-up function. Taking the derivatives and setting equal to the MC produces two equations with two unknowns, which we need to solve simultaneously:

(70 – g)(√c1 + √c2)/√c1 = 80
(40 – 2g)(√c1 + √c2)/√c2 = 80, where g = (√c1 + √c2)^2

These equations are solved at c1 = 45/8 and c2 = 5/8. Recall that the privately optimal contributions without the top-up function were 0 and 0, so we have certainly improved over that. But wait, there’s more! How much g is produced when the contribution levels are c1 = 45/8 and c2 = 5/8? Substituting these values for c1 and c2 into the top-up function, we find that g = 10, the socially optimal amount!
In equilibrium, individual 1 contributes 45/8 to the public good, individual 2 contributes 5/8, and the remainder, 15/4, is contributed by the government. But recall that the government had no idea going in what the optimal amount of the public good was. The government used the contribution levels under the top-up mechanism as a signal to decide how much of the public good to produce, and almost magically the top-up function is such that citizens will voluntarily contribute exactly the amount that correctly signals how much society as a whole values the public good. Amazing!
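The claimed equilibrium can be verified numerically. This is again just a sketch of this post’s example, using the square-root top-up rule g = (√c1 + √c2)^2 from the Buterin, Hitzig, and Weyl paper:

```python
from math import isclose, sqrt

MC = 80

def g(c1, c2):
    """LR top-up rule: fund the square of the sum of square roots."""
    return (sqrt(c1) + sqrt(c2)) ** 2

c1, c2 = 45 / 8, 5 / 8          # candidate equilibrium contributions
s = sqrt(c1) + sqrt(c2)

# Each first-order condition: MB_i(g) * dg/dc_i = MC,
# where dg/dc_i = (sqrt(c1) + sqrt(c2)) / sqrt(c_i).
foc1 = (70 - g(c1, c2)) * s / sqrt(c1)       # individual 1: MB1 = 70 - g
foc2 = (40 - 2 * g(c1, c2)) * s / sqrt(c2)   # individual 2: MB2 = 40 - 2g
assert isclose(foc1, MC) and isclose(foc2, MC)

# Total provision hits the social optimum, with the benefactor
# covering the gap between g and the private contributions (15/4).
assert isclose(g(c1, c2), 10)
assert isclose(g(c1, c2) - (c1 + c2), 15 / 4)
print(g(c1, c2), c1 + c2)  # total provision ≈ 10, private part 6.25
```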
Naturally there are a few issues. The optimal solution is a Nash equilibrium which may not be easy to find as everyone must take into account everyone else’s actions to reach equilibrium (an iterative process may help). The mechanism is also potentially vulnerable to collusion. We need to test this mechanism in the lab and in the field. Nevertheless, this is a notable contribution to the theory of public goods and to applied mechanism design.
Hat tip: Discussion with Tyler, Robin, Andrew, Ank and Garett Jones who also has notes on the mechanism.
On average, patients get about 11 seconds to explain the reasons for their visit before they are interrupted by their doctors. Also, only one in three doctors provides their patients with adequate opportunity to describe their situation…
In just over one third of the time (36 per cent), patients were able to put their agendas first. But patients who did get the chance to list their ailments were still interrupted seven out of every ten times, on average within 11 seconds of them starting to speak. In this study, patients who were not interrupted completed their opening statements within about six seconds.
Now solve for the telemedicine equilibrium.
Imagine that people could read each other’s minds, at least once they knew each other and focused on each other’s presence in a common physical space. They can’t do this perfectly or with full transparency, but still they have a much better idea what the other person is thinking and feeling than what they receive today from external signals. They can even “feel” those thoughts from the other at some times, leading to potential embarrassment, both in positive and negative ways of course. Still, some noise remains, so you are never sure just how intentional, explicit, or sincere a “sampled thought” might be.
Solve for the equilibrium:
1. Many people would develop thicker skins, as they would learn what others really thought of them. They also would tolerate more evil thoughts from others, though at the margin most people still would try to look better rather than worse.
2. A large minority of people, for instance potential child molesters, could not go out in public very much.
3. Sometimes we would meet people and, before initiating a friendship, decide to “get everything out of the way.” Think all the bad (and good?) thoughts up front, and acknowledge this mutually. Make it clear that this is your standard practice with all your friends. Then, if the person later on catches you having a particular thought, you can just say, or intuit, back to them: “Of course I am thinking of stealing a dollar from you. I thought that on the very first day we met, right after wishing you didn’t get that big raise. You’re simply sampling residual memories from all the intentional sins we committed together when initiating our friendship. We did that so subsequent negative signals aren’t really new signals at all.”
And it’s not just thoughts: people preemptively might do everything they are afraid others might discover they are thinking. Get it out of the way. Restore that pooling equilibrium, as they say. Make sure everyone has every thought, using action if need be.
4. A boss hiring a new worker may try to prevent the worker from going through this “mind clearing” process early on. The worker may try to do it. And trying to engage in “mind clearing” with your boss may not be such a negative signal if everyone has unacceptable thoughts of some kind or another. We’re just trying to get back to an equilibrium where those thoughts don’t matter so much. Is that so terrible?
5. You might keep special friends, with whom you don’t act out or think through all the possible suspicions in advance. In essence they would be “surprise friends.” We would call them surprise friends because you would sample their thoughts in real time and with some degree of surprise. Those sampled thoughts actually would contain significant new information about what the person was thinking about you. Having a surprise friend might be considered a sign of courage.
6. Alternatively, people might simply prefer dopey friends, namely those with weak telepathic abilities.
7. Other people will form vice groups, somewhat akin to current gangs.
8. Note that if you can interpret the bad thoughts of others in a truly Bayesian manner (“well, that may sound horrible, but most of the other people are thinking something much worse…”), it is harder for other people to engage in the signal-jamming equilibrium of transmitting all bad thoughts in advance. You would take their signal-jamming as a very negative signal of what their true thoughts are like, and thus the better people would refrain from signal-jamming. At the margin, thoughts would become relevant again, including bad thoughts.
Is there thus a positive or negative social value to an individual turning more Bayesian in this setting, and thus discouraging the signal-jamming in advance?
Dane emails me:
This is a speculative solve-for-the-equilibrium-type question that I’d love to get your thoughts on:
Imagine there was a technology that allowed essentially frictionless harvesting, selling, and buying of (non-perishable) human sleep. Essentially, anyone can strap in to a machine, be put to sleep, and their time/sleep would be harvested in a way that their time sleeping could be used by anyone else who would then get all the benefits of that sleep but instantaneously instead of sleeping themselves, maybe through a painless injection or a drink perhaps.
Imagine also that this technology was relatively non-capital-intensive, or at least, cheap enough that all humans were potential suppliers/buyers of sleep. Call them sleep-workers and sleep-consumers.
Additionally, there’s nothing “free” about the technology. Any sleep-worker’s or sleep-consumer’s lifespan would be unaffected in terms of calendar time. Instead, there would be a zero-sum transfer of waking hours between persons. Even an “around-the-clock” sleep-worker could only net 16 hours of saleable sleep per day. The other 8 hours would have to go to meeting their own sleep needs.
How would this market evolve? How would society evolve? What is the market price for an hour of sleep? How would norms around sleep-working and sleep-consuming evolve? How would the economic indicators evolve (GDP, productivity, inequality, etc)? Which jobs could or could not compete with non-consciousness? How would the welfare state then evolve? How much inter-temporal saving of sleep would there be? Should prisoners be allowed to sleep-harvest for their entire sentences? Would we allow them? Would it be ethical to farm never-conscious humans for the sole purpose of harvesting sleep? Etc…
4. They solved for the equilibrium, link now corrected. And what an equilibrium it was. Yikes.
A DENTIST who bought John Lennon’s tooth is looking for potential love children of the late Beatle in a bid to stake a claim to his £400million estate.
Dr Michael Zuk, 45, from Alberta, Canada, purchased the legendary songwriter’s decayed molar at auction in 2011 for around £20,000…
Speaking with The Sun Online, the dentist has sensationally revealed that he plans to stake a claim to the music icon’s vast estate using DNA from the body part.
He said: “I am looking for people who believe they are John Lennon’s child and have a claim to his estate and hopefully I can legitimise their claim.
“John was a very popular guy who was having sex with lots of women and I doubt birth control was on his mind.
…“I would ask anyone who is participating to sign a commission agreement which would mean if they were related they would pay my company a percentage of their inheritance.
“Like a finder’s fee.”
Here is the story, via Michael J.
P.s. Solve for the equilibrium.
This book is about what I call the Trade, the growing international business of political kidnappings, according to the US Treasury the most lucrative source of income, outside of state sponsorship, for illegal groups. But it’s more than about money. It is about my attempt, yes, to find the answer to two questions which have haunted me for nine years: Who kidnapped me, and why?
That is from Jere Van Dyk, The Trade: My Journey into the Labyrinth of Political Kidnapping.
Solve for the equilibrium, as they say. The puzzle, of course, is why there are not more kidnappings for revenue.
An effort that animal rescuers began more than a decade ago to buy dogs for $5 or $10 apiece from commercial breeders has become a nationwide shadow market that today sees some rescuers, fueled by Internet fundraising, paying breeders $5,000 or more for a single dog.
The result is a river of rescue donations flowing from avowed dog saviors to the breeders, two groups that have long disparaged each other. The rescuers call many breeders heartless operators of inhumane “puppy mills” and work to ban the sale of their dogs in brick-and-mortar pet stores. The breeders call “retail rescuers” hypocritical dilettantes who hide behind nonprofit status while doing business as unregulated, online pet stores.
But for years, they have come together at dog auctions where no cameras are allowed, with rescuers enriching breeders and some breeders saying more puppies are being bred for sale to the rescuers.
Here is more from Kim Kavin at WaPo, substantive throughout with photos and video. In essence, somebody has solved for the equilibrium.
For the pointers I thank Tom Vansant and Alexander Lowery.
That is the topic of my latest Bloomberg column, here is one excerpt:
The relative lack of attention being paid to the news that U.S.-backed forces killed 200 to 300 Russian mercenary soldiers this month in Syria seems like a non-barking dog to me.
In many years, this might have been the most disruptive story, holding the headlines for weeks or maybe months. Circa February 2018, it didn’t command a single major news cycle.
What outsiders know about the event is still fragmentary, but it sounds pretty ominous. One Bloomberg account notes: “More than 200 contract soldiers, mostly Russians fighting on behalf of Syrian leader Bashar al-Assad, died in a failed attack on a base held by U.S. and mainly Kurdish forces in the oil-rich Deir Ezzor region.” It is described as the biggest clash between U.S. and Russian forces since the Cold War. It seems that the Russian mercenaries are pretty closely tied to the Russian government.
One Russian commentator called this event “a big scandal and a reason for an acute international crisis.” American foreign policy expert Ian Bremmer noted, “At some level, it’s startling that isn’t the biggest news of the year.” Yet I have found that I know plenty of well-educated people, with graduate degrees and living in and near Washington, who aren’t even aware this occurred. The story has fallen into a memory hole, in part because neither the Americans nor the Russians wish to escalate the conflict.
Is this unusual affair a one-off, or an indication of a more basic shift in the world? I am starting to believe the latter.
Finally, do solve for the equilibrium:
As the tolerance for particular instances of conflict rises, the temptation to allow or initiate such conflicts rises, if only because the penalties won’t be so large. Eventually more parties will experiment with violent sorties.
Here is further coverage from The Washington Post, from today, the most detailed article to date, but it is already way down on their front page.