…Illing: Let’s return to the “competence principle.” Why does the right to competent government trump other fundamental rights, like the right to participate in the democratic process?
Brennan: I think the real question is why should we assume there’s a right to participate in the democratic process? It’s actually quite weird and different from a lot of other rights we seem to have.
We have the right to choose our partner, to choose our religion, to choose what we’re going to eat, where we live, what job we’ll do, etc. While some of these things do impose costs on others, they’re primarily about carving out a sphere of autonomy for the individual, and about preventing other people from having control over you.
A right to participate in politics seems fundamentally different because it involves imposing your will upon other people. So I’m not sure that any of us should have that kind of right, at least not without any responsibilities.
So how do we create an epistocracy?
Brennan:…Here’s what I propose we do: Everyone can vote, even children. No one gets excluded. But when you vote, you do three things.
First, you tell us what you want. You cast your vote for a politician, or for a party, or you take a position on a referendum, whatever it might be. Second, you tell us who you are. We get your demographic information, which is anonymously coded, because that stuff affects how you vote and what you support.
And the third thing you do is take a quiz of very basic political knowledge. When we have those three bits of information, we can then statistically estimate what the public would have wanted if it was fully informed.
Under this system, it’s not really the case that you have more power than I do. We can’t really point to any individual and say you were excluded, or your vote counted for more. The idea is to gauge what the public would actually want if it had all the information it needed.
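Brennan’s three pieces of information suggest a concrete statistical procedure, sometimes called estimating “enlightened preferences”: model vote choice as a function of demographics and the knowledge score, then ask what each demographic group would have chosen at top knowledge. Here is a minimal sketch; the electorate, the demographic groups, and the effect of knowledge on support are all simulated assumptions, not anything Brennan specifies:

```python
import random

random.seed(0)

# Hypothetical electorate: each voter has a demographic group,
# a quiz score (0-10), and a vote (1 = supports the referendum).
# In this toy world, informed voters in every group support it more.
def simulate_voter():
    group = random.choice(["A", "B", "C"])
    knowledge = random.randint(0, 10)
    base = {"A": 0.2, "B": 0.4, "C": 0.5}[group]
    p_yes = base + 0.03 * knowledge          # knowledge shifts support
    vote = 1 if random.random() < p_yes else 0
    return group, knowledge, vote

voters = [simulate_voter() for _ in range(100_000)]

def support(subset):
    return sum(v for _, _, v in subset) / len(subset)

raw = support(voters)

# "Enlightened" estimate by post-stratification: take each group's
# support rate among its best-informed members (score >= 9), then
# weight those rates by the group's share of the whole electorate.
groups = {g for g, _, _ in voters}
enlightened = 0.0
for g in groups:
    in_group = [v for v in voters if v[0] == g]
    informed = [v for v in in_group if v[1] >= 9]
    enlightened += (len(in_group) / len(voters)) * support(informed)

print(f"raw support:         {raw:.3f}")
print(f"enlightened support: {enlightened:.3f}")
```

Post-stratifying on demographics is what keeps the “informed” estimate representative of the whole electorate rather than of the unrepresentative subset of people who happen to score well on the quiz.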
Lots to think about. Read the whole thing.
In normal times and places house prices are kept fairly close to construction costs by the ordinary processes of supply and demand. Average house prices didn’t rise much over the entire 20th century, for example. Even today, house prices are kept close to construction costs in most of the United States. But extreme supply restrictions in a small number of important places (San Francisco, San Jose, LA, New York, Boston, etc.) have driven average prices well above any seen in the entire 20th century.
Over the last several decades high productivity industries have become more geographically concentrated. As a result, a substantial share of the productivity gains from technology, bio-tech and finance have gone not to producers but to non-productive landowners. High returns to land have meant lower returns to other factors of production.
The return to education, for example, has increased in the United States, but it’s less well appreciated that in order to earn high wages college educated workers must increasingly live in expensive cities. One consequence is that the net college wage premium is not as large as it appears and inequality has been over-estimated. Remarkably, Enrico Moretti (2013) estimates that 25% of the increase in the college wage premium between 1980 and 2000 was absorbed by higher housing costs. Moreover, since the big increases in housing costs have come after 2000, it’s very likely that an even larger share of the college wage premium today is being eaten by housing. High housing costs don’t simply redistribute wealth from workers to landowners. High housing costs reduce the return to education, reducing the incentive to invest in education. Thus higher housing costs have reduced human capital and the number of skilled workers, with potentially significant effects on growth.
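The accounting behind the net premium is simple. In the sketch below only Moretti’s 25% figure comes from the text; the dollar amount and the post-2000 share are invented for illustration:

```python
# Toy accounting for the net college wage premium. Only the 25%
# absorption share is from Moretti (2013); the dollar figure and
# the post-2000 share are hypothetical illustrations.
gross_premium = 20_000                      # nominal college wage premium ($/yr)

for label, absorbed in [("1980-2000 (Moretti)", 0.25),
                        ("post-2000 (hypothetical)", 0.40)]:
    net = gross_premium * (1 - absorbed)
    print(f"{label}: net premium = ${net:,.0f}")
```

The point of the exercise: any measured return to skill that must be spent on access to expensive cities is not really a return to skill at all.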
Housing is eating the world.
Lesson one in our textbook chapter on managing incentives is “You get what you pay for (even when what you pay for is not exactly what you want)”. Case in point: the California cleanup of the 2017 wildfires. At $280,000 per site, it’s four times more expensive than similar past cleanups and by far the costliest cleanup in California history. The state emphasized speed and farmed the job out to the Army Corps of Engineers, who hired contractors who were paid by the ton excavated! Paying by the ton created highly u̶n̶p̶r̶e̶d̶i̶c̶t̶a̶b̶l̶e̶ predictable consequences, as KQED reports:
…Dan said he saw workers inflate their load weights with wet mud. Sonoma County Supervisor James Gore said he heard similar stories of subcontractors actually being directed to mix metal that should have been recycled into their loads to make them heavier.
“They [contractors] saw it as gold falling from the sky,” Dan said. “That is the biggest issue. They can’t pay tonnage on jobs like this and expect it to be done safely.”
…Krickl pointed to where his home used to stand. It’s a 6-foot deep depression that he affectionately called his “pond”.
That “pond” was created when contractors removed the foundation, soil and an entire concrete pad for Krickl’s garage, leaving behind a large hole.
Here’s my favorite part:
So many sites were over-excavated that the Governor’s Office of Emergency Services recently launched a new program to refill the holes left behind by Army Corps contractors. That’s estimated to cost another $3.5 million.
Hat tip: Carl D.
Wealthier countries allocate a greater proportion of their workers to science and engineering, fields which produce ideas that often benefit everyone. This is one reason why we all gain when other countries become rich. It’s not just the number of scientists and engineers that matters, however. In a clever paper, Agarwal and Gaule demonstrate that equally talented people are more productive in wealthier countries.
Agarwal and Gaule collect the scores of thousands of teenagers who entered the International Math Olympiad between 1981 and 2000 and follow their careers. Every additional point earned at the Olympiad increases the likelihood that a participant will later earn a math PhD, be heavily cited, even win a Fields Medal. But Olympians from poorer countries are less likely to contribute to the mathematical frontier than equally talented teens from richer countries. It could be that smart teens from poorer countries are less likely to pursue a math career–and that could well be optimal–but Agarwal and Gaule find that many of the talented kids from poorer countries simply disappear off the world’s radar. Their talent is wasted.
The post-Olympiad loss is not the largest loss. Most of the potentially great mathematicians from poorer countries are lost to the world long before the opportunity to participate in an Olympiad. But it is frustrating that even after talent has been identified, it does not always bloom. We are, however, starting to do better.
You can see from the graph that upper-middle-income countries are as good at turning their talent into results as high-income countries. Agarwal and Gaule also find some evidence that the low-income penalty is diminishing over time.
As incomes increase around the world it’s as if the entire world’s processing power is coming online for the first time in human history. That, at least, is one reason for optimism.
Hat tip: Florian Ederer.
When people evaluate two or more goods separately versus jointly it’s common to see “preference reversals”. In a random survey, for example, people were asked to value the following dictionaries:
- Dictionary A: 20,000 entries, torn cover but otherwise like new
- Dictionary B: 10,000 entries, like new
When asked to value just one dictionary, either A or B, the average value was higher on Dictionary B. But when people were asked to evaluate both dictionaries together the average value was higher on Dictionary A.
What’s going on? Most people have no idea how many words a good dictionary has, so telling them that a dictionary has 10K or 20K entries just fades into the background–it’s a dictionary, of course it defines a lot of words. On the other hand, we all know that “like new” is better than “torn cover” so Dictionary B gets the higher price. When confronted with the pair of dictionaries, however, we see that Dictionary A has twice as many entries as Dictionary B and it’s obvious that more entries make for a better dictionary; in comparison to the number of entries, the sine qua non of a dictionary, the torn cover recedes in importance. The same pattern shows up with other pairs:
- Baseball Card Package A: 10 valuable baseball cards, 3 not-so-valuable baseball cards
- Baseball Card Package B: 10 valuable baseball cards
- Congressional Candidate A: Would create 5000 jobs; has been convicted of a misdemeanor
- Congressional Candidate B: Would create 1000 jobs; has no criminal convictions
In each case, B tends to have a higher value when evaluated separately but A tends to be valued higher under joint evaluation. When is separate evaluation better? When is joint evaluation better?
There is a tendency to think that joint evaluation is always better since it is the “full information” condition. Sunstein pushes against this interpretation because he argues that full information doesn’t mean full rationality. Even with full information we may still be biased. The factor that becomes salient when the goods are evaluated jointly, for example, need not be especially relevant. Is a dictionary with 20k entries actually better than one with 10k entries? Maybe 95% of the time it’s worse because it takes longer to find the word you need and the dictionary is less portable. We might let the seemingly irrefutable numerical betterness of A overwhelm what might actually be more relevant, the torn cover.
Sellers could take advantage of the bias of joint evaluation by emphasizing information that consumers might think is important but actually isn’t–our computer screen has 1.073 billion color combinations while our competitor’s has only 16.7 million–while making less salient the difference between 6 and 8 hours of battery life, which may in practice matter more.
Personally, I’d go for full information and trust myself to figure out what is truly important but maybe that is my bias. See the paper for more examples and thought-experiments.
Tens of thousands of studies correlate family socioeconomic status with later child outcomes like income, wealth and attainment and then claim the correlation is causal. Very few such studies control for genetics, although twin and adoption studies suggest that genetics is important. Cheap genomic scanning, however, has made it possible to go beyond twin studies. A new paper, for example, looks at differences in education-associated genes between non-identical twins raised in the same family and finds that children with more education-associated genes tend to have greater educational attainment and higher income later in life. In other words, differences in child outcomes both across families and within the same family are in part driven by genetics.
Surprisingly, however, the authors also find evidence for “genetic nurture,” the idea that parental genes drive the child’s environment, which in turn drives outcomes. That’s surprising because it’s hard to find strong evidence for big environmental effects in adoption studies, but here the authors can rely on more precise data. Specifically, the authors look at maternal education-associated genes that are NOT passed on to the children and yet they find that such genes are also correlated with important child outcomes (fyi, they only have maternal genes). So smart parents benefit children twice: first by passing on smart genes and second–even when they do not pass on smart genes–by passing on a smart environment. Previous studies missed the latter effect perhaps because they focused on rich parents rather than smart parents (the former being easier to measure). The authors suggest that by looking at how smart parents help kids without smart genes we may be able to figure out smart environments and generalize them to everyone. That strikes me as optimistic.
Here is the paper abstract:
A summary genetic measure, called a “polygenic score,” derived from a genome-wide association study (GWAS) of education can modestly predict a person’s educational and economic success. This prediction could signal a biological mechanism: Education-linked genetics could encode characteristics that help people get ahead in life. Alternatively, prediction could reflect social history: People from well-off families might stay well-off for social reasons, and these families might also look alike genetically. A key test to distinguish biological mechanism from social history is if people with higher education polygenic scores tend to climb the social ladder beyond their parents’ position. Upward mobility would indicate education-linked genetics encodes characteristics that foster success. We tested if education-linked polygenic scores predicted social mobility in >20,000 individuals in five longitudinal studies in the United States, Britain, and New Zealand. Participants with higher polygenic scores achieved more education and career success and accumulated more wealth. However, they also tended to come from better-off families. In the key test, participants with higher polygenic scores tended to be upwardly mobile compared with their parents. Moreover, in sibling-difference analysis, the sibling with the higher polygenic score was more upwardly mobile. Thus, education GWAS discoveries are not mere correlates of privilege; they influence social mobility within a life. Additional analyses revealed that a mother’s polygenic score predicted her child’s attainment over and above the child’s own polygenic score, suggesting parents’ genetics can also affect their children’s attainment through environmental pathways. Education GWAS discoveries affect socioeconomic attainment through influence on individuals’ family-of-origin environments and their social mobility.
You can find the appendix with the key results here. I find the lab style difficult to follow. The authors run regressions, for example, but you won’t find a regression equation followed by a table with all the results. Instead the regression is described in the appendix and then some coefficients, but by no means all, are presented later in the appendix.
Spiders can fly. Here’s the story from an excellent piece by Ed Yong in The Atlantic.
Spiders have no wings, but they can take to the air nonetheless. They’ll climb to an exposed point, raise their abdomens to the sky, extrude strands of silk, and float away. This behavior is called ballooning. It might carry spiders away from predators and competitors, or toward new lands with abundant resources. But whatever the reason for it, it’s clearly an effective means of travel. Spiders have been found two-and-a-half miles up in the air, and 1,000 miles out to sea.
That part has long been known (although it was news to me). What is new is evidence about how spiders fly: electrostatic energy!
Erica Morley and Daniel Robert have an explanation. The duo, who work at the University of Bristol, have shown that spiders can sense the Earth’s electric field, and use it to launch themselves into the air.
Every day, around 40,000 thunderstorms crackle around the world, collectively turning Earth’s atmosphere into a giant electrical circuit. The upper reaches of the atmosphere have a positive charge, and the planet’s surface has a negative one. Even on sunny days with cloudless skies, the air carries a voltage of around 100 volts for every meter above the ground. In foggy or stormy conditions, that gradient might increase to tens of thousands of volts per meter.
Ballooning spiders operate within this planetary electric field. When their silk leaves their bodies, it typically picks up a negative charge. This repels the similar negative charges on the surfaces on which the spiders sit, creating enough force to lift them into the air. And spiders can increase those forces by climbing onto twigs, leaves, or blades of grass. Plants, being earthed, have the same negative charge as the ground that they grow upon, but they protrude into the positively charged air. This creates substantial electric fields between the air around them and the tips of their leaves and branches—and the spiders ballooning from those tips.
…Morley and Robert have tested it with actual spiders.
First, they showed that spiders can detect electric fields. They put the arachnids on vertical strips of cardboard in the center of a plastic box, and then generated electric fields between the floor and ceiling of similar strengths to what the spiders would experience outdoors. These fields ruffled tiny sensory hairs on the spiders’ feet, known as trichobothria. “It’s like when you rub a balloon and hold it up to your hairs,” Morley says.
In response, the spiders performed a set of movements called tiptoeing—they stood on the ends of their legs and stuck their abdomens in the air. “That behavior is only ever seen before ballooning,” says Morley. Many of the spiders actually managed to take off, despite being in closed boxes with no airflow within them. And when Morley turned off the electric fields inside the boxes, the ballooning spiders dropped.
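A rough sanity check on the quoted numbers: lift requires the electrostatic force on the silk, qE, to exceed the spider’s weight. The spider mass and field strength below are assumed round numbers, not figures from the article:

```python
# Back-of-the-envelope: charge needed on ballooning silk to lift a
# small spider. Mass and field strength are assumed round numbers.
g = 9.81            # m/s^2
mass = 0.5e-6       # kg (~0.5 mg, a small ballooning spider; assumption)
E = 5_000           # V/m (enhanced field near vegetation tips; assumption)

weight = mass * g                 # N
q_needed = weight / E             # C, since F = qE
print(f"required charge: {q_needed * 1e9:.1f} nC")
```

The answer comes out on the order of a nanocoulomb. In the fair-weather field of 100 V/m the required charge would be fifty times larger, which is presumably why the field concentration at leaf and branch tips matters so much for takeoff.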
The economic historian Jeffrey Rogers Hummel writes an informed defense of the American Revolution. Here’s the opening:
It has become de rigueur, even among libertarians and classical liberals, to denigrate the benefits of the American Revolution. Thus, libertarian Bryan Caplan writes: “Can anyone tell me why American independence was worth fighting for?… [W]hen you ask about specific libertarian policy changes that came about because of the Revolution, it’s hard to get a decent answer. In fact, with 20/20 hindsight, independence had two massive anti-libertarian consequences: It removed the last real check on American aggression against the Indians, and allowed American slavery to avoid earlier—and peaceful—abolition.”1 One can also find such challenges reflected in recent mainstream writing, both popular and scholarly.
In fact, the American Revolution, despite all its obvious costs and excesses, brought about enormous net benefits not just for citizens of the newly independent United States but also, over the long run, for people across the globe. Speculations that, without the American Revolution, the treatment of the indigenous population would have been more just or that slavery would have been abolished earlier display extreme historical naivety. Indeed, a far stronger case can be made that without the American Revolution, the condition of Native Americans would have been no better, the emancipation of slaves in the British West Indies would have been significantly delayed, and the condition of European colonists throughout the British empire, not just those in what became the United States, would have been worse than otherwise.
The idea that concepts depend on their reference class isn’t new. A short basketball player is tall and a poor American is rich. One might have thought, however, that a blue dot is a blue dot. Blue can be defined by wavelength so unlike a relative concept like short or rich there is some objective reality behind blue even if the boundaries are vague. Nevertheless, in a thought-provoking new paper in Science the all-star team of Levari, Gilbert, Wilson, Sievers, Amodio and Wheatley show that what we identify as blue expands as the prevalence of blue decreases.
In the figure below, for example, the authors ask respondents to identify a dot as blue or purple. The figure on the left shows that as the objective shading increases from very purple to very blue more people identify the dot as blue, just as one would expect. (The initial and final 200 trials indicate that there is no tendency for changes over time.) In the figure at right, however, blue dots were made less prevalent in the final 200 trials and, after the decrease in prevalence, the tendency to identify a dot as blue increases dramatically. In the decreasing prevalence condition on the right, a dot that was previously identified as blue only 25% of the time now becomes identified as blue 50% of the time! (Read upwards from the horizontal axis and compare the yellow and blue prediction lines.)
Clever. But so what? What the authors then go on to show is that the same phenomenon happens with complex concepts for which we arguably would like to have a consistent and constant identification.
Are people susceptible to prevalence-induced concept change? To answer this question, we showed participants in seven studies a series of stimuli and asked them to determine whether each stimulus was or was not an instance of a concept. The concepts ranged from simple (“Is this dot blue?”) to complex (“Is this research proposal ethical?”). After participants did this for a while, we changed the prevalence of the concept’s instances and then measured whether the concept had expanded—that is, whether it had come to include instances that it had previously excluded.
…When blue dots became rare, purple dots began to look blue; when threatening faces became rare, neutral faces began to appear threatening; and when unethical research proposals became rare, ambiguous research proposals began to seem unethical. This happened even when the change in the prevalence of instances was abrupt, even when participants were explicitly told that the prevalence of instances would change, and even when participants were instructed and paid to ignore these changes.
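One way to see how this could happen: imagine a judge who calls a dot “blue” whenever it is bluer than the median of the dots seen recently, a relative judgment rather than a fixed-wavelength one. The toy model below is my own illustration, not the authors’ design:

```python
import random
from collections import deque

random.seed(1)

def run_trials(p_blue, n=2000, window=100):
    """Classify dots as 'blue' if their hue exceeds the median of
    recently seen hues (a relative judgment), and return how often
    borderline dots (hue in [0.45, 0.55]) get called blue."""
    recent = deque(maxlen=window)
    borderline_blue = borderline_total = 0
    for _ in range(n):
        # hue in [0,1]: blue dots cluster high, purple dots cluster low
        if random.random() < p_blue:
            hue = random.uniform(0.5, 1.0)
        else:
            hue = random.uniform(0.0, 0.5)
        recent.append(hue)
        threshold = sorted(recent)[len(recent) // 2]   # running median
        called_blue = hue > threshold
        if 0.45 <= hue <= 0.55:
            borderline_total += 1
            borderline_blue += called_blue
    return borderline_blue / borderline_total

common = run_trials(p_blue=0.5)    # blue dots are common
rare = run_trials(p_blue=0.1)      # blue dots are rare
print(f"borderline dots called blue (blue common): {common:.2f}")
print(f"borderline dots called blue (blue rare):   {rare:.2f}")
```

When blue dots become rare the running median drifts down into purple territory, so borderline dots get called blue far more often, even though nothing about the dots or the classification rule changed.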
Assuming the result replicates (the authors have 7 studies, which appear to me to be independent, although each study is fairly small (20-100 subjects) and drawn from Harvard undergrads), it has many implications.
…in 1960, Webster’s dictionary defined “aggression” as “an unprovoked attack or invasion,” but today that concept can include behaviors such as making insufficient eye contact or asking people where they are from. Many other concepts, such as abuse, bullying, mental disorder, trauma, addiction, and prejudice, have expanded of late as well.
… Many organizations and institutions are dedicated to identifying and reducing the prevalence of social problems, from unethical research to unwarranted aggressions. But our studies suggest that even well-meaning agents may sometimes fail to recognize the success of their own efforts, simply because they view each new instance in the decreasingly problematic context that they themselves have brought about. Although modern societies have made extraordinary progress in solving a wide range of social problems, from poverty and illiteracy to violence and infant mortality, the majority of people believe that the world is getting worse. The fact that concepts grow larger when their instances grow smaller may be one source of that pessimism.
The paper also gives us a way of thinking more clearly about shifts in the Overton window. When strong sexism declines, for example, the Overton window shrinks on one end and expands on the other so that what was once not considered sexism at all (e.g. “men and women have different preferences which might explain job choice“) now becomes violently sexist.
Nicholas Christakis and the fearless Gabriel Rossman point out on twitter (see at right) that it works the other way as well. Namely, the presence of extremes can help others near the middle by widening the set of issues that can be discussed or studied without fear of opprobrium.
But why shouldn’t our standards change over time? Most of the people in the 1850s who thought slavery was an abomination would have rejected the idea of inter-racial marriage. Until quite recently, wife beating wasn’t considered a violent crime. What racism and sexism mean has changed over time. Are these examples of concept creep or progress? I’d argue progress, but the blue dot experiment of Levari et al. suggests that if even objective concepts morph under prevalence inducement then subjective concepts surely will. The issue then is not to prevent progress but to recognize it and not be fooled into thinking that progress hasn’t been made just because our identifications have changed.
It’s well known that a large fraction of medical spending occurs in the last 12 months of life, but does this mean that the money spent was fruitless? Be careful: there is a big selection effect–we don’t see the people we spent money on who didn’t die. A new paper in Science by Einav, Finkelstein, Mullainathan and Obermeyer finds that most spending is not on people who are predicted to die within the next 12 months.
That one-quarter of Medicare spending in the United States occurs in the last year of life is commonly interpreted as waste. But this interpretation presumes knowledge of who will die and when. Here we analyze how spending is distributed by predicted mortality, based on a machine-learning model of annual mortality risk built using Medicare claims. Death is highly unpredictable. Less than 5% of spending is accounted for by individuals with predicted mortality above 50%. The simple fact that we spend more on the sick—both on those who recover and those who die—accounts for 30 to 50% of the concentration of spending on the dead. Our results suggest that spending on the ex post dead does not necessarily mean that we spend on the ex ante “hopeless.”
…Even if we zoom in further on the subsample of individuals who enter the hospital with metastatic cancer…we find that only 12% of decedents have an annual predicted mortality of more than 80%.
Thus, we aren’t spending on people for whom there is no hope but it doesn’t follow that it’s the spending that creates the hope. What we really want to know is who will live or die conditional on the spending. And to that issue this paper does not speak.
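The paper’s headline statistic is just spending summed over patients, conditioned on a predicted-mortality cutoff. A toy version with invented patient data (the arrays below are made up; the paper’s actual high-risk share is under 5%):

```python
# Share of spending going to patients with predicted annual mortality
# above 50%. The patient data below is invented for illustration.
patients = [
    # (predicted_mortality, annual_spending_in_dollars)
    (0.02, 4_000), (0.05, 9_000), (0.10, 25_000),
    (0.30, 60_000), (0.55, 3_000), (0.80, 2_000),
]

total = sum(spend for _, spend in patients)
high_risk = sum(spend for p, spend in patients if p > 0.5)
print(f"share of spending on predicted mortality > 50%: {high_risk / total:.1%}")
```

Notice that most of the invented spending goes to the very sick who are nonetheless not predicted to die, which is exactly the pattern the authors report: spending on the sick, not spending on the hopeless, drives the end-of-life concentration.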
The Supreme Court has agreed to look at whether the 8th Amendment clause forbidding “excessive fines” applies against the states.
The case in question involves the controversial practice of civil asset forfeiture. Tyson Timbs was convicted and served time and paid fines for selling a small amount of drugs to an undercover officer. The state also launched a civil asset forfeiture case against his car:
…But the trial court ruled against the government. Because taking Tyson’s car would be “grossly disproportionate” to his offense—for which Tyson had already been punished—the trial court held that the forfeiture would violate the Excessive Fines Clause of the Eighth Amendment. The Indiana Court of Appeals agreed. Tyson suffered from drug addiction, the court noted, but his only record of dealing was selling a small amount of drugs to undercover police. The court also noted the “financial burdens” that Tyson had already faced when he pleaded guilty. Taking his car on top of all that would violate the Eighth Amendment.
Then the Indiana Supreme Court stepped in. Breaking with at least 14 other state high courts, the Indiana Supreme Court ruled that the Eighth Amendment provides no protection at all against fines and forfeitures imposed by the states.
…“This case is about more than just a truck,” said Wesley Hottot, an attorney with the Institute for Justice. “The Excessive Fines Clause is a critical check on the government’s power to punish people and take their property. Without it, state and local law enforcement could confiscate everything a person owns based on a minor crime or—using civil forfeiture—no crime at all.”
The case has potentially very wide application, far beyond civil asset forfeiture, because municipal governments desperate for revenue are criminalizing and fining minor infractions (see also my posts on Ferguson, MO and here).
Hilda Brucker went down to the municipal court in October 2016 after receiving a phone call. She hadn’t received a formal summons or known of any wrongdoing; instead, she thought she needed to clear a ticket.
But when she arrived at the Doraville, Georgia, courthouse, Brucker said she was placed before a judge and prosecutor who accused her of violating city code — because of cracks in her driveway.
She was fined $100 and sentenced to six months criminal probation, even though this was the first time she was made aware her driveway was considered a problem.
…About 25 percent of Doraville’s operating budget is reliant on fees and fines, according to IJ, a nonprofit law firm. From August 2016 to August 2017, it raked in about $3.8 million in fines, according to IJ’s lawsuit.
“It’s unconstitutional because it creates a financial incentive for the city government … to ticket people,” Josh House, an IJ attorney on the case, told Fox News. He said people in the town were being “punished” for the condition of their property by having to “fund the Doraville city government.”
The Institute for Justice is doing great work.
When Americans buy a car from Mexico, half of what they buy was earlier imported from the United States (74% of foreign imports in the car are from the US, and foreign imports and labor account for 2/3 of value: .74 × .66 ≈ .49–corrected from an earlier version).
The firms exporting vehicles from Mexico to the U.S. have set up very deep supply chains between the two countries — much deeper than previously thought. About 74 percent of all the foreign parts used by vehicle assemblers in Mexico that export to the U.S. are imported from the U.S. itself. In contrast, only 18 percent of the imported parts used by Mexican firms exporting to Germany come from the U.S. (see chart). Because the parts that come from the U.S. also include inputs from other countries, it is important to account for international trade along all stages of the supply chain. I estimate that thirty-eight percent of the value of the average finished vehicle exported from Mexico to the U.S. is American value returning home, more than double the 17 percent figure that had been commonly considered.
In a world with deep supply chains a trade war will be much more expensive than in a conventional world. In a conventional world, a tariff only reduces efficiency at the margin as it relocates production from foreign to domestic firms who in the initial equilibrium have equal costs. But in a deep supply chain world a tariff isn’t just a tax on imports; it also raises the costs of production of domestic firms. In a deep supply chain world, for example, a tariff on car imports from Mexico raises the cost of US auto production.
Paul Krugman recently argued that for an equal reduction in trade, Trump’s trade war would be less costly than Brexit because Trump’s tariffs will raise revenue and only distort production on the margin. In contrast, he argued that Brexit would raise costs on all units of production. In a deep supply chain world, however, the difference between Brexit and Trumpit is not so large. In such a world, tariffs increase the price of imports and make it more costly to produce goods domestically.
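A toy calculation makes the point. Only the 38% US-content figure comes from the quote above; the car price, tariff rate, and parts shares are hypothetical:

```python
# Toy incidence of a 25% tariff on a $20,000 car imported from Mexico.
# The 38% US-content share is from the quote above; everything else
# (prices, shares) is a hypothetical illustration.
car_price = 20_000
tariff_rate = 0.25
us_content_share = 0.38

tariff_paid = car_price * tariff_rate
tax_on_returning_us_value = tariff_paid * us_content_share

print(f"tariff paid:                      ${tariff_paid:,.0f}")
print(f"of which falls on US-made value:  ${tax_on_returning_us_value:,.0f}")

# Deep-chain effect on a US assembler: if Mexican parts are 30% of a
# domestically built car's cost (assumption), the same tariff raises
# domestic production costs too.
domestic_cost = 18_000
mexican_parts_share = 0.30
domestic_cost_increase = domestic_cost * mexican_parts_share * tariff_rate
print(f"cost increase for a US-built car: ${domestic_cost_increase:,.0f}")
```

In the conventional model only the first number matters; in the deep supply chain model the tariff also taxes American value returning home and raises the costs of the very domestic producers it is meant to protect.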
Should there be more publicly funded space exploration? Noa Ovadia recently argued that money should be spent on more pressing needs than space travel. An expert from IBM smacked that argument down pretty convincingly:
It is very easy to say that there are more important things to spend money on, and I do not dispute this. No one is claiming that this is the only item on our expense list. But that is beside the point. As subsidizing space exploration would clearly benefit society, I maintain that this is something the government should pursue.
Oh, did I mention the expert was Dr. Watson?
I am quoted on how economists are portrayed in the media:
It is the best of times. It is the worst of times. It is not uncommon, for example, to see critiques of economics in the media which are about as sophisticated as saying “look at those silly physicists who think that a bowling ball and a feather fall at the same rate.” Even people who should know better, like David Suzuki, say ridiculously obtuse things when it comes to economics–perhaps for ideological reasons.
At the same time, the quality of the coverage of economics in the media is often excellent and has never been better. Greg Ip, David Leonhardt, Catherine Rampell, Adam Davidson, Stacey Vanek Smith, Cardiff Garcia, Megan McArdle all do superb economic commentary and reporting not just about the economy but about economics. And those are only the people off the top of my head, I could name many more.
The public also has access to top economists through blogs and social media. I would count Paul Krugman, Tyler Cowen, John Cochrane, and Jennifer Doleac in this category.
While some people claim that economics is out of touch or obsolete, economics passes the market test. Economists have never been more in demand. Designing new types of markets is a big part of the internet economy, and computer scientists, followed by economists, are the leaders in this field. Google and Facebook run billions of dollars of auctions using what was once an obscure economic theory (Vickrey-Clarke-Groves auctions). Google, Facebook, Uber and Airbnb all hire economists to better understand data and design new economic mechanisms. Even some online games like Eve Online are hiring economists to help run virtual economies–one such economist, Yanis Varoufakis, went from a virtual economy to a real economy when he became Greece’s Minister of Finance.
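The simplest instance of a Vickrey-Clarke-Groves mechanism is the single-item sealed-bid second-price auction: the highest bidder wins but pays the externality imposed on the others, which here is just the second-highest bid. A minimal sketch, with hypothetical bidders:

```python
# Single-item case of a VCG mechanism: a sealed-bid second-price
# (Vickrey) auction. The winner pays the second-highest bid, which
# makes bidding one's true value a dominant strategy.

def vickrey_auction(bids):
    """bids: dict mapping bidder name -> bid amount.
    Returns (winner, price) where price is the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the highest losing bid
    return winner, price

winner, price = vickrey_auction({"ann": 120, "bob": 100, "cam": 90})
print(winner, price)  # ann 100
```

The production versions that run ad auctions generalize this to many items and positions, but the pricing logic — pay the harm you impose on other bidders — is the same.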
If you want to understand the world and make it a better place there is no better degree than an economics degree because it is so versatile.
Firms involved in international commerce routinely contract that disputes are to be resolved by private courts of arbitration such as the International Court of Arbitration, the London Court of International Arbitration or the Singapore International Arbitration Center. These courts of arbitration compete for clients and thus have an incentive to resolve disputes fairly, quickly and inexpensively. Courts compete, for example, to provide arbiters who are experts not simply in the law but in the relevant area of commerce. The New York Convention of 1958 says that private arbitration decisions will be enforced by the national courts of any of the 159 signatories; thus private arbitration leverages national enforcement but is otherwise not tethered to national law (e.g., in the US see Mitsubishi v. Soler Chrysler and National Oil v. Libyan Sun). Over time private courts of international arbitration have developed a system of law that transcends nations, an anational law–this is the new lex mercatoria.
I propose that courts analogous to the courts of arbitration that govern international commerce be created to govern smart contracts in virtual space. Arbitration of smart contracts will develop a new private law that will evolve to meet the needs of virtual commerce, a true lex cryptographia. At first, it might seem contradictory to advocate for courts of smart contracts and the development of lex cryptographia. Isn’t the whole point of smart contracts that no courts or lawyers are needed? Similarly, lex cryptographia is usually understood to refer to the smart contracts themselves–code is law–rather than to law governing such contracts. In fact, it is neither desirable nor possible to divorce smart contracts from law.
Smart contracts execute automatically, but only simple contracts such as those involving escrow are really self-enforcing. Most contracts, smart or dumb, involve touchstones with the real world. Canonical examples such as the smart contract that lets you use an automobile so long as the rent has been paid illustrate the potential for disputes. Bugs in the code? Disputes over the quality of the car? What happens when a data feed is disputed or internet service is disrupted? Smart contracts applied to the real world are a kind of digital rights management, with all of DRM’s problems and annoyances.
Some of these problems can be dealt with online using decentralized mechanisms. But we don’t yet know which decentralized mechanisms are robust or cost-effective. Moreover, when marveling at the wisdom of crowds we should not forget the wisdom of experts. Nick Szabo once remarked that if contract law were suddenly forgotten it would take hundreds of years to recover the embedded wisdom. Contract law, for example, is filled with concepts like mistake, misrepresentation, duress, negligence and intention that are not easily formalized in code. Contract law is a human enterprise. And the humans who write contracts want law with terms like negligence precisely because these terms fill gaps that cannot be fully specified in contracts, let alone in code.
I am enthusiastic about smart contracts on blockchains. Smart contracts will significantly reduce transaction costs and thus let people create valuable, new private orderings. But it will be more profitable to integrate law and code than to try to replace law with code. Integration will require new ways of thinking. The natural language version of a contract–what the parties intend to agree to–may not map precisely to the coded version. Arbiters will be called in to adjudicate and thus will have to be experts in code as well as in law. Smart contracts can be made by anonymous parties who may want a dispute resolved not just privately but anonymously. Smart contracts can be designed with escrow and multisignatory authority so arbiters will also become decision enforcers. All of these issues and many more will have to be understood and new procedures and understandings developed. The competitive market process will discover novel uses for smart contracts and the competitive market process among arbiters will discover novel law. Law will adjust to business practice and business practice to law.
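The escrow-plus-arbiter idea can be sketched in a few lines. The following is a toy model in plain Python, not real blockchain code, with hypothetical party names: funds locked in the contract are released only when two of the three keyholders — buyer, seller, and arbiter — sign for the same payee, so the arbiter’s ruling is automatically enforceable whenever the parties disagree.

```python
# Toy model of a 2-of-3 multisignature escrow: the arbiter becomes a
# decision enforcer because its signature, combined with either party's,
# is enough to release the locked funds.

class MultisigEscrow:
    SIGNERS = {"buyer", "seller", "arbiter"}
    PAYEES = {"buyer", "seller"}

    def __init__(self, amount):
        self.amount = amount
        self.votes = {}       # signer -> payee that signer approves
        self.paid_to = None   # set once the 2-of-3 threshold is met

    def sign(self, signer, payee):
        """Record a signature; release funds once 2 of 3 agree."""
        assert signer in self.SIGNERS and payee in self.PAYEES
        if self.paid_to is not None:
            return self.paid_to          # already settled
        self.votes[signer] = payee
        for candidate in self.PAYEES:
            if sum(1 for p in self.votes.values() if p == candidate) >= 2:
                self.paid_to = candidate  # threshold reached: funds released
        return self.paid_to

escrow = MultisigEscrow(amount=100)
escrow.sign("buyer", "seller")           # happy path: buyer is satisfied...
print(escrow.sign("arbiter", "seller"))  # ...or the arbiter rules; prints: seller
```

In the happy path the buyer and seller sign together and the arbiter is never consulted; only in a dispute does the arbiter’s key decide where the funds go.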
In short, the best way to create a vital new lex cryptographia is through competitive, private arbitration built on the model that already governs international commerce.