Declining Labor Force Growth Explains Declining Dynamism

The best paper I have read in a long time is Hopenhayn, Neira and Singhania’s From Population Growth to Firm Demographics: Implications for Concentration, Entrepreneurship and the Labor Share. HNS do a great job at combining empirics and theory to explain an important fact about the world in an innovative and surprising way. The question the paper addresses is, Why is dynamism declining? As you may recall, my paper with Nathan Goldschlag, Is regulation to blame for the decline in American entrepreneurship?, somewhat surprisingly answered that the decline in dynamism was too widespread across too many industries to be explained by regulation. HNS point to a factor which is widespread across the entire economy, declining labor force growth.

Figure Two of the paper (at right) looks complicated but it tells a consistent and significant story. The top row of the figure shows three measures of declining dynamism: the rise in concentration, measured as the share of employment accounted for by large (250+) firms; the increase in average firm size; and the declining exit rate. The bottom row of the figure shows the same measures but this time conditional on firm age. The bottom row tells us two things. First, most of the lines jump around a bit but are generally flat: once we control for firm age we do not see, for example, increasing concentration. Second, peering closer at the bottom row, older firms account for a larger share of employment, are bigger and have lower exit rates. Putting these two facts together suggests that we might be able to explain all of the trends in the top row with one fact: aging firms.

So what explains aging firms? Changes in labor force growth have a big influence on the age distribution of firms. Assume, for example, that labor force growth increases. An increase in labor force growth means we need more firms. Current firms cannot absorb all new workers because of diminishing returns to scale. Thus, new workers lead to new firms. New firms are small and young. In contrast, declining labor force growth means fewer new firms. Thus, the average firm is bigger and older.

HNS then embed this insight into a dynamic model in which firms enter and exit and grow and shrink over time according to random productivity shocks (a modified version of Hopenhayn (1992)). We need a dynamic model because the effects of a shock play out over many periods: suppose the labor force grows today; this causes more young and small firms to enter the market today. Young and small firms, however, have high exit rates, so today’s high entry rate will generate a high exit rate tomorrow and also a high entry rate tomorrow as replacements arrive. Thus, a shock to labor force growth today will influence the dynamics of the system many periods into the future.
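To see the mechanism, here is a minimal steady-state sketch in Python. This is my own toy illustration, not the HNS model: firms enter in proportion to labor force growth, face a constant exit hazard each period, and survivors grow with age. All parameter values are made up for illustration.

import numpy as np

# Toy steady-state firm demographics (not the HNS calibration): slower
# labor force growth mechanically shifts the firm distribution toward
# older and larger firms.
def steady_state(entry_growth, exit_rate=0.10, size_growth=0.05, max_age=200):
    ages = np.arange(max_age)
    # Mass of age-a firms: the cohort that entered a periods ago was smaller
    # by a factor (1+g)^-a and a fraction (1-exit_rate)^a of it survived.
    mass = (1 + entry_growth) ** (-ages) * (1 - exit_rate) ** ages
    mass /= mass.sum()
    size = (1 + size_growth) ** ages  # surviving firms grow with age
    return (ages * mass).sum(), (size * mass).sum()

for g in (0.02, 0.005):  # fast vs. slow labor force growth
    avg_age, avg_size = steady_state(g)
    print(f"labor force growth {g:.1%}: avg firm age {avg_age:.1f}, avg firm size {avg_size:.2f}")

Even in this stripped-down setting, cutting entry growth raises average firm age and size, which is the force HNS quantify in a full equilibrium model.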

So what happens when we feed the actual decline in labor force growth into the HNS dynamic model (calibrated to 1978)? Surprisingly, we can explain a lot about declining dynamism. At right, for example, is the startup rate. Note that it jumps up with rising labor force growth in the 1950s and 1960s and declines after the 1970s.

The paper also shows that the model predictions for firm age and concentration also fit the data reasonably well.

Most surprisingly, HNS argue that essentially all of the decline in the labor share of national income can be explained by the simple fact that larger firms use fewer non-production workers per unit of output. That is very surprising. I’m not sure I believe it.

If HNS are correct it implies a very different perspective on the decline in labor share. In the HNS model, for example, non-competitive factors do not play a role, so there are no monopolies or markups. Moreover, if the decline in labor share is caused by larger firms using fewer non-production workers then this is surely a good thing. In their model, however, there is only one factor of production, so declining labor share means increasing profit share, which I find dubious. If production and non-production labor are distinguished, it may also be that the declining non-production share will redound to production labor, so the labor share won’t fall as much. Nevertheless, the ideas here are intriguing and the results on dynamism, which are the heart of the paper, do not rely on the arguments about the labor share.

John Bogle, RIP

In 1974, Paul Samuelson wrote Challenge to Judgment, a searing critique of money managers. Samuelson challenged the money managers to show that they could beat the market. He concluded that “a respect for evidence compels me to incline toward the hypothesis that most portfolio decision makers should go out of business.” Samuelson hoped for something new:

At the least, some large foundation should set up an in-house portfolio that tracks the S&P 500 Index — if only for the purpose of setting up a naive model against which their in-house gunslingers can measure their prowess.

Inspired by Samuelson, John Bogle created the first index fund in 1976 and it quickly…failed. In the initial underwriting the fund raised only $11.3 million, which wasn’t even enough to buy a minimum portfolio of all the stocks in the S&P 500! The Street crowed about “Bogle’s folly” but Bogle persevered, and in so doing he benefited millions of investors, saving them billions of dollars in fees. As Warren Buffett said today:

Jack did more for American investors as a whole than any individual I’ve known. A lot of Wall Street is devoted to charging a lot for nothing. He charged nothing to accomplish a huge amount.

The creation of the index fund is a great example of how economic theory and measurement can improve practice. Our course on Money Skills at MRU is very much influenced by Bogle. Tyler and I recommend index funds and Vanguard in particular. In the videos and in our textbook we present data from Bogle’s book Common Sense on Mutual Funds. Here’s the first video in the series.

Subtitling>Dubbing

We study the influence of television translation techniques on the worldwide distribution of English-speaking skills. We identify a large positive effect for subtitled original version broadcasts, as opposed to dubbed television, on English proficiency scores. We analyze the historical circumstances under which countries opted for one of the translation modes and use it to account for the possible endogeneity of the subtitling indicator. We disaggregate the results by type of skills and find that television works especially well for listening comprehension. Our paper suggests that governments could promote subtitling as a means to improve foreign language proficiency.

That’s from TV or not TV? The impact of subtitling on English skills, a clever study with a useful finding.

I cannot help but note that our Principles of Microeconomics and Principles of Macroeconomics videos at MRU (and linked to in our textbook) are subtitled in English, Spanish, Hindi, Arabic and other languages so perhaps we can help teach languages as well as economics.

The Name Game: Urbanization in India

What is rural? What is urban? Different countries use different definitions and sometimes there are multiple definitions within a country. In India, as Reuben Abraham and Pritika Hingorani write, the same state can be 16% or 99% urban depending on the definition.

In India, only “statutory towns” are considered urban and have a municipal administration — a definition that officially leaves the country 26 percent urban. State governments make the decision using widely differing criteria; demographic considerations are peripheral at times. The Census of India provides the only other official, and uniform, estimate. Its formula uses a mix of population, density and occupation criteria, and pegs India at 31 percent urban.

Such estimates can be misleadingly low. For instance, Kerala is statutorily only 16 percent urban. Yet the census sees the well-developed southern state as approximately 48 percent urban. If we use a population cutoff of 5,000 residents as Ghana and Lebanon do, or even Mexico’s threshold of 2,500 people, Kerala’s urban share leaps to 99 percent, which is more consistent with ground reality.

So what? A rose by any other name smells as sweet but definitions matter for policy and resource flows:

The consequences of underestimating the urban share of the population are dire. Resources are badly misallocated: By one estimate, over 80 percent of federal government financing still goes to rural development. This reduces incentives for politicians, especially rural ones, to change the status quo. Tens of millions of Indians who live in dense, urban-like settlements are governed by rural governments that lack the mandate and the money to deliver basic services. In India, urban governments are constitutionally required to provide things such as fire departments, sewer lines, arterial roads and building codes. Local bodies in rural areas aren’t.

In addition, urban planning becomes particularly haphazard when cities grow but aren’t defined as such. How can roads, water lines, sewage lines and metros be arranged when a city is governed by multiple rural units?

As satellite data clearly show, most cities extend well beyond their administrative limits, and dense, linear settlements spread out of those cities along transit corridors. This growth is unregulated and unplanned, marred by narrow roads, growing distance from major thoroughfares, limited open space and haphazardly divided plots.

…what appears to be a single economic unit is now governed by a multitude of rural and urban jurisdictions, with no mechanism to coordinate on mobility, public goods or municipal services. It’s difficult and expensive to retrofit such cities with proper infrastructure and services.

The United States is Underpoliced and Overprisoned

Daniel Bier has a nice rundown on the ratio of police to prison spending comparing the United States to Europe. The US spends less on police and more on prisons than any European country.

Moreover, this is not because Europe spends less on criminal justice. Surprisingly, there is very little correlation between total spending and the ratio of police to prison spending. What we see in the graph below, for example, is that Europe is on the right, indicating a higher ratio of police to prison spending, but it is not noticeably below the US states in total spending as a percent of GDP.

As I have argued before, the United States is underpoliced and overprisoned.

Two Teaching Resources

William Luther has put together an excellent list of Planet Money episodes that are keyed to the relevant chapters in Modern Principles of Economics. A similar list is also available for the excellent intermediate-micro text by Goolsbee, Levitt and Syverson.

For graduate students, Luke Stein has put together a 64 page “cheat sheet” (pdf) for basically the first 2 years of micro and macro theory. It’s not for everyone but would be great for studying for prelims at many top programs. This diagram summarizing key results in consumer theory was excellent.

Border Crime

Alex Nowrasteh at Cato shows that crime is lower in counties adjacent to the Mexican border than in the rest of the United States:

If the entire United States had crime rates as low as those along the border in 2017, then the number of homicides would have been 33.8 percent lower, property crimes would have been 2.1 percent lower, and violent crimes would have dropped 8 percent.

Obviously border counties are different from non-border counties (more rural, etc.). Nevertheless, the raw fact is striking in comparison to the heated rhetoric about illegal immigration and American blood.

Ethereum Classic Double Spend Attack?

Yesterday, I warned that double spend attacks were cheap and particularly likely for smaller coins using standard hash algorithms. Coincidentally (?) later that day there was this:

It’s not entirely clear whether that is true or if there is an alternative explanation. Coinbase, however, says that approximately $500,000 was double spent. You can find a good discussion on Hacker News. You can also find an interesting calculation of the cost of renting enough hashing power to run a 51% attack on various networks here. It’s cheap. The costs given are underestimates in one respect, since they don’t include block rewards, but overestimates in another, as renting may not always be possible.

Here are some back-of-the-envelope calculations on the cost of the ETC attack. If I am reading the blockchain stats correctly, ETC has a block time of about 15 seconds and the chain was reorganized almost to a depth of 100 blocks, or 1500 seconds, i.e. 25 minutes. The cost of dominating the ETC hashing power for an hour is around $5000. Thus, this attack could have been very profitable, even adding in substantial setup costs. Feel free to write in the comments if these numbers look wrong.
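For anyone who wants to check, here is the same arithmetic as a short script, using only the rough figures quoted above:

# Back-of-the-envelope cost of the ETC reorganization, using the rough
# figures quoted in the text.
block_time_s = 15        # approximate ETC block time in seconds
reorg_depth = 100        # approximate depth of the chain reorganization
rent_per_hour = 5_000    # rough cost in dollars per hour to dominate ETC hashing power

attack_minutes = block_time_s * reorg_depth / 60      # about 25 minutes
attack_cost = rent_per_hour * attack_minutes / 60     # roughly $2,100 of rented hash power
print(f"{attack_minutes:.0f} minutes, about ${attack_cost:,.0f} in rented hash power")

Set against the roughly $500,000 that Coinbase says was double spent, that is an enormous margin even after generous setup costs.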

As I mentioned yesterday, it’s not surprising that this is happening now because with massive falls in prices in most cryptocurrencies there is an excess supply of computation. Expect more stress testing this year.

Hat tip: The excellent Jake Seliger.

Bitcoin is Less Secure than Most People Think

I spent part of the holidays poring over Eric Budish’s important paper, The Economic Limits of Bitcoin and the Blockchain. Using a few equilibrium conditions and some simulations, Budish shows that Bitcoin is vulnerable to a double spending attack.

In a double spending attack, the attacker sells, say, bitcoin for dollars. The bitcoin transfer is registered on the blockchain and then, perhaps after some escrow period, the dollars are received by the attacker. As soon as the bitcoin transfer is registered in a block–call this block 1–the attacker starts to mine his own blocks which do not include the bitcoin transfer. Suppose there is no escrow period; then the best case for the attacker is that they mine two blocks, 1′ and 2′, before the honest nodes mine block 2. In this case, the attacker’s chain–0,1′,2′–is the longest chain, so miners will add to this chain and not the 0,1… chain, which becomes orphaned. The attacker’s chain does not include the bitcoin transfer, so the attacker still has the bitcoins and they have the dollars! Also remember, even though it is called a double-spend attack it’s actually an n-spend attack, so the gains from attack could be very large.

But what happens if the honest nodes mine a new block before the attacker mines 2′? Then the honest chain is 0,1,2, but the attacker still has block 1′ mined and after some time they will have 2′; then they have another chance. If the attacker can mine 3′ before the honest nodes mine block 3 then the new longest chain becomes 0,1′,2′,3′ and the honest nodes start mining on this chain rather than on 0,1,2. It can take time for the attacker to produce the longest chain but if the attacker has more computational power than the honest nodes, even just a little more, then with probability 1 the attacker will end up producing the longest chain.
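The race is easy to see in a stylized Monte Carlo simulation. This is my own toy version, not Budish’s model (it ignores the escrow period and other details, so it will not reproduce the exact figures quoted below), but it captures the logic: each new block goes to the attacker with probability proportional to their share of hash power, and the attacker wins once their private chain is strictly longer.

import random

# Toy simulation of the chain race: the attacker starts one block behind
# (the honest chain already contains block 1) and wins once their private
# chain is strictly longer than the honest chain.
def blocks_until_attacker_wins(attacker_power=1.05, honest_power=1.0):
    p_attacker = attacker_power / (attacker_power + honest_power)
    lead = -1      # attacker chain length minus honest chain length
    blocks = 0     # total blocks mined by both sides during the race
    while lead < 1:
        lead += 1 if random.random() < p_attacker else -1
        blocks += 1
    return blocks

trials = [blocks_until_attacker_wins() for _ in range(20_000)]
print(sum(trials) / len(trials))  # average length of the race in blocks

The average is finite whenever the attacker has any edge, but the distribution has a long tail: most races are short, while a few run on for hundreds of blocks.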

As an example, Budish shows that if the attacker has just 5% more computational power than the honest nodes then on average it takes 26.5 blocks (a little over 4 hours) for the attacker to have the longest chain. (Most of the time it takes far fewer blocks, but occasionally it takes hundreds of blocks for the attacker to produce the longest chain.) The attack will always be successful eventually; the key question is, what is the cost of the attack?

The net cost of a double-spend attack is low because attackers also earn block rewards. For example, in the case above it might take 26 blocks for the attacker to substitute its longer chain for the honest chain but when it does so it earns 26 block rewards. The rewards were enough to cover the costs of the honest miners and so they are more or less enough to cover the costs of the attacker. The key point is that attacking is the same thing as mining. Budish assumes that attackers add to the computation power of the network which pushes returns down (for both the attacker and interestingly the honest nodes) but if we assume that the attacker starts out as honest–a Manchurian Candidate attack–then there is essentially zero cost to attacking.
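Here is the same logic as a tiny worked example, in illustrative units where free entry pushes the cost of mining a block to roughly one block reward (my simplification, not a figure from Budish’s paper):

# Net cost of the attack, in units of one block reward: mining costs are
# roughly offset by the block rewards the attacker collects once their
# chain becomes the longest.
blocks_in_attack = 26     # e.g., the race length in the example above
cost_per_block = 1.0      # mining cost per block is about one block reward under free entry
reward_per_block = 1.0
net_cost = blocks_in_attack * (cost_per_block - reward_per_block)
print(net_cost)           # approximately zero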

It’s often said that Bitcoin creates security with math. That’s only partially true. The security behind avoiding the double spend attack is not cryptographic but economic, it’s really just the cost of coordinating to achieve a majority of the computational power. Satoshi assumed ‘one-CPU, one-vote’ which made it plausible that it would be costly to coordinate millions of miners. In the centralized ASIC world, coordination is much less costly. Consider, for example, that the top 4 mining pools today account for nearly 50% of the total computational power of the network. An attack would simply mean that these miners agree to mine slightly different blocks than they otherwise would.

Aside from the cost of coordination, a small group of large miners might not want to run a double spending attack because if Bitcoin is destroyed it will reduce the value of their capital investments in mining equipment (Budish analyzes several scenarios in this context). Call that the Too Big to Cheat argument. Sound familiar? The Too Big to Cheat argument, however, is a poor foundation for Bitcoin as a store of value because the more common it is to hold billions in Bitcoin the greater the value of an attack. Moreover, we are in especially dangerous territory today because bitcoin’s recent fall in price means that there is currently an overhang of computing power which has made some mining unprofitable, so miners may feel this is a good time to get out.

The Too Big to Cheat argument suggests that coins are vulnerable to centralized computational power that can be easily repurposed. The tricky part is that the efficiencies created by specialization–as for example in application-specific integrated circuits–tend to lead to centralization but by definition make repurposing more difficult. CPUs, in contrast, tend to lead to decentralization but are easily repurposed. It’s hard to know where safety lies. But what we can say is that any alt-coin that uses a proof-of-work algorithm that can be solved using ASICs is especially vulnerable, because miners could run a double spend attack on that coin and then shift over to mining bitcoin if the value of that coin is destroyed.

What can help? Ironically, traditional law and governance might help. A double spend attack would be clear in the data and, at least in general terms, so would the attackers. An attack involving dollars and transfers from banks would be potentially prosecutable, greatly raising the cost of an attack. Governance might help as well. Would a majority of miners (not including the attacker) be willing to fork Bitcoin to avoid the attack, much as was done with The DAO? Even the possibility of a hard fork would reduce the expected value of an attack. More generally, all of these mechanisms are a way of enforcing some stake loss or capital loss on dishonest miners. In theory, therefore, proof of stake should be less vulnerable to 51% attacks, but proof of stake is much more complicated to make incentive-compatible than proof of work.

All of this is a far cry from money without the state. Trust doesn’t have the solidity of math but we are learning that it is more robust.

Hat tip to Joshua Gans and especially to Eric Budish for extensive conversation on these issues.

Addendum: See here for more on the Ethereum Classic double spend attack.

Hacking Photosynthesis

The vast majority of life on Earth depends, either directly or indirectly, on photosynthesis for its energy. And photosynthesis depends on an enzyme called RuBisCO, which uses carbon dioxide from the atmosphere to build sugars. So, by extension, RuBisCO may be the most important catalyst on the planet.

Unfortunately, RuBisCO is, well, terrible at its job. It might not be obvious based on the plant growth around us, but the enzyme is not especially efficient at catalyzing the carbon dioxide reaction. And, worse still, it often uses oxygen instead. This produces a useless byproduct that, if allowed to build up, will eventually shut down photosynthesis entirely. It’s estimated that crops such as wheat and rice lose anywhere from 20 to 50 percent of their growth potential due to this byproduct.

While plants have evolved ways of dealing with this byproduct, they’re not especially efficient. So a group of researchers at the University of Illinois, Urbana decided to step in and engineer a better way. The result? In field tests, the engineered plants grew up to 40 percent more mass than ones that relied on the normal pathways.

That’s John Timmer at Ars Technica summarizing a paper by South et al. in Science. The experiment was done in tobacco plants but the same pathways are used in the C3 group of plants including rice, wheat, barley, soybean, cotton and sugar beets so the applications are large.

Hayek in the Machine

Medium: Nanoeconomics is about human-machine exchange, and machine-machine exchange. It is the economics of distributed ledgers and artificial intelligence, of object-capability programming and cybersecurity, of ‘central planning’ in the machine, and of ‘markets’ in the machine.

As we’ve come to understand blockchains and other distributed ledger technologies as an institutional technology, we’ve also learned that not only can blockchains coordinate and govern decentralised human economies (as governments, firms and markets do) but they can coordinate and govern decentralised machine economies (or human-machine economies).

This extends what Hayek called catallaxy — the spontaneous order of the market — from the market coordination of human action to the coordination of human-to-machine and machine-to-machine economies.

Nanoeconomics is not a new idea. In their Agoric papers published in 1988, Mark Miller and K. Eric Drexler developed the idea of a computational system as a space for economic exchange. The development of object-oriented programming has created software agents, which vie for scarce resources in the machine. But right now, these agents are governed through planning, not markets. Miller and Drexler suggested an alternative: a market-based computation system. In this system:

“machine resources — storage space, processor time, and so forth — have owners, and the owners charge other objects for use of these resources. Objects, in turn, pass these costs on to the objects they serve, or to an object representing the external user; they may add royalty charges, and thus earn a profit.”

With global computers like the smart-contract platform Ethereum we now have the bones of such a market-based computational architecture.

Interesting post from Chris Berg, Sinclair Davidson and Jason Potts of RMIT Blockchain Innovation Hub in Australia and Bill Tulloh from Agoric.
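To make the Miller and Drexler passage concrete, here is a minimal sketch of what market-based computation could look like. The class names, prices, and royalty rate are invented for illustration; this is not code from the Agoric papers or from any existing platform.

# Sketch of the Agoric idea: machine resources have owners who charge for
# their use, and objects pass those costs on to the objects they serve,
# possibly adding a royalty.
class ResourceOwner:
    def __init__(self, price_per_unit):
        self.price_per_unit = price_per_unit

    def sell(self, units):
        return units * self.price_per_unit    # charge for processor time, storage, etc.

class ServiceObject:
    def __init__(self, owner, royalty=0.10):
        self.owner = owner
        self.royalty = royalty

    def serve(self, units_needed):
        cost = self.owner.sell(units_needed)  # buy the machine resources it needs
        return cost * (1 + self.royalty)      # pass the cost on, plus a profit margin

cpu = ResourceOwner(price_per_unit=0.02)      # hypothetical price per unit of CPU time
service = ServiceObject(cpu)
print(service.serve(units_needed=500))        # what the end user is billed; about 11.0 here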

The Amazon War and the Evolution of Private Law

It’s well known that to boost their sales, sellers sometimes post fake 5-star reviews on Amazon. Amazon tries to police such actions by searching out and banning sellers with fake reviews. An unintended consequence is that some sellers now post fake 5-star reviews on their competitors’ listings.

The Verge: As Amazon has escalated its war on fake reviews, sellers have realized that the most effective tactic is not buying them for yourself, but buying them for your competitors — the more obviously fraudulent the better. A handful of glowing testimonials, preferably in broken English about unrelated products and written by a known review purveyor on Fiverr, can not only take out a competitor and allow you to move up a slot in Amazon’s search results, it can land your rival in the bewildering morass of Amazon’s suspension system.

…There are more subtle methods of sabotage as well. Sellers will sometimes buy Google ads for their competitors for unrelated products — say, a dog food ad linking to a shampoo listing — so that Amazon’s algorithm sees the rate of clicks converting to sales drop and automatically demotes their product.

What does a seller do when they are banned from Amazon? Appeal to the Amazon legal system and for that you need an Amazon lawyer.

The appeals process is so confounding that it’s given rise to an entire industry of consultants like Stine. Chris McCabe, a former Amazon employee, set up shop in 2014. CJ Rosenbaum, an attorney in Long Beach, New York, now bills himself as the “Amazon sellers lawyer,” with an “Amazon Law Library” featuring Amazon Law, vol. 1 ($95 on Amazon). Stine’s company deals with about 100 suspensions a month and charges $2,500 per appeal ($5,000 if you want an expedited one), which is in line with industry norms. It’s a price many are willing to pay. “It can be life or death for people,” McCabe says. “If they don’t get their Amazon account back, they might be insolvent, laying off 10, 12, 14 people, maybe more. I’ve had people begging me for help. I’ve had people at their wits’ end. I’ve had people crying.”

Amazon is a marketplace that is now having to create a legal system to govern issues of fraud, trademark, and sabotage, and also what are in effect new types of intellectual property, such as the Amazon brand registry. Marketplaces have always been places of private law and governance but there has never before been a marketplace with Amazon’s scale and market power. It’s an open question how well private law will develop in this regime.

Athletes Don’t Own Their Tattoos

NYTimes: Any creative illustration “fixed in a tangible medium” is eligible for copyright, and, according to the United States Copyright Office, that includes the ink displayed on someone’s skin. What many people don’t realize, legal experts said, is that the copyright is inherently owned by the tattoo artist, not the person with the tattoos.

Some tattoo artists have sold their rights to firms, which are now suing video game producers who depict the tattoos on the players’ likenesses:

The company Solid Oak Sketches obtained the copyrights for five tattoos on three basketball players — including the portrait and area code on Mr. James — before suing in 2016 because they were used in the NBA 2K series.

…Before filing its lawsuit, Solid Oak sought $819,500 for past infringement and proposed a $1.14 million deal for future use of the tattoos.

To avoid this shakedown, players are now being told to get licenses from artists before getting tattooed.

Why Doesn’t the FBI Videotape Interviews?

Michael Rappaport at Law and Liberty:

…if the FBI believes that an interviewee has lied during the interview, he or she can be prosecuted for false statements to the government. The penalty for this is quite serious. Under 18 U.S.C. 1001, making a false statement to the federal government in any matter within its jurisdiction is subject to a penalty of 5 years imprisonment. That is a long time.

How does the FBI prove the false statement? One might think that they would make a videotape of the interview, which would provide the best evidence of whether the interviewee made a false statement. But if one thought this, one would be wrong, very wrong.

The FBI does not make videotapes of interviews. Apparently, there are FBI guidelines that prohibit recordings of interviews. Instead, the FBI has a second agent listen to the interview and take notes on it. Then, the agent files a form—a 302 form—with his or her notes from the interview.

What is going on here? Why would the FBI prohibit videotaping the interviews and instead rely on summaries? The most obvious explanations do not cast a favorable light on the Bureau. If they don’t tape the interview, then the FBI agents can provide their own interpretation of what was said to argue that the interviewee made a false statement. Since the FBI agent is likely to be believed more than the defendant (assuming he even testifies), this provides an advantage to the FBI. By contrast, if there is a videotape, the judge and jury can decide for themselves.

…One might even argue this is unconstitutional under existing law. Under the Mathews v. Eldridge interpretation of the Due Process Clause, a procedure is unconstitutional if another procedure would yield more accurate decisions and is worth the added costs. Given the low costs of videotaping, it seems obvious that the benefits of such videotaping for accuracy outweigh the costs.

See also this excellent piece by Harvey Silverglate.