Dalrymple on Aleppo

The history of Aleppo is terrible stuff; a long succession of massacres and sieges disappearing into the mists of Syrian pre-history. First held by the Hittites, it was captured in turn by the Philistines, Assyrians, Babylonians, Persians, Greeks, Romans, Persians (again), Byzantines, Arabs, Mongols and Ottomans, each of whom vied to outdo the carnage of their predecessors. The Assyrians were the most imaginatively sadistic: they impaled the town’s menfolk on their spears and feasted for two days while their victims groaned to a slow death.

In between invasions Aleppo was ruled by a succession of aristocratic thugs who exacted outrageous taxes and perfected ingenious ways of bankrupting their burghers.

In all the town’s history there are only two cheering anecdotes. The first tells of the Arabs who captured Aleppo by dressing up as goats and nibbling their way into the city; the second concerns Abraham, who is supposed to have milked his cow on the citadel’s summit. It is not much in ten thousand years of history, especially when the one story ends in a massacre…and the other is a legend, and untrue. It is the result of a misunderstood derivation of the town’s (Arabic) name Haleb, which comes not from the Arabic for milk (halib) but a much older word, possibly Assyrian, connected with the mechanics of child abuse.

From William Dalrymple’s In Xanadu, written in 1989… things have not improved since.

Nav Canada

As La Guardia closes due to the government shutdown, this seems like an opportune time to think about Nav Canada.

We are Canada’s Air Navigation Service Provider (ANSP) managing 3.3 million flights a year for 40,000 customers in over 18 million square kilometres – the world’s second-largest ANSP by traffic volume.

Our airspace stretches from the Pacific West coast to the East coast of Newfoundland and out to the centre of the North Atlantic, the world’s busiest oceanic airspace with some 1,200 flights crossing to and from the European continent daily. It also stretches from the busy U.S.-Canada border with major international airports to the North Pole where aircraft fly polar routes to reach Asia.

We are also the world’s first fully privatized civil air navigation service provider, created in 1996 through the combined efforts of commercial air carriers, general aviation, the Government of Canada, as well as our employees and their unions.

Our revenues come from our aviation customers, not government subsidies. By investing in operations and controlling costs, we strive to keep customer charges stable, while improving safety and flight efficiency.

In addition to Canada, New Zealand, Germany, Australia, and the United Kingdom have moved in recent decades towards a more private system based on user fees rather than government funding. See also my earlier post on European airports.

Unsolved Shootings are Rising

In 2015, I documented that crime in Baltimore was rising rapidly as police resources were stretched thin by the riots and anger that followed the death of Freddie Gray. I warned that the city could tip into a permanently higher crime rate.

It’s now become clear that this is exactly what happened, as an investigative report by The Trace reveals:

Instead of getting backup, detectives were pulled from their cases, sometimes for days at a time, to help quell the violence. By 2016, homicide investigators cumulatively spent 10,000 hours working riot duty and patrol rather than tracking down murderers…

In the ensuing months, Baltimore’s closure rate for shootings dropped to 25 percent, the lowest in recent history. More than 1,100 cases from 2015 and 2016 alone remained unsolved by the following summer.

As the closure rate fell, the number of shootings increased (see data at right).

It’s not just Baltimore, however:

The crisis of unsolved shootings isn’t confined to cash-strapped cities like Baltimore, but also hits some of America’s most affluent metropolises. In 2016, Los Angeles made arrests for just 17 percent of gun assaults, and Chicago for less than 12 percent. The same year, San Francisco managed to make arrests in just 15 percent of the city’s nonfatal shootings. In Boston, the figure was just 10 percent.

Crime is lower today than in the past but we are in danger of becoming complacent. The rate of unsolved crimes is very high and in some cities it is soaring. Any city with an arrest rate for assaults of 15% is primed for a crime wave.

We need more police as well as better policing.

Addendum: I wonder how many of these cities are still devoting significant resources to marijuana busts?

Drop Gangs

Cryptocurrencies, GPS, drones, and cheap beacons are driving a new evolution in illegal markets:

…[A] major change is the use of “dead drops” instead of the postal system which has proven vulnerable to tracking and interception. Now, goods are hidden in publicly accessible places like parks and the location is given to the customer on purchase. The customer then goes to the location and picks up the goods. This means that delivery becomes asynchronous for the merchant: he can hide a lot of product in different locations for future, not yet known, purchases. For the client the time to delivery is significantly shorter than waiting for a letter or parcel shipped by traditional means – he has the product in his hands in a matter of hours instead of days. Furthermore, this method does not require the customer to give any personally identifiable information to the merchant, who in turn no longer has to safeguard it. Less data means less risk for everyone.

The use of dead drops also significantly reduces the risk that the merchant will be discovered by tracking within the postal system. He does not have to visit any easily surveilled post office or letter box; instead the whole public space becomes his hiding territory.

…Classically, when used by intelligence agencies, dead drops relied on being concealed. This led to dead drops being hard to find even by the intended recipients without costly preparation and training. One of the results of this was that dead drops were often used repeatedly, which increased the probability of both sender and recipient being identified by surveillance.

An ideal dead drop is however used exactly once. Only then can the risks of using it be reduced to pure bad luck.

This challenge is met by Dropgangs in various ways. The primary one is that the documentation of each dead drop is conducted in minute detail, covering GPS coordinates, photos of the surroundings and the location, as well as photos of the concealment device in which the product is hidden (such as an empty coke can). The documentation however increases the risk for the Dropgang, since whoever creates it would be easier to identify by surveillance. In addition, even great documentation still requires the customer to understand it and follow it precisely, which can lead to suspicious behavior around the dead drop location (staring at photos, visually comparing them to the surroundings, etc.).

A first development to mitigate the problem of localizing is the use of Bluetooth beacons. In addition to the product, the dead drop contains a little electronic device that sends a signal that can be received by a smartphone, which in turn can display the direction and approximate distance to the device. In addition to the GPS coordinates, the customer requires only a smartphone with the correct App. Beacon devices like these are available on the open market for under ten dollars.

They do, however, pose the risk that a non-authorized party will discover the dead drop, simply by searching an area suitable for hiding dead drops with their own smartphone.

There are first reports of beacon devices being used that do not constantly send a signal but have to be activated first. The activation usually happens by establishing a WiFi hotspot on the customer’s phone (using the WiFi tethering feature). Only if the beacon sees a WiFi hotspot with a specific, merchant-provided, unique name will it start to send a homing signal itself. Devices like these are very cheap (<15 USD) and have gained traction in the field, but they pose risks to the customer: his smartphone becomes identifiable by observers, even over considerable distance. This can lead to tracking of the customer.

…A plausible next step would be the development of markets for dead drop operators who make their living by picking up product from one dead drop and placing it in another, working as a proxy for the customer to increase his safety and reduce his effort. This would also make the distribution model more widespread and available to more products, which will blur the lines between the black and the legal market. On this blurred line new services and technologies will establish themselves, inherently dual-use services like lock boxes that can be paid for with peer-to-peer cryptocurrencies.

Looking even further into the future, it seems plausible that the whole urban environment might find itself integrated into a dynamic landscape of very short-lived dead drops that are serviced by humans and cheap drones (unmanned aerial vehicles), which are already cheaply available and likely only require one market actor to develop and spread a mechanism to pick up and drop goods. Both merchant and customer could use drones that are available for rent through dedicated apps to deliver product to a meeting point on a roof, where another drone would pick it up. Chaining multiple exchanges like this will make tracing the delivery extremely hard, essentially leading to mixing techniques so far used only in anonymizing digital communication.

Read the whole thing.

Hat tip: Eli Dourado.

In India Everyone Gets Affirmative Action

India has long had affirmative-action-like programs for members of scheduled castes, scheduled tribes and other backward classes (yes, that is the official name). The programs typically reserve a certain number of political seats, government jobs, and educational placements for members of historically disadvantaged and discriminated-against groups, hence the term reservations. Over time, the number of reservations has been increased and the category expanded to more and more groups. In fact, under a new reservation program just announced, virtually everyone will be covered by one reservation or another!

The new program will cover households with incomes of less than 8 lakh rupees, about $11,000, far above India’s GDP per capita! The new program is meant to benefit middle and upper castes who have chafed under reservations for the historically discriminated against. The fact that the program is open to so many people, however, means that it’s really not much of a benefit at all.

Moreover, reservations ultimately mean very little if there aren’t private-sector, wealth-creating jobs, and creating those jobs is India’s primary challenge.

Declining Labor Force Growth Explains Declining Dynamism

The best paper I have read in a long time is Hopenhayn, Neira and Singhania’s From Population Growth to Firm Demographics: Implications for Concentration, Entrepreneurship and the Labor Share. HNS do a great job at combining empirics and theory to explain an important fact about the world in an innovative and surprising way. The question the paper addresses is, Why is dynamism declining? As you may recall, my paper with Nathan Goldschlag, Is regulation to blame for the decline in American entrepreneurship?, somewhat surprisingly answered that the decline in dynamism was too widespread across too many industries to be explained by regulation. HNS point to a factor which is widespread across the entire economy, declining labor force growth.

Figure Two of the paper (at right) looks complicated but it tells a consistent and significant story. The top row of the figure shows three measures of declining dynamism: the rise in concentration, measured as the share of employment accounted for by large (250+) firms; the increase in average firm size; and the declining exit rate. The bottom row of the figure shows the same measures but this time conditional on firm age. The bottom row shows two things. First, most of the lines jump around a bit but are generally flat or not increasing; in other words, once we control for firm age we do not see, for example, increasing concentration. Second, older firms account for a larger share of employment, are bigger and have lower exit rates. Putting these two facts together suggests that we might be able to explain all the trends in the top row by one fact: aging firms.

So what explains aging firms? Changes in labor force growth have a big influence on the age distribution of firms. Assume, for example, that labor force growth increases. An increase in labor force growth means we need more firms. Current firms cannot absorb all new workers because of diminishing returns to scale. Thus, new workers lead to new firms. New firms are small and young. In contrast, declining labor force growth means fewer new firms. Thus, the average firm is bigger and older.

HNS then embed this insight into a dynamic model in which firms enter and exit and grow and shrink over time according to random productivity shocks (a modified version of Hopenhayn (1992)). We need a dynamic model because a shock plays out over many periods: suppose the labor force grows today; this causes more young and small firms to enter the market today. Young and small firms, however, have high exit rates, so today’s high entry rate will generate a high exit rate tomorrow and also a high entry rate tomorrow as replacements arrive. Thus, a shock to labor force growth today will influence the dynamics of the system many periods into the future.
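
To see the mechanism at work, here is a toy cohort simulation of my own, not the HNS model: the entrant size, the age profile of exit rates, and the size-growth cap are made-up parameters chosen only for illustration. Entrants absorb whatever labor incumbents do not employ, so slower labor force growth mechanically produces fewer entrants and an older, larger average firm.

```python
# Toy cohort model of firm demographics (illustrative sketch, not the HNS model).
# Assumed parameters: entrant size, age profile of exit rates, size-growth cap.

def simulate(labor_growth, periods=300):
    max_age = 200
    size = [min(50.0, 5.0 * 1.1 ** a) for a in range(max_age)]     # firm size by age, capped
    exit_rate = [0.15 if a < 5 else 0.05 for a in range(max_age)]  # young firms exit more often

    firms = [0.0] * max_age
    firms[0] = 1.0                          # seed cohort of entrants
    labor = firms[0] * size[0]              # labor force consistent with the seed cohort

    for _ in range(periods):
        labor *= 1.0 + labor_growth
        survivors = [0.0] * max_age         # age incumbents one period, applying exit
        for a in range(max_age - 1):
            survivors[a + 1] = firms[a] * (1.0 - exit_rate[a])
        incumbent_emp = sum(n * size[a] for a, n in enumerate(survivors))
        # entrants absorb the labor that incumbents do not employ
        survivors[0] = max(0.0, (labor - incumbent_emp) / size[0])
        firms = survivors

    total_firms = sum(firms)
    avg_age = sum(a * n for a, n in enumerate(firms)) / total_firms
    startup_rate = firms[0] / total_firms
    avg_size = labor / total_firms
    return avg_age, avg_size, startup_rate

for g in (0.02, 0.005):                     # high vs. low labor force growth
    age, sz, sr = simulate(g)
    print(f"labor force growth {g:.1%}: average firm age {age:.1f}, "
          f"average firm size {sz:.1f}, startup rate {sr:.1%}")
```

With these made-up parameters only the direction of the effect matters: cutting labor force growth raises average firm age and size and lowers the startup rate, the same qualitative pattern as in the top row of Figure Two.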

So what happens when we feed the actual decline in labor force growth into the HNS dynamic model (calibrated to 1978)? Surprisingly, we can explain a lot about declining dynamism. At right, for example, is the startup rate. Note that it jumps up with rising labor force growth in the 1950s and 1960s and declines after the 1970s.

The paper also shows that the model predictions for firm age and concentration also fit the data reasonably well.

Most surprisingly, HNS argue that essentially all of the decline in the labor share of national income can be explained by the simple fact that larger firms use fewer non-production workers per unit of output. That is very surprising. I’m not sure I believe it.

If HNS are correct it implies a very different perspective on the decline in labor share. In the HNS model, for example, non-competitive factors do not play a role, so there are no monopolies or markups. Moreover, if the decline in labor share is caused by larger firms using fewer non-production workers then this is surely a good thing. In their model, however, there is only one factor of production, so declining labor share means increasing profit share, which I find dubious. If production and non-production labor are distinguished, it may also be that the declining non-production share will redound to production labor, so the labor share won’t fall as much. Nevertheless, the ideas here are intriguing and the results on dynamism, which are the heart of the paper, do not rely on the arguments about the labor share.

John Bogle, RIP

In 1974, Paul Samuelson wrote Challenge to Judgment, a searing critique of money managers. Samuelson challenged the money managers to show that they could beat the market. He concluded that “a respect for evidence compels me to incline toward the hypothesis that most portfolio decision makers should go out of business.” Samuelson hoped for something new:

At the least, some large foundation should set up an in-house portfolio that tracks the S&P 500 Index — if only for the purpose of setting up a naive model against which their in-house gunslingers can measure their prowess.

Inspired by Samuelson, John Bogle created the first index fund in 1976 and it quickly…failed. In the initial underwriting the fund raised only $11.3 million, which wasn’t even enough to buy a minimum portfolio of all the stocks in the S&P 500! The Street crowed about “Bogle’s folly” but Bogle persevered, and in so doing he benefited millions of investors, saving them billions of dollars in fees. As Warren Buffett said today:

Jack did more for American investors as a whole than any individual I’ve known. A lot of Wall Street is devoted to charging a lot for nothing. He charged nothing to accomplish a huge amount.

The creation of the index fund is a great example of how economic theory and measurement can improve practice. Our course on Money Skills at MRU is very much influenced by Bogle. Tyler and I recommend index funds and Vanguard in particular. In the videos and in our textbook we present data from Bogle’s book Common Sense on Mutual Funds. Here’s the first video in the series.

Subtitling > Dubbing

We study the influence of television translation techniques on the worldwide distribution of English-speaking skills. We identify a large positive effect for subtitled original version broadcasts, as opposed to dubbed television, on English proficiency scores. We analyze the historical circumstances under which countries opted for one of the translation modes and use it to account for the possible endogeneity of the subtitling indicator. We disaggregate the results by type of skills and find that television works especially well for listening comprehension. Our paper suggests that governments could promote subtitling as a means to improve foreign language proficiency.

That’s from TV or not TV? The impact of subtitling on English skills, a clever study with a useful finding.

I cannot help but note that our Principles of Microeconomics and Principles of Macroeconomics videos at MRU (and linked to in our textbook) are subtitled in English, Spanish, Hindi, Arabic and other languages so perhaps we can help teach languages as well as economics.

The Name Game: Urbanization in India

What is rural? What is urban? Different countries use different definitions and sometimes there are multiple definitions within a country. In India, as Reuben Abraham and Pritika Hingorani write, the same state can be 16% or 99% urban depending on the definition.

In India, only “statutory towns” are considered urban and have a municipal administration — a definition that officially leaves the country 26 percent urban. State governments make the decision using widely differing criteria; demographic considerations are peripheral at times. The Census of India provides the only other official, and uniform, estimate. Its formula uses a mix of population, density and occupation criteria, and pegs India at 31 percent urban.

Such estimates can be misleadingly low. For instance, Kerala is statutorily only 16 percent urban. Yet the census sees the well-developed southern state as approximately 48 percent urban. If we use a population cutoff of 5,000 residents as Ghana and Lebanon do, or even Mexico’s threshold of 2,500 people, Kerala’s urban share leaps to 99 percent, which is more consistent with ground reality.

So what? A rose by any other name smells as sweet but definitions matter for policy and resource flows:

The consequences of underestimating the urban share of the population are dire. Resources are badly misallocated: By one estimate, over 80 percent of federal government financing still goes to rural development. This reduces incentives for politicians, especially rural ones, to change the status quo. Tens of millions of Indians who live in dense, urban-like settlements are governed by rural governments that lack the mandate and the money to deliver basic services. In India, urban governments are constitutionally required to provide things such as fire departments, sewer lines, arterial roads and building codes. Local bodies in rural areas aren’t.

In addition, urban planning becomes particularly haphazard when cities grow but aren’t defined as such. How can roads, water lines, sewage lines and metros be arranged when a city is governed by multiple rural units?

As satellite data clearly show, most cities extend well beyond their administrative limits, and dense, linear settlements spread out of those cities along transit corridors. This growth is unregulated and unplanned, marred by narrow roads, growing distance from major thoroughfares, limited open space and haphazardly divided plots.

…what appears to be a single economic unit is now governed by a multitude of rural and urban jurisdictions, with no mechanism to coordinate on mobility, public goods or municipal services. It’s difficult and expensive to retrofit such cities with proper infrastructure and services.

The United States is Underpoliced and Overprisoned

Daniel Bier has a nice rundown on the ratio of police to prison spending comparing the United States to Europe. The US spends less on police and more on prisons than any European country.

Moreover, this is not because Europe spends less on criminal justice. Surprisingly, there is very little correlation between total spending and the ratio of police to prison spending. What we see in the graph below, for example, is that the European countries are on the right, indicating a higher ratio of police to prison spending, but they are not noticeably below the US states on total spending as a percent of GDP.

As I have argued before, the United States is underpoliced and overprisoned.

Two Teaching Resources

William Luther has put together an excellent list of Planet Money episodes that are keyed to the relevant chapters in Modern Principles of Economics. A similar list is also available for the excellent intermediate-micro text by Goolsbee, Levitt and Syverson.

For graduate students, Luke Stein has put together a 64-page “cheat sheet” (pdf) for basically the first two years of micro and macro theory. It’s not for everyone but would be great for studying for prelims at many top programs. This diagram summarizing key results in consumer theory was excellent.

Border Crime

Alex Nowrasteh at Cato shows that crime is lower in counties adjacent to the Mexican border than in the rest of the United States:

If the entire United States had crime rates as low as those along the border in 2017, then the number of homicides would have been 33.8 percent lower, property crimes would have been 2.1 percent lower, and violent crimes would have dropped 8 percent.

Obviously border counties are different from non-border counties, more rural etc. Nevertheless, the raw fact is striking in comparison to the heated rhetoric about illegal immigration and American blood.

Ethereum Classic Double Spend Attack?

Yesterday, I warned that double spend attacks were cheap and particularly likely for smaller coins using standard hash algorithms. Coincidentally (?), later that day came reports that Ethereum Classic had suffered a deep chain reorganization consistent with a double spend attack.

It’s not entirely clear whether that is true or if there is an alternative explanation. Coinbase, however, says that approximately $500,000 was double spent. You can find a good discussion on Hacker News.  You can also find an interesting calculation of the cost of renting enough hashing power to 51% dominate various networks here. It’s cheap. The costs given are underestimates in one respect since they don’t include block rewards but overestimates in another as renting may not always be possible.

Here are some back-of-the-envelope calculations on the cost of the ETC attack. If I am reading the blockchain stats correctly, ETC has a block time of about 15 seconds and the chain was reorganized almost to a depth of 100 blocks, or 1500 seconds, i.e. 25 minutes. The cost of dominating the ETC hashing power for an hour is around $5000. Thus, this attack could have been very profitable, even adding in substantial setup costs. Feel free to write in the comments if these numbers look wrong.
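
Spelled out, using only the figures cited above (the 15-second block time, the roughly 100-block reorg, the ~$5,000-per-hour rental estimate, and Coinbase’s $500,000 figure), the arithmetic is a few lines:

```python
# Back-of-the-envelope cost of the ETC attack, using the figures cited above.
block_time_s = 15           # ETC block time (seconds)
reorg_depth = 100           # approximate depth of the chain reorganization
rental_per_hour = 5_000     # rough cost of majority ETC hash power for an hour (USD)
double_spent = 500_000      # Coinbase's estimate of the amount double spent (USD)

attack_minutes = block_time_s * reorg_depth / 60         # ~25 minutes
attack_cost = rental_per_hour * attack_minutes / 60      # ~$2,100
print(f"attack window ~{attack_minutes:.0f} min, "
      f"rental cost ~${attack_cost:,.0f}, "
      f"double spent ~${double_spent:,}")
```

Roughly two thousand dollars of rented hash power against half a million dollars double spent, which is why the attack pays even with a generous allowance for setup costs.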

As I mentioned yesterday, it’s not surprising that this is happening now because with massive falls in prices in most cryptocurrencies there is an excess supply of computation. Expect more stress testing this year.

Hat tip: The excellent Jake Seliger.

Bitcoin is Less Secure than Most People Think

I spent part of the holidays poring over Eric Budish’s important paper, The Economic Limits of Bitcoin and the BlockChain. Using a few equilibrium conditions and some simulations, Budish shows that Bitcoin is vulnerable to a double spending attack.

In a double spending attack, the attacker sells, say, bitcoin for dollars. The bitcoin transfer is registered on the blockchain and then, perhaps after some escrow period, the dollars are received by the attacker. As soon as the bitcoin transfer is registered in a block–call this block 1–the attacker starts to mine his own blocks which do not include the bitcoin transfer. Suppose there is no escrow period; then the best case for the attacker is that they mine two blocks, 1′ and 2′, before the honest nodes mine block 2. In this case, the attacker’s chain (0, 1′, 2′) is the longest chain, and so miners will add to this chain and not to the 0, 1… chain, which becomes orphaned. The attacker’s chain does not include the bitcoin transfer, so the attacker still has the bitcoins and they have the dollars! Also, remember, even though it is called a double-spend attack it’s actually an n-spend attack, so the gains from attack could be very large. But what happens if the honest nodes mine a new block before the attacker mines 2′? Then the honest chain is 0, 1, 2, but the attacker still has block 1′ mined, and after some time they will have 2′; then they have another chance. If the attacker can mine 3′ before the honest nodes mine block 3, then the new longest chain becomes 0, 1′, 2′, 3′ and the honest nodes start mining on this chain rather than on 0, 1, 2. It can take time for the attacker to produce the longest chain, but if the attacker has more computational power than the honest nodes, even just a little more, then with probability 1 the attacker will end up producing the longest chain.
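
That last claim is the classic gambler’s-ruin calculation from the Bitcoin whitepaper: an attacker who is z blocks behind eventually produces the longest chain with probability (q/p)^z when its share of hash power q is below one half (p being the honest share), and with probability 1 otherwise. A minimal sketch, with the hash shares below chosen only as illustrative values:

```python
# Probability that a secret attacker chain ever catches up from z blocks behind:
# the gambler's-ruin result from the Bitcoin whitepaper. With q = attacker's share
# of hash power and p = 1 - q the honest share, the probability is (q/p)**z for
# q < p and 1 otherwise.

def catch_up_probability(attacker_share: float, deficit: int) -> float:
    q, p = attacker_share, 1.0 - attacker_share
    return 1.0 if q >= p else (q / p) ** deficit

for share in (0.30, 0.45, 0.5122):      # 0.5122 ~ 5% more power than the honest nodes
    probs = ", ".join(f"z={z}: {catch_up_probability(share, z):.3f}" for z in (1, 3, 6))
    print(f"attacker hash share {share:.2%}: {probs}")
```

Below one half the probability falls off geometrically with the number of confirmations, which is why waiting for six confirmations protects against small attackers; at anything above one half the race is only a matter of time, which is the case Budish analyzes.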

As an example, Budish shows that if the attacker has just 5% more computational power than the honest nodes then on average it takes 26.5 blocks (a little over 4 hours) for the attacker to have the longest chain. (Most of the time it takes far fewer blocks but occasionally it takes hundreds of blocks for the attacker to produce the longest chain.) The attack will always be successful eventually; the key question is: what is the cost of the attack?

The net cost of a double-spend attack is low because attackers also earn block rewards. For example, in the case above it might take 26 blocks for the attacker to substitute its longer chain for the honest chain but when it does so it earns 26 block rewards. The rewards were enough to cover the costs of the honest miners and so they are more or less enough to cover the costs of the attacker. The key point is that attacking is the same thing as mining. Budish assumes that attackers add to the computation power of the network which pushes returns down (for both the attacker and interestingly the honest nodes) but if we assume that the attacker starts out as honest–a Manchurian Candidate attack–then there is essentially zero cost to attacking.

It’s often said that Bitcoin creates security with math. That’s only partially true. The security behind avoiding the double spend attack is not cryptographic but economic, it’s really just the cost of coordinating to achieve a majority of the computational power. Satoshi assumed ‘one-CPU, one-vote’ which made it plausible that it would be costly to coordinate millions of miners. In the centralized ASIC world, coordination is much less costly. Consider, for example, that the top 4 mining pools today account for nearly 50% of the total computational power of the network. An attack would simply mean that these miners agree to mine slightly different blocks than they otherwise would.

Aside from the cost of coordination, a small group of large miners might not want to run a double spending attack because if Bitcoin is destroyed it will reduce the value of their capital investments in mining equipment (Budish analyzes several scenarios in this context). Call that the Too Big to Cheat argument. Sound familiar? The Too Big to Cheat argument, however, is a poor foundation for Bitcoin as a store of value because the more common it is to hold billions in Bitcoin the greater the value of an attack. Moreover, we are in especially dangerous territory today because bitcoin’s recent fall in price means that there is currently an overhang of computing power which has made some mining unprofitable, so miners may feel this is a good time to get out.

The Too Big to Cheat argument suggests that coins are vulnerable to centralized computation power easily repurposed. The tricky part is that the efficiencies created by specialization–as for example in application-specific integrated circuits–tend to lead to centralization but by definition make repurposing more difficult.  CPUs, in contrast, tend to lead to decentralization but are easily repurposed. It’s hard to know where safety lies. But what we can say is that any alt-coin that uses a proof of work algorithm that can be solved using ASICs is especially vulnerable because miners could run a double spend attack on that coin and then shift over to mining bitcoin if the value of that coin is destroyed.

What can help? Ironically, traditional law and governance might help. A double spend attack would be clear in the data and, at least in general terms, so would the attackers. An attack involving dollars and transfers from banks would be potentially prosecutable, greatly raising the cost of an attack. Governance might help as well. Would a majority of miners (not including the attacker) be willing to fork Bitcoin to avoid the attack, much as was done with The DAO? Even the possibility of a hard fork would reduce the expected value of an attack. More generally, all of these mechanisms are a way of enforcing some stake loss or capital loss on dishonest miners. In theory, therefore, proof of stake should be less vulnerable to 51% attacks, but proof of stake is much more complicated to make incentive-compatible than proof of work.

All of this is a far cry from money without the state. Trust doesn’t have the solidity of math but we are learning that it is more robust.

Hat tip to Joshua Gans and especially to Eric Budish for extensive conversation on these issues.

Addendum: See here for more on the Ethereum Classic double spend attack.

Hacking Photosynthesis

The vast majority of life on Earth depends, either directly or indirectly, on photosynthesis for its energy. And photosynthesis depends on an enzyme called RuBisCO, which uses carbon dioxide from the atmosphere to build sugars. So, by extension, RuBisCO may be the most important catalyst on the planet.

Unfortunately, RuBisCO is, well, terrible at its job. It might not be obvious based on the plant growth around us, but the enzyme is not especially efficient at catalyzing the carbon dioxide reaction. And, worse still, it often uses oxygen instead. This produces a useless byproduct that, if allowed to build up, will eventually shut down photosynthesis entirely. It’s estimated that crops such as wheat and rice lose anywhere from 20 to 50 percent of their growth potential due to this byproduct.

While plants have evolved ways of dealing with this byproduct, they’re not especially efficient. So a group of researchers at the University of Illinois, Urbana decided to step in and engineer a better way. The result? In field tests, the engineered plants grew up to 40 percent more mass than ones that relied on the normal pathways.

That’s John Timmer at Ars Technica summarizing a paper by South et al. in Science. The experiment was done in tobacco plants but the same pathways are used in the C3 group of plants including rice, wheat, barley, soybean, cotton and sugar beets so the applications are large.