Category: Economics

How costly is trust in the blockchain?

Eric B. Budish has a new paper on this topic:

Satoshi Nakamoto invented a new form of trust. This paper presents a three equation argument that Nakamoto’s new form of trust, while undeniably ingenious, is extremely expensive: the recurring, “flow” payments to the anonymous, decentralized compute power that maintains the trust must be large relative to the one-off, “stock” benefits of attacking the trust. This result also implies that the cost of securing the trust grows linearly with the potential value of attack — e.g., securing against a $1 billion attack is 1000 times more expensive than securing against a $1 million attack. A way out of this flow-stock argument is if both (i) the compute power used to maintain the trust is non-repurposable, and (ii) a successful attack would cause the economic value of the trust to collapse. However, vulnerability to economic collapse is itself a serious problem, and the model points to specific collapse scenarios. The analysis thus suggests a “pick your poison” economic critique of Bitcoin and its novel form of trust: it is either extremely expensive relative to its economic usefulness or vulnerable to sabotage and collapse.
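Here is a minimal numerical sketch of the flow-stock logic, my own simplification rather than the paper’s three equations; the block frequency and attack duration below are assumed round numbers, chosen only to show the linear scaling claimed in the abstract:

```python
# Hedged illustration (my simplification, not Budish's exact notation): for the
# system to be secure, the recurring "flow" payment to honest compute per block,
# times the number of blocks an attack must be sustained, should exceed the
# one-off "stock" value of the attack. The required security spend therefore
# scales linearly with the value of the attack.

def required_per_block_payment(attack_value: float, attack_duration_blocks: int) -> float:
    """Minimum per-block payment to compute power so that renting a majority of
    that compute for the attack's duration costs more than the attack is worth."""
    return attack_value / attack_duration_blocks

BLOCKS_PER_YEAR = 6 * 24 * 365   # ~10-minute Bitcoin blocks
ATTACK_BLOCKS = 6                # assumed: an attack must be sustained for ~6 blocks (~1 hour)

for v in (1e6, 1e9):             # a $1 million vs. a $1 billion attack
    per_block = required_per_block_payment(v, ATTACK_BLOCKS)
    print(f"attack worth ${v:,.0f}: ~${per_block:,.0f} per block, "
          f"~${per_block * BLOCKS_PER_YEAR:,.0f} per year")
```

Under these assumptions the $1 billion attack requires exactly 1,000 times the per-block security budget of the $1 million attack.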

I enjoyed these sentences:

The intuition for why Nakamoto’s method of creating trust is so expensive, relative to other methods of creating trust, is that Nakamoto’s form of trust is memoryless.  The Bitcoin system is only as secure at a moment in time as the amount of computing power being devoted to maintaining it at that particular moment in time.

Whether or not you agree with the arguments here, or maybe you think proof of stake will render them less relevant, it is nice to see academics (U. Chicago business school) making contributions to crypto debates.

And do you know what is excellent about this paper?  At the end is an appendix “Discussion of Responses to this Paper’s Argument.”  If you can’t write one of those for your own paper, maybe nobody gives a damn!

Via the excellent Kevin Lewis.

The latest wisdom on corporate income tax cuts

A corporate income tax cut leads to a sustained increase in GDP and productivity, with peak effects between five and eight years. R&D spending and capital investment display hump-shaped responses while hours worked and employment are much less affected.

That is from a new NBER working paper by James Cloyne, Joseba Martinez, Haroon Mumtaz, and Paolo Surico.  You will hear many economists, including Paul Krugman, tell you that the Trump corporate tax cuts were a failure.  It would be more accurate to say that we still do not know how effective they will be, noting that the pandemic may have extended the “five and eight years” benchmark a bit.  And it would be more accurate to report that the best available science indicates the tax cuts stand a good chance of succeeding.  See this earlier research, in top-tier outlets, and also this.

How well will Colombia end up doing?

That is the topic of my latest Bloomberg column.  I am ultimately optimistic, but let me present the case for the negative:

Yet those positives have been in place for a while, and the results are less than earth-shattering. By World Bank estimates, Colombia has a per capita income of slightly more than $16,000, using purchasing power parity standards. For purposes of comparison, Mexico comes in at slightly over $20,000. Argentina is considered to have been an economic failure since the Peronist years, but still has a per capita income exceeding $22,000.

Also troubling is the country’s export profile. After fossil fuels, which have a limited future, the country’s leading exports are coffee, gems and precious metals. None of these is large enough or sophisticated enough, or trains enough quality labor, to push the nation over the top. When it comes to complex manufacturing, the country is lagging well behind Mexico and Brazil, not to mention South Korea.

A pessimistic view of Colombia would cite the country’s very different geographic regions that have never seen full economic or even political unification. The lack of a fully developed nation-state has been reflected in the country’s ongoing troubles with guerrillas and drug lords. The major urban centers of Bogotá and Medellín are both deep in the interior, surrounded by mountains, and unable to take advantage of major navigable rivers. There is no world-class port or harbor, and except for its connection to the US, the country is inward-looking and has attracted relatively few immigrants, recent Venezuelan refugees aside. The Amazon cuts off Colombia from much of the rest of South America. De facto Colombia has no richer neighbor to pull it up by its bootstraps, Panama being much too small and most of Brazil being too distant. Colombia’s problems also include a recent uptick in troubles with former guerrillas.

I look forward to my next visit to the country…

Some reasons why the U.S. dollar is strong

That is the topic of my latest Bloomberg column.  After talking through some traditional economic reasons, here are some cultural reasons:

In addition, American soft power is far more robust than many criticisms would indicate. English is increasingly entrenched as the global language. The world’s major internet companies are still largely American, with the exception of some Chinese ones. If the internet continues to become more important in our lives, that is another plus for the US — and the dollar.

America also sets a good deal of the global intellectual agenda, for better or worse. The #MeToo movement, Black Lives Matter and wokeism, among other topics, are debated around the world. A US presidential election is akin to a global presidential election, in terms of interest and maybe impact. No other country can say the same.

Perhaps, in view of the larger moral picture, this is all a mixed blessing. But in terms of keeping the dollar as a focal reserve currency, it is a major plus.

There is a reason why the global intellectual chattering classes find it hard to be bullish on America, and so to them a strong dollar is counterintuitive. When one country is so much the center of global attention, it is hard for that country to look good. As my colleague Martin Gurri argues, anything studied and discussed long enough on the internet tends to lead to disillusionment. People focus on the vices more than the virtues, and lose trust.

And so it is with the US. Both abroad and domestically, on both the left and right, there seems to be less faith in the American dream than there was three or four decades ago. In some quarters the US is seen as on the verge of collapse, or at the very least moral and intellectual ruin.

…Most people around the world can recite the defects of America far more easily than they can those of, say, Paraguay.

As a related side note, with both nominal incomes and the dollar up, now is a remarkably inexpensive time for Americans to travel abroad.

Sell Drone Space Like Spectrum!


Drone airspace resembles spectrum in the 1980s, an appreciating asset that could be bought, subleased, traded, and borrowed against – if it were only permitted.

Much like legacy spectrum policy, there is immense technocratic inertia towards rationing airspace use to a few lucky drone companies. The Federal Aviation Administration (FAA) has begun drafting long-distance drone rules for services like home delivery, business-to-business delivery, and surveying. In the next decade, drone services companies will deploy mass-market parcel and medical delivery in urban and suburban areas, making logistics faster, cheaper, and greener.

…Federal officials recognize that the current centralized system of air traffic management won’t work for drones: at peak times today, US air traffic controllers actively manage only about 5,400 en route aircraft.

Red flags abound, however. FAA’s current plans for drone traffic management, while vague and preliminary, are clear about what happens once local congestion occurs: the agency will step in to ration airspace and routes as it sees fit. Further, the agency says it will closely oversee the development of airspace management technologies. This is a recipe for technology lock-in and intractable regulatory battles.

US aviation history offers the alarming precedent of expert planning for a new industry. In 1930 President Hoover’s Postmaster General, who regulated airmail routes, and a handpicked group of business executives teamed up to “rationalize” the nascent airline marketplace. In private meetings, they eliminated the established practice of competitive bidding for air routes, divided routes amongst themselves, and reduced the number of startup airlines from around forty to three.

“Universal” and “interoperable” air traffic management are popular concepts in the drone industry, but these principles have destroyed innovation and efficiency in traditional airspace management. The costly US air traffic management system still relies on voice communications and manual writing and passing of paper slips. Large, legacy users and vendors dominate upgrade efforts, and “update by consensus” means the injection of innumerable veto points. Drone traffic management will be “clean sheet,” but interoperable systems are incredibly difficult to build and, once built, to upgrade with new technology and processes. More than 16,000 FAA employees worked on the over-budget, pared-down, years-delayed air traffic management upgrades for traditional aviation.

…To avoid anticompetitive “route-squatting” and sclerotic bureaucratic control of a new industry, aviation regulators should announce a national policy of “airspace markets” – government sales of high-demand drone routes, resembling present-day government spectrum auctions.

Brent Skorup has the details, from a prize-winning paper at CSPI.
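To make the “airspace markets” idea concrete, here is a toy sketch of selling contested drone routes to the highest bidder, loosely in the spirit of a spectrum auction. It is only an illustration of the general mechanism, not a design from Skorup’s paper; the route names and bid figures are invented.

```python
# Toy sketch: allocate scarce drone routes by sealed-bid auction instead of
# regulatory rationing. Routes and bids are invented for illustration only.

bids = {
    "downtown-to-airport": [("CarrierA", 1_200_000), ("CarrierB", 900_000), ("CarrierC", 1_500_000)],
    "hospital-corridor":   [("CarrierB", 2_000_000), ("CarrierD", 1_100_000)],
}

for route, route_bids in bids.items():
    winner, price = max(route_bids, key=lambda b: b[1])
    print(f"{route}: awarded to {winner} for ${price:,.0f}")
```

Real spectrum auctions use more elaborate formats (simultaneous multi-round bidding, package bids), but the basic allocation logic is the same: scarce rights go to the bidders who value them most.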

A Radical Proposal for Funding Science

The process of competing for science funding is so onerous that much of the value is dissipated in seeking funding. Risk aversion by committee means that breakthrough science is often funded surreptitiously, on the margin of funded science. These problems are serious and make alternative funding procedures worth thinking about, even if they are radical.

To avoid rent dissipation and risk aversion, our state funding of science should be simplified and decentralized into Researcher Guided Funding. Researcher Guided Funding would take the ~$120 billion spent by the federal government on science each year and distribute it equally to the ~250,000 full-time research and teaching faculty in STEM fields at high research activity universities, who already get 90% of this money. This amounts to about $500,000 for each researcher every year. You could increase the amount allocated to some researchers while still avoiding dissipating resources on applications by allocating larger grants in a lottery that only some of them win each year. 60% of this money could be spent pursuing any project they want, with no requirements for peer consensus or approval. With no strings attached, Katalin Karikó and Charles Townes could use these funds to pursue their world-changing ideas despite doubt and disapproval from their colleagues. The other 40% would have to be spent funding projects of their peers. This allows an important project to gain a lot of extra funding if a group of researchers is excited about it. With over 5,000 authors on the paper chronicling the discovery of the Higgs boson at the Large Hadron Collider, this group of physicists could muster $2.5 billion a year in funding without consulting any outside sources. This system would avoid the negative effects of long and expensive review processes, because the state hands out the money with very few strings, and risk aversion among funders, because the researchers individually get to decide what to fund and pursue.
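The allocation arithmetic is easy to check. A quick sketch, using the essay’s round numbers rather than official budget figures:

```python
# Back-of-the-envelope for Researcher Guided Funding, using the essay's round numbers.
total_budget = 120e9       # ~$120 billion of federal science funding per year
researchers = 250_000      # ~STEM faculty at high-research-activity universities

per_researcher = total_budget / researchers          # ~$480,000, i.e. "about $500,000"
discretionary = 0.60 * per_researcher                # spent on the researcher's own projects
peer_directed = 0.40 * per_researcher                # must be granted to peers' projects

print(f"per researcher:  ${per_researcher:,.0f}")
print(f"  discretionary: ${discretionary:,.0f}")
print(f"  peer-directed: ${peer_directed:,.0f}")

# The Higgs example: ~5,000 authors pooling their full allocations comes to
# roughly $2.4-2.5 billion per year (the essay's figure at the $500k round number).
print(f"pooled by 5,000 physicists: ${5_000 * per_researcher:,.0f}")
```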

There are issues, to be sure (see the paper), but experimentation in science funding is called for:

Government funding of science is a logical and well-intentioned attempt to increase the production of a positive externality. However, the institutional forms in which we have chosen to distribute these funds have created parasitic drag on the progress of science. There are many exciting proposals for new ways to fund science, but picking any one of these without rigorous experimentation would be foolish and ironic. The best proposal for science funding reform is to apply science to the problem. Rapid and large-scale experimentation is needed to continuously update and improve our science funding methods.

That is from a prize-winning CSPI essay by Maxwell Tabarrok.

See also Tyler’s important post, Science as a source of social alpha.

From my email, on the market for programmers

Programmers in the US are well-paid and companies report difficulty hiring programmers. At the same time, while it’s less reported, there are a lot of people who are good at programming but can’t get programming jobs.

There’s a simple explanation, and it’s one that I’ve validated in several ways since realizing it: companies only want to hire already-employed programmers. There’s little incentive to hire someone not already working as a programmer, because if you pay them less than the market rate, they’ll leave after a year, and it takes months to get net productivity from them. (This is great for that person, but the company doesn’t care.) There’s also a big difference between good and bad programmers that can be hard for non-technical managers to determine.

There are some developer jobs specifically for new graduates, but fewer than there are computer science graduates alone, and only at certain companies. There’s also a limited window to get one after graduating. Some people can get jobs after a coding bootcamp, yes – but in general, only people in demand for DEI reasons can actually do that, and any technical college degree works about equally well.

The higher developer salaries get, the more unqualified people apply, the higher search costs get, and the more companies are disinclined to hire people who aren’t already working as developers.

That is from bhauth.

Adam Smith and Colombia

I gave a keynote address in Bogotá to the International Adam Smith Society; here is my talk.  Why is Adam Smith still relevant to Colombia of all places?  It’s not just the market economics; rather, my remarks focused on Book V (the best and most interesting part of WoN!) and Smith’s take on standing armies and why they are conducive to liberty.

My excellent Conversation with Matthew Ball

Here is the audio, video, and transcript.  Here is part of the summary:

Ball joined Tyler to discuss the eventual widespread transition of the population to the metaverse, the exciting implications of this interconnected network of 3D worlds for education, how the metaverse will improve dating and its impacts on sex, the happiness and career satisfaction of professional gamers, his predictions for Tyler’s most frequent uses of the metaverse, his favorite type of entrepreneur, why he has thousands of tabs open on his computer at any given moment, and more.

Here is one excerpt:

COWEN: As I read your book, The Metaverse, which again, I’ll recommend highly, I have the impression you’re pretty optimistic about interoperability within the metaverse and an ultimate lack of market power. Now, if I look around the internet — I mean, most obviously, the Apple Store but also a lot of gaming platforms — you see 30 percent fees, or something in that neighborhood, all over the place. Will the metaverse have the equivalent of a 30 percent fee? Or is it a truly competitive market where everything gets competed down to marginal cost?

BALL: I think neither/nor. I wouldn’t say that market power diffuses. There’s currently this ethos, especially in the Web3 community, that decentralization needs to win and that decentralization can win.

It’s a question of where on the spectrum are we? The early internet was obviously held back by heavy decentralization. This is one of the reasons why AOL was, for so many people, the primary onboarding experience. It was easy, cohesive, visual, vertically integrated down to the software, the browser experience, and so forth. But we believe that the last 15 years has been too centralized.

At the end of the day, no matter how decentralized the underlying protocols of the metaverse are, no matter how popular blockchains are, there are multiple forms of centralization. Habit is powerful. Brand is powerful — the associated trust, intellectual property, the fundamental feedback loops of revenue and scale that drive better product investment for more engineers.

So I struggle to imagine the future isn’t some form of today, a handful of varyingly horizontal-vertical software and hardware-based platforms that have disproportionate share and even more influence. But that doesn’t mean that they’re going to be as powerful as today.

The 30 percent fee is definitely going to come by the wayside. We see this in the EU, whose legislation dropped yesterday. I have absolute certainty that that is going to go away. The question is the timeline. A lawyer joked yesterday, Apple is going to fight the EU until the heat death of the universe, and that’s probably likely. But Apple will find other ways to control and extract, as is their profit motive.

COWEN: Where is the most likely place for that partial market power or centralization to show up? Is it in the IP rights, in the payment system, the hardware provider, a cross-platform engine, somewhere else? What’s the most likely choke point?

BALL: There seem to be two different answers to that. Number one is software distribution. This is your classic discovery and distribution of virtual experiences. Steam does that. Roblox does that. Google does that, frankly, the search engine. That gateway to virtual experiences typically affords you the opportunity to be the dominant identity system, the dominant payment system, and so on and so forth.

The other option is hardware. We can think of the metaverse as a persistent network of experiences, but as with the internet, it may exist literally and in abstraction, but you can only access it through a device. Those device operators have an ever-growing network of APIs, experiences, technologies, technical requirements, and controls through which they can shape it.

Recommended, interesting throughout.

Landlordism returns to Ireland, but why?

No political party ever ran for election on the promise to bring back the landlords. None of our leaders ever said that the problem with late 20th century Ireland was that too few people were paying rent.

And yet, this shift back towards landlordism didn’t happen by accident. It has been engineered by the State and partly paid for by the taxpayer.

It is the State that created and shaped this change. It pays all or most of the rent to landlords for 113,000 households. Between 2001 and 2020 governments have spent €12.5 billion to support the private rental market.

The consequent shift is a remarkable exercise in social engineering. Renting has been made more and more “normal” for each succeeding generation.

A recent ESRI report tells us that fewer than 20 per cent of Irish people born in the 1950s or 1960s lived in rented accommodation in their mid-thirties. For those born in the 1970s this rises to just over 30 per cent. For those born in the 1980s it’s over 40 per cent.

These figures are, naturally, mirrored on the other side by a dramatic decline in home ownership among young people. In 2004, 60 per cent of those aged 25-44 owned their own homes. By 2015 that had halved to 30 per cent.

Here is more from the excellent Fintan O’Toole.

The fiscal angle to the Ukraine war is undercovered

Ukraine’s budget crisis has become acute because of a slump in tax revenues and customs duties since the invasion began almost five months ago, together with higher war spending.

A halt to grain and steel exports has deprived Kyiv of foreign currency earnings. Ukraine is being forced to burn through its foreign exchange reserves at an accelerating pace, as the central bank purchases government bonds to plug its financing gap.

…The finance ministry said its assessment of the gap was still $5bn a month but even that was way more than western capitals had so far provided.

…The fiscal strains are showing more broadly. Naftogaz, the state-owned energy company, on Tuesday asked holders of $1.5bn of its bonds to accept a delay in payments as it seeks to preserve cash for purchasing gas. It would amount to the first default by a Ukrainian state entity since the war began.

Here is more from the FT.

Yimby and Liberty

Good answer from Matt Yglesias at Slow Boring:

Marcus Seldon: How should the YIMBY movement/urbanists deal with the fact that most Americans say they want to live in a detached single-family home that they own? How do you sell upzoning, walkable neighborhoods, transit-oriented development, and so on to people who largely like (or think they like, at least) the American suburban lifestyle?

MY: Logically, there’s just no contradiction here. It’s clear that there is significant unmet demand to live in New York, Boston, D.C., and San Francisco, and it’s also clear that most people don’t want to live in those cities. Right now, they collectively account for maybe three to four percent of the U.S. population, and in YIMBYtopia, maybe that would go up to five to six percent.

But mostly, the thing I want to sell people on is freedom. It should be legal to build a detached single-family home on any parcel of residentially zoned land in America. But it should also be legal to build a duplex or some rowhouses there. The point of making it legal to build mid-rise apartments isn’t that there’s something incredibly awesome about living in a mid-rise apartment. It’s that in a world of tradeoffs, you might prefer it to an alternative living situation where you have a longer commute or higher expenses.

Yglesias is correct. Yimby is a natural libertarian issue: it’s good for freedom, efficiency, and the poor. It’s unfortunate that in recent years there has been some slippage among libertarians toward adopting a “conservative” approach to Yimby and immigration by arguing for local and national rights to determine neighborhood and country composition. Sorry, you can twist words all you want, but that isn’t libertarianism; it’s collectivism.

The NSF CAREER Award

From an email from an anonymous correspondent:

Tyler, you may already know this, but I don’t think most people outside of STEM do. The NSF CAREER award (grant) is viewed as a major stepping-stone towards tenure, and there is an expectation that most people will “get one” on their way to tenure at top universities. Yet the requirements are:

1. Write 15 pages, outlining your entire ambitious research agenda for the next 5 years, generally organized as 3 major thrusts with 2-3 paper-sized ideas each.

2. Write 2-3 pages about broader impact, which generally includes broadening participation goals explaining the new undergraduate classes, graduate classes, and extensive community outreach you will engage in.

3. You have about a 20% chance of being awarded this grant, and will hear in 6-9 months (~10% of your tenure clock!)

4. It’s for only $500k, which at most top STEM programs covers about a grad student per year by the time all the indirect costs are included.

(You may guess that I am writing one right now). The idea that we basically have “prestige” grants that everyone agrees are way too much effort for way too little money blows my mind. And everyone goes along with it!

Other unintended consequences include that you’re effectively forbidden from proposing the same ideas to other funding agencies while the grant is under review, locking you out of other funding sources!

Imagine if I pitched a VC and they said “We’ll get back to you in six months and in the meantime you can’t pitch anyone else, and we’ll only give you enough for one employee for the next five years”. How could anyone do innovation in that kind of environment?!

TC again: Of those, it is #4 that I find most astonishing.  That is some rate of overhead!  Keep in mind that throughout world history the costs of intermediation generally have run at about two percent of wealth.  And that is for intermediaries that have to assess the creditworthiness of borrowers, not just send money along.
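A rough back-of-the-envelope shows why point #4 bites. The stipend, tuition, and indirect-cost figures below are assumed, typical magnitudes for illustration, not numbers from the email or from NSF:

```python
# How far a $500k, 5-year CAREER award goes. All rates are assumed, typical
# magnitudes for illustration, not official NSF or university figures.

award_total = 500_000
years = 5
per_year = award_total / years                 # $100,000 in total costs per year

indirect_rate = 0.55                           # assumed ~55% overhead on direct costs
direct_per_year = per_year / (1 + indirect_rate)
overhead_per_year = per_year - direct_per_year

grad_student = 40_000 + 20_000 + 8_000         # assumed stipend + tuition + benefits per year

print(f"direct costs per year: ${direct_per_year:,.0f}")
print(f"overhead per year:     ${overhead_per_year:,.0f} ({overhead_per_year / per_year:.0%} of the award)")
print(f"assumed grad student:  ${grad_student:,.0f} per year")
```

On those assumed figures, the direct funds roughly cover one student per year, and about a third of the award goes to overhead.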

Are some VC investments predictably bad?

Do institutional investors invest efficiently? To study this question I combine a novel dataset of over 16,000 startups (representing over $9 billion in investments) with machine learning methods to evaluate the decisions of early-stage investors. By comparing investor choices to an algorithm’s predictions, I show that approximately half of the investments were predictably bad—based on information known at the time of investment, the predicted return of the investment was less than readily available outside options. The cost of these poor investments is 1000 basis points, totalling over $900 million in my data. I provide suggestive evidence that over-reliance on the founders’ background is one mechanism underlying these choices. Together the results suggest that high stakes and firm sophistication are not sufficient for efficient use of information in capital allocation decisions.

That is from a new paper by Diag Davenport, via Atta Tarki.
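For those curious what comparing investor choices to an algorithm’s predictions looks like mechanically, here is a generic sketch of that style of exercise. It is not Davenport’s data, features, or model; the column names, simulated outcome, and outside-option benchmark are invented for illustration.

```python
# Generic sketch: train a model on pre-investment startup features, then flag
# deals whose predicted return falls below a readily available outside option.
# Illustrative only; not Davenport's data, features, or model.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Invented pre-investment features and a noisy "realized return" outcome.
df = pd.DataFrame({
    "founder_prior_exit": rng.integers(0, 2, n),
    "team_size": rng.integers(1, 12, n),
    "elite_degree": rng.integers(0, 2, n),
    "sector_growth": rng.normal(0.05, 0.10, n),
})
df["realized_return"] = (
    0.3 * df["founder_prior_exit"] + 2.0 * df["sector_growth"] + rng.normal(0, 0.5, n)
)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="realized_return"), df["realized_return"], random_state=0
)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

outside_option = 0.10   # invented benchmark return, e.g. a public-market index
predicted = model.predict(X_test)
share_bad = (predicted < outside_option).mean()
print(f"share of held-out deals predicted to underperform the benchmark: {share_bad:.1%}")
```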