Hal of course was in top form, here is the audio and transcript. Excerpt:
COWEN: Why doesn’t business use more prediction markets? They would seem to make sense, right? Bet on ideas. Aggregate information. We’ve all read Hayek.
VARIAN: Right. And we had a prediction market. I’ll tell you the problem with it. The problem is, the things that we really wanted to get a probability assessment on were things that were so sensitive that we thought we would violate the SEC rules on insider knowledge because, if a small group of people knows about some acquisition or something like that, there is a secret among this small group.
You might like to have a probability assessment of whether that would go through. But then, anybody who looks at the auction is now an insider. So there’s a problem: you have to find things that (a) are of interest to the company but (b) do not reveal financially critical information. That’s not so easy to do.
COWEN: But there are plenty of times when insider trading is either illegal or not enforced. Plenty of countries where it’s been legal, and there we don’t see many prediction markets in companies, if any. So it seems like there ought to be some more general explanation, or no?
VARIAN: Well, I’m just referring to our particular case. There was another example at the same time: Ford was running a market, and Ford would have futures markets on the price of gasoline, which was very relevant to them. It was an external price and so on. And it extended beyond the usual futures market.
That’s the other thing. You’re not going to get anywhere if you’re just duplicating a market that already exists. You have to add something to it to make it attractive to insiders.
So we ran a number of cases internally. We found some interesting behavior. There’s an article by Bo Cowgill on our experience with this auction. But ultimately, we ran into this problem that I described. The most valuable predictions would be the most sensitive predictions, and you didn’t want to do that in public.
COWEN: But then you must think we’re not doing enough theory today. Or do you think it’s simply exhausted for a while?
VARIAN: Well, one area of theory that I’ve found very exciting is algorithmic mechanism design. With algorithmic mechanism design, it’s a combination of computer science and economics.
The idea is, you take the economic model, and you bring in computational costs, or show me an algorithm that actually solves that maximization problem. Then on the other side, the computer side, you build incentives into the algorithms. So if multiple people are using, let’s say, some communications protocol, you want them all to have the right incentives to have the efficient use of that protocol.
So that’s a case where it really has very strong real-world applications to doing this — everything from telecommunications to AdWords auctions.
VARIAN: Yeah. I would like to separate the blockchain from just cryptographic protocols in general. There’s a huge demand for various kinds of cryptography.
Blockchain seems to be, by its nature, relatively inefficient. As an economist, I don’t like this proof-of-work requirement. I don’t like the fact that there’s one version of the blockchain that has to keep being updated. I don’t like the fact that it’s so slow. There are lots of things that you could fix, and I expect to see them fixed in the future, but I would say, crypto in general — big deal. Blockchain — not so much.
COWEN: Now, users seem to like them both, but if I just look at the critics, why does it seem to me that Facebook is more hated than Google?
VARIAN: Well, you know, I actually don’t use Facebook. I don’t have any moral objection to it. I just don’t have the time to do it. [laughs] There are other things of this sort that can end up soaking up a substantial amount of time.
I think that one of the reasons — and this is, of course, quite speculative — I think that one of the reasons people are most worried about Facebook is they don’t really understand the limits of what can be done at Facebook. Whereas at Google, I think we’re pretty clear that we’re showing you ads. We’re showing you ads that are targeted to one thing or another, but that’s how the information’s used.
So, you’ve got this specific application in our case. In Facebook’s case, it’s more amorphous, I think.
There is much, much more at the link.
1. The Libra will be backed by a bundle of pretty safe, pretty mainstream assets (I don’t know which ones). It is presented as one hundred percent reserve, though no system with fluctuating prices and also float really will be pure one hundred percent. And the reserve is in “low-risk” assets, attention all critics of the Basel capital standards.
1b. The paper could say that the custodians will be separately capitalized, with no cross-collateralization, for purposes of Libra protection, but it does not do so. I would recommend that change.
2. The assets in the reserve fund will come from users of Libra (how will they be charged?) and from “investors in the separate Investment Token.” Furthermore “The funds for the coins that will be distributed as incentives will come from a private placement to investors.”
3. What about the public choice issues? Won’t banks insist — correctly or not — that this represents competition and part of the payments system, and thus it should be brought under deposit insurance control and taxation, Fed regulation, various bank holding company acts, Monetary Control Act of 1980, and so on? Have banks ever lost a political battle of this kind?
4. We are told “The association does not set monetary policy. It mints and burns coins only in response to demand from authorized resellers.” Maybe; of course there are hundreds of years of debate on that one — google “real bills doctrine,” noting that here we have a semi-dominant private issuer rather than a perfectly competitive banking system. The association policy on interest rate spreads, floats, and credit, of course, can end up being a monetary policy de facto. I don’t want to prejudge this one against Libra, since to me the validity of the real bills doctrine is a genuinely open question, but it is worth noting that most economists would not agree with the doctrine in most settings.
4b. Won’t some margins arise where there are fractional reserves, even if Facebook/association/Libra are not the ones doing it? Imagine that a new class of intermediaries arises, offering some intermediate services between the core system and retail use, but not adhering to the 100% reserve provisions. The logic behind this tendency seems pretty strong, for better or worse, and it can reintroduce risk into the system. Someone will want to hold higher-yielding assets and then make claims on them liquid through the Libra system. But Facebook/Libra would not seem to have the power to regulate the surrounding system of intermediaries, or is that somehow to be done through covenant (“you can’t use Libra unless you promise not to pile your intermediaries on top of it”)?
5. The crypto angle does seem like a sideshow, for me that is not a problem.
6. Imagine a private payment company issuing SDRs, or some other similar basket, based on 100% backing. They would offer you new transactions technologies for greater convenience (WhatsApp?), in return receiving access to your transactions data and sharing some of the float and spread all around, to merchants and customers too. Perhaps that is one way of thinking about how the plan works and where the gains from trade come from?
7. Is there a provision in the system for zero or low-interest loans? Can I send small amounts of “libras,” say to pay my water bill, without first having them in my account? Might sellers sign up to participate in such a system, sharing part of the credit risk with Libra? And is there a way to do it, with crypto and layered assets and float and implicit positions, so that all this is not subject to the usual consumer credit regulations? Is that part of how the system will make money and attract interest? This is just speculation, my question marks here are literally question marks, not tricks to make you think that is how it will be.
8. “Who holds intraday credit risk?” is always a question worth asking.
9. Does any of this try to arbitrage away the fees earned by credit card companies for their intermediation?
10. What if the market for the underlying currencies and assets is (for a while?) more liquid than the market for Libras? Say the basket values adjust before Libra values do. What kind of arbitrage opportunities does that create? If we know Libras are due to depreciate, is there a higher nominal rate of interest on them, as with traditional currencies in an international multi-currency setting? What are the equivalents of covered and uncovered interest parity in this setting? Does a kind of “program trading” arise to perform the arbitrage? Can perfect redemption be offered credibly while the prices are still out of whack?
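For reference, here is what the standard covered-interest-parity condition would look like transposed to this hypothetical setting (the notation is mine, not anything from the Libra documents):

\[
\frac{F}{S} = \frac{1 + i_{\text{basket}}}{1 + i_{\text{Libra}}},
\]

where \(S\) and \(F\) are the spot and forward prices of one Libra in units of the underlying basket, and \(i_{\text{Libra}}\) and \(i_{\text{basket}}\) are the corresponding nominal interest rates. A Libra expected to depreciate against the basket (\(F < S\)) would have to carry the higher nominal rate, just as with traditional currencies, and persistent deviations from this condition would be exactly the arbitrage opportunity that program traders would move in on.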
I still don’t feel I have a great handle on the plan, but those are my immediate reactions. You should take them with a grain of salt, as they may be based on misunderstandings or perhaps even plan incompleteness. I look forward to learning more.
Addendum: If anyone connected to Libra would wish to send more information or address these questions, I would gladly run that material on MR.
Here is a thread on the new project from those running it, here is the White Paper (which I have yet to read). Here is an FTAlphaville analysis of how it may not use a blockchain after all. Note this:
Another important aspect of the Libra Blockchain is Move, its new programming language. This programming language will, says Facebook, allow users to define their own smart contracts in the future. Smart contracts are agreements written in code whose clauses are automatically enforced when a set of pre-determined criteria is met.
Any comments from the experts in the MR reading audience? By the way, if you haven’t been paying attention, the Facebook share price is up 44.2% this year. Alphabet is up 4.7%.
Here is a further FTAlphaville analysis: “Managing a pegged monetary reserve system isn’t all that easy.”
Here is a Hacker News thread.
Or so it seems to me, here is the headline: Washington state waterfront owners asked to take dead whales
Here is part of the story:
At least one Washington state waterfront landowner has said yes to a request to allow dead gray whales to decompose on their property.
So many gray whale carcasses have washed up this year that the National Oceanic and Atmospheric Administration Fisheries says it has run out of places to take them.
In response, the agency has asked landowners to volunteer property as a disposal site for the carcasses. By doing so, landowners can support the natural process of the marine environment, and skeletons left behind can be used for educational purposes, officials said.
But the carcasses can be up to 40 feet (12 meters) long. That’s a lot to decay, and it could take months. Landowner Mario Rivera of Port Hadlock, Washington, told KING5-TV that the smell is intermittent and “isn’t that bad.”
“It is really a unique opportunity to have this here on the beach and monitor it and see how fast it goes,” said his wife, Stefanie Worwag.
With Zimbabwe’s economy in shambles and political tensions rising, leaving the country seems the best option for many who are desperate for jobs. But those dreams often end at the passport office, which doesn’t have enough foreign currency to import proper paper and ink.
A passport now takes no less than a year to be issued. An emergency passport can take months amid a backlog of 280,000 applications, never mind recent ones.
That is my essay in the new NBER volume The Economics of Artificial Intelligence: An Agenda, edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. Here is one excerpt from my piece:
These distribution effects [from more powerful AI] may be less egalitarian if hardware rather than software is the constraint for the next generation of AI. Hardware is more likely to exhibit constant or rising costs, and that makes it more difficult for suppliers to charge lower prices to poorer buyers [price discrimination]. You might think it is obvious that future productivity gains will come in the software area — and maybe so — but the very best smart phones, such as iPhones, also embody significant innovations in the area of materials. A truly potent AI device might require portable hardware at significant cost. At this point we don’t know, but it would be unwise to assume that future innovations will be software-intensive to the same extent that recent innovations have been.
You can buy the book here, it has many notable contributors and other essays of interest.
A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial auto-correlation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t statistics become severely inflated leading to significance levels that are in error by several orders of magnitude. We analyse 27 persistence studies in leading journals and find that in most cases if we replace the main explanatory variable with spatial noise the fit of the regression commonly improves; and if we replace the dependent variable with spatial noise, the persistence variable can still explain it at high significance levels. We can predict in advance which persistence results might be the outcome of fitting spatial noise from the degree of spatial autocorrelation in their residuals measured by a standard Moran statistic. Our findings suggest that the results of persistence studies, and of spatial regressions more generally, might be treated with some caution in the absence of reported Moran statistics and noise simulations.
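The paper’s core “artificial regression” exercise is easy to reproduce in miniature: regress one spatially correlated noise series on another and watch the slope’s t statistic reject the null far more often than the nominal 5 percent. Here is a minimal pure-Python sketch; the moving-average noise process, window width, and sample sizes are my own illustrative choices, not the authors’.

```python
import random

random.seed(42)

def smooth(series, k):
    """Moving average of half-width k: a crude spatially correlated process."""
    n = len(series)
    out = []
    for i in range(n):
        window = series[max(0, i - k):i + k + 1]
        out.append(sum(window) / len(window))
    return out

def ols_t(x, y):
    """t statistic on the slope from a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    rss = sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))
    se = (rss / (n - 2) / sxx) ** 0.5
    return beta / se

def rejection_rate(k, sims=300, n=200):
    """Share of noise-on-noise regressions with |t| > 1.96 (nominal 5%)."""
    hits = 0
    for _ in range(sims):
        x = smooth([random.gauss(0, 1) for _ in range(n)], k)
        y = smooth([random.gauss(0, 1) for _ in range(n)], k)
        if abs(ols_t(x, y)) > 1.96:
            hits += 1
    return hits / sims

rate_iid = rejection_rate(k=0)       # independent noise: near 5 percent
rate_spatial = rejection_rate(k=10)  # smoothed (spatial) noise: far above 5 percent
print(rate_iid, rate_spatial)
```

The intuition is that smoothing leaves far fewer effectively independent observations than the conventional standard-error formula assumes, so the t statistic is computed against the wrong yardstick and spurious “persistence” appears highly significant.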
Slate has published an adaptation from my recent book *Big Business: A Love Letter to an American Anti-Hero*, here is one excerpt:
Advocates of splitting up the big tech companies have a utopian vision of what will replace them. Whether you like it or not, we now live in a world where every possible idea (and video) will be put out there in some fashion or another. Don’t confuse your discomfort with reality with your assessment of big tech companies as individual agents. We’re probably better off having major, well-capitalized companies as guardians and gatekeepers of online channels, however imperfect their records, as the relevant alternatives would probably be less able to fend off abuse of their platforms and thus we would all fare worse.
Imagine, for instance, that instead of the current Facebook we had seven smaller companies all performing comparable social networking services, perhaps with some form of interconnectability or data portability. The negative sides of social media, which are indeed real, probably would be worse and harder to control.
It is unlikely that such a setting would result in greater consumer privacy and protection. Instead, we would have more weakly capitalized entities, with less talent on staff and weaker A.I. technologies to take down objectionable material. Probably some of those companies would be more tolerant of irresponsible user behavior as a competitive lure. Fake accounts would proliferate, and social networking sites such as 4chan—often a cesspool of racism and rhetoric that goes beyond the merely offensive—would comprise a larger and more central part of the market.
As for privacy, these smaller Facebook replacements would be more susceptible to hacks, foreign surveillance and infiltration, and external manipulation—the real dangers to our privacy and well-being.
There is much more at the link.
SlateStarCodex, whose 2017 post on the cost disease was one of the motivations for our investigation, says Why Are the Prices so D*mn High (now available in print, ePub, and PDF) is “the best thing I’ve heard all year. It restores my faith in humanity.” I wouldn’t go that far.
SSC does have some lingering doubts and points to certain areas where the data isn’t clear and where we could have been clearer. I think this is inevitable. A lot has happened in the post World War II era. In dealing with very long run trends so much else is going on that answers will never be conclusive. It’s hard to see the signal in the noise. I think of the Baumol effect as something analogous to global warming. The tides come and go but the sea level is slowly rising.
In contrast, my friend Bryan Caplan is not happy. Bryan’s basic point is to argue, ‘look around at all the stupid ways in which the government prevents health care and education prices from falling. Of course, government is the explanation for higher prices.’ In point of fact, I agree with many of Bryan’s points. Bryan says, for example, that immigration would lower health care prices. Indeed it would. (Aside: it does seem odd for Bryan to argue that if K-12 education were privately funded schools would not continue their insane practice of requiring primary school teachers to have B.A.s when in fact, as Bryan knows, credentialism has occurred throughout the economy.)
The problem with Bryan’s critiques is that they miss what we are trying to explain, which is why some prices have risen while others have fallen. Immigration would indeed lower health care prices but it would also lower the price of automobiles, leaving the net difference unexplained. Bryan, the armchair economist, has a simple syllogism: regulation increases prices, education is regulated, therefore regulation explains higher education prices. The problem is that most industries are regulated. Think about the regulations that govern the manufacture of automobiles. Why do all modern automobiles look the same? As Car and Driver puts it:
In our hyperregulated modern world, the government dictates nearly every aspect of car design, from the size and color of the exterior lighting elements to how sharp the creases stamped into sheet metal can be.
(See Jeffrey Tucker for more). And that’s just design regulation. There are also environmental regulations (e.g. ethanol, catalytic converters, CAFE etc.), engine regulations, made in America regulations, not to mention all the regulations on the inputs like steel and coal. The government even regulates how cars can be sold, preventing Tesla from selling direct to the public! When you put all these regulations together it’s not at all obvious that there is more regulation in education than in auto manufacturing. Indeed, since the major increase in regulation since the 1970s has been in environmental regulation, which impacts manufacturing more than services, it seems plausible that regulation has increased more for auto manufacturing.
As an empirical economist, I am interested in testable hypotheses. A testable hypothesis is that the industries with the biggest increases in regulation have seen the biggest increases in prices over time. Yet, when we test that hypothesis as best we can it appears to be false. Remember, this does not mean that regulation doesn’t increase prices! It can and probably does; it’s just that regulation is not the explanation for the differences in prices we see across industries. (Note also that Bryan argues that you don’t need increasing regulation to explain increasing prices, which is true, but I still need a testable hypothesis, not an unfalsifiable claim.)
So by all means let’s deregulate, but don’t expect 70+ year price trends to reverse until robots and AI start improving productivity in services faster than in manufacturing.
Let me close with this. What I found most convincing about the Baumol effect is consilience. Here, for example, are two figures which did not make the book. The first shows car prices versus car repair prices. The second shows shoe and clothing prices versus shoe repair, tailors, dry cleaners and hair styling. In both cases, the goods price is way down and the service price is up. The Baumol effect offers a unifying account of trends such as this across many different industries. Other theories tend to be ad hoc, false, or unfalsifiable.
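The textbook mechanism behind the Baumol effect fits in two lines (a stylized sketch of the standard model, not an excerpt from the book). With a common wage \(w\) across sectors and prices tied to unit labor costs,

\[
p_g = \frac{w}{a_g}, \qquad p_s = \frac{w}{a_s} \quad\Longrightarrow\quad \frac{p_s}{p_g} = \frac{a_g}{a_s},
\]

where \(a_g\) and \(a_s\) are labor productivity in goods and services. If goods-sector productivity grows faster than service-sector productivity, the relative price of services rises at roughly the difference between the two growth rates, with no change in regulation required. That is exactly the pattern in the car-repair and shoe-repair figures above.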
In addition to being a great economist, Marty was an institution builder. He was the early driving force behind the rise of the NBER, he led the development of empirical public finance as a respected field, and also very early on he pushed health care economics, both through his leadership at the NBER and through his own work and mentorship. He always was reaching out to help others, and Larry Summers, Jim Poterba, David Cutler, Raj Chetty, and Jason Furman were some of those he mentored. The economics of art museums was yet another topic he had a real interest in and stimulated research on.
Marty also was one of my oral examiners at Harvard, and he asked only excellent questions. I thank him for judging my answers to be good enough.
Do trade reforms that significantly reduce import barriers lead to faster economic growth? In the two decades since Rodríguez and Rodrik’s (2000) critical survey of empirical work on this question, new research has tried to overcome the various methodological problems that have plagued previous attempts to provide a convincing answer. This paper examines three strands of recent work on this issue: cross-country regressions focusing on within-country growth, synthetic control methods on specific reform episodes, and empirical country studies looking at the channels through which lower trade barriers may increase productivity. A consistent finding is that trade reforms have a positive impact on economic growth, on average, although the effect is heterogeneous across countries. Overall, these research findings should temper some of the previous agnosticism about the empirical link between trade reform and economic performance.
That is the abstract to the new NBER working paper from Douglas Irwin, self-recommending.
Anecdotes that Millennials fundamentally differ from prior generations are numerous in the popular press. One claim is that Millennials, happy to rely on public transit or ride-hailing, are less likely to own vehicles and travel less in personal vehicles than previous generations. However, in this discussion it is unclear whether these perceived differences are driven by changes in preferences or the impact of forces beyond the control of Millennials, such as the Great Recession. We empirically test whether Millennials’ vehicle ownership and use preferences differ from those of previous generations using data from the US National Household Travel Survey, Census, and American Community Survey. We estimate both regression and nearest-neighbor matching models to control for the confounding effect of demographic and macroeconomic variables. We find little difference in preferences for vehicle ownership between Millennials and prior generations once we control for confounding variables. In contrast to the anecdotes, we find higher usage in terms of vehicle miles traveled (VMT) compared to Baby Boomers. Next we test whether Millennials are altering endogenous life choices that may, themselves, affect vehicle ownership and use. We find that Millennials are more likely to live in urban settings and less likely to marry by age 35, but tend to have larger families, controlling for age. On net, these other choices have a small effect on vehicle ownership, reducing the number of vehicles per household by less than one percent.
That is from new work by Christopher R. Knittel and Elizabeth Murphy.
Changing sectoral trends in the last 6 decades, translated through the economy’s production network, have on net lowered trend GDP growth by around 2.3 percentage points. The Construction sector, more than any other sector, stands out for its contribution to the trend decline in GDP growth over the post-war period, accounting for 30 percent of this decline.
That is from a new working paper by Andrew Foerster, Andreas Hornstein, Pierre-Daniel Sarte, and Mark W. Watson, “Aggregate Implications of Changing Sectoral Trends.”
Kevin Erdmann, telephone!
…we suggest that this division of innovative labor has not, perhaps, lived up to its promise. The translation of scientific knowledge generated in universities to productivity enhancing technical progress has proved to be more difficult to accomplish in practice than expected. Spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. Corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. Large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and direct their research toward solving specific practical problems, which makes it more likely for them to produce commercial applications. University research has tended to be curiosity-driven rather than mission-focused. It has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful. This is not to deny the important contributions that universities and small firms make to American innovation. Rather, our point is that large corporate labs may have distinct capabilities which have proved to be difficult to replace.
That is from Ashish Arora, Sharon Belenzon, Andrea Patacconi, and Jungkyu Suh, “The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth,” recommended, an excellent paper spanning several disciplines. I would myself note this is further reason not to split up the major tech companies.