
A Critique of Tabarrok on Bundling

In my MRUniversity video on the economics of bundling I argue that bundling raises total surplus and that requiring the Cable TV companies to price by the channel is unlikely to reduce most people’s cable bill (see also Does Cable TV Ripoff People Who Don’t Like Sports?). Pragmatarianism offers an excellent critique. Here is one bit from a longer post worth reading in full:

The flaw in Tabarrok’s logic is that it completely ignores the necessity of determining what the actual demand is for the individual components in the bundle.  For example, when I subscribed to cable…Charter had no idea how much I valued the Discovery Channel.  Neither did the Discovery Channel.  But is my valuation relevant?  According to Tabarrok…it really isn’t.  Uh, what? 

How could the Discovery Channel and Charter and Tabarrok not care what the actual demand is for the Discovery Channel?  In the absence of consumer valuation…how could society’s limited resources be put to their most valuable uses? 

Tabarrok is basically arguing that we don’t need accurate information in order to efficiently allocate resources.  Except, does he really believe that?  Let me consult my magic database…

The most valuable public goods are constantly changing, just as the most valuable private goods are constantly changing.  The signal provided by prices and mobility is therefore of great importance. – Alexander Tabarrok, in The Voluntary City

Huh.  Hmmm.  Is the Discovery Channel a private good?  Yes.  Is its value constantly changing?  Yes.  So…according to Tabarrok…it’s of great importance that the Discovery Channel should have its own price.  But this sure wasn’t what he said in his video. 

An excellent point that was made most forcefully by Ronald Coase in The Marginal Cost Controversy. Coase argued that pricing goods with high fixed cost at marginal cost would generate static efficiency but at the price of dynamic efficiency because we would not be able to say with assurance that the total value of the product exceeded total cost. Similarly we lose some information with bundling, perhaps especially so because marginal cost in this case is zero. With bundling, we know that the total value of the bundle exceeds the total cost but we are less certain that the total value of each bundle component (channel) exceeds the total cost of each component.

But this cannot be the whole story because in another paper, The Nature of the Firm, Coase pointed out that sometimes we choose not to use prices. Firms, for example, are islands of central planning in a market ocean (see Yglesias for a good discussion).

A channel such as HBO is itself a bundle of dramas, comedies and documentaries. Should Girls and Game of Thrones always be priced and sold separately and not through the HBO bundle? HBO certainly learns something from individually priced downloads on iTunes and that information helps HBO to improve its service. But how much is this information worth?

In 2002 should HBO have individually priced episodes of the Sopranos and sold them through AOL?  Individual pricing generates value but it also has costs. Tradeoffs are everywhere. And, to the crux of the issue, if a law had been passed in 2002 requiring HBO to sell The Sopranos on an episode by episode basis would that have resulted in better and more programming at lower prices? I think not. Similarly, I see few reasons to think that welfare would be improved by a law requiring cable TV companies to price by channel.

More generally, the price system is embedded in the larger field of the market economy which includes non-price institutions such as firms; and the market economy is embedded in the larger field of civil society which includes non-profits and non-market institutions such as the family. Economists often focus on the virtues of the price system but that should not blind us to the many virtues and many margins on which a free society operates.

Are recessions a good time to boost the minimum wage?

One empirical regularity is that many minimum wage boosts come during recessions or downturns, as many of you pointed out here.  Yet I take this repeated pattern to be an argument for having a low, zero, or quite “fettered” minimum wage.

Let’s think through the economics.  One of the main pro-minimum wage arguments — arguably the #1 argument — cites labor market monopsony.  Let’s say you have a monopsonistic employer who holds back on bidding for more labor, out of fear that hiring more labor raises the price paid on all labor units of a certain quality (by assumption, there is no perfect price discrimination here).  The minimum wage can get you out of this trap.  By forcing the higher wage on all workers in any case, the employer now doesn’t hesitate to hire more of them because the “fear of bidding up the price of labor” effect is gone or diminished.  And that is how, in some situations, a higher minimum wage can boost employment.

Now let’s say the economy is in a demand-driven downturn, which creates a surplus in the labor market.  Now, to get more workers, the monopsonist firm does not have to raise the wage and it can get more workers at the prevailing wage.  But employers just don’t want more workers, because of demand-side constraints.  So employers could in fact hire more workers without pushing up wage rates at all, once again that is for all units of labor of a particular quality.  Yes there is still monopsony, but the potential wage effects of hiring more labor are muted by the labor surplus.  And that means boosting the minimum wage won’t create the beneficial hiring effects which operate in the more traditional monopsony scenario, explained in the paragraph directly above.
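The two scenarios above can be put into numbers. Here is a minimal sketch of a monopsony labor market; every figure in it (the supply curve, the marginal revenue product of 20, the minimum wage of 15, the slump MRP of 12) is an illustrative assumption of mine, not from the post:

```python
# Toy monopsony labor market; all numbers are illustrative assumptions.
def supply_wage(L):
    # Upward-sloping labor supply: w(L) = 2 + L/10, so hiring more labor
    # bids up the wage paid to all workers of this quality.
    return 2 + L / 10

MRP = 20.0  # marginal revenue product per worker, assumed constant

# The monopsonist equates MRP with the marginal cost of labor,
# MCL = 2 + 2L/10 (twice the supply slope), and so holds back on hiring.
L_monopsony = (MRP - 2) * 10 / 2        # 90 workers
w_monopsony = supply_wage(L_monopsony)  # wage of 11, below MRP

# A binding minimum wage of 15 removes the fear of bidding up the wage:
# the firm hires everyone willing to work at 15, since MRP = 20 > 15.
w_min = 15.0
L_with_min = (w_min - 2) * 10           # 130 workers: employment rises

# Demand-driven downturn: suppose MRP falls to 12, below the minimum
# wage.  The firm will not expand hiring at 15 no matter how many
# workers the slump makes available; the beneficial effect vanishes.
MRP_slump = 12.0
min_wage_boosts_hiring_in_slump = MRP_slump > w_min  # False
```

The constant MRP just keeps the arithmetic transparent; the same logic goes through with a declining marginal product schedule.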

In other words, if you think we are now seeing a slow labor market for demand-side reasons, you should be skeptical of the monopsony argument for minimum wage hikes, at least for the time being.

By the way, demand-side problems often wreck the notion that the EITC and minimum wage are complements.

The bottom line is that a lot of the arguments for a higher minimum wage are inconsistent with or in tension with a demand-driven labor market slowdown.  And I don’t exactly see the world rushing to point this out.

Here is my earlier argument that slow labor markets are the worst times to boost minimum wages.  Here is my earlier post drawing a parallel between minimum wages (government-enforced sticky wages) and privately-enforced sticky wages.  Here is an excerpt from that post:

I know many economists who will argue: “let’s raise the state-imposed minimum wage.  Employers will respond by creating higher-productivity jobs, or by paying more, and few jobs will be lost.”  I do not know many Keynesians who will argue: “In light of the worker-imposed minimum wage, employers will respond by creating higher-productivity jobs, or by paying more, and few jobs will be lost.”

Addendum: By the way, here are some graphs and regressions about the minimum wage and recessions, from Kevin Erdmann.  I think he is attempting the impossible, but you still might find it instructive to look at some of the pictures.

Which kinds of music are encouraged by streaming vs. downloads?

Let’s compare iTunes downloads to a mythical perfect streaming service which lets you listen to everything for a fixed fee each month or sometimes even for free. In the interests of analytical clarity, I will oversimplify some of the actual pricing schemes associated with streaming and consider them in their purest form.

Streaming seems to encourage the demand for variety, so the website vendor wants to make browsing seem really fun, perhaps more fun than the songs themselves.  (An alternative view is that the information produced by streaming services, and the recommendations, allow for in-depth exploration of genres and that outweighs the “greater ease of sampling of variety” effect.  Perhaps both effects can be true for varying groups of listeners, with somehow the “middle level of variety-seeking” left in the lurch, relatively speaking.)

The music creators are incentivized to create music which sounds very good on first approach.  Otherwise the listener just moves on to further browsing and doesn’t think about going to your concert or buying your album.

Streaming, with its extremely large menu, also means commonly consumed pieces will tend to be shorter or more easily broken into excerpts.  This will favor pop music and I think also opera, because of its arias.

Advertising is a more important revenue source for streaming than it is for downloads.  The music promoted by streaming services thus should contribute to the overall ambience and coolness of the site, and musicians who can meet that demand will find that their work is given more upfront attention.  It encourages music whose description evokes a response of “Oh, I’ve never had that before, I’d like to try it.”  Even if you don’t really care about it.

People who purchase advertised products are, on average, older than the people who purchase music.  Streaming services thus should slant product and product accessibility on the site toward the musical tastes of older people.

Since streaming divides up revenues among a greater number of artists, that should encourage solo performers with low capital costs, who can keep their (tiny) share all for themselves.  It also may require that the artists on streaming services can make a living or partial living giving concerts, even more so than under the previous world order.

This music industry source suggests that streaming boosts album sales in a way that downloads do not.  It also questions whether that boost will be long-lived, as streaming services take over more of the market.

When the marginal cost of more music is truly zero, does that make musical choices more or less socially influenced?

Hannah Karp shows that in the new world of streaming, mainstream radio stations are responding by playing the biggest hits over and over again.  Ad-supported media require the familiar song to grab and keep the attention of the listener.  Risk-aversion is increasing, which probably pushes some marginal listeners, who are interested in at least some degree of exploration, into further reliance on streaming.

The top 10 songs last year were played close to twice as much on the radio as they were 10 years ago, according to Mediabase, a division of Clear Channel Communications Inc. that tracks radio spins for all broadcasters. The most-played song last year, Robin Thicke’s “Blurred Lines,” aired 749,633 times in the 180 markets monitored by Mediabase. That is 2,053 times a day on average. The top song in 2003, “When I’m Gone” by 3 Doors Down, was played 442,160 times that year.

So the differing parts of the market are interdependent here.

What do you think?

The resource costs of a gold standard

This is one of those topics which bugs me.

I’m not happy with counting the stock of “monetary gold” as the “resource costs of a gold standard,” as did Milton Friedman.  We also hold stocks of oil, copper, and other commodities — how about books in libraries? — and no one considers inventories of those commodities as costs per se.  For one thing, holding monetary gold in vaults still involves an option to convert into commodity uses and it may in essence serve as a useful commodity inventory for gold.  Another way to put the point is that a properly capitalized bank can simply hold its gold in dental offices — or in wedding rings — if need be.  How about if they hold their assets in the form of securities (T-Bills?) which can be, if needed, traded for gold mining stocks?

Is there a systematic market failure when it comes to locating inventories too close to major shipping centers?  I don’t see why.  But that’s arguably the same question as the one about the resource costs of a gold standard.

Or consider the Hotelling resource pricing rule, namely that a resource price should rise at the nominal rate of interest, with various adjustments for costs and changing costs and risk tossed in.  Let’s say there is a gold standard and gold is also the medium of account.  The price of gold rising at the nominal rate of interest thus means the general price level is falling at the nominal rate of interest.  During the times of the classical gold standard, expected price inflation was roughly zero, but nominal interest rates were higher than zero.  Either prices weren’t falling fast enough or nominal interest rates were too high or some mix of both.  Say prices weren’t falling enough.  Well, that is violating the Hotelling rule but in fact gold production is then falling short of an optimum, not exceeding it.  Alternatively, you could toss in a liquidity rate of return on holding gold inventories and maybe then things would be just right.
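As a sketch of that arithmetic (the 4 percent nominal rate and the ten-year horizon are purely illustrative numbers of mine):

```python
# Hotelling benchmark with gold as the medium of account (numbers mine).
# If gold's relative price rises at the nominal rate i, then the gold
# price of everything else -- the general price level -- falls at rate i.
i = 0.04       # assumed nominal interest rate
years = 10

benchmark_price_level = (1 / (1 + i)) ** years   # ~0.676 after a decade

# Classical gold standard era: expected inflation was roughly zero, so
# the observed price level stayed near 1.0.  Prices were not falling
# fast enough (or nominal rates were too high) relative to the
# benchmark, i.e. gold production fell short of the Hotelling optimum.
observed_price_level = 1.0
prices_fell_short_of_benchmark = observed_price_level > benchmark_price_level
```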

A way to put this point more generally is that pricing some contracts in terms of a commodity does not itself create violations of the Hotelling rule.  You might think that the liquidity premium on gold has to create an inefficiency, perhaps because social and private returns to liquidity differ.  But do they, in the case of base money?  Or isn’t the social return to liquidity arguably higher, if you see bankruptcy costs and benefits from thick capitalization using the liquid asset?  In any case, the marginal liquidity return on money gold has to equal the marginal liquidity return on “commodity gold inventories” and then I am back to not being so sure there is a significant externality wedge.

It is unlikely that a final “all things considered” view will have the quantity of gold mined and held be just right.  Yet as a first cut answer, postulating zero real resource costs for a gold standard is more reasonable than it might at first appear.

By the way, for macroeconomic reasons I’ve never favored a gold standard, but the resource cost argument has long seemed to me weak.  All things considered, we might not end up digging up enough gold (liquidity) and that is the real worry we should hold.

A Few Favorite Books from 2013

Tom Jackson asked me for a couple of best books for his year end column. I don’t read as many books as Tyler so consider these some favorite social science books that I read in 2013.

In The Undercover Economist Strikes Back, Tim Harford brings his genius for storytelling and the explanation of complex ideas to macroeconomics. Most of the popular economics books, like The Armchair Economist, Freakonomics, Predictably Irrational and Harford’s earlier book The Undercover Economist, focus on microeconomics: markets, incentives, consumer and firm choices, and so forth. Strikes Back is that much rarer beast, a popular guide to understanding inflation, unemployment, growth and economic crises, and it succeeds brilliantly. Mixing wonderful stories of economists with exciting lives (yes, there have been a few!) with very clear explanations of theories and policies makes Strikes Back both entertaining and enlightening.

Stuart Banner’s American Property is a book about property law, which sounds like an awfully dull topic. In the hands of Banner, however, it is a fascinating history of what we can own, how we can own it and why we can own it. Answers to these questions have changed as judges and lawmakers have grappled with new technologies and ways of life. Who owns fame? Was there a right to own one’s own image? Benjamin Franklin, whose face was used to hawk many products, would have scoffed at the idea but after the invention of photography and the onset of what would later be called the paparazzi thoughts began to change. In the early 1990s, Vanna White was awarded $403,000 because a robot pictured in a Samsung advertisement turning letters was reminiscent of her image on the Wheel of Fortune. American Property is a great read by a deep scholar who writes with flair and without jargon.

On June 3, 1980, shortly after the Soviet Union’s invasion of Afghanistan, the U.S. president’s national security adviser was woken at 2:30 am and told that Soviet submarines had launched 220 missiles at the United States. Shortly thereafter he was called again and told that 2,200 land missiles had also been launched. Bomber crews ran to their planes and started their engines, missile crews opened their safes, the Pacific airborne command post took off to coordinate a counter-attack. Only when radar failed to reveal an imminent attack was it realized that this was a false alarm. Astoundingly, the message NORAD used to test their systems was a warning of a missile attack with only the numbers of missiles set to zero. A faulty computer chip had inserted 2’s instead of zeroes. We were nearly brought to Armageddon by a glitch. If that were the only revelation in Eric Schlosser’s frightening Command and Control it would be of vital importance but in fact that story of near disaster occupies just one page of this 632 page book. The truth is that there have been hundreds of near disasters and nuclear war glitches. Indeed, there have been so many covered-up accidents that it’s clear that the US government has come much closer to detonating a nuclear weapon and killing US civilians than the Russians ever did. Thankfully, we have reduced our stockpile of nuclear weapons in recent years but, as in so many other areas, we are also more subject to computers and their vulnerabilities as we make decisions at a faster, sometimes superhuman, pace. Command and control, Schlosser warns us, is an illusion. We are one black swan from a great disaster and if this is true about the US handling of nuclear weapons how much more fearful should we be of the nuclear weapons held by North Korea, Pakistan or India?

Some dangers in estimating signaling and human capital premia

Let’s say you signal your way into a first job, then learn a lot from holding that perch, and enjoy a persistently higher income for the rest of your life.  Is that a return to signaling or a return to learning?  Or both?

Maybe it matters that “the signaling came first.”  Well, try this thought experiment.

Let’s say you have to learn to read and write to signal effectively.  Can we run a causal analysis on “learning how to read and write”?  Take away that learning and you take away the return to signaling.  Should we thus conclude that the return to signaling is zero, once we take learning into account?  After all, the learning came first.  No, not really.

The trick is this: when there are non-additive, value-enhancing relationships across inputs, single-cause causal experiments can serve up misleading results.  In fact, by cherry-picking your counterfactual you can get the return to signaling, or to human capital, to be much higher or lower.  Usually one is working in a model where the implicit marginal causal returns to learning, IQ, signaling, and so on sum up to much more than 100%, at least if you measure them in this “naive” fashion.  If you think of a career in narrative terms, IQ, learning, and signaling are boosting each others’ value with positive and often non-linear feedback.  And insofar as these labor market processes have “gatekeepers,” it is easy for the marginal product of any one of these to run very high, again if you set up the right thought experiment.
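That over-100% point can be made concrete with a toy model. In the sketch below (my numbers, not the post’s), earnings are high only when IQ, learning, and the signal are all present, so the inputs are complements:

```python
# Toy illustration of non-additive inputs (numbers are mine).
def earnings(iq, learning, signal):
    # Complementary inputs: the high-earnings path needs all three.
    return 100 if (iq and learning and signal) else 10

full = earnings(True, True, True)                         # 100

# "Single-cause" causal measures: knock out one input, hold the rest.
return_to_iq       = full - earnings(False, True, True)   # 90
return_to_learning = full - earnings(True, False, True)   # 90
return_to_signal   = full - earnings(True, True, False)   # 90

# Each input looks nearly all-important, and the three "returns" sum
# to 270 -- far more than the 100 there is to explain.
total_of_single_cause_returns = (
    return_to_iq + return_to_learning + return_to_signal
)
```

Pick your counterfactual and you can make any one of the three look like it carries almost the whole career.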

Along related lines, many people use hypothetical examples to back out the return to signaling, learning, IQ, or whatever.  “Let’s say they make you drop out of Harvard and finish at Podunk U.”  “Let’s say you forge a degree.”  “Let’s say you are suddenly a genius but living in the backwoods.”  And so on.  These are fun to talk and think about, but like the above constructions they will give you a wide range of answers for marginal returns, again depending which counterfactual you choose.  A separate point is that many of these are non-representative examples, or they involve out of equilibrium behavior.

I call the methods discussed in the above few paragraphs the single-cause causal measures, because we are trying to estimate the causal impact of but a single cause in a broader non-additive, multi-causal process.

There is another way to analyze the return to signaling, and that is to leave historical causal chains intact and ask what if a degree is removed.  Let’s say I’ve held a job for ten years and my team is very productive.  But the boss can’t figure out who is the real contributor.  I get an especially large share of the pay because, given my undergraduate basket weaving major, the boss figures I am smarter than those team members who did not finish college at all.  If I didn’t have the degree, I would receive $1000 less.  So that year the return to signaling is $1000.  I call this the modal measure.  It is modal rather than causal because we take my degree away in an imaginary sense, without taking away my job (which perhaps I would not have, earlier on, received without the degree).

There are also the measures (not easy to do) based in notions from bargaining theory.  Consider IQ, learning, and signaling as coming together to form “coalitions.”  One-by-one, remove different marginal elements of the coalition in thought experiments, estimate the various marginal products, and then average up those marginal products as suggested by various bargaining axioms.  You could call those the multi-cause causal measures.  They are more theoretically correct than the single-cause causal measures, but difficult to do and also less fun to talk about.
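One standard way to operationalize those bargaining axioms is the Shapley value: average each input’s marginal product over every order in which the “coalition” could have been assembled. A minimal sketch, using an illustrative characteristic function of my own in which any partial coalition of the three inputs earns 10 and the full coalition earns 100:

```python
from itertools import permutations
from math import factorial

players = ("iq", "learning", "signal")

def v(coalition):
    # Illustrative characteristic function (numbers mine): partial
    # coalitions earn 10, the full coalition earns 100, nobody earns 0.
    s = frozenset(coalition)
    if not s:
        return 0
    return 100 if s == frozenset(players) else 10

def shapley(player):
    # Average the player's marginal contribution over all arrival orders.
    total = 0
    for order in permutations(players):
        before = order[: order.index(player)]
        total += v(set(before) | {player}) - v(before)
    return total / factorial(len(players))

values = {p: shapley(p) for p in players}
# By symmetry each input is credited 100/3, and the credits sum to
# v(full coalition) -- unlike naive single-cause measures, which would
# credit 90 to each input and "explain" 270 out of 100.
```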

Yet another method is to pick out a single counterfactual on the basis of which policy change is being proposed.  I’ll call these the policy measures.  Let’s say the proposal is to subsidize student transfer from community colleges to four-year institutions.  You can then ask causal questions about the group likely to be affected by this.  (It is possible to estimate the private return to education for this kind of policy, but hard to break that down into signaling and learning components.)  In any case the answers to these questions will not resolve broader debates about the relative importance of signaling, learning, IQ, and so on and how we should understand education more generally.

Usually when people argue about the return to signaling, they are conflating the single-cause causal measures, the modal measures, the bargaining theory measures, and the policy measures.  The single-cause causal measures are actually the least justified of this lot, but they exercise the most powerful sway over most of our imaginations.

The single-cause causal measures are especially influential in the blogosphere, where they make for snappy posts with vivid narrative examples and counterexamples.  But they are misleading, so do not be led astray by them.

How much of education and earnings variation is signalling? (Bryan Caplan asks)

On Twitter Bryan asks me:

Would you state your human capital/ability bias/signaling point estimates using my typology?

He refers to this blog post of his, though he does not clearly define the denominator there: is it the percentage of what you spend on education, or the percentage of the variation in lifetime earnings explained?  I’ll choose the latter, and I’ll also focus on signaling rather than trying to separate out which parts of human capital are innate and which are later learned.  My calculations are thus:

1. I don’t wish to count “credentialed” occupations, where you need a degree and/or license, but you are reaping rents due to monopoly privilege.  It’s neither human capital nor signaling (it’s not about intrinsic talent), though you could argue it is a kind of human capital in rent-seeking.  In any case, let’s focus on private labor markets without such barriers.

2. Most capital and resource income is due to factors better explained by human capital theories or due to inheritance.  That is more than a third of earnings right there.  Note that the higher is inequality, the less the signaling model will end up explaining.  That is one reason why the signaling model has become less relevant.

3. Depending on job and sector, what you’ve signaled, as opposed to what you know, explains a big chunk of wages in the first three to five years of employment.  Within five years (often less), most individuals are earning based on what they can do, setting aside credentialism as discussed under #1.  Here is my earlier post on speed of employer learning.

Keep in mind that everyone’s wages change quite a bit over their lifetime and that is mostly not due to retraining (i.e., changes in the educational signal) in the formal sense, as most people stop formal retraining after some point.  The changes are due to employer estimates of skill, modified by bargaining power.  In this sense all theories are predominantly human capital theories, whether they admit it or not.

To be generous, let’s give Bryan the full first five years of income based on signaling alone, out of a forty year career.  And let’s say that on average wages rise at the rate of time discount (not true as of late, but a simplifying assumption and I think Bryan believes in a claim like this anyway.)

How much of income is explained by signaling?  I’m coming up with “1/8 of 2/3,” the latter fraction referring generously to labor’s share in national income.  That will fall clearly under ten percent, but recall I’ve inserted some generous assumptions here.
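The back-of-the-envelope arithmetic behind that figure is just:

```python
from fractions import Fraction

signaling_share_of_career = Fraction(5, 40)  # five years of a forty-year career
labor_share               = Fraction(2, 3)   # labor's (generous) share of income

share_explained = signaling_share_of_career * labor_share
# 1/8 * 2/3 = 1/12, about 8.3 percent -- clearly under ten percent
```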

Bryan wants to call me “a signaling denialist,” yet I see signaling as still very important for understanding some aspects of the labor market.  But it’s far from the main story for the labor market as a whole, especially as you move into the out years.

That all said, this “decomposition” approach may obscure more than it illuminates.  Let’s consider two parables.

First, imagine a setting where you need the signal to be in the game at all, but after that your ingenuity and your personal connections explain all of the subsequent variation in income.  Depending what margin you choose, the contribution of signaling to later income can be seen as either zero percent or one hundred percent.  Signaling won’t explain any of the variation of income across people with the same signal, yet people will compete intensely to get the signal in the first place.

Second, in a basic signaling model there are two groups and one dimension of signaling.  That’s too simple.  A signaling model implies that a worker is paid some kind of average product throughout many years, but of course the reference class for defining this average product is changing all the time and is not, over time, based on the original reference class of contemporaneous graduating peers.  For the purposes of calculating your wage based on a signal, is your relevant peer group a) all those people who got out of bed this morning, b) all those people in the Yale class of 2012, or c) all those who have been mid-level managers at IBM for twenty years?  This will change as your life passes.

So there’s usually a signaling model nested within a human capital model, with the human capital model determining the broader parameters of pay, especially changes in pay.  The employer’s (reasonably good but not perfect) estimate of your marginal product determines which peer group you get put into, if you choose to invest in additional signals (or not).  The epiphenomena are those of a signaling model, but the peer group reshufflings over time are ruled by something else.  Everything will look like signaling but again over time signaling won’t explain much about the variation or evolution in wages.

Seeing the relevance of those “indeterminacy” and “nested” perspectives is more important than whatever decomposition you might cite to answer Bryan’s query.

How sticky are wages anyway?

On the front of this new Elsby, Shin, and Solon paper (pdf) it reads “Preliminary and incomplete,” but if anything that is a better description of the pieces which have come before theirs.  They have what I consider to be the holy grail of macroeconomics, namely a worker-by-worker micro database of nominal wage stickiness under adverse economic conditions, including the great recession and with over 40,000 workers, drawn from the Current Population Survey.

Here are a few results:

1. When looking at the distribution of nominal wage changes, there is always a spike at zero.

2. That said, the spike, ranging from six to twenty percent, isn’t as big as one might expect.

3. The fraction of hourly workers reporting a nominal wage reduction always exceeds ten percent, and the fraction of non-hourly workers reporting a nominal wage reduction always exceeds twenty percent.

3b. In 2007-2008, 37.1% of U.S. workers in the non-hourly sample experienced negative nominal wage changes.  That’s a lot.  In the following years that figure was over thirty percent.  See Table 6 on p.24.

4. These figures are for workers who stay with the same employer for a year or more, and thus they are from sectors where nominal stickiness is especially likely.  Overall nominal stickiness is probably considerably smaller than those figures indicate, as the broader pool of workers includes temps, those on commissions, those with short-term jobs, and so on.

5. If you compare the great recession to earlier downturns, “…initial evidence appears to be weak for a simple story in which the combination of downward stickiness in nominal wages and low inflation has generated high unemployment through excessive rates of job loss.”  If it were primarily a story of sticky nominal wages, we should have expected layoff rates to be even higher than they were.

6. Overall wages are less sticky in the UK than in the U.S.; for instance “the proportion [of measured UK workers] experiencing nominal wage cuts regularly has run in the neighborhood of 20 percent.”  (And here are some recent related results.)

7. Other studies with true microdata also find strongly procyclical real wages, often mediated through changes in nominal wages, including nominal wage declines.

8. The slowdown in real wage growth for U.S. women, during the great recession, follows puzzling patterns.

9. None of these figures include wage changes which take the form of changes in the quality of working conditions, chances of promotion, fringe benefits, and so on.

NB: This paper does not show nominal wages to be fully flexible, nor does it show that observed nominal wage changes were “enough” to re-equilibrate labor markets.  Still, this paper should serve as a useful corrective to excess reliance on the sticky nominal wage hypothesis.  Nominal wage stickiness is a matter of degree and perhaps we need to turn the dial back a bit on this one.

Note also that this paper need not discriminate against neo-Keynesian and monetarist theories, though it will point our attention toward “zero marginal revenue product” versions of the argument, in which case the flexibility of nominal wages simply doesn’t help much.  Note also that such versions of the argument may have somewhat different analytic and policy conclusions than what we are used to expecting.

Addendum: Also from Solon, this time with Martins and Thomas, is this paper about Portugal (pdf), showing considerable nominal flexibility for entry wages in labor markets.

Thiel v. Schmidt

Peter Thiel, taking the pessimistic view, and Eric Schmidt of Google, taking the optimistic view, both made good points in their debate over technology but Thiel had the knockout punch:

PETER THIEL: …Google is a great company.  It has 30,000 people, or 20,000, whatever the number is.  They have pretty safe jobs.  On the other hand, Google also has 30, 40, 50 billion in cash.  It has no idea how to invest that money in technology effectively.  So, it prefers getting zero percent interest from Mr. Bernanke, effectively the cash sort of gets burned away over time through inflation, because there are no ideas that Google has how to spend money.

ERIC SCHMIDT: [talks about globalization]

The moderator repeats Thiel’s point:

ADAM LASHINSKY:  You have $50 billion at Google, why don’t you spend it on doing more in tech, or are you out of ideas?  And I think Google does more than most companies.  You’re trying to do things with self-driving cars and supposedly with asteroid mining, although maybe that’s just part of the propaganda ministry.  And you’re doing more than Microsoft, or Apple, or a lot of these other companies.  Amazon is the only one, in my mind, of the big tech companies that’s actually reinvesting all its money, that has enough of a vision of the future that they’re actually able to reinvest all their profits.

ERIC SCHMIDT:  They make less profit than Google does.

PETER THIEL:  But, if we’re living in an accelerating technological world, and you have zero percent interest rates in the background, you should be able to invest all of your money in things that will return it many times over, and the fact that you’re out of ideas, maybe it’s a political problem, the government has outlawed things.  But, it still is a problem.

ADAM LASHINSKY:  I’m going to go to the audience very soon, but I want you to have the opportunity to address your quality of investments, Eric.

ERIC SCHMIDT:  I think I’ll just let his statement stand.

ADAM LASHINSKY:  You don’t want to address the cash horde that your company does not have the creativity to spend, to invest?

ERIC SCHMIDT:  What you discover in running these companies is that there are limits that are not cash.  There are limits of recruiting, limits of real estate, regulatory limits as Peter points out.  There are many, many such limits.  And anything that we can do to reduce those limits is a good idea.

PETER THIEL:  But, then the intellectually honest thing to do would be to say that Google is no longer a technology company, that it's basically ‑‑ it's a search engine.  The search technology was developed a decade ago.  It's a bet that there will be no one else who will come up with a better search technology.  So, you invest in Google, because you're betting against technological innovation in search.  And it's like a bank that generates enormous cash flows every year, but you can't issue a dividend, because the day you take that $30 billion and send it back to people you're admitting that you're no longer a technology company.  That's why Microsoft can't return its money.  That's why all these companies are building up hoards of cash, because they don't know what to do with it, but they don't want to admit they're no longer tech companies.

ADAM LASHINSKY:  Briefly, and then we’re going to go to the audience.

ERIC SCHMIDT:  So, the brief rebuttal is, Chrome is now the number one browser in the world.

In my mind, the revealed preference of our technological leaders is the best and most depressing argument for the great stagnation.

Why do universities have endowments?

Alan Gunn asks:

Why do universities (in the US and Britain only) have endowments, and should they? And why does no one but Henry Hansmann [pdf, eBook version here] write about this question?

Because they can.  Tax law doesn’t stop them, and why should a University President spend down the fund?  An ongoing high balance in the fund means prestige, a good ranking, and an ability to make credible commitments to quality faculty and quality programs.

Current donors know that their support will feed into something long-run and grand.  Rationally or not, it is less persuasive for an alumni donor to hear a pitch like "We will spend down the corpus.  Penn State will rise seventeen spots in the ratings, for twenty years, and then fade into obscurity."  Givers who care predominantly about the "here and now" donate to political campaigns or benevolent charities, not universities.

Ultimately we need a theory of segmented giving, and how board structures of universities support such giving.  University board members benefit most from a prestigious school with a high endowment and other prestigious board members.  In general those boards will support accumulating the endowment, at least if the school has any chance for prestige in the first place.  Spending money within the university instead distributes those benefits to current faculty and students, rather than to the decision-makers over the endowment.

Note that while the most visible colleges and universities usually have large endowments, the median and modal schools have endowments very close to zero.  They have no chance of accumulating their way to substantial prestige benefits.

Alternatively, you could drop the fancy institutional economics and apply crude price theory.  Universities can borrow or otherwise raise money tax-free, and at g > r you should expect ongoing and rising accumulation.
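The crude price theory can be put into a toy compounding model.  A minimal sketch, with all numbers my own illustrative assumptions (nothing here is from the post): if the endowment's growth rate exceeds its payout rate, the balance accumulates without bound.

```python
# Toy endowment model: the fund earns return g each year, pays out a
# fraction r of the balance, and receives a fixed annual flow of gifts.
# When g > r the balance compounds and rises without bound.
# All parameter values below are illustrative assumptions.

def simulate_endowment(initial, g, r, gifts, years):
    """Evolve the endowment balance forward by the given number of years."""
    balance = initial
    for _ in range(years):
        balance = balance * (1 + g - r) + gifts
    return balance

# Assumed numbers: $1,000 (in millions, say) starting balance, 8% return,
# 4.5% payout, $50/year in gifts.
final = simulate_endowment(initial=1_000, g=0.08, r=0.045, gifts=50, years=50)
print(f"balance after 50 years: {final:,.0f}")
```

The point of the sketch is only the sign of g − r: with tax-free accumulation and no forced payout, even a modest positive gap compounds into a large fund over a few decades.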

It is striking how much the list of top U.S. universities does not change over the last century, albeit with some new entries from the west coast.  Among other things, that suggests there has been no fancy, expensive and effective new product that a school might invest in and run down its endowment for.  This might change in the next twenty years.  One can imagine a middling school running down its endowment to spend its way to leadership in on-line education.

I have never seen a good paper on which non-profits accumulate endowments and which do not, and how that difference functions as both cause and effect.  I would think, for instance, that the Heritage Foundation has a substantial endowment, but many think tanks do not.

Here is a new paper on university endowments (pdf), by Gilbert and Hrdlicka, asking whether endowments are invested in too risky a fashion.  It also raises the question of how well endowment practices will survive in a time with low rates of return.  Here is a 2008 dialogue on endowment reform.  Here is a 2009 law review piece on university endowments; it is a little slow to load.  Here is a TIAA-CREF perspective (pdf) on the investment committees for university endowments; they tend to be run by donors.  Here is a look at mandatory payout proposals.  Here is a good 2010 paper (pdf) on what happens when endowment values decline; it is called "Why I Lost My Secretary."

What should the ECB do?

In the comments, Gareth writes:

Target nominal GDP growth of at least 5% for all EU countries, and back it with a threat of unlimited QE. Italy needs a primary surplus of 6%+ with current yields. That drops to around 3% if NGDP growth rate goes up from predicted 2.5% to 5%.

And back in reality, they could at least raise the inflation target to 4%.

I don’t think he and I disagree on the underlying economic theory, but I suspect this would be too little.  At zero growth, that means five percent price inflation a year and basically an open fire hose to the Italian Treasury.  That also means they never reform, noting that I accept the Keynesian point that at this time horizon fiscal reform is counterproductive.  But “no reform” is counterproductive too!

In my view, the growth simply isn’t there, not now, not even with looser monetary policy.  Eurozone inflation is already about three percent, and while I understand the Sumneresque credibility point at current margins nothing seems credible for the eurozone, there is only the present.  Why should another two percent inflation a year turn the tide?  The inability to implement any kind of credible rule means that the “in the moment” solution has to be all the stronger.  So the “answer,” if that is the right word, is ten percent inflation a year for the eurozone — plus the firehose to Rome — to get the real value of those debts down and quickly.  Maybe twelve.

I don’t feel like debating whether this would be better or worse than the status quo; I am content to suggest it probably won’t happen, not even if German and French leaders understand the gravity of the situation, which I suspect they do.  (The Germans, believe it or not, “do gravity well” and have for some time.)  It’s a common meme these days that the German leaders “don’t get it,” but I view it in reverse: they’re the ones who understand how grave a problem it is, and how truly hard to fix it would be, which is why they are not doing more.  They don’t see the point in pulling out the peashooter against the elephant, and the blunderbuss is not yet available, if it ever will be.

Addendum: Kevin Drum comments.

Do all serious economists favor a carbon tax?

Richard Thaler, Justin Wolfers, and Alex all consider that question on Twitter.  I say no.  While I personally favor such a policy, here are my reservations:

1. Other countries won’t follow suit and then we are doing something with almost zero effectiveness.

2. It may push dirty industries to less well regulated countries and make the overall problem somewhat worse.

3. There is Jim Manzi’s point that Europe has stiff carbon taxes, and is a large market, but they have not seen a major burst of innovation, just a lot of conservation and some substitution, no game changers.  Denmark remains far more dependent on fossil fuels than most people realize and for all their efforts they’ve done no better than stop the growth of carbon emissions; see Robert Bryce’s Power Hungry, which is in any case a useful contrarian book for considering this topic.

4. Especially for large segments of the transportation sector, there simply aren’t plausible substitutes for carbon on the horizon.

5. A tax on energy is a sectoral tax on the relatively productive sector of the economy — making stuff — and it will shift more talent into finance and other less productive sectors.

6. Oil in particular will become so expensive in any case that a politically plausible tax won’t add much value (careful readers will note that this argument is in tension with some of those listed above).

7. A carbon tax won’t work its magic until significant parts of the energy and alternative energy sector are deregulated.  No more NIMBY!  But in the meantime perhaps we can’t proceed with the tax and expect to get anywhere.  Had we had today’s level of regulation and litigation from the get-go, we never could have built today’s energy infrastructure, which I find a deeply troubling point.

8. A somewhat non-economic argument is to point out the regressive nature of a carbon tax.

9. Jim Hamilton’s work suggests that oil price shocks have nastier economic consequences than many people realize.

9b. A more prosperous economy may, for political and budgetary reasons, lead to more subsidies for alternative energy, and those subsidies may do more good than would the tax.  Maybe we won’t adopt green energy until it’s really quite cheap, in which case let’s just focus on the subsidies.

10. The actual application of such a tax will involve lots of rent-seeking, privileges, exemptions, inefficiencies, and regulatory arbitrage.

It seems to me entirely possible that a serious economist would find those arguments hold the balance of power.  In my view those points stack up against a) the problem seems to be worse than we thought at first, b) the philosophic “we are truly obliged to do something,” and c) “some taxes need to go up anyway” arguments.

I am in any case not an optimist on the issue and I consider my pessimism a more fundamental description of my views on the issue than any policy recommendation.  If you study tech, you will see a bright present and also a bright future.  If you study K-12 education, you will see a mixed to dismal present and a possibly bright future.  If you study energy economics and the environment, you will see an OK present and a dismal future, no matter what policies we choose.

Do our intuitions about deadweight loss break down at very small scales?

I’ve been thinking about high-frequency trading again.  Some of the issues surrounding HFT may come from whether our intuitions break down at very small scales.

Take the ordinary arbitrage of bananas.  If one banana sells for $1 and another for $2, no one worries that the arbitrageurs, who push the two prices together, are wasting social resources.  We need the right price signal in place and the elimination of deadweight loss is not in general “too small” to be happy about.

But at tiny enough scales, we stop being able to see why the correct price is the “better” price, from a social point of view.  Think of the marginal HFT act as bringing the correct price a millisecond earlier, so quickly that no human outside the process notices, much less changes an investment decision on the basis of the better price coming more quickly.  (Will we ever use equally fast computers to make non-financial, real investment decisions in equally small shreds of time?  Would that boost the case for HFT?  Is HFT “too early to the party”?  If so, does it get credit for starting the party and eventually accelerating the reactions on the real investment side?)

HFT also lowers liquidity risk in many cases (it is easier to resell a holding, especially for long-term investors, as day churners can get caught in the froth), thereby improving the steady-state market price, again especially for long-term investors.  That too could improve investment decisions, even if the improvement in the price is small in absolute terms.

Some decisions based on prices have to rely on very particular thresholds.  If no tiny price change stands a chance of triggering that threshold, we encounter the absurdity of there being no threshold at all.  We fall into the paradoxes of the intransitivity of indifference and you end up with too many small grains of sugar in your coffee.

So maybe a tiny price improvement, across a very small area of the price space, carries a small chance of prompting a very large corrective adjustment, with a comparably large social gain.  Yet we never know when we are seeing the adjustment.  The smaller the scale of the price improvement, the less frequently the real economy gains come, but in expected value terms those gains remain large relative to the resources used for arbitrage, just as in the bananas case.  It’s not obvious why operating on a smaller scale of price changes should change this familiar logic.  Is the key difference of smaller scales, combined with lumpy real economy adjustments, a greater infrequency of benefit but intact expected gains?

In this model the HFTers labor, perhaps blind to their own virtues, and bring one big grand social benefit, invisibly, every now and then.  Occasionally, for real investors, their trades help the market cross a threshold which matters.

I am reminded of vegetarians.  Say you stop eating chickens.  You are small relative to the market.  Does your behavior ever prompt the supermarket to order a smaller number of chickens based on a changed inventory count?  Or are all the small rebellions simply lost in a broader froth?

What is the mean expected time that HFT must run before it triggers a threshold significant for the real economy?

Aren’t the rent-seeking costs of HFT near zero?  Long-term investors do not have to buy and sell into the possible froth.  HFTers thus “tax” the traders who were previously the quickest to respond, discourage their trading, and push the rent-seeking costs of those traders out of the picture.  More fast computers, fewer carrier pigeons.  Are there models in which total rent-seeking costs can fall, as a result of HFT?  Does it depend on whether fast computers or pigeons are more subject to production economies of scale?

The Lucas critique and twin adoption studies

It is odd to cite a twin adoption study, and its results, as a response to a methodological critique of…a twin adoption study.  Nonetheless, the paper Alex cites, and the associated graph, show very readily the problems in interpreting such studies.

If you check out the graph, for a variable such as "religious importance" it shows family transmissibility of about thirty percent, with varying estimates of transmissibility for other religious variables.  That is the result from a data set involving a) parents who try hard to transmit religion with some idea of what to do, b) parents who don't try very hard, and c) parents who try hard and have no idea of what is an effective technique, as they might be advised by a well-informed social scientist. 

Here's the key point.  The original "control" question we were debating was about a) alone, yet in response Alex is putting forth a measure of the marginal efficacy for a-c, namely including the families who aren't trying to transmit.  Obviously the marginal product of the informed, trying families should be higher than the average marginal product for the group as a whole.  At the very least we can take thirty percent as the lower bound here, not the best estimate of the effect we are trying to measure.

The Korean-American Protestant study finds transmissibility of religious fervor, through family influence, of two-thirds.  That paper does not control for genetics, and of course because of genetic similarity family influence will run especially easily and this figure is an overestimate of the net family effect.  You can think of two-thirds as the upper bound here.  If we had commensurable studies (not the case), we would have lower and upper bounds for trying-to-transmit families.

One way to think about the Korean study is to recognize that out of 100 Protestant children, parental inculcation "worked" for 66 of them.  The correct marginal product question is: without that inculcation, how many of those 66 kids would have found their way to a comparably observant religion?  Of course we don't know, but that's the right question to focus upon.

Twin studies encourage you to think in terms of a different question about marginal products: if you had those Protestant families adopt 100 kids, and try to inculcate the same religion, would 66 of them have ended up observant?  Very likely not.  Of course the two thought experiments are quite different, and they give you different measures of marginal product, most of all because there are non-linear interactions between parenting, peers, and genes.  Since most children are not adopted, it is the first thought experiment which gives the more accurate measure of marginal product of parental inculcation of religion. 

What about the religious variables which don't seem very transmissible at all?  Alex cites "born agains," drawing on the same paper.  But this interpretation again mirrors a major drawback of many interpretations of twin adoption studies, namely that they don't reconcile the cross-sectional and the time series comparisons.  Alex is walking into a simple pitfall here, as does the paper he cites.

What does this mean in practice?  Born agains (or arguably, their revival) are a relatively recent phenomenon in the United States, dating from the 1970s.  (There is a similar revival in biblical literalism, although perhaps less extreme.)  The study takes adults from the mid-1990s.  That means you will have lots of "born again" descendants who had strongly below average prospects of having had "born again" parents.  The correlation will appear very weak, but this doesn't show the variable is not transmissible going forward.  We simply don't know, at least not from this data set.

To put this final point another way, the father of Abraham was not a Muslim, and so back then the correlation was zero, but this does not show the family cannot transmit Islam in later periods of time.

I'd like to stress again that when it comes to Bryan's book, I agree with most of his points.  But on religion in particular I don't think Alex is making sound claims.

Observations about Chinese (Chinese-American?) mothers

I agree with many of Bryan Caplan's views on parenting, and Yana can attest that I have never attempted a "tiger mother" style.  Yet I think that Bryan is overreaching a bit in rejecting virtually all of Amy Chua's claims.  The simpler view — which most Americans intuitively grasp — is that some Asian parenting styles do make kids more productive, and better at school, although it is less clear they make the kids happier.  It remains the case that most people overrate how much parenting matters in a broader variety of contexts, and in that regard Bryan's work is hardly refuted.  Still, I see real evidence for a parenting effect from many (not all) Asian-American and Asian families.

1. James Flynn argues, using evidence from tests, that Chinese families boosted their children's IQs by intensive parental techniques.  Based on some very specific research, he claims the parenting was causal and the IQ boost followed.  I hardly consider this the final word, but it's more to the point than the adoption studies and the like, which don't try to measure this effect directly and don't have measures of strict Asian parenting.

2. It is obvious that some Asian parenting techniques make the children much more likely to succeed as classical musicians.  It's a big marginal effect upon whatever genetic influence there might be (and in this case the genetic influence might well be zero or very small; Chinese hardly seem genetically superior in music.)  The only question is how much longer this list can become.  What else can the parents make their kids better at, even relative to IQ?  Future engineering success?  If violin is a slam dunk, I don't see why engineering is a big stretch.

3. I suspect that Bryan and his wife do, correctly, apply the notion of "high expectations" to their children and to the benefit of those kids. 

4. Bryan, like Judith Harris, argues that the influence of parents is typically mediated through peers and peer effects.  But we should not confuse the partial and general equilibrium mechanisms here.  For any single parent, the peers may well carry the chain of influence to their child and a lot of the parenting style applied to that individual kid will appear irrelevant.  But for the culture as a whole, the peers can serve this function only because of the general influence of culture and parenting on all of the peers as a whole.  In other words, peer quality is endogenous and a single family is free-riding upon the parenting efforts of others.  That's a better model than just looking at the partial equilibrium coefficient on the parent effect and concluding that parenting doesn't matter.  This is a mistake commonly made by Harris fans.

5. As an aside, I wonder how much there is a common Chinese parenting or mothering style.  Chua's family, of course, is from the Philippines.  It is estimated that about 20 percent of the children in China are "abandoned" by their parents — mothers too — typically as the parents move to the cities to take better jobs.  When Chua writes, to what extent is she referring to Chinese immigrant parenting styles, uniquely suited to new situations, and derived from Chinese culture but distinct nonetheless?

6. There is a significant literature on Chinese immigrant parenting styles, based on lots of empirical evidence, but I don't see anyone giving it much of a close look.  Here is a simple and well-known piece, not about Asians per se, arguing that "authoritative parenting" leads to superior performance in school.  There is also evidence that the effects accumulate rather than disappear over time.  There is a lot of research here, often quite disaggregated in its questions, and it goes well beyond the twin studies and it does not by any means always yield the same answers.

7. I expect great things from Scott Sumner's children.