Category: Economics

What is the value of think tanks?

So asks Daniel Drezner; read his post.  I see no need to focus on think tanks per se, but I see four critical gaps in our current understanding.  Someone (moi?…no, you) should be working on these problems:

1. Applying frontier social science to issues of disaster preparedness.  This includes how to prepare for, and respond to, avian flu, terrorist attacks, and the next Katrina.

2. A good health care plan that is practical, not too far from politically feasible, and applies competition to lower costs and improve service quality.  It must be incentive-compatible, yet at the same time it can’t be seen as heartless and simply letting people die.  That probably rules out "cut health care spending in half and have everyone eat better and exercise more," otherwise an appealing option.

3. How should we respond to the possibility that very small groups will have the ability to attack or blackmail us, using nuclear weapons or other decentralized sources of extreme power?  What does this equilibrium look like, and how can we make it better rather than worse?

4. How can Africa actually develop?  Don’t beg the question by listing the needed outputs — such as markets, democracy, or the rule of law — as the inputs of your policy recommendation.

Any institution — think tank or not — which tackles those problems has earned my respect.  And note that all four overlap to some extent.  All relate to how a centralized sphere of control should respond to a decentralized abuse of incentives, or how we can stop those decentralized abuses in the first place. 

What has happened to job growth?

Daniel Gross writes:

Mystified economists have pointed to various possible culprits: outsourcing, competition from China, high health care costs and lower work-force participation, to name a few. But there’s one force that so far has managed to avoid blame for the sluggish pace of job growth: Enron.

In 2000 and 2001, as the bull market imploded, there was a spike in accounting problems – a mix of outright fraud, earnings manipulation and more benign restatements necessitated by changes in business conditions. Clearly, investors were burned by earnings restatements at Enron and WorldCom, and at hundreds of smaller and less infamous companies. "Nobody had actually explored the real consequences of earnings management, as opposed to the financial ones," says Thomas Philippon, assistant professor of economics at New York University’s Stern School of Business.

In a recent National Bureau of Economic Research working paper, Professor Philippon and a colleague, Simi Kedia, assistant professor of finance and economics at Rutgers, argued that the widespread accounting problems for which Enron was emblematic might have helped suppress employment growth – in the affected companies, and in the industries in which the misreporting was concentrated.

Professors Philippon and Kedia examined the roster of companies that restated earnings from January 1997 to June 2002, as compiled by what is now the Government Accountability Office, and matched it up with available employment data. It was a regrettably large sample: 919 restatements by 845 public companies. About one-tenth of publicly traded companies announced at least one restatement.

Not surprisingly, companies that were misrepresenting their financial results – intentionally or inadvertently – helped juice employment growth in the late 1990’s as they added employees. "During periods of suspicious accounting, firms hire and invest excessively," the professors said. From 1997 to 1999, the restating companies added 500,000 jobs, a 25 percent increase.

When these companies restated their earnings, the growth they had reported often turned out to be an illusion. As a result, the same companies shed labor quickly. At its peak, Enron employed 20,000 people. But in the weeks after its earnings restatement in November 2001, this new-economy profit machine was suddenly revealed to be an old-fashioned money pit. Within months, the company was down to about 500 employees. The authors label Enron a "typical – if somewhat extreme – example" of a company whose employment rose and fell rapidly.

On the whole, Professors Philippon and Kedia conclude, companies that had to restate earnings in 2000 and 2001 axed anywhere from 250,000 to 600,000 jobs in 2001 and 2002. That would account for a significant chunk of the jobs lost during the period.

What’s more, restatements create industrywide uncertainty that can inhibit future hiring. When WorldCom was revealed to have fudged its earnings, it became clear that the business model for telecommunications and data services wasn’t nearly as profitable as WorldCom had made it out to be. "All of the sudden, the entire industry appears to have excess labor," Professor Kedia said. And once many of the assumptions about the industry’s business models turned out to be false, executives and investors were naturally gun-shy about hiring and expanding.

It is too early to evaluate this research, and let us not get carried away by monocausal theories, but today I felt I learned something.  Here is the full story.

UHaul Pricing and Free Drinks for Women Nite

Here is my analysis of UHaul pricing and its larger implications, not only for ‘women drink free nites’ but for many other markets.

Why is it so much more expensive to rent a UHaul van to travel from LA to Las Vegas ($454) than from Las Vegas to LA ($119)?  (More here.)  Since the direct cost is similar, the first thing an economist might think of is price discrimination.  But the rental market is highly competitive, especially once we take into account substitutes such as trains, private cars, etc., so that seems like a non-starter.  A good answer needs to recognize that UHaul operates a network with significant inter-customer externalities.

Let us suppose that as the day dawns UHaul has the optimal number of trucks at each of its locations.  At the end of the day, UHaul would like the same number of trucks at each of its locations.  But this is possible only if departures equal arrivals, and to help achieve that balance UHaul lowers the price on the low-demand Las Vegas to LA trip and raises it on the high-demand LA to Las Vegas trip.  (It’s more complicated than this because the network involves many locations, not just two, but you get the idea.)

Put differently, a customer who travels from Las Vegas to LA reduces the cost to UHaul of running its network because it lets UHaul sell an LA to Las Vegas trip.  The direct costs may be similar but the indirect costs related to running the network are very different. UHaul’s pricing strategy reflects both the direct and indirect costs.
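
To make the mechanism concrete, here is a minimal two-location sketch in Python; all the demand numbers are invented for illustration, and this is nothing like UHaul's actual model.  The firm picks the price pair that maximizes revenue subject to the constraint that trucks leaving equal trucks returning.

```python
# Toy model: linear demand on each leg, q = a - b * p, with assumed
# parameters (LA -> Las Vegas demand is much stronger).
a_la_lv, a_lv_la, b = 100.0, 40.0, 0.1

def prices_for(q):
    """Prices that sell exactly q trips on each leg (inverse demand)."""
    return (a_la_lv - q) / b, (a_lv_la - q) / b

# Flow balance: trucks out must equal trucks back, so both legs carry
# the same quantity q; search for the revenue-maximizing balanced q.
q, p_out, p_back = max(
    ((q,) + prices_for(q) for q in range(1, 40)),
    key=lambda t: t[0] * (t[1] + t[2]),  # revenue = q * (p_out + p_back)
)
print(f"balanced trips per day: {q}")
print(f"LA -> Las Vegas: ${p_out:.0f}  (high-demand leg)")
print(f"Las Vegas -> LA: ${p_back:.0f}  (low-demand leg)")
```

With these made-up numbers the high-demand leg comes out many times more expensive than the return leg, even though both trips use the same truck and the same road.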

Network economics has some similarities to platform economics.  A bar, for example, is a platform which mediates transactions (pecuniary and non-pecuniary!) between two sorts of customers, men and women.  If men have a higher demand for going to a bar with many women (LA to Las Vegas) than women have for going to a bar with many men (Las Vegas to LA), then in a competitive market the bar must set a higher price for men than for women.  In this context, far from being an example of monopoly power, differential pricing is a result of competition.

More generally, there are many examples of platform markets.  The developer of a mall has as customers shoppers and shops.  A video game console sells itself to players and programmers.  A credit card must have users and merchants.  In some places differential pricing for men and women at nightclubs is illegal.  But in a platform market such differential pricing can make both men and women better off.  Similar things can be said about practices in other platform markets which look anti-competitive at first glance but in fact are the result of competition in the context of a platform.
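
Here is a rough numerical sketch of the bar example; every parameter below is assumed, and this is a toy rather than a calibrated model.  Each side's turnout depends on the other side's turnout, men are assumed to care more about the number of women present than vice versa, and competition pushes the bar to the crowd-maximizing break-even price pair.

```python
import itertools

N, COST = 100.0, 3.0        # assumed: 100 potential patrons per side, $3 cost per patron
EXT_M, EXT_W = 0.08, 0.02   # assumed externalities: men value women's presence more

def attendance(p_men, p_women):
    """Iterate to a fixed point: each side's turnout depends on the other's."""
    men = women = N / 2
    for _ in range(200):
        # tastes uniform on [0, 10]; a patron attends if taste + externality >= price
        men = N * min(max(1 - (p_men - EXT_M * women) / 10, 0), 1)
        women = N * min(max(1 - (p_women - EXT_W * men) / 10, 0), 1)
    return men, women

best = None
grid = [i / 2 for i in range(21)]  # candidate prices from $0 to $10
for p_men, p_women in itertools.product(grid, repeat=2):
    men, women = attendance(p_men, p_women)
    profit = p_men * men + p_women * women - COST * (men + women)
    if profit >= 0 and (best is None or men + women > best[0]):
        best = (men + women, p_men, p_women)

print(f"crowd-maximizing break-even prices: men ${best[1]:.2f}, women ${best[2]:.2f}")
```

The side that generates the larger externality gets the discount, even though the bar earns zero profit overall – differential pricing as an outcome of competition, not market power.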

More on platform economics, also called two-sided markets, in Rochet and Tirole.

Bonus points to Larry White, Mark Weaver and Michael Stack for sending in answers, and double bonus points to Larry for suggesting that some of the theory could be tested by looking at drink pricing at gay bars.

The “broken window fallacy” fallacy

A loyal MR reader wrote to me, complaining about Henry Hazlitt’s Economics in One Lesson.  In particular, he noted that attacking the broken window fallacy does not much damage Keynesian economics.  I agree:

1. The broken window fallacy consists of claiming that destructive acts (say storms, hurricanes, or terrorist attacks) will improve economic welfare by occasioning repair expenditures and putting people back to work.

2. Measured GDP may rise but true real income will not.  After all, something has been destroyed.  In theory the extra spending flow could offset wage and price stickiness to such a degree that employment rises and the economy comes out ahead.  But a) this is unlikely, and b) you could get the positive effects, if indeed they are there, without breaking anything.  Better monetary and fiscal policies (for me especially the former, but perhaps not for Keynes; do also note that raising taxes stifles work and innovation, an indirect breaking of windows) would be called for.

3. Appreciation of #1 and #2 does not much damage Keynesian arguments.  Keynesian doctrine argues that, under the right circumstances, stronger aggregate demand will stimulate output.  It affirms 2b without needing to contradict 2a, as stated directly above.

I am not a Keynesian, but this is one reason why I’ve never been persuaded by Hazlitt’s critique of Keynes.  There is no a priori way to dispose of the possibility that a boost to nominal aggregate demand might increase employment; citing Say’s Law doesn’t do it either. 

Addendum: Alex points out that Keynes did, at least once, commit a version of the broken window fallacy.  Brad DeLong in turn criticizes Hazlitt. 

Housing the Poorest Hurricane Victims

Since many victims have had to travel quite a distance to obtain temporary shelter, and many will have to move further from New Orleans to obtain permanent housing within a reasonable time, these vouchers should be available to any public housing agency in the country to serve families displaced by the hurricane.  To avoid delays in getting assistance to these families, the vouchers should be allocated to housing agencies on a first-come, first-served basis, and any low-income family whose previous address was in the most affected areas should be deemed eligible.  We should not take the time to determine the condition of the family’s previous unit before granting a voucher.

Getting the poorest displaced families into permanent housing is an urgent challenge.  It requires bipartisan support so that Congress acts promptly, quick action by HUD to set up simple procedures for administering these special vouchers, and temporary staff at housing agencies in areas of heavy demand to handle the influx of applications for assistance.

Even with the best efforts of all parties, the proposed solution will not get all the low-income families displaced by Hurricane Katrina into permanent housing tomorrow.  However, it will be much faster than building new housing for them.  And it will show them that the federal government cares about their plight and is working to do what it can to help.

Essays on Cost

For those further interested in the opportunity cost question, the Library of Economics and Liberty is this month featuring L.S.E. Essays on Cost, edited by James Buchanan and George F. Thirlby and including essays by Hayek, Coase and others.

This sentence from Buchanan’s preface caught my eye:

In any general theory of choice cost must be reckoned in a utility rather than in a commodity dimension.

Buchanan’s short book Cost and Choice is also available.

The tragedy of Jonathan Kozol

Jonathan Kozol has spent a good deal of his life writing eloquently and passionately about children and the sad state of education in America.  The depths of his passion and caring are to be admired and applauded.  The tragedy is that his eloquence has often been put to ill use attacking the one reform that would really help – private schools and school choice.  Kozol’s good intentions, therefore, earn him no free pass from me.

In a recent interview he said:

[Private schools] starve the public school system of the presence of well-educated, politically effective parents to fight for equity for all kids.

Kozol’s argument can be summed up thusly:

Letting people escape over the Berlin Wall starves the East German system of the presence of well-educated, politically effective people to fight for the equity of all East Germans.

Barricading parents into the poor schools their government offers them is like barricading people into communist East Germany.  People, even well-educated, politically effective people, should not be used as tools to further some social engineering scheme.

But is the argument even true?  Kozol draws on Hirschman’s great book Exit, Voice and Loyalty, but like many who read that book, he shows no sign of understanding its subtleties.

Yes, exit and voice can be substitutes, and reducing exit may increase voice.  But more often than not, voice and exit are complements.  When you complain of delay, where is your voice more likely to be heard: at a restaurant or at the department of motor vehicles?

It’s the threat of exit that makes people listen.

Moreover, shutting down exit does not guarantee that voice will arise.  The people whose children are stuck in the worst-performing schools have neither voice nor exit – they are like the people of New Orleans who did not have the means to escape nor the political power to compel help from others.

Finally, we go to the data.  Kozol’s argument implies that places with more exit should have worse public schools.  But in fact a large body of research shows that the opposite is true.  Places with more choice – whether that choice comes from private schools, charter schools, or even choice among public schools – have better schools.  Exit and the threat of exit make educators listen.

But will Kozol listen?  Sadly, I think not, because his fundamental opposition to vouchers is not economic but aesthetic.  He says:

Vouchers elevate the lowest instincts of humanity over the most beautiful instincts.

Need I quote Adam Smith in response?

Taxes and Prices

Suppose there is an auction for a pearl.  The person with the highest demand is willing to pay $5000; the person with the next-highest demand is willing to pay $4999.  The winner must pay a tax of $1000 to the government.

With the tax, the two bidders bid until the price reaches $4000.  At $4000 the low bidder drops out (note that $3999 + $1000 tax = $4999, his maximum willingness to pay) and the high bidder wins.  The total price to the high bidder is $5000: $4000 to the seller and $1000 to the government.

Now with no tax, a price of $4000 leaves two bidders in the ring, so the price must rise higher.  In fact, the price must now rise to $5000 to get the second bidder to drop out.  The final price to the high bidder is still $5000 – the seller gets $1000 more in revenue and the government gets nothing.
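
The story can be checked with a few lines of code – a sketch of an ascending auction with a flat tax on the winner, using the numbers above:

```python
def english_auction(values, tax, step=1):
    """Ascending auction with a flat tax paid by the winner.

    Each bidder stays in only while bid + tax is within her value, so the
    winner pays one increment above the runner-up's walk-away point."""
    max_bids = sorted(v - tax for v in values)
    return max_bids[-2] + step

values = [5000, 4999]  # the two bidders' willingness to pay
for tax in (1000, 0):
    seller_gets = english_auction(values, tax)
    print(f"tax ${tax}: seller receives ${seller_gets}, "
          f"winner's total outlay ${seller_gets + tax}")
```

The winner's all-in payment is $5000 either way; the tax changes only how that $5000 is split between the seller and the government.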

The pearl example shows why: with supply fixed, cutting the tax simply raises the price the seller receives, leaving the buyer’s total payment unchanged.  If one wants to challenge the gas-tax argument, the place to do so is to argue that a temporary reduction in the tax, by leading to more profits for the oil companies, will stimulate supply enough to have a significant effect on reducing price.  Any other argument is incorrect.

Marshall and Shackle on opportunity cost

…or forget all the fancy talk of "opportunity cost."  Let’s say you ask, "How much does that apple cost?"

The correct answer is $1.00, the price (gross).

The correct answer is not "the consumer surplus on what the dollar could buy elsewhere" (a net concept).

You can still figure in information about price to get both consumer surpluses and the correct decision.

"*Opportunity* cost" is a means of saying that we don’t just stop at the money ($1.00), but rather we think in terms of a foregone good or service (perhaps a pear, etc.).

But just as "cost" was a gross term, "opportunity cost" stays a gross term; it does not become a net one.  Only the word "opportunity" has been added to "cost," so why leap from gross to net thinking?

Buchanan and others blur all this when they start talking about the value dimension of opportunity cost.  On one hand this is properly subjectivist.  But it also encourages people to move to the "net" dimension, and notions of consumer surplus, rather than focusing on the *opportunity*.

G.L.S. Shackle wrote about a "skein of imagined alternatives."  This captures the "gross" idea properly, and remains subjectivist, but it doesn’t encourage the leap into the mix of net thinking and consumer surplus, which remains a separate concept.
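
To make the gross/net distinction concrete, here is a stylized version of the kind of quiz this exchange grew out of; the specific numbers are assumed for illustration.

```python
# Assumed setup: you hold a free ticket to event A; the next-best
# alternative, event B, has a ticket price of $40 and is worth $50 to you.
price_of_B, value_of_B = 40, 50

# Gross answers stop at the foregone opportunity itself:
gross_as_money = price_of_B              # $40, the money you would have spent
gross_as_value = value_of_B              # $50, the value of the foregone event
# The net answer subtracts price from value: foregone consumer surplus.
net_surplus = value_of_B - price_of_B    # $10

print(f"gross answers: ${gross_as_money} or ${gross_as_value}")
print(f"net (consumer surplus) answer: ${net_surplus}")
```

The $40 and $50 answers stay in the gross dimension; the $10 answer has already slipped into net, consumer-surplus thinking.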

I don’t have any quarrel with Alex’s economics; as far as I can see this point is semantic.  (I’ll also admit that my gross perspective on opportunity cost is somewhat anachronistic; it is one reason why mainstream economists work directly with consumer surplus.)  What disturbs me is how few economists gave $50 or $40 as the right answer; the actual answers were close to randomly distributed.  Most Web-based sources appear confused on the net vs. gross issue, but at least they hover across the $40 and $50 options.

Why Most Published Research Findings are False

Writing in PLoS Medicine, John Ioannidis says:

There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising. It can be proven that most claimed research findings are false.

Ioannidis presents a Bayesian analysis of the problem which most people will find utterly confusing.  Here’s the idea in a diagram.

Suppose there are 1000 possible hypotheses to be tested.  There are an infinite number of false hypotheses about the world and only a finite number of true hypotheses, so we should expect that most hypotheses are false.  Let us assume that of every 1000 hypotheses, 200 are true and 800 false.

It is inevitable in a statistical study that some false hypotheses are accepted as true.  In fact, standard statistical practice ensures that 5% of false hypotheses will be accepted as true.  Thus, out of the 800 false hypotheses, 40 will be accepted as “true,” i.e., statistically significant.

It is also inevitable in a statistical study that we will fail to accept some true hypotheses.  (Yes, I do know that a proper statistician would say “fail to reject the null when the null is in fact false,” but that is ugly.)  It’s hard to say what the probability is of not finding evidence for a true hypothesis, because it depends on a variety of factors such as the sample size, but let’s say that of every 200 true hypotheses we will correctly identify 120, or 60%.  Putting this together, we find that of every 160 (120 + 40) hypotheses for which there is statistically significant evidence, only 120 – a rate of 75% – will in fact be true.

(By the way, the multiplying factors in the diagram are for those who wish to compare with Ioannidis’s notation.)
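
The diagram’s arithmetic takes only a few lines; the parameters are the ones assumed in the example above.

```python
n, base_rate = 1000, 0.2   # 200 of 1000 hypotheses are true
alpha, power = 0.05, 0.6   # 5% false-positive rate, 60% chance of detecting a true effect

true_pos = n * base_rate * power          # 200 * 0.60 = 120 true hypotheses found "significant"
false_pos = n * (1 - base_rate) * alpha   # 800 * 0.05 = 40 false hypotheses found "significant"
ppv = true_pos / (true_pos + false_pos)   # 120 / 160 = 0.75

print(f"significant findings: {true_pos + false_pos:.0f}, of which {ppv:.0%} are actually true")
```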

Ioannidis says most published research findings are false.  This is plausible in his field of medicine, where it is easy to imagine that there are more than 800 false hypotheses out of 1000.  In medicine, there is hardly any theory to exclude a hypothesis from being tested.  Want to avoid colon cancer?   Let’s see if an apple a day keeps the doctor away.  No?  What about a serving of bananas? Let’s try vitamin C and don’t forget red wine.  Studies in medicine also have notoriously small sample sizes.  Lots of studies that make the NYTimes involve fewer than 50 people – that reduces the probability that you will accept a true hypothesis and raises the probability that the typical study is false.

Economics does OK on the main factors in the diagram – there is more theory to rule out hypotheses before they are tested, and samples are often larger – but there are other effects which also reduce the probability that the typical result is true, and economics has no advantages on these – see the extension.

Sadly, things get really bad when lots of researchers are chasing the same set of hypotheses.  Indeed, the larger the number of researchers the more likely the average result is to be false!  The easiest way to see this is to note that when we have lots of researchers every true hypothesis will be found to be true but eventually so will every false hypothesis.  Thus, as the number of researchers increases, the probability that a given result is true goes to the probability in the population, in my example 200/1000 or 20 percent.
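
A quick sketch of that convergence, using the same assumed parameters as above: as the number of independent teams testing each hypothesis grows, the share of hypotheses with at least one “supporting” study that are actually true falls toward the 20% base rate.

```python
base_rate, alpha, power = 0.2, 0.05, 0.6  # same assumed parameters as the example

def share_true_among_supported(k):
    """Among hypotheses with at least one significant result from k teams,
    what fraction is actually true?"""
    p_true = base_rate * (1 - (1 - power) ** k)         # >=1 hit on a true hypothesis
    p_false = (1 - base_rate) * (1 - (1 - alpha) ** k)  # >=1 hit on a false hypothesis
    return p_true / (p_true + p_false)

for k in (1, 5, 25, 100):
    print(f"{k:>3} teams: {share_true_among_supported(k):.0%} of supported hypotheses are true")
```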

A meta-analysis will go some way to fixing the last problem, so the point is not that knowledge declines with the number of researchers, but rather that with lots of researchers every crackpot theory will have at least one scientific study that it can cite in its support.

The meta-analysis approach, however, will work well only if the results that are published reflect the results that are discovered.  But editors and referees (and authors too) like results which reject the null – i.e., they want to see a theory that is supported, not a paper that says “we tried this and this and found nothing” (which seems like an admission of failure).

Brad DeLong and Kevin Lang wrote a classic paper suggesting that one of the few times journals will accept a paper that fails to reject the null is when the evidence against the null is strong (and thus failing to reject the null is considered surprising and important).  DeLong and Lang show that this can result in a paradox.  Taken on its own, a paper which fails to reject the null provides evidence in favor of the null – i.e., against the alternative hypothesis – and so should increase the probability that a rational person assigns to the null.  But when a rational person takes into account the selection effect – the fact that papers failing to reject the null are published only when the evidence against the null is strong – the publication of a paper failing to reject the null can cause him to increase his belief in the alternative theory!
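
A stylized Bayes calculation shows how the selection effect can flip the inference.  All the numbers below are assumed, and this is far simpler than DeLong and Lang’s actual model: suppose a failure to reject is published only when an earlier (pilot) study had rejected the null, i.e., only when prior evidence against the null was strong.

```python
prior_alt = 0.5            # assumed prior belief in the alternative hypothesis
alpha, power = 0.05, 0.8   # assumed false-positive rate and power of each study

# P(pilot rejects AND the new study fails to reject), under each hypothesis:
path_alt = prior_alt * power * (1 - power)          # 0.5 * 0.80 * 0.20
path_null = (1 - prior_alt) * alpha * (1 - alpha)   # 0.5 * 0.05 * 0.95

posterior_alt = path_alt / (path_alt + path_null)
print(f"belief in the alternative after a published failure to reject: {posterior_alt:.0%}")
```

Taken alone, the failure to reject would have cut belief in the alternative to about 17%; conditioning on the fact of publication pushes it up to about 77% instead.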

What can be done about these problems?  (Some cribbed straight from Ioannidis and some my own suggestions.)

1)  In evaluating any study, try to take into account the amount of background noise.  That is, the more hypotheses being tested, and the less selection that goes into choosing them, the more likely it is that you are looking at noise.

2) Bigger samples are better.  (But note that even big samples won’t solve the problems of observational studies, which are a whole other matter.)

3) Small effects are to be distrusted.

4) Multiple sources and types of evidence are desirable.

5) Evaluate literatures not individual papers.

6)  Trust empirical papers which test other people’s theories more than empirical papers which test the author’s theory.

7)  As an editor or referee, don’t reject papers that fail to reject the null.