Alex Tabarrok

Nearly 30% of children in India (ages 6-14) attend private schools, and in some states and many urban regions a majority of students attend private schools. Compared to the government schools, private schools perform modestly better on measures of learning (Muralidharan and Sundararaman 2013, Tabarrok 2011) and much better on cost-efficiency. Moreover, even though the private schools are low cost and mostly serve very poor students, they also have better facilities, such as electricity, toilets, blackboards, desks, and drinking water, than the government schools (e.g. here and here).

In an op-ed Vipin Veetil and Akshaya Vijayalakshmi argue that the private schools may also reduce caste discrimination:

It’s no secret that government schools in India are of poor quality. Yet few know that they are also breeding grounds for caste-based discrimination, with lower-caste students in government schools often asked to sit separately in the classroom, insulted in front of their peers and even forced to clean toilets. This despite the fact that caste discrimination is illegal in India.

…Government-school teachers aren’t necessarily more prejudiced than their private-school counterparts. But private-school teachers find it more costly to discriminate. In a survey of over 5,000 children, academic researchers James Tooley and Pauline Dixon found that students in private schools felt more respected by their teachers than children in government schools.

Caste discrimination in the government schools is also one of the reasons why the private schools focus on teaching English. Among the Dalits, English is understood as the language of liberation, not just because it offers greater job prospects but even more because Hindi, Sanskrit and the regional languages are burdened by and interwoven with a history of Dalit oppression. As one Dalit put it, “No one knows how to curse me as well as in Tamil.”

The Demand and Supply of Sex

July 28, 2014 at 4:25 am in History, Religion, Science

Alternet: The idea that men are naturally more interested in sex than women is [so] ubiquitous that it’s difficult to imagine that people ever believed differently. And yet for most of Western history, from ancient Greece to the beginning of the nineteenth century, women were assumed to be the sex-crazed porn fiends of their day. In one ancient Greek myth, Zeus and Hera argue about whether men or women enjoy sex more. They ask the prophet Tiresias, whom Hera had once transformed into a woman, to settle the debate. He answers, “if sexual pleasure were divided into ten parts, only one part would go to the man, and nine parts to the woman.” Later, women were considered to be temptresses who inherited their treachery from Eve. Their sexual passion was seen as a sign of their inferior morality, reason and intellect, and justified tight control by husbands and fathers. Men, who were not so consumed with lust and who had superior abilities of self-control, were the gender more naturally suited to holding positions of power and influence.

Early twentieth-century physician and psychologist Havelock Ellis may have been the first to document the ideological change that had recently taken place. In his 1903 work Studies in the Psychology of Sex, he cites a laundry list of ancient and modern historical sources ranging from Europe to Greece, the Middle East to China, all of nearly the same mind about women’s greater sexual desire.

The ancient belief is consistent with the well known fact that in ancient times when a man went to a bordello the women would line up and bid for the right to sleep with him.

In other words, the ancients believed a lot of strange things at variance with the facts (which isn’t to say that the switch in belief and its timing isn’t of interest or that these kinds of beliefs no longer sway with the times). More at the link.

Now seems like an apposite time to remember that Congress intends no more than Congress smiles. As Ken Shepsle put it in his classic paper, Congress is a “They,” not an “It”:

Legislative intent is an internally inconsistent, self-contradictory expression. Therefore, it has no meaning. To claim otherwise is to entertain a myth (the existence of a Rousseauian great law giver) or commit a fallacy (the false personification of a collectivity). In either instance, it provides a very insecure foundation for statutory interpretation.

Shepsle’s point is that Arrow’s impossibility theorem shows that not only do collectives not have preferences, they can’t even be understood as if they had preferences. As I wrote earlier:

Suppose that a person is rational and that we observe their choices. After some time we will come to understand their choices in terms of their underlying preferences (assume stability–this is a thought experiment).  We will be able to say, “Ah, I see what this person wants. I understand now why they are choosing in the way that they do.  If I were them, I would choose in the same way.”

Arrow showed that when a group chooses, there are no underlying preferences to uncover–not even in theory. In one sense, the theorem is trivial. We know, or should always have known, that a group doesn’t have preferences any more than a group smiles. What Arrow showed, however, is that without invoking special cases we can’t even rationalize group choices as if leviathan had preferences.

Put differently, if we do try to rationalize a leviathan with preferences and intention, we will find that such a leviathan has the preferences and intention of a madman. Quoting Shepsle again:

…the Hart and Sacks (1958) notion that legislation should be treated as the result of “reasonable people pursuing reasonable purposes reasonably” is insufficient. Even if we do adopt this posture, even if legislators are the kinds of reasonable people Hart and Sacks envision, it is still fruitless to attribute intent to the product of their collective efforts. Individual intents, even if they are unambiguous, do not add up like vectors. That is the content of Arrow; that is the malady of majority rule….

…The courts cannot defer to something that is nonsense.

By the way, if legislative intent was nonsense in 1992 when Shepsle wrote, then today, when Congress is more divided than ever, it is nonsense on stilts.

Addendum: Zywicki and Stearns’s excellent book, Public Choice Concepts and Applications in Law, has a good discussion of the issue and some of the alternative methods of interpreting a statute. One might begin with Holmes’s statement, “We do not inquire what the legislature meant; we ask only what the statutes mean.”

Woodford on QE

July 24, 2014 at 12:47 pm in Economics

Interesting interview by David Andolfatto of Michael Woodford. Woodford is skeptical of QE.

Andolfatto

You talk about Fed purchases of risky assets. I mean, do you have in mind some loose connection of the Fed’s purchase of the mortgage-backed securities, the agency debt?

Woodford

I think that the main argument that’s been made for the desirability of the Fed asset purchases relies upon the idea that certain types of risk are going to be taken onto the Fed’s balance sheet, and the claim that taking those types of risk out of the portfolios that people in the private sector have to hold is going to make a difference for the pricing of risk in the economy. And so the whole idea that you’re concentrating certain kinds of risks on the balance sheet of the central bank, I think, is entirely the theory behind what’s going on. It’s not just an accidental effect.

And so then you have to ask: What do you think that does? And I think it’s a mistake to say, well, the central bank just takes the risk away. It doesn’t take it away. It can affect who is, in fact, going to bear the risk, because essentially it means that a public institution is taking on the risk, and that means that taxpayers as a group are going to have no choice about bearing that kind of risk. And the question is whether you think that concentrating the risks in that way is facilitating an allocation of risk that was, in fact, desirable and that the markets would have been achieving themselves through voluntary trades if financial constraints hadn’t been impeding it, or whether you’re bringing about an allocation of risk that people would have liked to trade away from if financial constraints weren’t keeping them from doing it. And you’re pushing them even further into a corner they don’t want to be in.

Moral Effects of Socialism

July 19, 2014 at 7:25 am in Books, Economics, Philosophy

Dan Ariely and co-authors have an interesting new paper looking at moral behavior, specifically cheating, among people who grew up in either East or West Germany.

From 1961 to 1989, the Berlin Wall divided one nation into two distinct political regimes. We exploited this natural experiment to investigate whether the socio-political context impacts individual honesty. Using an abstract die-rolling task, we found evidence that East Germans who were exposed to socialism cheat more than West Germans who were exposed to capitalism. We also found that cheating was more likely to occur under circumstances of plausible deniability.

…If socialism indeed promotes individual dishonesty, the specific features of this socio-political system that lead to this outcome remain to be determined. The East German socialist regime differed from the West German capitalist regime in several important ways. First, the system did not reward work based on merit, and made it difficult to accumulate wealth or pass anything on to one’s family. This may have resulted in a lack of meaning leading to demoralization (Ariely et al., 2008), and perhaps less concern for upholding standards of honesty. Furthermore, while the government claimed to exist in service of the people, it failed to provide functional public systems or economic security. Observing this moral hypocrisy in government may have eroded the value citizens placed on honesty. Finally, and perhaps most straightforwardly, the political and economic system pressured people to work around official laws and cheat to game the system. Over time, individuals may come to normalize these types of behaviors. Given these distinct possible influences, further research will be needed to understand which aspects of socialism have the strongest or most lasting impacts on morality.

It’s interesting that Ariely et al. try to explain cheating as a result of socialism. My own approach would look more to the virtue ethics of capitalism and to Montesquieu, who famously noted that

Commerce is a cure for the most destructive prejudices; for it is almost a general rule, that wherever we find agreeable manners, there commerce flourishes; and that wherever there is commerce, there we meet with agreeable manners.

See Al-Ubaydli et al. for a market priming experiment and especially McCloskey on The Bourgeois Virtues for more work consistent with this theme.

Bernanke v. Friedman

July 15, 2014 at 7:05 am in Economics

Milton Friedman argued that the Great Depression was caused by a banking collapse that reduced the money stock and decreased velocity, leading to a massive failure of aggregate demand that was not countered by the Federal Reserve. The title of his book with Anna Schwartz is apt, A Monetary History of the United States. Ben Bernanke also put the banking crisis at the center of his story of the Great Depression, but the propagation mechanism was quite different. Bernanke argued that the banking crisis led to a collapse of credit. His contribution to the Great Depression literature is also aptly titled, Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.

In an excellent paper from Boom and Bust Banking, Jeff Hummel shows that these two stories have different implications for policy. (FYI, B&BB was edited by David Beckworth and also contains excellent papers by Scott Sumner, Nicholas Rowe, Larry White and others. Full disclosure: I was the general editor.) In Friedman’s story, what is required is monetary policy, an increase in the money stock to keep nominal GDP from falling. In Bernanke’s story, what is required is actually fiscal policy (albeit fiscal policy performed by the Fed), namely emergency lending to banks to keep credit flowing. These two approaches are not mutually exclusive, and in ordinary times the differences are subtle. Under the immense pressure of the great recession, however, the differences became large and important. Instead of primarily pursuing a Friedman policy of injecting liquidity into the system, Bernanke followed his nonmonetary prescription and injected credit. Bernanke’s approach has turned the Fed into what Hummel calls a central planner of credit (e.g. here), an unprecedented change with potentially very large consequences for the future.

What brought Hummel’s paper to mind today was strong support from a surprising source: a broadside against Bernanke’s handling of the great recession from the President of the Federal Reserve Bank of Richmond, Jeffrey Lacker (writing with Renee Haltom). Lacker and Haltom don’t cite Hummel, but they support his analysis, and although they write in careful, measured tones, you don’t have to be a Straussian to recognize that it’s a direct attack on Bernanke:

When the central bank utilizes “lender of last resort” powers to allocate credit to targeted firms and markets, it encourages excessive risk-taking and contributes to financial instability. It also embroils the central bank in distributional politics and jeopardizes the independence that is critical to the central bank’s ability to ensure price stability. The lesson to be learned from the expansive use of the Fed’s emergency-lending powers in recent decades is that it threatens both financial stability and the Fed’s primary mission of ensuring monetary stability.

One thing Lacker and Haltom don’t do, however, is say how the Fed can unwind its positions. During the crisis the Fed pulled a genie out of the bottle and the genie delivered trillions to grateful borrowers. But how can the genie be put back in the bottle? The problem, in my view, is not primarily one of inflation or economics but now of politics.

The average investor in the stock market will earn less than the average stock market return–this is true even without taking into account any behavioral biases. A reasonably diversified portfolio of stocks can expect to earn 7% per year on average. Thus, it’s easy to see that the expected payoff from investing $100 and holding for 30 years is $100*(1.07)^30=$761.23. The expected payoff, however, is subject to a lot of uncertainty–even on a diversified portfolio the standard deviation is about 20% annually. Many people think that the uncertainty washes out when you buy and hold for a long period of time. Not so; that is the fallacy of time diversification. Although the average return becomes more certain with more periods, you don’t get the average return; you get the total payoff, and that becomes more uncertain with more periods.

To illustrate, I ran 100,000 simulations of a 30-year stock market investment with a 7% return and a 20% standard deviation. The mean payoff across all 100,000 runs was $759.58 (recall the theoretical mean is $761.23, so we are spot on). But now consider the following. What percentage of returns would you guess lost money, i.e. had a total payoff after 30 years of less than $100?
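For readers who want to replicate the exercise, here is a minimal sketch of this kind of simulation in Python, assuming annual returns are drawn independently from a normal distribution with a 7% mean and 20% standard deviation (the post does not specify the exact return model, so treat the details as illustrative rather than as the original code):

```python
import numpy as np

rng = np.random.default_rng(0)

n_runs, n_years = 100_000, 30
mean_ret, sd_ret = 0.07, 0.20
initial = 100.0

# Draw one annual return per year per run and compound them into a total payoff.
returns = rng.normal(mean_ret, sd_ret, size=(n_runs, n_years))
payoffs = initial * np.prod(1.0 + returns, axis=1)

theoretical_mean = initial * (1.0 + mean_ret) ** n_years  # $761.23
print(f"Theoretical mean payoff: ${theoretical_mean:,.2f}")
print(f"Simulated mean payoff:   ${payoffs.mean():,.2f}")
print(f"Median payoff:           ${np.median(payoffs):,.2f}")
print(f"Share losing money:      {(payoffs < initial).mean():.1%}")
print(f"Share below the mean:    {(payoffs < theoretical_mean).mean():.1%}")
```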

After 30 years, 8.9% of all returns lost money!!!  In terms of recent debates, (average) r>g does not mean that wealth accumulates automatically. Fortunes can be lost even when the averages are in your favor.

Perhaps even more surprisingly, what percentage of investors would you guess earned less than the average payoff of $761.23? An amazing 69.2% of investors earned less than the average. The median payoff in my simulation was only $446.85, so the median return was not 7% but 5.1%. The average investor earned less than the average return.

The point is subtle and widely misunderstood. Here’s a simple example. Suppose that the average return is 10%. If $100 is invested for two periods, the average payoff is $100*(1.1)^2=$121. But on average that is not what happens. More typically, you get, say, 0% in the first period and 20% in the second period, i.e. $100*(1.0)*(1.2)=$120. Notice that the average return is exactly the same, 10%, but the total payoff is smaller in the second and more realistic case–an application of Jensen’s inequality–so the average investor earns less than the average payoff. The difference here is only $1, but over 30 years that seemingly small difference accumulates, as the sketch below illustrates.
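Here is that compounding gap worked out over 30 years in a small, purely illustrative calculation (my own numbers, not from the post): a steady 10% every year versus alternating 0% and 20% years, two paths with the same arithmetic-average return.

```python
# Same arithmetic-average return (10% a year), different compounded outcomes:
steady = 100 * 1.10 ** 30                 # 10% every year
volatile = 100 * (1.00 * 1.20) ** 15      # 15 pairs of a 0% year and a 20% year

print(f"Steady 10%/yr for 30 years:   ${steady:,.2f}")    # about $1,744.94
print(f"Alternating 0% and 20% years: ${volatile:,.2f}")  # about $1,540.70
```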

If most investors earn less than the average, it follows immediately that a few must earn more than the average. Lady luck is a bitch; she takes from the many and gives to the few. Here is the histogram of payoffs. The right-hand tail is long. Indeed, I am only showing part of the tail, as there were payoffs as high as $25,000. Most investors earn less than the mean payoff.

[Figure: histogram of total payoffs after 30 years]

And here is a line plot showing the portfolio accumulation over time for a sample of 10,000 runs. Note two things: first, the variance of the total payoff is increasing over time; second, the total payoff is highly right (upper) tailed.

[Figure: portfolio value over time for a sample of 10,000 runs]

Addendum: There is some evidence that stock market returns are mean reverting, as makes sense if discount factors are mean reverting. Taking mean reversion into account would moderate the numbers somewhat but would not change the qualitative results. Moreover, we don’t have many independent 30-year data points, so in my view we shouldn’t put too much weight on mean reversion.

Fracking Australia

July 3, 2014 at 11:33 am in Books, Economics

As growth in China slows and Australia’s mining boom ends, Australians are asking, Can our luck last? Australia’s Lowy Institute asked me to discuss John Edwards’s new monograph Beyond the Boom. My comments and those of a number of experts can be found here. Here is one bit of interest at both antipodes:

As Jon Stewart memorably illustrated, every US president since Nixon has called for freeing the US from ‘dependence on foreign oil’ (within ten years!). Every president has failed. Fracking, however, has delivered the goods. Fracking has reduced the price of energy while generating millions of jobs and reducing net emissions of greenhouse gases. The fracking revolution has only just begun in Australia. Australia has abundant supplies of natural gas and if it creates a national market and avoids parochial calls for price controls and environmental NIMBYism it will certainly become the world’s largest exporter. While profiting from natural gas production and infrastructure investment, Australia will also help the world to move closer to greenhouse gas targets.

Disruption Big Time

July 2, 2014 at 7:25 am in Data Source, Economics

In an excellent post on the Lepore-Christensen fracas, John Hagel draws on Deloitte’s Shift Index to provide some data on disruption. Disruption has increased by a variety of metrics.

One of the metrics in our Shift Index looks at what economists call topple rate – the rate at which leaders fall out of their leadership position. In this case, we focused on the rate at which public US companies in the top quartile of return on assets performance fall out of this leadership position. Between 1965 and 2012, the topple rate increased by 40%.

OK, but the skeptic might reply that this is only about financial performance. Another more significant measure of fall from leadership position is provided by my old colleague and mentor, Dick Foster, who looked at the average lifespan of companies on the S&P 500.  In 1937, at the height of the Great Depression and certainly a time of great turmoil, a company on the S&P 500 had an average lifespan of 75 years.  By 2011, that lifespan had dropped to 18 years – a decline in lifespan of almost 75%.  At the same time that humans are significantly increasing their lifespan, large companies have been heading rapidly in the opposite direction.

Another measure of disruption is executive turnover, which has increased.

[Figure: executive turnover over time]

Some of Deloitte’s work also speaks to the implicit idea in Piketty that capital accumulation is easy. Once someone has capital, Piketty argues, that capital just grows and grows at r>g. Not so, and less so today than ever before. According to Deloitte, the return on capital is decreasing and its volatility is increasing. Here’s the return on assets by top and bottom quartile. Even in the top quartile, r is decreasing, but it’s easier than ever before to pick wrong and lose your shirt in the bottom quartile.

[Figure: return on assets by top and bottom quartile]

Lots more of interest in Hagel’s post and in Deloitte’s work.

Comparative Advantage

July 1, 2014 at 7:31 am in Economics, Education

Don Boudreaux’s Everyday Economics video on comparative advantage is one of the best introductions to comparative advantage that I know of in any format.

(The series is a product of MRU but I cannot take any credit for its creation.)

We are pleased to announce a brand new course at MRUniversity, Everyday Economics. The new course will cover some of the big ideas in economics, applied to everyday questions. The first section, premiering now and rolling out over the next several weeks, features Don Boudreaux on trade. Tyler will appear in a future section on food. You can expect more from me as well. Indeed, you may spot both Tyler and me in some cameos (à la Stan Lee) in some of Don’s videos!

Here’s the first video on trade and the hockey stick of human prosperity.

How Not to Bet

June 26, 2014 at 7:10 am in Economics, Science

Tim Harford writes of bets:

Pundits who make wagers may look grubby but at least they are accepting a cost for failure. A more subtle advantage is that betting encourages forecasts that are specific and quantifiable.

Exactly, but not all bets are well considered. Consider the following from Christopher Keating:

I have heard global warming skeptics make all sorts of statements about how the science doesn’t support claims of man-made climate change. I have found all of those statements to be empty and without any kind of supporting evidence. I have, in turn, stated that it is not possible for the skeptics to prove their claims. And, I’m willing to put my money where my mouth is.

I am announcing the start of the $10,000 Global Warming Skeptic Challenge. The rules are easy:

1. I will award $10,000 of my own money to anyone that can prove, via the scientific method, that man-made global climate change is not occurring;

…5. I am the final judge of all entries but will provide my comments on why any entry fails to prove the point.

This is a poorly constructed bet. Notice first that the preamble, “empty,” “without any kind of supporting evidence,” “prove,” is simply unscientific bluster. Scientific thinking is Bayesian. (Appropriately, I first saw Keating’s bet flagged on the blog Prior Probability.) In contrast, this talk is suggestive of someone who doesn’t weigh evidence, a point of some importance given that he writes “I am the final judge of all entries…”. Keating is really betting that no one will change his mind, and that’s not a bet that I would want to take.

Second, what counts as proof? And what does it mean to say that man-made global climate change is not occurring? How much is man-made? How fast is it occurring? What are the costs? What are the benefits? Keating’s challenge is vague and poorly worded.

For an example of a much better bet, consider the Caplan-Bauman bet on temperature change over the next 15 years. Caplan has agreed to pay Bauman $333.33 if the average temperature over the period 2015-2029 is more than .05C greater than the average temperature over 2000-2014, as measured by the National Climatic Data Center. If the average temperature increase over that time frame is less than .05C, then Bauman will pay Caplan $1000.

The Caplan-Bauman bet is quantifiable and specific, and it contains odds, as it should, since Bauman expresses much greater certainty in his belief than Caplan.
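To make the role of the odds concrete, here is a quick back-of-the-envelope calculation (my gloss on the stated stakes, not part of the original bet terms): the probability of exceeding the warming threshold at which each side’s bet would be a break-even proposition.

```python
# Bauman risks $1000 (paid if the .05C threshold is NOT exceeded) to win
# Caplan's $333.33 (paid if it is). The bet breaks even for someone whose
# probability of exceeding the threshold equals stake_bauman / total stakes.
stake_caplan, stake_bauman = 333.33, 1000.00
break_even = stake_bauman / (stake_bauman + stake_caplan)
print(f"3:1 odds break even at p(warming) = {break_even:.0%}")  # about 75%

# At the 5:1 odds offered to Keating (on my reading, Keating laying 5 to 1),
# the break-even probability rises to 5/6.
print(f"5:1 odds break even at p(warming) = {5 / 6:.0%}")       # about 83%
```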

Keating seems even more certain in his beliefs than Bauman. But how certain? Will he accept my offer of the same bet at 5:1 odds?

Does any sentence better illustrate the human condition in all its political, social and biological complexities than this sentence?

New York state lawmakers have passed a bill banning residents from taking “tiger selfies” — a rising trend on dating websites in which single men post photos of themselves posing with the ferocious felines in hopes of impressing potential mates.

Dissertations are waiting to be written.

The Anti-Nanny State

June 21, 2014 at 7:15 am in Economics, Law

A new report from the Migration Policy Institute calculates that:

The US government spends more on its immigration enforcement agencies than on all of its principal criminal federal law enforcement agencies combined. In FY 2012, spending for CBP, ICE and US-Visit reached nearly $18 billion. This amount exceeds by nearly 24% total spending by the FBI, Drug Enforcement Agency (DEA), Secret Service, US Marshals Service, and Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) which stood at $14.4 billion in FY 2012.

In other words, the Federal government spends more on preventing trade than on preventing murder, rape and theft. I call it the anti-nanny state. It’s hard to believe that this truly reflects the American public’s priorities.

[Photo: border fence]

In Ill-Conceived, Even If Competently Administered: Software Patents, Litigation, and Innovation, Shawn Miller and I recounted the logic by which software patents had gotten out of control.

The subject matter of a patent is supposed to be a process, a machine, a manufacture, a composition of matter, or a design. Patents are supposed to protect inventions, not ideas. A pharmaceutical patent, for example, protects a specific set of closely related chemical structures, but you cannot patent a particular means of curing cancer as “any means by which cancer is cured” and thereby exclude every other means of curing cancer. In theory, the same rules apply to software, but in practice the courts have allowed software patents to be much broader and much more abstract than in other areas.

…Consider U.S. Patent #5,930,474 (Dunworth, Veenstra, and Nagelkirk 1999). The patent’s primary claim is simply “A system which associates on line information with geographic areas.” The patent gives this example of what they intend to patent: “[I]f a user is interested in finding an out-of-print book, or a good price on his favorite bottle of wine, but does not want to travel outside of the Los Angeles area to acquire these goods, then the user can simply designate the Los Angeles area as a geographic location for which a topical search is to be performed” (ibid.). In any ordinary reading the patentee has a patent on an abstract idea, thus gaining the right to exclude others from using such an idea. In any other area of patent law, this type of patent would not be allowed. It is allowed for software, however, because software patents such as this one go on to detail the means of implementing such a function. Namely,

A…system comprising: a computer network wherein a plurality of computers have access to said computer network; and an organizer executing in said computer network, wherein said organizer is configured to receive search requests from any one of said plurality of computers, said organizer comprising: a database of information organized into a hierarchy of geographical areas wherein entries corresponding to each one of said hierarchy of geographical areas is further organized into topics…. (ibid.)

In other words, the means of the patent is the Internet. By merely adding some entirely nugatory terms such as computer, database, and display—nugatory because any modern method would use these devices—the patentee has turned an unpatentable idea into a patentable, and potentially very profitable, method.

The Supreme Court has today, in Alice v. CLS Bank, decisively rejected this practice:

…the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention. Stating an abstract idea “while adding the words ‘apply it’” is not enough for patent eligibility. Mayo, supra, at ___ (slip op., at 3). Nor is limiting the use of an abstract idea “‘to a particular technological environment.’” Bilski, supra, at 610–611. Stating an abstract idea while adding the words “apply it with a computer” simply combines those two steps, with the same deficient result. Thus, if a patent’s recitation of a computer amounts to a mere instruction to “implemen[t]” an abstract idea “on . . . a computer,” Mayo, supra, at ___ (slip op., at 16), that addition cannot impart patent eligibility.

I see this ruling as a big win for Mark Lemley, who has focused on the functional claiming issues of software patents, and also as a loss in prestige for the Federal Circuit. It’s evident that the Supreme Court thinks, as Dourado and I argued, that the Federal Circuit has become ideologically captured by the patent bar, and in a series of cases the Supreme Court has imposed its less parochial view and reasserted its dominance over patent law.