Month: July 2011
It’s terrible on all counts; but the most offensive thing intellectually is the incredible fallacy of claiming that higher capital requirements for banks amount to keeping resources idle…the man once revered as a demigod of finance doesn’t understand basic economics — or, more likely, he chooses not to understand what he, amazingly, is still being paid not to understand.
There is no mention of the actual literature on this topic. Here is one summary of some common views:
Hall (1993) presents evidence that from 1990 to 1992 American banks reduced their loans by approximately $150 billion, and argues that this was largely due to the introduction of the new risk-based capital guidelines. He even goes so far as to say that “To the extent that a ‘credit crunch’ has weakened economic activity since 1990, Basle-induced declines in lending may have been a major cause of this credit crunch.” Hence, it is not an overstatement to say that Basel I did have an impact on bank behavior, as it forced banks to hold higher capital ratios than would otherwise have been the case.
It is a common belief, though by no means universally held, that the implementation of Basel meant a slower U.S. recovery from the recession of the early 1990s. Here is a summary of some of the international evidence, plus there is a general literature survey in the first few pages at that link; see for instance Peek and Rosengren (the term “capital crunch” will help in Google searches). The literature on the early 1990s may not apply today, but Greenspan’s claim is not incoherent a priori. Furthermore the Greenspan piece links to an FT piece which a) is consistent with his general account, b) shows various Europeans, not all of whom are bankers, sharing the same worry for today, and c) makes it clear Greenspan is not committing the “Junker fallacy” of confusing paper holdings with real resource destructions. The negative effects come through a tax on financial intermediation. Maybe just maybe one could criticize Greenspan for not being clear enough on the mechanism, but he is still right on the comparative statics and certainly not spouting nonsense.
At the theoretical level, papers on Modigliani-Miller deviations, and the interrelation between production and finance, also make Greenspan’s argument acceptable, if not necessarily correct; one can even look to Joe Stiglitz here. Krugman admits that capital requirements lower bank risk-taking but of course that can lead to less lending and, in many future world-states, lower output. That may well be a good thing, since it lowers systemic risk and thus helps output in some world states, but it’s wrong to deny the significant possibility of a real opportunity cost.
Also, bank capital requirements are not well understood in terms of a pure Modigliani-Miller debt-equity swap. For one thing, the capital requirements favor some asset classes over others, and arguably in a way which limits expected growth. The capital requirements also involve a commitment to particular accounting standards.
On matters of policy, I do in principle favor significant increases in capital requirements for banks. But should we push to impose those tougher requirements today in such a weak economy? Perhaps Krugman would be eager but I’m not so sure. For all his worries about repeating the mistakes of 1937-8, Krugman doesn’t seem to recognize this may well be another step down that path.
Most critics didn’t like it, but here is one of the better reviews. I found it original, deeply and subtly funny, and multi-dimensional in its aspirations. Film buffs will enjoy the nods and homages to High Noon, Shaka Zulu, The Searchers, Raiders of the Lost Ark, James Bond, Ray Harryhausen, Aliens, and many other movies. There is running commentary on the Bible, the history of Spanish colonialism, contemporary U.S. foreign policy, the development of the American Western, and there is even a poke at the gold standard. Not for everyone (you might just think it’s stupid), but it far exceeded my expectations.
How should we revise structural interpretations of unemployment in light of the new gdp revisions? (For summaries, here are a few economists’ reactions to the report.) Just to review briefly, I find the most plausible structural interpretations of the recent downturn to be based in the “we thought we were wealthier than we were” mechanism, leading to excess enthusiasm, excess leverage, and an eventual series of painful contractions, both AS and AD-driven, to correct the previous mistakes. I view this hypothesis as the intersection of Fischer Black, Hyman Minsky, and Michael Mandel.
A key result of the new numbers is that we had been overestimating productivity growth during a period when it actually was feeble. That is not only consistent with this structural view but it plays right into it: the high productivity growth of 2007-2009 now turns out to be an illusion and indeed the structural story all along was suggesting we all had illusions about the ongoing rate of productivity growth. As of even a mere few days ago, some of those illusions were still up and running (are they all gone now? I doubt it.)
On one specific, it is quite possible that the new numbers diminish the relevance of the zero marginal product (ZMP) worker story. The ZMP worker story tries to match the old data, which showed a lot of layoffs and skyrocketing per hour labor productivity in the very same or immediately succeeding quarters. Those numbers, taken literally, imply that the laid off workers were either producing very little to begin with or they were producing for the more distant future, à la the Garett Jones hypothesis. The new gdp numbers will imply less of a boom in per hour labor productivity in the period when people were fired in great numbers, though I would be surprised if the final adjustments made this initially stark effect go away. BLS estimates from June 2011 still show quite a strong ZMP effect, although you can argue the final numbers for that series are not yet in. (I don’t see the relevant quarterly adjustments for per hour labor productivity in the new report, which comes from Commerce, not the BLS.) Furthermore there is plenty of evidence that the unemployed face “discrimination” when trying to find a new job. Finally, the strange and indeed relatively new countercyclicality of labor productivity also occurred in the last two recessions and it survived various rounds of data revisions. It would be premature — in the extreme — to conclude we’ve simply had normal labor market behavior in this last recession. That’s unlikely to prove the result.
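The accounting mechanics behind the ZMP story can be shown with a toy calculation. All numbers here are hypothetical, chosen only to illustrate why mass layoffs of low-product workers make measured output per hour jump:

```python
# Hypothetical illustration of the ZMP accounting effect: if laid-off
# workers were producing very little, measured output per hour rises
# mechanically when they are let go, even though nothing "improved."

def output_per_hour(output, hours):
    return output / hours

# Before layoffs: 100 workers at 40 hours each, total output 10,000 units.
pre = output_per_hour(10_000, 100 * 40)   # 2.5 units per hour

# Lay off 10 near-zero-marginal-product workers; output barely falls.
post = output_per_hour(9_950, 90 * 40)    # ≈ 2.76 units per hour

print(pre, post)
```

The downward gdp revisions shrink the numerator in the second calculation, which is why they imply a smaller measured productivity boom in the layoff quarters.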
Most generally, the ZMP hypothesis tries to rationalize an otherwise embarrassing fact for the structural hypothesis, namely high measured per hour labor productivity in recent crunch periods. If somehow that measure were diminished, that helps the structural story, though it would make ZMP less necessary as an auxiliary hypothesis, some would say fudge.
Other parts of the structural story find ready support in the revisions. Real wealth has fallen and so consumers have much less interest in wealth-elastic goods and services. This shows up most visibly in state and local government employment, which has fallen sharply since the beginning of the recession. Rightly or wrongly, consumers/voters view paying for these jobs as a luxury and so their number has been shrinking. Construction employment is another structural issue, and given the negative wealth effect, and the disruption of previously secure plans, there is no reason to expect excess labor demand in many sectors.
In the new report “profits before tax” are revised upward for each year. That further supports the idea of a whammy falling disproportionately on labor and the elimination of some very low product laborers.
Measured real rates of return remain negative, which is very much consistent with a structural story. Multi-factor productivity remains miserably low. In my view, a slow recovery was in the cards all along. Finally, you shouldn’t take any of this to deny the joint significance of AD problems; AS and AD problems have very much compounded each other.
“Rather than a case of abject failure,” the authors argue, “Rapa Nui is an unlikely story of success.” The islanders had migrated, perhaps accidentally, to a place with little water and “fundamentally unproductive” soil with “uniformly low” levels of phosphorus, an essential mineral for plant growth. To avoid the wind’s dehydrating effects, the newcomers circled their gardens with stone walls known as manavai. Today, the researchers discovered, abandoned manavai occupy about 6.4 square miles, a tenth of the island’s total surface.
More impressive still, about half of the island is covered by “lithic mulching,” in which the islanders scattered broken stone over the fields. The uneven surface creates more turbulent airflow, reducing daytime surface temperatures and warming fields at night. And shattering the rocks exposes “fresh, unweathered surfaces, thus releasing mineral nutrients held within the rock.” Only lithic mulching produced enough nutrients—just barely—to make Rapa Nui’s terrible soil cultivable. Breaking and moving vast amounts of stone, the islanders had engineered an entirely new, more productive landscape.
Mann sums up:
People have done lots of environmentally destructive things, heaven knows. But there are surprisingly few cases in which societies have permanently laid waste to their own subsistence. The history of Easter Island suggests that humans generally do have a long-term capacity to work with natural systems, even in extreme cases.
I just bought the Easter Island book. Mann’s new book, which I devoured immediately, is out soon.
This week also saw the release of the first annual report of America’s Financial Stability Oversight Council (FSOC), a regulatory body that was set up by the Dodd-Frank act to monitor systemic risks to the country’s financial system. It has little to say about the risk of a self-harming government…
Here is more.
It now costs about a billion dollars to develop a new drug, which means that many potentially beneficial drugs are lost. Economist Michele Boldrin and physician S. Joshua Swamidass explain the problem and suggest a new approach:
Every drug approval requires a massive bet—so massive that only very large companies can afford it. Too many drugs become profitable only when the expected payoff is in the billions….in this high-stakes environment it is difficult to justify developing drugs for rare diseases. They simply do not make enough money to pay for their development….How many potentially good drugs are dropped in silence every year?
Finding treatments for rare disease should concern us all. And as we look closely at genetic signatures of important diseases, we find that each common disease is composed of several rare diseases that only appear the same on the outside.
Nowhere is this truer than with cancer. Every patient’s tumor is genetically unique. That means most cancer patients have in effect a rare disease that may benefit from a drug that works for only a small number of other patients.
…We can reduce the cost of the drug companies’ bet by returning the FDA to its earlier mission of ensuring safety and leaving proof of efficacy for post-approval studies and surveillance.
Harvard Neurologist Peter Lansbury made a similar argument several years ago:
There are also scientific reasons to replace Phase 3. The reasoning behind the Phase 3 requirement — that the average efficacy of a drug is relevant to an individual patient — flies in the face of what we now know about drug responsiveness. Very few drugs are effective in all individuals. In fact, most are not effective in large portions of the population, for reasons that we are just beginning to understand.
It’s much easier to get approval for drugs that are marginally effective in, say, half the population than drugs that are very effective in a small fraction of patients. This statistical barrier discourages the pharmaceutical industry from even beginning to attack diseases, such as Parkinson’s, that are likely to have several subtypes, each of which may respond to a different drug. These drugs are the underappreciated casualties of the Phase 3 requirement; they will never be developed because the risk of failure at Phase 3 is simply too great.
Boldrin and Swamidass offer another suggestion:
In exchange for this simplification, companies would sell medications at a regulated price equal to total economic cost until proven effective, after which the FDA would allow the medications to be sold at market prices. In this way, companies would face strong incentives to conduct or fund appropriate efficacy studies. A “progressive” approval system like this would give cures for rare diseases a fighting chance and substantially reduce the risks and cost of developing safe new drugs.
Instead of price regulations I have argued for more publicly paid for efficacy studies, to be produced by the NIH and other similar institutions. Third party efficacy studies would have the added benefit of being less subject to bias.
Importantly, we already have good information on what a safety-only system would look like: the off-label market. Drugs prescribed off-label have been through FDA-required safety trials but not through FDA-approved efficacy trials for the prescribed use. The off-label market has its problems but it is vital to modern medicine because the cutting edge of treatment advances at a far faster rate than does the FDA (hence, a majority of cancer and AIDS prescriptions are off-label; see my original study and this summary with Dan Klein). In the off-label market, firms are not allowed to advertise the off-label use, which gives them an incentive, above and beyond the sales and reputation incentives, to conduct further efficacy studies. A similar approach might be adopted in a safety-only system.
It is good to see dynamism in at least one segment of the labor market, namely tanning concierges:
With his tousled brown hair and athletic frame, Anastasio resembles Tom Cruise, circa “Risky Business.” He politely introduces himself to guests when they arrive on the pool deck, 18 stories above SoHo. Guests decide whether they’d like the teen to gently tap them on the shoulder or send a text when it’s time to turn over.
“Most people like to be texted. It’s less invasive,” Anastasio said. The text comes every twenty to thirty minutes and simply says, “Turn Over.”
He earns $15 an hour, and the full story is here. For the pointer I thank Adam Frey.
1. He twice failed the entrance exam at the Polytechnique in Paris because of his weak math skills.
2. He enrolled in a mining engineering school, wrote novels, and was an art critic for a while.
3. He was self-taught in economics.
4. Walras thought he deserved a Nobel Peace Prize, though he failed to win one.
That biographical information is from Cocktail Party Economics: The Big Ideas and Scintillating Small Talk about Markets, by Eveline J. Adomait and Richard G. Maranta. I can imagine this book as a good supplement to an undergraduate economics class with a very good basic text; it is mostly basic analytics with scattered interesting features throughout the book. Here is a short interview with one of the authors.
Looking only at debt-gdp ratios misses this point, from John Hussman:
Still, it’s precisely that short average maturity that makes the debt problematic from a long-run perspective, because it can’t be inflated away easily. In the event of sustained inflation, the debt would have to be constantly refinanced at higher and higher yields. Contrary to the assertion that the U.S. can easily inflate its debts away, it is clear that sustained inflation would create enormous risks to our long-run fiscal condition by driving interest costs to an intolerable share of revenues. At that point, any shortfall in GDP growth or government revenues would result in a rapid spike in debt-to-GDP (as Greece and other peripheral European nations are experiencing now). Prior to embarking on an inflationary course, the first thing a government would want to do is dramatically lengthen the maturity of its debts.
For the pointer I thank Andrew Sweeney.
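Hussman’s rollover mechanism can be sketched with a stylized calculation. The debt stock, yields, and maturities below are illustrative assumptions, not actual U.S. fiscal data:

```python
# Hypothetical sketch of Hussman's point: with a short average maturity,
# an inflation surprise quickly reprices the whole debt stock at higher
# yields, so interest costs rise and the real burden is not eroded.

def interest_bill(debt, avg_maturity_years, old_rate, new_rate, years_elapsed):
    """Annual interest cost, assuming an even maturity ladder that rolls
    over 1/maturity of the stock each year at the new, higher yield."""
    repriced = min(1.0, years_elapsed / avg_maturity_years)
    return debt * (repriced * new_rate + (1 - repriced) * old_rate)

debt = 100.0                       # stylized debt stock
old_rate, new_rate = 0.02, 0.07    # yields before / after an inflation surprise

# After 3 years, short-maturity debt (3-year average) is fully repriced...
short = interest_bill(debt, 3, old_rate, new_rate, 3)
# ...while long-maturity debt (15-year average) still mostly pays old coupons.
long = interest_bill(debt, 15, old_rate, new_rate, 3)

print(short, long)
```

In this toy version the short-maturity interest bill more than doubles the long-maturity one after only three years, which is why Hussman says a government would want to lengthen maturities before embarking on an inflationary course.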
And the student who simplified a subject by writing about it “in Lehman’s terms” baffled Iain Woodhouse, senior lecturer in the School of Geosciences at the University of Edinburgh, until he read the phrase aloud (“layman’s terms” was intended).
Determinists argue that fault and blame have no place in criminal “justice”. Neuroscientist David Eagleman, for example, made this argument recently in The Atlantic:
The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.
While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.
Eagleman and other determinists are against punishment but they recognize that incarceration still has a role to play because the public has a right to be safe. Philosopher Saul Smilansky now pounces with a timely paper on determinism and punishment.
It is surely wrong to punish people for something that is not their fault or under their control. (Hard determinists agree with this premise.) But incarceration is a type of punishment, so under the hard determinist view justice requires that when we incarcerate criminals we must also compensate them to make up for the unjust punishment. Smilansky has a bit of a silly name for punishment with compensation: funishment.
Funishment, however, is very likely to cause a big increase in crime and that is also unjust. Smilansky concludes, therefore, that hard determinists have a problem:
[B]y its nature funishment is a practical reductio of hard determinism: it makes implementing hard determinism impossible to contemplate.
Smilansky has put hard determinists into a corner but I fear that they have a type of escape at least in practice if not in theory–it is the one used by many humanitarians in the past–punish people under the guise or belief that you are really doing them good. The inquisitors surely recognized that it was unjust to torture someone who was controlled by the devil. Nevertheless, if torture is what it takes to get the devil out, then torture is not punishment but treatment…and vice-versa. The record of the psychiatric profession and non-punishing treatment of criminals (and others) is not without blemish.
Eagleman goes to some lengths to distance himself from such conclusions. He says, for example, that
To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs.
The tension, as I see it, is that if free will is a myth then it’s not clear why we should have an ethical goal of changing people as little as possible.
From Anil Gupta, bad news:
To be sure, China’s R&D expenditure increased to 1.5% of GDP in 2010 from 1.1% in 2002, and should reach 2.5% by 2020. Its share of the world’s total R&D expenditure, 12.3% in 2010, was second only to the U.S., whose share remained steady at 34%-35%. According to the World Intellectual Property Organization, Chinese inventors filed 203,481 patent applications in 2008. That would make China the third most innovative country after Japan (502,054 filings) and the U.S. (400,769).
But more than 95% of the Chinese applications were filed domestically with the State Intellectual Property Office—and the vast majority cover “innovations” that make only tiny changes on existing designs. A better measure is to look at innovations that are recognized outside China—at patent filings or grants to China-origin inventions by the world’s leading patent offices, the U.S., the EU and Japan. On this score, China is way behind.
The most compelling evidence is the count of “triadic” patent filings or grants, where an application is filed with or patent granted by all three offices for the same innovation. According to the Organization for Economic Cooperation and Development, in 2008, the most recent year for which data are available, there were only 473 triadic patent filings from China versus 14,399 from the U.S., 14,525 from Europe, and 13,446 from Japan.
Starkly put, in 2010 China accounted for 20% of the world’s population, 9% of the world’s GDP, 12% of the world’s R&D expenditure, but only 1% of the patent filings with or patents granted by any of the leading patent offices outside China. Further, half of the China-origin patents were granted to subsidiaries of foreign multinationals.
That’s the worst news you are likely to read today. By the way: “…the allocation of government funds for R&D projects is highly politicized.”