Month: February 2010
Someone who at least tried [to cut spending] is Rep. Paul Ryan of Wisconsin, the ranking Republican on the House Budget Committee, who recently unveiled a new edition of what he calls a "Road Map for America's Future." Its willingness to reform entitlement programs is laudable. But it keeps taxes at 19 percent of gross domestic product while raising (repeat: raising) federal spending from 21.6 percent of GDP in 2012 to more than 24 percent in the 2030s. It balances the budget, all right — in 2063.
Here is the article, interesting throughout; it mostly focuses on George Wallace.
Despite the fact that 91 percent of American households get their television via cable or satellite, huge chunks of radio spectrum are locked up in the dead technology of over-the-air television. In his Economic View column today, Richard Thaler features the work of our GMU colleague Tom Hazlett, who argues that auctioning off the spectrum to high-value users would generate at least $100 billion for the government and a trillion dollars of value for consumers. Thaler writes:
I KNOW that this proposal sounds too good to be true, but I think the opportunity is real. And unlike some gimmicks from state and local governments, like selling off proceeds from the state lottery to a private company, this doesn’t solve current problems simply by borrowing from future generations. Instead, by allowing scarce resources to be devoted to more productive uses, we can create real value for the economy.
This is from The Economist:
Recruiters are clearly becoming far more sophisticated, thanks to the new search tools that are available, says Aberdeen’s Mr Saba: “You’d think with 10% unemployment, jobs would be filled more quickly, but the focus on sourcing the right people, screening them and so on means that the time to fill has not fallen.” Mr Joerres believes that the increasing sophistication of recruiters means that firms will do less “anticipatory hiring” than in previous recoveries. Instead, firms will wait to get exactly the staff they need, when they need them.
Which earthquake killed more people today?
That's from Jim Ward and I was wondering the same. Keep in mind that Haiti has been having some heavy rains. Aux Cayes has waist-high water; what do you think that does for the spread of disease?
I haven't been to Concepción since December 1989, yet I will never forget my trip there. It was the first time I learned what was to become, for me, an important truth. If you set off to a mid-sized city in South America — especially in the Southern Cone — your chances of finding an idyllic spot are high. There may be, in a way, nothing to do there, at least not in the sense that your guidebook can report. But it will feel so fresh, so undiscovered, so representative of the vitality of everyday life, that you will at times think you have stumbled upon paradise. Everyone there will seem so apart from the world you know, and there is a sudden (and quite silly) shock at seeing how seriously they take the world they know. Plus they have superb vanilla ice cream and strawberries for dessert.
Here is the Cathedral, which was destroyed in 1939 by an earthquake.
Using a new data set on annual deaths from disasters in 57 nations from 1980 to 2002, this paper tests several hypotheses concerning natural disaster mitigation. While richer nations do not experience fewer natural disaster events than poorer nations, richer nations do suffer less death from disaster. Economic development provides implicit insurance against nature’s shocks. Democracies and nations with higher quality institutions suffer less death from natural disaster. The results are relevant for judging the incidence of a Global Warming induced increase in the count of natural disaster shocks.
I've read that the Democrats are stressing this idea more in their arguments for the health bill. Oddly, even from intellectuals, you rarely hear what is one of the strongest arguments for the bill, namely that personal genome sequencing might mean — how many years from now? — that many more people have pre-existing conditions than we currently are aware of. Alternative equilibria are that the sequencing technology won't give us much health information, that the information will stay private (don't accept that cup of coffee!), or that we should in the meantime simply wait. There's plenty to debate there but I'd like to see more discussion on the long-term future of the health insurance sector or possible lack thereof.
On related issues, Ross Douthat wants a smaller bill:
But even as a hypothetical, the more modest plan is instructive. Per the Journal, it would insure half as many people as the House and Senate bills – 15 million, all told – at a quarter of the cost. 15 million happens to be roughly the number of American citizens who don’t have insurance, aren’t already eligible for Medicaid or S-CHIP, and make less than 300 percent of the poverty line. Which suggests that you can do some of the most morally urgent work of health care reform without a mandate or price controls, and at a fraction of the current legislation’s price tag.
Jon Chait has an exasperated response. First, he is suddenly downgrading, in the grand scheme of things, the importance of the extra coverage that would result from the mini-plan. Second, and more fundamentally, I'd like to repeat, and modify, an earlier question. I understand that the mini-bill does relatively well by "almost Medicaid" patients and relatively poorly by those with pre-existing conditions.
Compare the full bill to the mini-bill. For the extra insurance coverage granted by the full bill, some of which goes to individuals with pre-existing conditions, how much are we paying per person for that coverage?
Much better (but harder) would be to see how much extra we would be paying for the coverage of each additional person, conditional on that person wanting the insurance at the price he or she would have to pay for it.
A third and related approach is to assume that consumer surplus, from the mandate/subsidy mix, is small for those individuals without pre-existing conditions. Take the extra expenditure and divide by the number of people with pre-existing conditions who now fail to get coverage. What is the cost per uninsured person with a pre-existing condition?
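The third approach is simple division. A minimal sketch with placeholder numbers — only the "quarter of the cost" ratio comes from the quoted Journal figures; the rest are hypothetical assumptions, not CBO estimates:

```python
# Approach three from the post: attribute the full bill's extra cost to the
# people with pre-existing conditions who would otherwise go uncovered.
full_bill_cost = 950e9                  # HYPOTHETICAL ten-year cost of the full bill
mini_bill_cost = 0.25 * full_bill_cost  # the mini-bill runs "a quarter of the cost"

extra_cost = full_bill_cost - mini_bill_cost

# HYPOTHETICAL: number of people with pre-existing conditions covered by the
# full bill but not the mini-bill.
extra_preexisting_covered = 5e6

cost_per_person = extra_cost / extra_preexisting_covered
print(f"Cost per newly covered person with a pre-existing condition: ${cost_per_person:,.0f}")
# → $142,500
```

Whatever the true inputs, the point is that the resulting per-person figure is the number supporters of the full bill should be prepared to defend.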
If you, as a supporter of the full bill, want to change people's minds, those are some critical numbers. For all the work that has been put into this legislation, it doesn't seem unjust to be asking for that hitherto unprovided information. The "it's too late to turn back now" argument doesn't much sway me. Nor does Chait's claim that by passing the mini-bill we would be foregoing a "transformative" moment. If the core of the full bill doesn't make sense, the entire structure won't hold up on its own.
The numbers, please.
Washington has had a lot of snow recently but until now I had not realized that this was due to hell freezing over.
I am not vouching for this, but it is worth considering as part of the saga of Austro-Chinese business cycle theory:
…the size of the Government’s debt is vastly understated. Not included in the public debt figures are the liabilities of the local governments, which the Ministry of Finance estimated at $680bn as of the end of 2008. In addition to that, a large part of the loans extended this year (estimated at $350bn) went to finance public infrastructure projects guaranteed by local governments. Furthermore, when the Chinese government bailed out its banking system in 2003, it set up Asset Management Companies that issued bonds to the banks at par for the non-performing loans that were transferred to them. These bonds, worth about $260bn, are explicitly guaranteed by the Ministry of Finance and the Central Bank and sit on the balance sheets of the big four banks. The Chinese government also explicitly guarantees $400bn worth of debt of the three “policy banks”. In total, these off-balance sheet liabilities are equal to $1.7tn, which would bring China’s public debt to GDP ratio up to 62%, a level that is comparable to the Western European average.
Of course guaranteeing a bond is not the same as owing money yourself.
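The quoted total is easy to verify by summing the four items (figures in billions of dollars, as given in the excerpt):

```python
# Off-balance-sheet items quoted in the excerpt, in billions of USD.
local_government_liabilities = 680     # Ministry of Finance estimate, end of 2008
guaranteed_infrastructure_loans = 350  # 2009 loans backed by local governments
amc_bonds = 260                        # Asset Management Company bonds from the 2003 bailout
policy_bank_debt = 400                 # guaranteed debt of the three "policy banks"

total = (local_government_liabilities + guaranteed_infrastructure_loans
         + amc_bonds + policy_bank_debt)
print(f"Total: ${total}bn")  # → Total: $1690bn, i.e. roughly the $1.7tn cited
```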
From today's NYTimes:
The Obama administration is planning to use the government’s enormous buying power to prod private companies to improve wages and benefits for millions of workers, according to White House officials and several interest groups briefed on the plan….
Because nearly one in four workers is employed by companies that have contracts with the federal government, administration officials see the plan as a way to shape social policy and lift more families into the middle class.
At a time of 10% unemployment, when real wages need to fall, this is bad business cycle policy. I am more worried, however, about the long-term consequences of creating a dual labor market in which insiders with government or government-connected jobs are highly paid and secure, while outsiders face high unemployment rates, low wages, and part-time work without a career path.
Long-term unemployment is at shockingly high levels, which in itself creates a dynamic of persistence, because the longer a worker is unemployed, the less employable he or she becomes (in part due to loss of human capital and signaling problems). Thus, getting these workers back to work is going to be hard enough as it is. Labor regulations that raise wages and make hiring and firing workers even more costly will make re-employing the long-term unemployed even more difficult.
Moreover, once an economy is in the insider-outsider equilibrium, it's very difficult to get out, because insiders fear that they will lose their privileges with a deregulated labor market, and outsiders focus their political energy not on deregulating the labor market but on becoming insiders — see Blanchard and Summers on hysteresis in unemployment and, more recently, Larry Ball here. Many European economies found themselves stuck in the insider-outsider equilibrium, and as a result unemployment levels in places like France and Italy hovered at 9% or more for decades.
Marc Flandreau writes:
This paper examines the historical record of the Austro-Hungarian monetary union, focusing on its bargaining dimension. As a result of the 1867 Compromise, Austria and Hungary shared a common currency, although they were fiscally sovereign and independent entities. By using repeated threats to quit, Hungary succeeded in obtaining more than proportional control and forcing the common central bank into a policy that was very favourable to it. Using insights from public economics, this paper explains the reasons for this outcome. Because Hungary would have been able to secure quite good conditions for itself had it broken apart, Austria had to provide its counterpart with incentives to stay on board. I conclude that the eventual split of Hungary after WWI was therefore not written on the wall in 1914, since the Austro-Hungarian monetary union was quite profitable to Hungarians.
Other gated versions you'll find here. The bottom line is that collapse of the currency union stemmed from political factors, not economics. Contra the author, I would say it was written on the wall.
I found this 1920 Economic Journal article, "The Disintegration of the Austro-Hungarian Currency," useful on the details of the transition. The different parts of the Austro-Hungarian empire moved to different currencies by imposing capital controls and by stamping domestic currency to make it worth less. That limits the bank-run problem, since moving into currency has no advantage and funds cannot easily be transferred in an advantageous manner. Once all the money is stamped, the currency has in effect been devalued.
Here is a paper on the collapse of the ruble zone, though it doesn't have much on transition dynamics. I suspect the transition is much easier in the absence of free capital movements.
There is a Peter Garber IMF Working Paper on the economics of the Austro-Hungarian dissolution — apparently not online — which I am still trying to get my hands on. Do any of you have a pdf? At that previous link you'll find other references and links as well.
Addendum: Matt Yglesias covers the former Czechoslovakia.
Kristof is correct to note:
Frankly, these are difficult issues for journalists to write about. Evidence is technical, fragmentary and conflicting, and there’s a danger of sensationalizing risks.
But he falls into these very traps when suggesting that toxins play a major role in autism. Let me pick on two sentences. Try this one:
There are genetic components to autism (identical twins are more likely to share autism than fraternal twins), but genetics explains only about one-quarter of autism cases.
Kristof doesn't note that identical twins both are autistic ninety percent or more of the time (conditional on one of the twins being autistic), yet the concordance is much lower for fraternal twins. That militates in favor of genetic explanations, although the mechanics of transmission are poorly understood. It's wrong to cite genetics as explaining one-quarter of autism cases or to imply that genetics do not explain the other three-quarters. There are recent studies which look for correlated genes across autistics and find less than overwhelming results, and perhaps this is what he has in mind. More accurately, there is a common problem with finding "simple" genetic markers for traits which are very likely or even certain to be genetic. The degree of correlation across the genetic patterns we can find should not be taken as a measure of how many autism cases — or cases of any other condition — can be explained by genetics. By the way, here is one paper with a plausible genetic model of autism.
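As a rough illustration of why the twin gap matters: Falconer's classic approximation puts heritability at about twice the difference between identical- and fraternal-twin concordance. The fraternal figure below is an assumed placeholder (the post says only that it is "much lower"), and the formula properly applies to liability correlations rather than raw concordance rates, so treat this as a sketch only:

```python
# Falconer's rough approximation: broad-sense heritability ≈ 2 * (r_MZ - r_DZ),
# applied loosely here to concordance rates rather than liability correlations.
r_mz = 0.90  # identical-twin concordance ("ninety percent or more", per the post)
r_dz = 0.30  # fraternal-twin concordance (ASSUMED placeholder; "much lower")
heritability = min(1.0, 2 * (r_mz - r_dz))  # cap at 1.0, the theoretical maximum
print(f"Implied heritability: {heritability:.2f}")  # → Implied heritability: 1.00
```

Even with generous allowances for the fraternal figure, the implied heritability is high, which is the point of the twin comparison.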
Kristof also writes: "Of children born to women who took valproic acid early in pregnancy, 11 percent were autistic."
Probably he is referring to Moore et al. (2000), "A Clinical Study of 57 Children with Fetal Anticonvulsant Syndrome." A total of four (supposedly) autistic children were observed to produce this conclusion. What happened is that some mothers took a potentially dangerous substance during pregnancy, many of their children had problems — of a variety of kinds — and some of these problems ended up resembling some features of autism, or at least were interpreted as such. It's unlikely those were four autistic children in the classic sense. The paper also gives no real information on its standard of diagnosis for autism or what it means by autistic traits. It's common that papers like this find some problems in children and simply call those children "autistic," then leap to false overall conclusions.
There's also a paper on using valproic acid to treat autism. One possibility is that the mothers taking valproic acid already were more likely to have autistic children; more likely our entire body of knowledge on valproic acid and autism doesn't offer real information.
Cross-sectional studies, spanning decades of age groups, suggest a roughly constant rate of autism, even when environmental toxins are changing considerably over those lengthy time periods. Plenty of other studies relate autism clusters successfully to non-toxin factors, such as parental education or supply-side services or standards of diagnosis.
There are likely well over 50 million autistics in the world, and most of them have not had significant exposure to the cited toxins. While there are some plausible heterogeneities within autism, it is necessary to ask whether "genes *or* toxins" is one of them, and probably it is not.
Epigenetic factors have not been ruled out in autism, but the most careful discussions recognize that the relevant epigenetic factors — if indeed any are important — are unknown and also need not fit our usual intuitions about what is harmful in terms of direct dosages. A different way to approach the question is to ask which environmental features raise the rate of mutation. That way the genetic and epigenetic explanations are at least potentially consistent.
I'm not defending the feeding of "toxins" to children, but on examination I think virtually all of the major specific claims in this Op-Ed — at least those about autism — are wrong.
Addendum: David Bernstein scores some telling points.