Month: January 2009
Be careful how you reach out:
A new study suggests that just fingering an item on a store shelf can create an attachment that makes you willing to pay more for it.
Previous studies have shown that many people begin to feel ownership of an item – that it "is theirs" – before they even buy it. But this study, conducted by researchers at Ohio State University, is the first to show "mine, mine, mine" feelings can begin in as little as 30 seconds after first touching an object.
The WashingtonWatch.com blog is having a contest.
Take any part of the stimulus bill
and write a short case for why it’s good or bad. (Recommended: search
the bill for “$” – there are more than 350 of them.) Pick anything –
from an entire government department to the smallest program. You can
even pick a non-spending provision in the bill that you think will do
good or bad.
Entries are limited to 150 words, and they will be judged on
clarity, persuasiveness, creativity, and originality. You don’t have to
be an economist – if you are, you really must avoid being boring. If it
takes a haiku or an infomercial-style pitch to make your case, do it.
Winners will receive $100. Enter in the comments section here.
The issue is here; I was sent this summary of the articles:
Race between Education and Technology is the title of a new
book by Claudia Goldin and Lawrence Katz. In a review essay, Arnold Kling
and John Merrifield hail the book for its formulation of the problem and
theoretical core, but find ideological distortions in the execution,
diagnosis, and prescriptions.
Are the most capable women and the most
capable men equally capable? Previously, Garett Jones, John
Johnson, and Catherine Hakim questioned Christina Jonung's and
Ann-Charlotte Ståhlberg's call for more women in economics. Now
Jonung and Ståhlberg respond.
John Donohue replies to Carlisle Moody and Thomas Marvell.
Zigzag: Micha Gisser, James McClure, Giray Ökten, and Gary
Santoni investigate the upward-sloping segment of Gary Becker's (1991)
market demand curve.
Blair Jenkins reviews an Econlit-based sample of articles on rent control.
What is the best barbecue in Kansas City? It was once Arthur Bryant's but is that really still the best? I mean The Best. Many people — not just me — will benefit from your correct answer to this question.
I'm in the market for a new computer since my old machine just can't grok the large datasets that I am throwing at it. I asked Paul Heaton, a very smart and productive econometrician with RAND who works with very big datasets, for his advice. He sent me the following which I thought might interest others. Your comments appreciated as well.
1. Few desktop systems accept more than 8 GB of RAM, and RAM is probably
the biggest factor affecting Stata performance. A 64-bit workstation or server architecture allows for more processors and more RAM, but these components usually cost 3-4 times as much as a
comparably performing desktop. If you want the absolute best performance
(i.e. more than 4 processor cores, 16 or 32 GB of RAM), you'll probably
need to go the workstation route. A good configuration will run you
$4K versus probably $1K for a top-end desktop.
2. I've used a top-end desktop configuration with a quad-core processor
and 8 GB of RAM to run things like NIBRS or value-added models using all
the students in New York City and gotten adequate performance, but expandability is key.
3. If you want to run Windows, you'll need a 64-bit version. I use
Vista Business, which seems to work well for me. You'll need Stata to
send you a 64-bit version and a new license; converting your Stata
license from 32- to 64-bit is cheap. You'll also want to pay to upgrade
Stata to support the appropriate number of processor cores in your new
machine (much more expensive), but this boosts performance appreciably.
4. I suggest setting up your hard drives in a RAID configuration. You
buy four identical hard drives of size X GB instead of just one and a
controller card. The controller card spreads your data across two of
the drives and makes a mirror copy of those drives on the other two;
this is done transparently so from the user's perspective it is as
though you have a single drive of size 2X GB (there are other ways of
doing RAID, but these are less relevant for your situation). There are
2 major advantages to this: 1) The hard drive is often the bottleneck,
particularly when loading large datasets; by parallelizing the
operations across four drives instead of one, your datasets load and
write a lot faster. 2) Because there is a complete copy of your data
that is maintained on-the-fly, when one of your hard drives fails,
instead of losing data or being forced into an onerous restoration of
backups, you simply see an alarm alerting you to the problem. Decent RAID cards run about $200, and disk storage is cheap, so I think
this is something everyone who does serious data analysis ought to be doing.
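The advice above boils down to two bits of arithmetic: whether your dataset fits in RAM, and what four drives buy you under the mirrored-striped (RAID 10) layout described. Here is a back-of-the-envelope sketch in Python; the dataset dimensions are hypothetical numbers for illustration, not figures from Heaton's setup.

```python
# Rough sizing checks for the hardware advice above.
# All concrete numbers below are illustrative assumptions.

def dataset_ram_gb(rows, cols, bytes_per_cell=8):
    """Approximate in-memory footprint of a flat numeric dataset.
    Stata holds the whole dataset in RAM, so this should sit
    comfortably below installed memory (leaving room for the OS)."""
    return rows * cols * bytes_per_cell / 1e9

def raid10_usable_gb(n_drives, drive_gb):
    """RAID 10 stripes data across half the drives and mirrors it
    onto the other half: usable space is half the raw capacity,
    and any single drive can fail without data loss."""
    assert n_drives % 2 == 0, "RAID 10 needs an even number of drives"
    return n_drives * drive_gb / 2

# e.g. ~1 million students x 500 variables (hypothetical sizes)
print(dataset_ram_gb(1_000_000, 500))   # 4.0 GB -> fits in 8 GB of RAM
print(raid10_usable_gb(4, 500))         # four 500 GB drives -> 1000.0 GB usable
```

The same arithmetic explains the "2X GB from four drives of X GB" claim in point 4: half the raw capacity goes to the mirror copy, which is the price of surviving a drive failure.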
Banks don't function well at low levels of capitalization, so there is a strong and understandable tendency to want to "do something." Everyone says nationalization is not intended as a long-term solution but the question is whether government ownership will succeed in building up a greater capital cushion for the banks. If the environment for banking is not favorable, it won't and banks will have to stay nationalized.
How many years of profits are needed to create the cushion of capital which is required for re-privatization? And how many years of government ownership will be needed to generate that many years of profits? Will banks owned by the government be allowed to pursue profits, rather than lending to troubled industries in the districts of influential Congressmen? Or will government just stick money in the bank and hope they have thereby created a sound enterprise?
You might take the line: "Government is bad at running bail-outs, but it sure is good at running banks," but of course that's a tough sell.
Those are the questions you should be asking. Admittedly the alternatives to nationalization don't currently look so great either.
The liquidity trap is often cited as the reason why fiscal policy is required to get us out of the downturn.
My view is this: the short-run nominal interest rate is different from what is
socially optimal, but that doesn't mean the economy is in a trap. Liquidity trap proponents have lots of good evidence for the former proposition but much less evidence for the latter notion of a true trap.
I think of liquidity trap arguments as stressing the extreme importance of a single market price, namely the relative price between cash and T-Bills. In general I am suspicious of macroeconomic arguments which place so much weight on a single price being out of whack. You can put the Austrians into this camp (the loan rate of interest is wrong) and you can put the supply-siders into this camp (the tax rates on labor or perhaps capital are too high). Even though I think the short-run interest rate is "wrong," from the point of view of social optimality, that is not a driving fact of central macroeconomic importance. This doesn't have to be a "pro market" argument: in fact one can think that many different prices and quantities have comparably important degrees of "wrongness."
Here's a short list of economists who have expressed (varying degrees of) skepticism about liquidity trap arguments: D.H. Robertson, Jacob Viner, Milton Friedman, Philip Cagan, Don Patinkin, Auerbach and Obstfeld, Robert E. Lucas, Greg Mankiw, and, I might add, Bernanke and Blinder. You can make a case for adding Franco Modigliani to the list, although his article is cryptic in some regards. Lars E. O. Svensson has many interesting papers critical of the idea of a liquidity trap as a binding constraint.
It is possible that all of these people are wrong, but they do all understand Keynes's theory of interest and they do all understand how the liquidity trap is supposed to work. Nor are they merely citing "the real balance effect" as it is usually dismissed. These economists just don't think that so much in an economy can revolve around a single incorrect price. Many or all of them believe that monetary policy can work on other prices as well and through other channels.
There is also some good evidence that maybe the Great Depression wasn't a liquidity trap either. Here is some evidence against Japan having been in a liquidity trap in the 1990s. Neither of those papers is definitive; my point is that people who are skeptical of the liquidity trap argument aren't simply being pigheaded or ideological.
The overall point is that this talk of a liquidity trap — as a true trap (and not just another screwy price) — is a speculative hypothesis, not an obvious truth.
And unless you regard "the liquidity trap" as a true trap, you needn't favor such a large fiscal stimulus.
Evolutionary biologist John Whitfield is reading Origin for the first time and writing about it, chapter by chapter.
This is Darwin year, of course: 200 years since his birth and 150 since The Origin of Species. I may end up covering a bit of Darwin myself. And no, history of thought is not always essential, but Darwin is one of the greatest authors I have read.
That's a storefront sign up next to the Eden Center, my favorite spot for Asian food in the DC area (you must visit if you haven't already been; often the best places are in the hidden corridors inside). It is, unfortunately, likely to be bad news.
No, and Hossain and Morgan explain their tests:
In this paper, we offer new evidence regarding the economic importance of QWERTY type outcomes. We use laboratory experiments to study platform competition. Experiments have several advantages in studying platform competition: the identity of the inferior platform is clearly defined; the degree to which a platform has a “head start” is controlled; and the “life cycle” of platform competition is reproducible. So far as we are aware, we are the first to study QWERTY in the lab.
We can easily summarize our results: Somehow, the market always manages to solve the QWERTY problem. In sixty iterations of dynamic platform competition, our subjects never got stuck on the inferior platform–even when it enjoyed a substantial first-mover advantage. The remainder of the paper describes in detail the experiments and the results.
This is another theory which probably should be laid to rest. I do think it can explain being stuck in an inefficient language (switching is then truly difficult), but traditional economic examples are hard to come by.
Linda Bilmes presented an interesting paper (not online) at the AEAs looking at the fiscal stimulus in light of Katrina, Iraq and the Big Dig. Here are some key grafs:
A good place to start looking for lessons is by analyzing the three biggest recent examples of heavy government spending on infrastructure: the Iraqi reconstruction effort, Hurricane Katrina reconstruction, and the Big Dig artery construction in Boston. Let me start by pointing out that all of these were plagued by a number of serious problems.
Iraqi reconstruction: [T]he Special Inspector General for Reconstruction, Stuart Bowen,…has found that the effort has been riddled with cost overruns, project delays, fraud, failed projects and wasteful expenditures…even though the first tranche of $19 billion in Iraqi reconstruction money became available in October 2003, the Defense Department did not issue the first requests for proposals for this money until 10 months later…
Hurricane Katrina: …the US has appropriated, over $100 billion in short and long term reconstruction grants, loan subsidies [etc]…GAO found that FEMA made over $1 billion–or 16% of the total in this particular category–in fraudulent payments…items like professional football tickets and Caribbean vacations.
The Big Dig: …the largest single infrastructure project in the US…many lessons on how not to run a project…officially launched in 1982, but it did not break ground until 1991, due to environmental impact statements, technical difficulties and jurisdictional squabbles…not "completed" until 2007.
Bilmes is the co-author with Joseph Stiglitz of The Three Trillion Dollar War.
Paul Krugman has some questions:
institutions that want to “get bad assets off their balance sheets” can
do that any time they like, by writing those assets down to zero – or
by selling them at whatever price they can. If we create a new
institution to take over those assets, the $700 billion question is, at what price? And I still haven’t seen anything that explains how the price will be determined.
I suspect, though I’m not certain, that policymakers are once more
coming around to the view that mortgage-backed securities are being
systematically underpriced. But do we really know this? And how are we
going to ensure that this doesn't end up being a huge giveaway to financial firms?
Here is more detail on various plans. I see it so: If the assets are undervalued by the market, buying them up is an OK deal. Presumably the price would be determined by a reverse auction, with hard-to-track asset heterogeneity introducing some arbitrariness into the resulting prices. If these assets are not undervalued by the market, and indeed they really are worth so little, our government wishes to find a not-fully-transparent way to give financial firms greater value, also known as "huge giveaway."
Right now it seems to boil down to the original TARP idea or nationalization, take your pick. You are more likely to favor nationalization if you think that governments can run things well, if you feel there is justice in government having "upside" on the deal, and if you are keen to spend the TARP money on other programs instead.