Results for “small group theory”
49 found

Marketing Pork

Here is a great little story by Danny Vinik from Politico’s The Agenda on how so-called marketing boards are surreptitiously turned into lobbying boards.

Industries with a large number of producers find it difficult to organize collectively because of the free rider problem. Mostly, that’s a good thing because it prevents cartels. Collective action, however, could also be used to perform research or marketing that’s good for the industry as a whole but too expensive for any small subset of producers. In theory, therefore, some type of collective action could be beneficial, and in agriculture governments have created checkoff programs that force producers to pay a tax to fund collective goods.

Checkoffs exist for dairy farmers, mushroom producers, and even popcorn processors. Critics say they violate economic freedom and distort the market; big corporate farmers, they allege, easily find ways to influence the boards and siphon the money off to push their own causes.

“In one sense, it’s a classic case of the larger producers are the more powerful political forces within these organizations,” said Dan Glickman, the Agriculture Secretary at the end of the Clinton administration who largely supports checkoff programs.

For the unhappy hog farmers, the current problem started with the 1985 Pork Law, when Congress set up the National Pork Board and required all farmers to contribute. Today, hog farmers must hand over 40 cents out of every $100 in revenue from pork sales. The board uses the money, totaling nearly $100 million a year, to conduct research and promote the pork industry, but is not allowed to lobby.

But as Adam Smith said, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” Quite so. And in this case, by creating a National Pork Board, the government is providing the meeting hall and paying for the conversation. According to the law, the money from the checkoff program isn’t supposed to go toward lobbying, but here is where the story gets interesting.

You may recall the slogan, “Pork: The Other White Meat.” The slogan hasn’t been used for years, but the National Pork Board still pays $3 million a year for the rights. Why would the Pork Board pay millions for an unused slogan? The key is who they are paying. The slogan is owned by the National Pork Producers Council. The NPPC is a lobby group, and you won’t be surprised to know that it is closely connected with the NPB (having once even shared offices).

…critics say the two groups have never been as separate as the law calls for, and now are essentially colluding through a deal that lets the Pork Board funnel money to the NPPC by assigning an absurdly inflated value to the “other white meat” slogan; the money then goes to promote the NPPC’s lobbying agenda.

A neat trick. The story is also a good object lesson in Mancur Olson’s thesis about how special interest groups grow in power over time, slowly choking off innovation as they cartelize the economy.

My thoughts on quadratic voting and politics as education

That is the new paper by Lalley and Weyl.  Here is the abstract:

While the one-person-one-vote rule often leads to the tyranny of the majority, alternatives proposed by economists have been complex and fragile. By contrast, we argue that a simple mechanism, Quadratic Voting (QV), is robustly very efficient. Voters making a binary decision purchase votes from a clearinghouse paying the square of the number of votes purchased. If individuals take the chance of a marginal vote being pivotal as given, like a market price, QV is the unique pricing rule that is always efficient. In an independent private values environment, any type-symmetric Bayes-Nash equilibrium converges towards this efficient limiting outcome as the population grows large, with inefficiency decaying as 1/N. We use approximate calculations, which match our theorems in this case, to illustrate the robustness of QV, in contrast to existing mechanisms. We discuss applications in both (near-term) commercial and (long-term) social contexts.

Eric Posner has a good summary.  I would put it this way.  Simple vote trading won’t work, because buying a single vote is too cheap and thus a liquid buyer could accumulate too much political power.  No single vote seller internalizes the threshold effect which arises when a vote buyer approaches the purchase of an operative majority.  Paying the square of the number of votes purchased internalizes this externality by an externally imposed pricing rule, as is demonstrated by the authors.  This is a new idea, which is rare in economic theory, so it should be saluted as such, especially since it is accompanied by outstanding execution.
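To make the pricing rule concrete, here is a minimal sketch of the quadratic cost schedule (my own illustrative code, not the authors’): buying v votes costs v^2, so the marginal cost of the v-th vote is 2v - 1, which rises linearly and is what blunts concentrated vote buying.

```python
# Illustrative sketch of the Quadratic Voting (QV) cost schedule.
# Under QV, buying v votes costs v^2, so each additional vote is more
# expensive; under simple vote trading, every vote costs the same.

def qv_cost(votes: int) -> int:
    """Total payment to the clearinghouse for `votes` votes under QV."""
    return votes ** 2

def qv_marginal_cost(votes: int) -> int:
    """Cost of the last vote: v^2 - (v - 1)^2 = 2v - 1."""
    return qv_cost(votes) - qv_cost(votes - 1)

if __name__ == "__main__":
    for v in (1, 2, 5, 10, 100):
        print(f"{v:>3} votes: total cost {qv_cost(v):>6}, "
              f"marginal cost of last vote {qv_marginal_cost(v):>4}")
    # A voter who takes the probability p of being pivotal as given buys votes
    # until the marginal cost (roughly 2v) equals the marginal benefit (p times
    # the voter's valuation), so votes purchased scale with preference intensity.
```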

The authors give gay marriage as an example where a minority group with more intense preferences — to allow it — could buy up the votes to make it happen, paying quadratic prices along the way.

My reservation about this and other voting schemes (such as demand revelation mechanisms) is that our notions of formal efficiency are too narrow to make good judgments about political processes through social choice theory.  The actual goal is not to take current preferences and translate them into the right outcomes in some Coasean or Arrovian sense.  Rather the goal is to encourage better and more reasonable preferences and also to shape a durable consensus for future belief in the polity.

(It is interesting to read the authors’ criticisms of Vickrey-Clarke-Groves mechanisms on p.30, which are real but I do not think represent the most significant problems of those mechanisms, namely that they perform poorly on generating enough social consensus for broadly democratic outcomes to proceed and to become accepted by most citizens.  One neat but also repugnant feature of democratic elections is how they can serve as forums for deciding, through the readily grasped medium of one vs. another personae, which social values will be elevated and which lowered.  “Who won?” and “why did he win?” have to be fairly simple for this to be accomplished.)

I would gladly have gay marriage legal throughout the United States.  But overall, like David Hume, I am more fearful of the intense preferences of minorities than not.  I do not wish to encourage such preferences, all things considered.  If minority groups know they have the possibility of buying up votes as a path to power, paying the quadratic price along the way, we are sending intense preference groups a message that they have a new way forward.  In the longer run I fear that will fray democracy by strengthening the hand of such groups, and boosting their recruiting and fundraising.  Was there any chance the authors would use the anti-abortion movement as their opening example?

If we look at the highly successful democracies of the Nordic countries, I see subtle social mechanisms which discourage extremism and encourage conformity.  The United States has more extremism, and more intense minority preferences, and arguably that makes us more innovative more generally and may even make us more innovative politically in a good way.  (Consider say environmentalism or the earlier and more correct versions of supply-side economics, both innovations with small starts.)  But extremism makes us more innovative in bad ways too, and I would not wish to inject more American nutty extremism into Nordic politics.  Perhaps the resulting innovativeness is worthwhile only in a small number of fairly large countries which can introduce new ideas using increasing returns to scale?

By elevating persuasion over trading in politics (at some margins, at least), we encourage centrist and majoritarian groups.  We encourage groups which think they can persuade others to accept their points of view.  This may not work well in every society but it does seem to work well in many.  It may require some sense of persuadability, rather than all voting being based on ethnic politics, as it would have been in say a democratic Singapore in the early years of that country.

In any case the relevant question is what kinds of preference formation, and which kinds of groups, we should allow voting mechanisms to encourage.  Think of it as “politics as education.”  When it comes to that question, I don’t yet know if quadratic voting is a good idea, but I don’t see any particular reason why it should be.

Addendum: On Twitter Glenn Weyl cites this paper, with Posner, which discusses some of these issues more.

What does an economist at Facebook do?

Michael Bailey, who is an economist at Facebook, reports on Quora:

I currently (Feb 2014) manage the economics research group on the Core Data Science team. We are a small group of engineer researchers (all PhDs) who study economics, business, and operations problems. As Eric Mayefsky mentioned, there are various folks with formal economics training spread across the company, usually in quantitative or product management roles.

The economics research group focuses on four research areas:

Core Economics – modeling supply and demand, operations research, pricing, forecasting, macroeconomics, econometrics, structural modeling.

Market Design – ad auctions, algorithmic game theory, mechanism design, simulation modeling, crowdsourcing.

Ads and Monetization – ads product and frontend research, advertiser experimentation, social advertising, new products and data, advertising effectiveness, marketing.

Behavioral Economics – user and advertiser behavior, economic networks, incentives, externalities, and decision making under risk and uncertainty.

I think a more interesting question is “what *could* an economist at Facebook do?” because there is a LOT of opportunity. There are incredibly important problems that only people who think carefully about causal analysis and model selection could tackle.  Facebook’s engineer to economist ratio is enormous. Software engineers are great at typical machine learning problems (given a set of parameters and data, make a prediction), but notoriously bad at answering questions out of sample or for which there’s no data. Economists spend a lot of time with observational data since we often don’t have the luxury of running experiments and we’ve honed our tools and techniques for that environment (instrumental variables for example). The most important strategic and business questions often rely on counterfactuals which require some sort of model (structural or otherwise) and that is where the economists step in.

tl;dr economists at Facebook compute counterfactuals.
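As a concrete illustration of the counterfactual toolkit Bailey mentions, here is a minimal two-stage least squares sketch; the variables, numbers, and setting are entirely made up and are not Facebook’s data or code.

```python
# Hypothetical instrumental-variables (2SLS) sketch: estimate the effect of ad
# exposure on purchases when exposure is endogenous, using randomized
# eligibility as the instrument. All names and data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000
eligible = rng.integers(0, 2, n)                                   # instrument: random eligibility
taste = rng.normal(size=n)                                         # unobserved confounder
exposure = 0.6 * eligible + 0.5 * taste + rng.normal(size=n)       # endogenous regressor
purchases = 2.0 * exposure + 1.5 * taste + rng.normal(size=n)      # outcome; true effect is 2.0

# Naive OLS is biased upward because taste drives both exposure and purchases.
ols = sm.OLS(purchases, sm.add_constant(exposure)).fit()

# Manual 2SLS: predict exposure from the instrument, then regress the outcome
# on the predicted exposure (point estimate only; proper SEs need more care).
first = sm.OLS(exposure, sm.add_constant(eligible)).fit()
second = sm.OLS(purchases, sm.add_constant(first.fittedvalues)).fit()

print("OLS estimate: ", ols.params[1])      # noticeably above 2.0
print("2SLS estimate:", second.params[1])   # close to 2.0
```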

How well does a minimum wage boost target the poor?

There has been a recent kerfuffle over the Sabia and Burkhauser paper (ungated here) suggesting that minimum wage increases do not very much help the American poor.  Sabia and Burkhauser report facts such as this:

Only 11.3% of workers who will gain from an increase in the federal minimum wage to $9.50 per hour live in poor households…Of those who will gain, 63.2% are second or third earners living in households with incomes three times the poverty line, well above $50,233, the income of the median household in 2007.

That’s what I call not very well targeted toward helping the poor.  To the best of my knowledge, these numbers have not been refuted or even questioned.

There has been a significant campaign lately to elevate this Arindrajit Dube piece (pdf) into a rebuttal of Sabia and Burkhauser.  I’ve now read through it, and while it is pretty dense, I don’t see that it supplies any such effective rebuttal (it is, however, a valuable paper and survey in its own right).

Here is an excerpt from the Dube paper:

An additional contribution of the paper is to apply the recentered influence function (RIF) regression approach of Firpo, Fortin and Lemieux (2009) to estimate unconditional quantile partial effects (UQPEs) of minimum wages on the equivalized family income distribution.
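For readers unfamiliar with the technique, here is a minimal sketch of a RIF regression for one quantile, with simulated data and made-up variable names; it only illustrates the mechanics, not Dube’s actual specification.

```python
# Minimal sketch of a RIF (recentered influence function) regression for an
# unconditional quantile partial effect, in the spirit of Firpo, Fortin and
# Lemieux (2009). Data and variable names are simulated for illustration only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 5_000
min_wage = rng.normal(7.25, 1.0, n)                              # hypothetical state minimum wage
income = np.exp(3.0 + 0.05 * min_wage + rng.normal(0, 0.6, n))   # equivalized family income

tau = 0.10                                                       # quantile of interest
q_tau = np.quantile(income, tau)
f_q = gaussian_kde(income)(q_tau)[0]                             # density of income at that quantile

# RIF for the tau-th quantile: q_tau + (tau - 1{y <= q_tau}) / f(q_tau).
rif = q_tau + (tau - (income <= q_tau)) / f_q

# Regressing the RIF on covariates gives the unconditional quantile partial effect.
uqpe = sm.OLS(rif, sm.add_constant(min_wage)).fit()
print(uqpe.params[1])
```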

Dube also writes:

The elasticity of the poverty rate with respect to the minimum wage ranges between -0.12 and -0.37 across specifications with alternative forms of time-varying controls and lagged effects; most of these estimates are statistically significant at conventional levels.
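As a back-of-the-envelope reading of those numbers (my arithmetic, not Dube’s), an elasticity in that range implies that a 10 percent minimum wage increase is associated with roughly a 1 to 4 percent proportional reduction in the poverty rate:

```python
# Back-of-the-envelope interpretation of the reported elasticities (illustrative only).
baseline_poverty_rate = 0.15      # hypothetical starting poverty rate
wage_increase = 0.10              # a 10% minimum wage hike

for elasticity in (-0.12, -0.37):
    new_rate = baseline_poverty_rate * (1 + elasticity * wage_increase)
    print(f"elasticity {elasticity:+.2f}: poverty rate {baseline_poverty_rate:.1%} -> {new_rate:.2%}")
# A 15% poverty rate falls to roughly 14.8% at -0.12 and roughly 14.4% at -0.37.
```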

Dube in fact counts up twelve papers on the side of “minimum wage hikes can make a reasonably-sized dent in poverty.”

Now, I don’t intend this as any kind of snide, anti-theory, or anti-technique comment, but when there is a clash between simple, validated observations and complicated regressions, no matter how state of the art the latter may be, I don’t always side with the regressions.

One interpretation of the Dube results is:

a) although a minimum wage hike applies only to some members of a community, its morale or network effects spread its benefits much more widely, or,

b) through some kind of chain link effect, a minimum wage hike pushes up the entire distribution of wages for lower-income workers

Alternatively, I would try

c) the public choice critique of econometrics is correct, these minimum wage hikes are all endogenous to complex factors, and no one has a properly specified model.  We are seeing correlations rather than causation, despite all attempts to adjust for confounding variables.

So far I am voting for c).  And there is a very simple story to tell here, namely that states which are good at fighting poverty, through whatever means, also tend to have higher minimum wages for political economy reasons.  It seems unlikely that controls are going to pick up that effect fully.

Or try another model, more tongue in cheek but instructive nonetheless.  If government is quite benevolent and omniscient, and has always done exactly the right thing in the past, we will see in the data that the minimum wage hikes of the past are at least somewhat effective in fighting poverty.  At the same time, the remaining options on possible minimum wage hikes will not help at all.

Dube’s paper, econometrically speaking, is a clear advance over Sabia and Burkhauser.  But Dube pays little heed to integrating econometric results with common sense facts and observations about the economy.  As Bryan Caplan has stated, the knowledge and judicious invocation of simple facts about the economy is one of the most underrated skills in professional economics.

I also get a bit nervous when the number of studies on one side of a question is counted and weighed up against common facts.  Some of these pieces are simply measuring the same correlation in (somewhat) differing ways, and the number of them says more about the publication process than anything else.  These pieces also are not all in what I would call great journals.  Maybe that is an unfair metric of judgment — I am writing this on a blog, after all.   Nonetheless I looked at the list of cited sources and pulled out the two clumps with what appeared to be the highest academic pedigree, in terms of both economist and outlet.

The first clump is a group of papers by David Neumark, with co-authors.  I find that Neumark does not himself think that minimum wage hikes do much if anything to help poverty, and he has a good claim at being the world’s number one expert on the economics of minimum wages.  In fairness to Dube, he does have some good (although I would not say decisive) criticisms of one of Neumark’s papers pushing this line.

The second source is a paper by Autor, Manning, and Smith, an NBER working paper.  They write “…the implied effect of the minimum wage on the actual wage distribution is smaller than the effect of the minimum wage on the measured wage distribution.”

Of course that hardly settles it.

You might call this one a draw, but then we return to the question of where the burden of proof lies.  I’m still stuck on, to repeat the above quotation, this:

Only 11.3% of workers who will gain from an increase in the federal minimum wage to $9.50 per hour live in poor households…Of those who will gain, 63.2% are second or third earners living in households with incomes three times the poverty line, well above $50,233, the income of the median household in 2007.

From where I stand, that hasn’t yet been knocked down.

The nature of the Medicaid cost problem

Harold Pollack writes:

The bottom 72 percent of Illinois Medicaid recipients account for 10 percent of total program spending. Average annual expenditures in this group were about $564, virtually invisible on the chart. We can’t save much money through any incentive system aimed at the typical Medicaid recipient. We spend too little on the bottom 80 percent to get much back from that. We probably spend too little on most of these people, anyway. For the bulk of Medicaid beneficiaries, cost control is less important than improved prevention, health maintenance and access to basic medical and dental services.

The real financial action unfolds on the right side of the graph, where expenditures are concentrated within a small and incredibly complicated patient group. The top 3.2 percent of recipients account for half of total Medicaid spending, with average expenditures exceeding $30,000 annually.

Many of these men and women face life-ending or life-threatening illnesses, as well as cognitive or psychiatric limitations. These patients cannot cover co-payments or assume financial risk. In theory, one might impose patient cost-sharing with some complicated risk-adjustment system. In practice, that is far beyond current technologies and administrative capabilities. Even if such a system were available, we couldn’t push the burden of medical case management onto these patients or their families.

Very much worth a ponder, and there is more in the post.

Raising Rivals’ Costs and Reform in the Public Interest

How can we achieve reform in the public interest when the public is rationally ignorant and unorganized while the special interests are informed, organized and well funded?  Matt Yglesias draws some interesting lessons and hope (!) from my paper on The Separation of Commercial and Investment Banking: The Morgans vs. the Rockefellers (pdf).

Yglesias offers a brief summary of the paper:

The basic story is that the Depression led to a lot of public outrage about the financial system and the outrage was—as outrage tends to be—a little bit inchoate and not really focused on the fine-grained details of public policy. Meanwhile, the Rockefeller family and the Morgan family had some longstanding business conflicts between their respective empires. And the Glass-Steagall bill was essentially an effort by the Rockefellers to channel that inchoate public outrage in a direction that would harm the Morgans:

More than anyone else, Winthrop Aldrich, representative of the Rockefeller banking interests, was responsible for the separation of commercial and investment banking. With the help of other well-connected anti-Morgan bankers like W. Averell Harriman, Aldrich drove the separation of commercial and investment banking through Congress. Although separation raised the costs of banking to the Rockefeller group, separation hurt the House of Morgan disproportionately and gave the Rockefeller group a decisive advantage in their battle with the Morgans.

He then draws an interesting conclusion:

Tabarrok notes that when this kind of regulatory strategy is pursued in a given industry “the industry as a whole will shrink” even while one firm gains an advantage over its rivals. And here we have actually an answer to a question that’s troubled me for years: How, given political realities, can the financial sector ever be brought to heel?… It shows a way that smart and savvy would-be regulators can find ways to undermine sector-level political solidarity. Not just in ways that favor one firm against another (which would be pointless) but even in ways that shrink the sector as a whole.

Here’s the big picture. Under certain conditions, free markets channel self-interest towards the social good – that is the meaning of the invisible hand theorem. Unfortunately, there is no invisible hand theorem for politics. There are institutions, such as democracy, checks and balances and an independent judiciary, which help to channel political self-interest if not to the public good then at least away from the public evil. Even given the right macro institutions, however, breaking the iron triangle of politics is difficult. Industry self-interest and the public interest will typically align only accidentally. Universities are not less self-interested than any other actors but support for basic research is (arguably) in the public interest. The usual situation, however, is that industry self-interest pushes well beyond the point of alignment with the public interest. At current spending levels, lobbying by defense firms does not benefit the public even if national defense is a public good.

Yglesias is interested in the most difficult case, when the public interest favors not a larger but a smaller industry. Will industry self-interest ever align with a smaller industry? Rarely, but if public anger against an industry is high then some industry participants may see that a smaller industry is consistent with their self-interest if their share of the industry grows enough as the industry shrinks–a bigger share of a smaller pie. This is the theory of raising rivals’ costs that I argue led to the Glass-Steagall Act. Another example of raising rivals’ costs is firms like Costco, which already pay high wages, advocating for increases in the minimum wage.

Could we apply this strategy to the provision of other public goods? Here’s an idea. The best political strategy to combat global climate change may be to bring the cleaner parts of the energy industry into a coalition with environmentalists to support a carbon tax. That means bringing the environmentalists together with the nuclear, hydro-electric and fracking parts of the energy industry in a play to raise the relative costs of coal and foreign oil. Could this happen? It is unlikely but not inconceivable. As Yglesias says, it would take “a smart and savvy” regulator and, I would add, a public-interested regulator (a small intersection, but let’s be charitable and say not zero) to bring the coalition together. You can see why I am less optimistic than Yglesias that the theory can be used to support the public interest, but no one said that smart, savvy, public-interested regulators would have it easy.

The Man of System

One sometimes hears arguments for busing or against private schools that say we need to prevent the best kids from leaving in order to benefit their less advantaged peers. I find such arguments distasteful. People should not be treated as means. I must confess, therefore, that I took some pleasure at the findings of a recent paper by Carrell, Sacerdote, and West:

We take cohorts of entering freshmen at the United States Air Force Academy and assign half to peer groups designed to maximize the academic performance of the lowest ability students. Our assignment algorithm uses nonlinear peer effects estimates from the historical pre-treatment data, in which students were randomly assigned to peer groups. We find a negative and significant treatment effect for the students we intended to help. We provide evidence that within our “optimally” designed peer groups, students avoided the peers with whom we intended them to interact and instead formed more homogeneous sub-groups. These results illustrate how policies that manipulate peer groups for a desired social outcome can be confounded by changes in the endogenous patterns of social interactions within the group.

I was reminded of Adam Smith’s discussion of exactly this issue in The Theory of Moral Sentiments:

The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.

Do note that this discussion is not a critique of the paper which is very well done.

Cousin Marriage and Democracy

In the United States consanguineous marriage (marriage between close relatives, often cousins) is frowned upon and in many states banned, but it is common elsewhere in the world. Approximately 0.2% of all marriages are consanguineous in the United States, but in India 26.6% of marriages are consanguineous, in Saudi Arabia the figure is 38.4%, and in Niger, Pakistan and Sudan a majority of marriages are consanguineous. Cousin marriage used to be more common in the West and was particularly common among royal families, which gives some hints as to why it may sometimes be useful. Among families with titles or estates, cousin marriage will tend to keep the wealth intact–literally within the family–whereas wealth becomes more dilute more quickly with outside marriage. Cousin marriage may also increase cooperation within the extended family and help to fight off parasites.

A recent paper finds that consanguinity is strongly negatively correlated with democracy:

How might consanguinity affect democracy? Cousin marriages create extended families that are much more closely related than is the case where such marriages are not practiced. To illustrate, if a man’s daughter marries his brother’s son, the latter is then not only his nephew but also his son-in-law, and any children born of that union are more genetically similar to the two grandfathers than would be the case with non-consanguineous marriages. Following the principles of kin selection (Hamilton, 1964) and genetic similarity theory (Rushton, 1989, 2005), the high level of genetic similarity creates extended families with exceptionally close bonds. Kurtz succinctly illustrates this idea in his description of Middle Eastern educational practices:

If, for example, a child shows a special aptitude in school, his siblings might willingly sacrifice their personal chances for advancement simply to support his education. Yet once that child becomes a professional, his income will help to support his siblings, while his prestige will enhance their marriage prospects. (Kurtz, 2002, p. 37).

Such kin groupings may be extremely nepotistic and distrusting of non-family members in the larger society. In this context, non-democratic regimes emerge as a consequence of individuals turning to reliable kinship groupings for support rather than to the state or the free market. It has been found, for example, that societies having high levels of familism tend to have low levels of generalized trust and civic engagement (Realo, Allik, & Greenfield, 2008), two important correlates of democracy. Moreover, to people in closely related kin groups, individualism and the recognition of individual rights, which are part of the cultural idiom of democracy, are perceived as strange and counterintuitive ideological abstractions (Sailer, 2004).

By the way, cousin marriage results in an elevated risk of birth defects, but on the same order as a 40-year-old woman having children as opposed to a 30-year-old. In other words, the risks are small relative to other accepted risks. Results do get worse when cousin marriage is prevalent over many generations.

Hat tip to Chris Blattman and Joshua Keating. FYI, Steve Sailer wrote an interesting piece on this issue.

Different framings when people agree

Let’s take two cases, namely higher infrastructure spending for the United States today and looser monetary policy for the eurozone.  I favor both, but often I am left discomforted by the endorsements I see, in part because I wish to see those issues framed differently.

On infrastructure spending, I prefer to start with a frustration with current and recent infrastructure spending.  It doesn’t seem very well allocated.  It takes too long.  We just spent a huge chunk through ARRA and couldn’t even clear up the backlogs at LaGuardia and Kennedy airports, the major gateways to America’s #1 city.  We don’t seem able to build up nuclear power as significant protection against climate change.  High-speed rail doesn’t seem like a good investment in the places where it is going through.

One can favor more infrastructure without thinking that “the point” is simply to demand and then get more spending.  “The point,” in my view, is to improve the quality of our decision-making and our processes of implementation.  If it were one or the other, I would rather improve the long-run quality than get the extra $$ today.  So that is the issue people should talk and write about more often, and it seems odd to me to bring up the current $$ issue without insisting on the broader and more important point about massive institutional failure.  It’s almost as if the writers don’t want to weaken their case for the extra $$ to be spent.

Alternatively, I would put it this way: I would like to be able to favor more infrastructure spending than I do (which is still to favor an upgrade).

(I also get nervous when I read 2013 claims that infrastructure spending will significantly boost employment.  I doubt if it will make any more than a very small difference in long-term unemployment, the core of our remaining employment problem.)

Ultimately, I think that these differences in framing are more important than any agreement over the conclusion, although of course both should be reported.

On eurozone monetary policy, I prefer to start by understanding the roots of poor ECB policy.  I don’t ascribe it to bad macroeconomic theory, for the most part, although it is never hard to find examples of bad macroeconomic theorizing, including in the policy community and in speeches.

I ascribe it to the desires of European voters, most of all in the wealthier northern countries.  Very often they have protected professional and service sector jobs and a privileged insider status, for both private sector and public sector reasons.  Four to six percent inflation, to them, means something close to a four to six percent real wage cut.  They won’t be able to renegotiate their way back to the previous real wage because deep down they sense — correctly — that today we live in a different world.   So they hate inflation and prefer to hold on to their insider rents.

So much of eurozone economic policy, and indeed the entire underlying structure of EU interest groups, is based on the desire to protect inside workers from possible real wage cuts.  It therefore should come as no surprise that those same forces have such a stranglehold over monetary policy.  A lot of creditor financial institutions don’t like inflation either.  Nor do old people like inflation, in part because not all of them understand indexing and in part because indexing may be imperfect for the portfolio decisions of the elderly.  The elderly are a major swing voting bloc in many countries.  In the past I have referred to “gerontocratic deflation.”

Now I don’t mind people fulminating against bad (read: tight) eurozone monetary policy per se.  But in my view it misses or at least buries the lede.  The real story here is that a “dysfunctional to begin with” set of EU interest groups have now, due to changing circumstances, become much more dysfunctional.  The core lesson here, in my view, is that governments devoted so obsessively to rent protection won’t be able to make a lot of required tough decisions.  And yes, I become frustrated when I see the whole mess somehow blamed on Austrian economics, the Austerians, or related ideas.  At best that is superficial and at worst misleading or downright false.  It’s mostly a way to score debating points, at the expense of a fuller picture of what is going on.

I don’t like the meme “it’s the xxxx, dummy,” but I’ll try it in modified form: “it’s the interest groups, mein Freund.”

So there we have a story: “entrenched EU interest groups — including labor and the elderly — hinder easy money.”  Much of the left doesn’t like to stress the former factor and much of the right doesn’t like to stress or even admit the relevance of easier money, and so you have an under-reported story.

The bottom line is this: I am happy to read that there is a “sensible middle” position on both infrastructure and monetary policy.  I am happy to hold some version of that position.  But I am unhappy when that broom is used to sweep some very important underlying issues under the carpet.  The insistence on a sensible middle position, while true, is very often a cloak for partisan reframing of the issue itself and a somewhat Orwellian forgetting of what is really going on.  If we could get the underlying issues right, better policy would have a greater chance of coming to pass.  And we would understand the world better.  It also would be harder to score points in written or televised debate.

How to think about refugee policy

Dave Bieler, a loyal MR reader, asks:

I see that you've provided some commentary on Marginal Revolution about refugee situations, but I'm curious to know what you think about refugee policies – i.e. what is the role of government? What is the role of private institutions? How can different types of institutions and organizations improve or make worse various situations? Do you have any thoughts or links to articles or books? I think it would make for an interesting blog post!

This question may be more relevant soon, although Muslim refugees from the Middle East do not have the best chances of getting into America.  I have read that one small town in Sweden has taken in more Iraqi refugees than has the entire United States.  Here is Wikipedia on refugees.  I hold a few views:

1. Refugees are deserving of migration toleration when possible, but they are not more deserving than equally destitute non-refugees.

2. Refugees nonetheless capture the imagination of the public to some extent, albeit for a very limited period of time.  Their beleaguered status provides a useful means of framing, to boost migration for humanitarian reasons.  When it comes to private institutions, refugee issues may be a useful way of raising funds, again for humanitarian aid, although again refugees should not be privileged per se, relative to other needy victims.

3. Legal treatment of refugees is inevitably arbitrary and unfair.  There is not and will not be a clear set of rational standards for who gets in and who doesn't.  There are better and worse standards at the extreme points, but don't expect this to ever get rigorous, not even at the level of ideal theory.

4. There always exists some pool of refugees who will help the migration-accepting country, even if you do not believe that about all pools of refugees.  Let's take in some Egyptian Copts, who possibly are in danger now.  Some groups of African migrants have done quite well in the United States and we can take in more oppressed women from north Africa.  In other words, "immigration skepticism" may redirect the direction of refugee acceptance, but it need not discriminate against the idea of taking in refugees.

5. Optimal refugee policy is most of all an exercise in public relations, as ruled by the idea of the optimal extraction of sympathy.  Explicit sympathy from the public cannot be expected to last very long.  In the best case scenario, sympathy for the refugees is replaced by fruitful indifference, so as to avoid "refugee fatigue."

See my earlier remarks on sovereignty.  Here is an argument against admitting refugees; I don't agree with it.

Zero marginal product workers

Matt Yglesias suggests the notion is implausible, but I am surprised to read those words.  Keep in mind, we have had a recovery in output, but not in employment.  That means a smaller number of laborers are working, but we are producing as much as before.  As a simple first cut, how should we measure the marginal product of those now laid-off workers?  I would start with the number zero.  If a restored level of output wouldn't count as evidence for the zero marginal product hypothesis, what would?  If I ran a business, fired ten people, and output didn't go down, might I start by asking whether those people produced anything useful?

It is true that the ceteris are not paribus.  But the observed changes if anything favor the hypothesis of zero marginal product.  There has been no major technological breakthrough in the meantime.  If anything, there has been bad monetary policy and a dose of regulatory uncertainty.  And yet again we can produce just as much without those workers.  Think of "labor hoarding" yet without…the hoarding.

You might cite oligopoly models and argue that the workers can produce something, but firms won't hire them because they don't want to expand output, due to lack of demand.  That doesn't seem to explain that output has recovered and that profits are high.  And since there is plenty of corporate cash, it is hard to claim that liquidity constraints are preventing the reemployment of those workers.

There is another striking fact about the recession, namely that unemployment is quite low for highly educated workers but about sixteen percent for the less educated workers with no high school degree.  (When it comes to income groups, the lowest decile has an unemployment rate of over thirty percent, while it is three percent for the highest decile; I'm not sure of the time horizon for that income measure.)  This is consistent with the zero marginal product hypothesis, and yet few analysts ask whether their preferred explanation for unemployment addresses this pattern.

Garett Jones suggests that many unemployed workers are potentially productive, but that businesses do not, at this moment, want to invest in future productive improvements.  The workers only appear to have zero marginal product, because their marginal product lies in future returns not current returns.  I see this hypothesis as part of the picture, although I am not sure it explains why current unemployment is so much higher among the unskilled.  Is unskilled labor the fundamental capability-builder for the future?  I'm not so sure.

It's also interesting to look at the composition of the long-term unemployed (not the same as the composition of all the unemployed, of course).  Older workers with a college education are quite stuck, conditional on their being unemployed.  And in this group, more education predicts a longer spell of unemployment.  Is this ongoing "recalculation," optimal search theory, or is the roulette wheel simply coming up zero each time it is spun for these workers?  Maybe a bit of each.  If you want, call some of it age discrimination and relabel the idea "perceived zero marginal product."

In general, which hypotheses predict lots more short-term unemployment among the less educated, but among the long-term unemployed, a disproportionately high degree of older, more educated people?  This stylized fact seems to point toward search and recalculation ideas, with some zero marginal products tossed in.  Do aggregate demand theories yield that same data-matching prediction?  I don't see it, at least not without being paired with a theory of concomitant real shocks.

Nothing in the zero marginal product hypothesis requires that these marginal products be zero forever.  As the entire economy expands more rapidly (when will that happen?), the value of even a low quality worker can quickly become much higher.  If you are opening up a new building, suddenly you really need that extra janitor and he is indeed more productive at the new margin.

Some people identify the zero marginal product hypothesis with the "hopeless dregs of the earth" description, but the two are not necessarily the same.  Complementarity, combined with some fixed initial factors, can yield zero or near-zero marginal products of labor.  (You'll see the phrase "excess capacity" used in this context, though that matches the oligopoly hypothesis more closely.)  The "dregs of the earth" view is pessimistic, but the complementarity version of the zero marginal product idea can be quite optimistic, predicting a very rapid recovery in the labor market, once the interactions turn positive. 

The "dregs" and the "complementarities" views also have different policy recommendations.  The dregs view implies either hopelessness or a lot of fundamental retraining or ongoing assistance, while the complementarity view leads one to ask how we might mobilize positive complementarities (rather than leaving orphaned factors of production) more quickly.  Perhaps there are some fixed factors, such as managerial oversight, and entrepreneurs do not want to strain those fixed factors too hard.  How can we make such fixed factors more replicable or more flexible?

Addendum: Arnold Kling comments.

How Haiti could turn things around

I'm not suggesting that the future gains will, in moral terms, outweigh the massive loss of life and destruction, but still the future Haiti might have a higher growth rate and a higher level of gdp per capita.  Here's how.

In the previous Haitian political equilibrium, the major interest groups were five or six wealthy families and also the drug trade, plus of course the government officials themselves.  None had much to gain from market-oriented, competitive economic development.  The wealthy families would have lost their quasi-monopolies and the drug runners would have been pushed out or lost some rents.  The wealthy families are not that wealthy and their economic projects are relatively small, at least by the standards of the outside world.

Enter the rebuilding of Haiti.  Contract money will be everywhere.  From the World Bank, from the U.S., from the IADB, even from the DR.  That contract money will be significant, relative to the financial influence of either the main families or the drug trade.

There exists (ha!) a new equilibrium.  The government is still corrupt, but it is ruled by the desire to take a cut on the contracts.  Ten or twenty percent on all those contracts will be more money than either the families or the drug runners can muster.  The new government will want to bring in as many of these contracts as possible and it will (maybe) bypass the old interest groups.  Alternatively, the old interest groups will capture the rents on these contracts but will be bought off to allow further growth and openness.

Arguably the new regime in Haiti will operate much like the federal states in Mexico.  Corrupt and a mess, but oriented toward a certain kind of progress, if only to increase the returns from corruption.

You will see this in how the port of Port-Au-Prince is treated.  Previously the rate of corruption was so high that the port was hardly used.  If the port becomes a true open gateway into Haiti (if only to maximize contracts and returns from corruption), that means this scenario is coming true.

The surviving Haitians, in time, might be much better off.  Virginia Postrel lays out some theory.

Identifying and Popping Bubbles: Evidence from Experiments

On the way up, bubbles encourage excessive investment in the bubble sector.  On the way down a bursting bubble can create wealth shocks, liquidity shortages, and balance-sheet death-spirals.  For both of these reasons, it would be good to be able to identify and pop bubbles.  Identifying bubbles isn't easy, however, because, especially when interest rates are low, prices can increase rapidly with small, rational changes in investor expectations.  But the difficulty of identifying bubbles is reasonably well known.  What I think may be less appreciated is that bubbles are hard to pop even when you know that they exist.

In the lab we can create artificial assets with known dividend streams and thus known fundamental values.  Since Vernon Smith's classic experiments (JSTOR), we know that even in these cases efficient markets fail and bubbles are common.  Bubbles occur even as uncertainty about the fundamental value diminishes (JSTOR).  We also know that once a bubble starts it's difficult to stop.  Circuit breakers and brokerage fees (transaction taxes), for example, don't do much to stop bubbles (see King, Smith, Williams, and Van Boening 1993, not online).  Investor education doesn't help either; telling participants about previous bubbles, for example, makes little difference.  Even increasing interest rates doesn't do much to stop a bubble already in progress and may increase volatility on net.
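For concreteness, here is a minimal sketch of the kind of asset used in these experiments; the parameters are illustrative rather than the exact Smith design. The asset pays a random dividend each trading period, so its fundamental value is just the expected value of the remaining dividends and declines linearly to zero.

```python
# Illustrative fundamental value path for a lab asset in a Smith-style bubble
# experiment: the asset pays a random dividend each trading period, so its
# fundamental value equals the expected dividend times the periods remaining.
# The dividend outcomes and horizon below are illustrative.

DIVIDEND_OUTCOMES = [0, 8, 28, 60]   # possible per-period dividends in cents, equally likely
PERIODS = 15                         # number of trading periods

expected_dividend = sum(DIVIDEND_OUTCOMES) / len(DIVIDEND_OUTCOMES)  # 24 cents here

for t in range(1, PERIODS + 1):
    periods_remaining = PERIODS - t + 1
    fundamental_value = expected_dividend * periods_remaining
    print(f"period {t:>2}: fundamental value = {fundamental_value:5.1f} cents")

# Traded prices in these sessions often start below this declining line, climb
# well above it mid-experiment, and crash back toward it near the final period.
```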

Futures markets (JSTOR) and short selling do tend to dampen but not eliminate bubbles, thus, there is a case for expanding futures markets in housing and making short selling easier (not harder!).

Bubbles are also less common with more experienced traders – this is one of the strongest findings.  Don't get too excited about this, however: it's experience with bubbles that counts, not just trading experience.  I once asked Vernon, for example, how the lab evidence generalized to the larger economy.  In particular, I asked whether 3 bubble experiences in the lab–the number which seems to be necessary to dampen bubbles–might translate to 3 big bubbles in the real world such as the dot-com, commodity and housing bubbles (rather than to experience with your run-of-the-mill bubble in an individual stock).  He thought that this was a reasonable inference from the evidence.  Thus we may not see too many big bubbles during the trading lifetime of current market participants, but experience is a very costly teacher.  Can we do better?

The last factor that does seem to make a difference is that bubbles lift off and reach higher peaks when there's a lot of cash floating around.  In theory, this shouldn't matter; fundamental value is fundamental value.  If an asset is worth $10 in expected value then it's worth $10 whether you have $20 in your pocket or $200.  But in practice bubbles are bigger when cash relative to asset value is high.

Note that the latter experiments are consistent with the Fed having a significant role in bubble inflation (a theory I have not pushed).  In other words, rather than identifying and popping bubbles already on the rise, not blowing bubbles in the first place may be easier and more productive.   

No one makes you shop at Wal-Mart

In increasing order of seriousness.

As noted, the heart of the book is a well-written primer on, let's call it, the new economics.  As such, this book would make a good supplement to an advanced undergraduate class.  But the activism and attacks on MarketThink are occasionally distracting.  Chapter 1, for example, opens with a denunciation of inequality.  Nothing wrong with that, but Slee doesn't even attempt to show that there is any connection between rising inequality and the failure of MarketThink theories.  He just lumps things he doesn't like into one pile.  If there were no asymmetric information, no herding, no coordination problems and so forth, I guarantee that there would still be plenty of inequality.

For the most part, Slee illustrates the new economics with insightful, interesting and often new examples.  But there are clunkers.  I almost threw the book at the wall when he started talking about QWERTY.  Surely, Slee knows that this worn-out example is a joke?  The supposed superiority of the DVORAK keyboard was shown in studies conducted by … Dvorak.  See here.  It's especially annoying that Slee did not reference Winners, Losers & Microsoft.

As a primer, it's fine to illustrate with examples and move on, but as an attack on markets one expects a balanced consideration of opposing theories.  For example, Slee looks at beer micro-breweries vs. mass brewers, arguing that we are currently stuck in the bad mass-equilibrium because micro-breweries rely on word-of-mouth, but the institutions which sustain the word-of-mouth equilibrium only work when there are already lots of micro-breweries about which one can talk.  Nice, but here is an alternative theory.  Economies of scale made mass-produced beer cheaper, and when push came to shove consumers chose the cheaper good product over the more expensive but slightly better product (I don't eat at 5-star restaurants every night).  New technologies, however, have made micro-brewing more economical, and as they have done so we are moving to the mass-customization world that Slee prefers.  Consumers have gotten the best of all worlds – given scarcity – in both time frames.  The beer activists in England that Slee likes moved the process along, but in the direction that it was already going.

There is no comparative analysis in the book at all.  No discussion, for example, of how free riding, asymmetric information, herding and so forth distort government choice.  Also, no appreciation that what some of us MarketThink people really advocate is civil society, which includes non-profits and voluntary collective action of all kinds.  And, no, we are not all corporate shills (p. 106).

It’s true that outcomes do not always illustrate preferences but often they do.   Maybe people really do not want to walk to school.  It’s subtle but Tom seems all too eager to call in the government to force us into the better equilibrium.  I worry when people start talking about how government can help us to express our true preferences.  Isn’t this what dictators always say?  True freedom is oppression. 

The chapter on power is terrible; I did throw the book against the wall.  Perhaps in order to prepare us to welcome government as the deliverer of our true preferences, Slee wants to diminish the distinction between liberty and coercion.  But a true liberal should never write things like this:

…the formal structure of democracy and free markets is not enough to rule out exploitation and plunder – characteristics usually associated with repressive regimes.

If Tom visits GMU (I happen to know he reads MR) he should watch out because I shall kick him in the shins stating, "I refute you thus."

More seriously, repressive governments around the world threaten, rob, torture and murder with impunity.  Courageous individuals have died trying to escape such regimes while others have died fighting for their rights.  No matter how great the differences in wealth, it is morally wrong to equate what goes on in repressive regimes with capitalist acts between consenting adults.

The rise of randomized trials in economic research

Using randomized prospective trials in economic development policy is not new. Since the 1960s, the U.S. has occasionally implemented them to answer important practical questions in health care, welfare and education policy. By randomly splitting people into two groups, one of which receives an experimental intervention, researchers can set up potentially simple, unbiased comparisons between two approaches. But these evaluations typically cost hundreds of thousands to millions of dollars, largely putting them out of reach of academic researchers, says development economist Abhijit Banerjee of the Massachusetts Institute of Technology.

The emergence of cheap, skilled labor in India and other countries during the 1990s changed that, Banerjee says, because these workers could collect the data inexpensively. At the same time, nongovernmental organizations (NGOs) were proliferating and started looking for ways to evaluate their antipoverty programs.

In 2003 Banerjee and his colleagues Esther Duflo and Sendhil Mullainathan founded an M.I.T. institute devoted to the use of randomized trials, called the Poverty Action Lab. Lab members have completed or begun a variety of projects, including studies of public health measures, small-scale loans (called microcredit), the role of women in village councils, AIDS prevention, and barriers to fertilizer use. The studies typically piggyback on the expansion of an NGO or government program. Researchers work with the organization to select appropriate measures of the program’s outcome and hire an agency to collect or spot-check the data.
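The core calculation behind such a trial is simple; here is a minimal sketch with made-up numbers (no real study): randomize, compare mean outcomes across the two groups, and attach a standard error to the difference.

```python
# Minimal sketch of the analysis behind a randomized trial: random assignment,
# a difference in mean outcomes, and a standard error for that difference.
# All numbers are simulated for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 2_000

treated = rng.integers(0, 2, n).astype(bool)       # coin-flip assignment to the program
baseline = rng.normal(50.0, 10.0, n)               # e.g. a test score absent the program
outcome = baseline + np.where(treated, 3.0, 0.0)   # true program effect of +3 points

effect = outcome[treated].mean() - outcome[~treated].mean()
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())

print(f"estimated effect: {effect:.2f} (standard error {se:.2f})")
# Randomization makes the two groups comparable on average, so the simple
# difference in means is an unbiased estimate of the program's impact.
```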

Here is the full story.  Here is the home page of Poverty Action Lab.  Here are their completed projects.  Here is the Primary School Deworming Project.  And on this Thanksgiving weekend, I once again express my gratitude for the link from www.politicaltheoryinfo.com.