
Why are so many people still out of work?: the roots of structural unemployment

Here is my latest New York Times column, on structural unemployment.  I think of this piece as considering how aggregate demand, sectoral shift, and structural theories may all be interacting to produce ongoing employment problems.  “Automation” can be throwing some people out of work, even in a world where the theory of comparative advantage holds (more or less), but still this account will be partially parasitic on other accounts of labor market dysfunction.  For reasons related to education, skills, credentialism, and the law, it is harder for some categories of displaced workers to be reabsorbed by labor markets today.

Here are the two paragraphs which interest me the most:

Many of these labor market problems were brought on by the financial crisis and the collapse of market demand. But it would be a mistake to place all the blame on the business cycle. Before the crisis, for example, business executives and owners didn’t always know who their worst workers were, or didn’t want to engage in the disruptive act of rooting out and firing them. So long as sales were brisk, it was easier to let matters lie. But when money ran out, many businesses had to make the tough decisions — and the axes fell. The financial crisis thus accelerated what would have been a much slower process.

Subsequently, some would-be employers seem to have discriminated against workers who were laid off in the crash. These judgments weren’t always fair, but that stigma isn’t easily overcome, because a lot of employers in fact had reason to identify and fire their less productive workers.

Under one alternative view, the inability of the long-term unemployed to find new jobs is still a matter of sticky nominal wages.  With nominal GDP well above its pre-crash peak, I find that implausible circa 2014.  Besides, these people are unemployed; they don’t have wages to be “sticky” in the first place.

Under a second view, the process of being unemployed has made these individuals less productive.  Under a third view (“ZMP”), these individuals were not very productive to begin with, and the liquidity crisis of the crash led to this information being revealed and then communicated more broadly to labor markets.  I see a combination of the second and third forces as now being in play.  Here is another paragraph from the piece:

A new paper by Alan B. Krueger, Judd Cramer and David Cho of Princeton has documented that the nation now appears to have a permanent class of long-term unemployed, who probably can’t be helped much by monetary and fiscal policy. It’s not right to describe these people as “thrown out of work by machines,” because the causes involve complex interactions of technology, education and market demand. Still, many people are finding this new world of work harder to navigate.

Tim Harford suggests the long-term unemployed may be no different from anybody else.  Krugman claims the same.  (Also in this piece he considers weak versions of the theories he is criticizing, does not consider AD-structural interaction, and ignores the evidence presented in pieces such as Krueger’s.)  I think attributing all of this labor market misfortune to luck is implausible, and it runs against standard economic theories of discrimination, or for that matter profit maximization.  I do not see many (any?) employers rushing to seek out these workers and build coalitions with them.

There were two classes of workers fired in the great liquidity shortage of 2008-2010.  The first were those revealed to be not very productive or bad for firm morale.  They skew male rather than female, and young rather than old.  The second class comprised workers who simply happened to be doing the wrong thing for shrinking firms: “sorry Joe, we’re not going to be starting a new advertising campaign this year.  We’re letting you go.”

The two groups have ended up lumped together and indeed a superficial glance at their resumes may suggest — for reemployment purposes — that they are observationally equivalent.  This discriminatory outcome is unfair, and it is also inefficient, because some perfectly good workers cannot find suitable jobs.  Still, this form of discrimination gets imposed on the second class of workers only because there really are a large number of workers who fall into the first category.

Here is John Cassidy on the composition of current unemployment.  Here is Glenn Hubbard with some policy ideas.


Wassily Leontief and Larry Summers on technological unemployment

Here is a very interesting piece from 1983 (jstor) in Population and Development Review, called “Technological Advance, Economic Growth, and the Distribution of Income”; here is one excerpt:

In populous, poor, less developed countries, technological unemployment has existed for a long time under the name of “disguised agricultural unemployment”; in Bangladesh, for instance, there are more people on the land than are needed to cultivate it on the basis of any available technology.  Industrialization is counted upon by the governments of most of these countries to relieve the situation by providing — as it did in the past — much additional employment.

If I may put this into my own terminology, Leontief is suggesting that at some margins fixed proportions mean many agricultural laborers, or would-be laborers, are ZMP or zero marginal product.
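To spell out the fixed-proportions point: if output is min(a·labor, b·land), then once the labor force passes what the land can absorb, an additional worker adds nothing to output.  Here is a toy sketch in Python, with purely illustrative coefficients (nothing here is calibrated to Bangladesh or anywhere else):

```python
def output(labor, land, a=1.0, b=2.0):
    """Leontief (fixed-proportions) production: output = min(a*labor, b*land).
    The coefficients a and b are illustrative, not calibrated to anything real."""
    return min(a * labor, b * land)

LAND = 100  # hypothetical stock of cultivable land

for workers in (150, 200, 201, 300):
    print(workers, output(workers, LAND))

# With a=1 and b=2, the land can absorb 200 workers; beyond that point output
# stays at 200 no matter how many more workers show up, so each additional
# worker's marginal product is zero.
```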

Haven’t you ever wondered how some traditional economies can have unemployment rates which are so high?  Those are “structural” problems, yes, but of what kind?

By the way, Brad DeLong cites Larry Summers on ZMP workers:

My friend and coauthor Larry Summers touched on this a year and a bit ago when he was here giving the Wildavski lecture. He was talking about the extraordinary decline in American labor force participation even among prime-aged males–that a surprisingly large chunk of our male population is now in the position where there is nothing that people can think of for them to do that is useful enough to cover the costs of making sure that they actually do it correctly, and don’t break the stuff and subtract value when they are supposed to be adding to it.

What are humans still good for? The turning point in Freestyle chess may be approaching

Some of you will know that Average is Over contains an extensive discussion of “freestyle chess,” where humans can use any and all tools available — most of all computers and computer programs — to play the best chess game possible.  The book also notes that “man plus computer” is a stronger player than “computer alone,” at least provided the human knows what he is doing.  You will find a similar claim from Brynjolfsson and McAfee.

Computer chess expert Kenneth W. Regan has compiled extensive data on this question, and you will see that a striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.  Ken’s explanations are a bit dense for those who don’t already know chess, computer chess, Freestyle, and its lingo, but yes, that is what he finds; click through the links in his post for confirmation.  In this list, for instance, the Freestyle teams do very, very well.
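As a very rough illustration of what “accuracy against an engine” can mean (much cruder than Regan’s actual statistical models), one common proxy is average centipawn loss: how much each played move gives up relative to the engine’s preferred continuation.  A minimal sketch using the python-chess library follows; the engine path, search depth, and PGN filename are assumptions to adjust for your own setup.

```python
import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "/usr/local/bin/stockfish"  # placeholder path; point at any UCI engine you have

def average_centipawn_loss(pgn_path, depth=18):
    """Average loss (in centipawns) of the moves actually played,
    relative to the engine's evaluation of best play."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    board = game.board()
    losses = []
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    try:
        for move in game.mainline_moves():
            mover = board.turn
            # evaluation of the current position with best play, from the mover's point of view
            best = engine.analyse(board, chess.engine.Limit(depth=depth))
            best_cp = best["score"].pov(mover).score(mate_score=100000)
            # evaluation after the move actually played, same point of view
            board.push(move)
            after = engine.analyse(board, chess.engine.Limit(depth=depth))
            played_cp = after["score"].pov(mover).score(mate_score=100000)
            losses.append(max(0, best_cp - played_cp))
    finally:
        engine.quit()
    return sum(losses) / len(losses) if losses else 0.0

if __name__ == "__main__":
    print(average_centipawn_loss("some_game.pgn"))  # hypothetical PGN file
```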

Average is Over also raised the possibility that, fairly soon, the computer programs might be good enough that adding the human to the computer doesn’t bring any advantage.  (That’s been the case in checkers for some while, as that game is fully solved.)  I therefore was very interested in this discussion at RybkaForum suggesting that already might be the case, although only recently.

Think about why such a flip might be in the works, even though chess is far from fully solved.  The “human plus computer” can add value to “the computer alone” in a few ways:

1. The human may in selective cases prune variations better than the computer alone, and thus improve where the computer searches for better moves and how the computer uses its time.

2. The human can see where different chess-playing programs disagree, and then ask the programs to look more closely at those variations, to get a leg up against the computer playing alone (of course this is a subset of #1); a minimal sketch of this kind of cross-engine comparison appears after this list.  This is a biggie, and it is also a profound way of thinking about how humans will add insight to computer programs for a long time to come, one usually overlooked by those who think all jobs will disappear.

3. The human may be better at time management, and can tell the program when to spend more or less time on a move.  “Come on, Rybka, just recapture the damned knight!”  Haven’t we all said that at some point or another?  I’ve never regretted pressing the “Move Now” button on my program.

4. The human knows the “opening book” of the computer program he/she is playing against, and can prepare a trap in advance for the computer to walk into, although of course advanced programs can to some extent “randomize” at the opening level of the game.
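To make #2 concrete, here is a minimal sketch of the kind of cross-engine comparison a Freestyle player might automate: run two engines on the same position and flag it for deeper human attention when their preferred moves or evaluations diverge.  It uses the python-chess library; the engine paths and the 50-centipawn disagreement threshold are my own assumptions, not anything taken from the Freestyle teams.

```python
import chess
import chess.engine

# Paths to two UCI engines -- placeholders; point these at engines you actually have installed.
ENGINE_PATHS = {
    "engine_a": "/usr/local/bin/stockfish",
    "engine_b": "/usr/local/bin/lc0",
}

def compare_engines(fen, depth=20, threshold_cp=50):
    """Return each engine's preferred move and evaluation for a position,
    plus a flag marking the position as worth a deeper human look."""
    board = chess.Board(fen)
    results = {}
    for name, path in ENGINE_PATHS.items():
        engine = chess.engine.SimpleEngine.popen_uci(path)
        try:
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            results[name] = {
                "best_move": board.san(info["pv"][0]),
                "cp": info["score"].pov(board.turn).score(mate_score=100000),
            }
        finally:
            engine.quit()
    a, b = list(results.values())
    disagree = (a["best_move"] != b["best_move"]
                or abs(a["cp"] - b["cp"]) > threshold_cp)
    return results, disagree

if __name__ == "__main__":
    res, flag = compare_engines(chess.Board().fen())
    print(res, "look deeper here" if flag else "engines agree")
```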

Insofar as the above RybkaForum thread has a consensus, it is that most of these advantages have not gone away.  But the “human plus computer” needs time to improve on the computer alone, and at sufficiently fast time controls the human attempts to improve on the computer may simply amount to noise or may even be harmful, given the possibility of human error.  Some commentators suggest that at ninety minutes per game the humans are no longer adding value to the human-computer team, whereas they do add value when the time frame is, say, one day per move (“correspondence chess,” as it is called in this context).  Circa 2008, at ninety minutes per game, the best human-computer teams were better than the computer programs alone.  But 2013 or 2014 may be another story.  And clearly at, say, thirty or sixty seconds a game the human hasn’t been able to add value to the computer for some time now.

Note that as the computer programs get better, some of these potential advantages, such as #1, #3, and #4, become harder to exploit.  #2 — seeing where different programs disagree — does not necessarily become harder to exploit for advantage, although the human (often, not always) has to look deeper and deeper to find serious disagreement among the best programs.  Furthermore, the ultimate human sense of “in the final analysis, which program to trust” is harder to intuit, the closer the different programs are to perfection.  (In contrast, the human sense of which program to trust is more acute when different programs have more readily recognizable stylistic flaws, as was the case in the past: “Oh, Deep Blue doesn’t always understand blocked pawn formations very well.”  Or “Fritz is better in the endgame.”  And so on.)

These propositions all require more systematic testing, of course.  In any case it is interesting to observe an approach to the flip point, where even the most talented humans move from being very real contributors to being strictly zero marginal product.  Or negative marginal product, as the case may be.

And of course this has implications for more traditional labor markets as well.  You might train to help a computer program read medical scans, and for thirteen years add real value with your intuition and your ability to revise the computer’s mistakes or at least to get the doctor to take a closer look.  But it takes more and more time for you to improve on the computer each year.  And then one day…poof!  ZMP for you.

Addendum: Here is an article on computer dominance in rock-paper-scissors.  This source claims freestyle does not beat the machine in poker.

Daniel Klein views the rise of government through Ngram

Here is the abstract:

In this very casual paper, I reproduce results from the Google Ngram Viewer. The main thrust is to show that around 1880 governmentalization of society and culture began to set in — a great transformation, as Karl Polanyi called it. But that great transformation came as a reaction to liberalism, the first great transformation. The Ngrams shown include liberty, constitutional liberty, faith, eternity, God, social gospel, college professors, psychology, economics, sociology, anthropology, political science, criminology, new liberalism, old liberalism, public school system, Pledge of Allegiance, income tax, government control, run the country, lead the country, lead the nation, national unity, priorities, social justice, equal opportunity, economic inequality, forced to work, living wage, social needs, our society, bundle of rights, property rights, capitalism, right-wing, left-wing, virtue, wisdom, prudence, benevolence, diligence, fortitude, propriety, ought, good conduct, bad conduct, good works, evil, sentiments, impartial, objective, subjective, normative, values, preferences, beliefs, and information.
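For readers who want to reproduce queries like these, here is a rough sketch that pulls the same kind of series from the unofficial JSON endpoint behind the Google Ngram Viewer; the endpoint, corpus name, and parameters are undocumented assumptions and may change or break without notice (Klein himself simply used the Viewer's web interface).

```python
import json
import urllib.parse
import urllib.request

# Undocumented endpoint behind the Ngram Viewer -- an assumption that may break without notice.
NGRAM_URL = "https://books.google.com/ngrams/json"

def ngram_series(phrases, year_start=1800, year_end=2000, corpus="en-2019", smoothing=3):
    """Fetch smoothed relative frequencies for each phrase, keyed by phrase."""
    params = urllib.parse.urlencode({
        "content": ",".join(phrases),
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,        # corpus identifier is also an assumption; older versions used numeric codes
        "smoothing": smoothing,
    })
    with urllib.request.urlopen(NGRAM_URL + "?" + params) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return {row["ngram"]: row["timeseries"] for row in data}

if __name__ == "__main__":
    for phrase, series in ngram_series(["government control", "social justice"]).items():
        print(phrase, series[-5:])  # the last few smoothed yearly frequencies
```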

The paper is here.  Here is one example:

Companies won’t even look at the resumes of the long-term unemployed

Read this post by Brad Plumer; here is an excerpt:

Matthew O’Brien reports on a striking new paper by Rand Ghayad…The researchers sent out 4,800 fake résumés at random for 600 job openings. What they found is that employers would rather call back someone with no relevant experience who’s only been out of work for a few months than someone with lots of relevant experience who’s been out of work for longer than six months.

In other words, it doesn’t matter how much experience you have. It doesn’t matter why you lost your previous job — it could have been bad luck. If you’ve been out of work for more than six months, you’re essentially unemployable.

…This jibes with earlier research (pdf) by Ghayad and Dickens showing that the long-term unemployed are struggling to find work no matter how many job openings pop up. And it dovetails with anecdotes that workers and human resource managers have been recounting for years now. Many firms often post job notices that explicitly exclude the unemployed.
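For what it is worth, the arithmetic behind an audit study of this kind is simple: group the fake résumés by unemployment duration and relevant experience, then compare callback rates across cells.  A minimal sketch with made-up counts (not the paper's data), purely to show the tabulation:

```python
# Entirely hypothetical counts, NOT Ghayad's data -- only to show how callback
# rates in a resume audit study are tabulated and compared across cells.
cells = {
    # (unemployment duration, experience): (resumes sent, callbacks received)
    ("< 6 months", "no relevant experience"): (1200, 110),
    ("< 6 months", "relevant experience"):    (1200, 160),
    ("> 6 months", "no relevant experience"): (1200, 40),
    ("> 6 months", "relevant experience"):    (1200, 45),
}

for (duration, experience), (sent, called) in cells.items():
    rate = called / sent
    print(f"{duration:>11} | {experience:<24} | callback rate {rate:.1%}")

# The qualitative pattern reported in the excerpt above: once the spell passes
# six months, callbacks collapse and relevant experience barely moves the needle.
```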

I think of this as further illustration of what I have called ZMP workers, a once-maligned concept which now is rather obviously relevant and which has plenty of evidence on its side.  It’s fine if you wish to label them “perceived by employers as ZMP workers but not really ZMP,” or “unjustly oppressed and only thus ZMP workers.”  The basic idea remains, and of course “stimulus” will reemploy them only by boosting the real economy, such as by raising output and productivity and reeducating workers, not by recalibrating nominal variables per se.  For these workers it is not about wage stickiness.  Most, by the way, would not be ZMP if the U.S. economy were growing regularly at four percent in real terms, but of course that is not easy to achieve, not from where we stand today.

I’ve sometimes seen it hinted that calling them “ZMP workers” lacks compassion, but the compassionate thing to do is to try to identify the actual problem.  A year or two ago I thought ZMP workers accounted for about 1% of potential U.S. workers (hardly all of the unemployment problem, I would stress), but if anything I am moving that estimate upwards.

Does the theory of comparative advantage apply to dolphins?

Yes, at least so far it does; these are not ZMP dolphins:

The US Navy’s most adorable employees are about to get the heave-ho because robots can do their job for less.

The submariners in question are some of the Navy’s mine-detecting dolphins which will be phased out in the next five years, according to UT San Diego.

The dolphins, which are part of a program that started in the 1950’s, have been deployed all over the world because of their uncanny eyesight, acute sonar and ability to easily dive up to 500 feet underwater.

Using these abilities they’ve been assigned to ports in order to spot enemy divers and find mines using their unparalleled sonar which they mark for their handlers who then disarm them.

However, the Navy has now developed an unmanned 12-foot torpedo shaped robot that runs for 24 hours and can spot mines as well as the dolphins.

And unlike dolphins which take seven years to train, the robots can be manufactured quickly.

The new submersibles will replace 24 of the Navy’s 80 dolphins who will be reassigned to other tasks like finding bombs buried under the sea floor — a task which robots aren’t good at yet.

The story is here, and for the pointer I thank the excellent Daniel Lippman.  This is by the way a barter economy:

During their prime working years, the dolphins are compensated with herring, sardines, smelt and squid.

Marcia Angell’s Mistaken View of Pharmaceutical Innovation

At EconTalk, Marcia Angell discusses Big Pharma with Russ Roberts. I think she gets a lot wrong. Here is one exchange on innovation.

Angell: The question of innovation–you said that some people feel, economists feel, [the FDA] slows up innovation: The drug companies do almost no innovation nowadays. Since the Bayh-Dole Act was enacted in 1980 they don’t have to do any innovation….

Roberts: But let’s just get a couple of facts on the table…[The] research and development budget of the pharmaceutical industry is, in 2009, was about $70 billion. That’s a very large sum of money. Are you suggesting that they don’t do anything–that that’s mostly or all marketing? That they are not trying to discover new applications of the basic research? It seems to me basic research is an important part. Putting that research into a form that can make us healthier seems to be a nontrivial thing. You think they are–what are they doing with that money?

Angell: If you look at the budgets of the major drug companies–just go to their annual reports, their Securities and Exchange Commission (SEC) filings, you see that Research and Development (R&D) is really the smallest part of their budget. If you look at the big companies you can divide their budget into 4 big categories. One is R&D, one is marketing and administration; the other is profits, and the other is just the cost of making the pills and putting them in the bottles and distributing them. The smallest of those is R&D.

Notice that Angell first claims the pharmaceutical companies do almost no innovation; then, when presented with a figure of $70 billion spent on R&D, she switches to an entirely different and irrelevant claim, namely that spending on marketing is even larger. Apple spends more on marketing than on R&D, but this doesn’t make Apple any less innovative. Angell’s idea of splitting up company spending into a “budget” is also deeply confused. The budget metaphor suggests firms choose among R&D, marketing, profits, and manufacturing costs just as a household chooses between fine dining and cable TV. In fact, if the marketing budget were cut, revenues would fall. Marketing drives sales, and (expected) sales drive R&D. Angell is like the financial expert who recommends that a family save money by selling its car, forgetting that without a car it is much harder to get to work.

Later Angell tries a third claim, namely that pharma companies do no innovation because their R&D budget is mostly spent on clinical trials and "it's no secret how to do a clinical trial." I find this line of reasoning bizarre. I define an innovation as the novel creation of value, in this case the novel creation of valuable knowledge. Is Angell claiming that clinical trials do not provide novel and valuable knowledge? (FYI, I have argued that the FDA is overly safety-conscious and requires too many trials, but Angell breezily and nastily dismisses this argument.) In point of fact, most new chemical entities die in clinical trials because what we thought would work in theory doesn't work in practice. Moreover, the information generated in clinical trials feeds back into basic research. Angell's understanding of innovation is cramped and limited: she thinks it begins and ends with basic science in a university lab. Edison was right, however, when he said that genius is one percent inspiration and ninety-nine percent perspiration; both parts are required, and there is no one-way line of causation: perspiration can lead to inspiration as well as vice versa. Read Derek Lowe on the reality of the drug discovery process.

Angell also brings normative claims to bear on the industrial organization of the pharmaceutical industry. Over the past two decades there has been an increase in the number of small biotechnology companies, often funded by venture capital. Most of the small biotechs are failures; they never produce a new molecular entity (NME). But a large number of small, diverse, entrepreneurial firms can explore a big space, and individual failure has been good for the small-firm sector, which collectively has increased its discovery of NMEs. The small biotechs, however, are not well placed to deal with the FDA and run large clinical trials; the same is true of university labs. So the industry as a whole is evolving towards a network model in which the smaller firms explore a wide space of targets and those that hit gold partner with one of the larger firms to pursue development. Angell focuses on one part of the system, the larger firms, and denounces them for not being innovative. Innovation, however, should be ascribed not to any single node but to the network, to the system as a whole.

Angell makes some good points about publication bias in clinical trials and the sometimes too-close-for-comfort connections between the FDA, pharmaceutical firms, and researchers. But in making these points she misses the truly important picture: new pharmaceuticals have driven increases in life expectancy, but pharmaceutical productivity is declining as the costs of discovering and bringing a new drug to market rise rapidly (on average roughly $1.8 billion per NME that reaches the market). In my view, the network model pursued on a global scale and a more flexible and responsive FDA, both of which Angell castigates, are among the best prospects for an increase in pharmaceutical productivity and thus for increases in future life expectancy. Nevertheless, whatever the solutions are, we need to focus on the big problem of productivity if we are to translate scientific breakthroughs into improvements in human welfare.