Economics

Mike Lee says yes; see also Matt.  Maybe.  I would like to go this route, but I’m not (yet?) convinced.  What if non-profits and foreign companies end up as the shareholders, as indeed the Coase theorem would seem to indicate?  Doesn’t that lower tax revenue, because they wouldn’t be making capital gains filings?  And to some extent, isn’t the U.S. tax system then encouraging inefficient ownership and governance?

There may be an answer to this worry, but I’ve yet to see it.

As someone who has written about FDA reform for many years, I find it gratifying that all of the people whose names have been floated for FDA Commissioner would be excellent, including Balaji Srinivasan, Jim O’Neill, Joseph Gulfo, and Scott Gottlieb. Each of these candidates understands two important facts about the FDA. First, that there is a fundamental tradeoff–longer and larger clinical trials mean that the drugs that are approved are safer, but at the price of increased drug lag and drug loss. Unsafe drugs create concrete deaths and palpable fear, but drug lag and drug loss fill invisible graveyards. We need an FDA commissioner who sees the invisible graveyard.

Each of the leading candidates also understands that we are entering a new world of personalized medicine that will require changes in how the FDA approves medical devices and drugs. Today almost everyone carries in their pocket the processing power of a 1990s supercomputer. Smartphones equipped with sensors can monitor blood pressure, perform ECGs and even analyze DNA. Other devices, either available now or in development, include contact lenses that can track glucose levels and eye pressure, devices for monitoring and analyzing gait in real time, and headbands that monitor and even adjust your brain waves.

The FDA has an inconsistent, even schizophrenic, attitude towards these new devices: some have been approved, and yet at the same time the FDA has banned 23andMe and other direct-to-consumer genetic testing companies from offering some DNA tests because of “the risk that a test result may be used by a patient to self-manage”. To be sure, the FDA and other agencies have a role in ensuring that a device or test does what it says it does (the Theranos debacle shows the utility of that oversight). But the FDA should not be limiting the information that patients may discover about their own bodies or the advice that may be given based on that information. Interference of this kind violates the First Amendment and the long-standing doctrine that the FDA does not control the practice of medicine.

Srinivasan is a computer scientist and electrical engineer who has also published in the New England Journal of Medicine, Nature Biotechnology, and Nature Reviews Genetics. He’s a co-founder of Counsyl, a genetic testing firm that now tests ~4% of all US births, so he understands the importance of the new world of personalized medicine.

The world of personalized medicine also impacts how new drugs and devices should be evaluated. The more we look at people and diseases the more we learn that both are radically heterogeneous. In the past, patients have been classified and drugs prescribed according to a handful of phenomenological characteristics such as age and gender and occasionally race or ethnic background. Today, however, genetic testing and on-the-fly examination of RNA transcripts, proteins, antibodies and metabolites can provide a more precise guide to the effect of pharmaceuticals in a particular person at a particular time.

Greater targeting is beneficial, but as Peter Huber has emphasized, it means that drug development becomes much less a question of “does this drug work for the average patient?” and much more a question of “can we identify, within this large group of people, the subset who will benefit from the drug?” If we stick to standard methods, that means even larger and more expensive clinical trials and more drug lag and drug loss. Instead, personalized medicine suggests that we allow for more liberal approval decisions and improve our techniques for monitoring individual patients, so that physicians can adjust prescribing in response to the body’s reaction. Give physicians a larger armory and let them decide which weapon is best for the task.

I also agree with Joseph Gulfo (writing with Briggeman and Roberts) that in an effort to be scientific the FDA has sometimes fallen victim to the fatal conceit. In particular, the ultimate goal of medical knowledge is increased life expectancy (and reduced morbidity), but that doesn’t mean that every drug should be evaluated on this basis. If a drug or device is safe and shows activity against the disease, as measured by symptoms, surrogate endpoints, biomarkers and so forth, then it ought to be approved. It often happens, for example, that no single drug is a silver bullet but that combination therapies work well. But you don’t really discover combination therapies in FDA-approved clinical trials–this requires the discovery process of medical practice. This is why Vincent DeVita, former director of the National Cancer Institute, writes in his excellent book, The Death of Cancer:

When you combine multidrug resistance and the Norton-Simon effect, the deck is stacked against any new drug. If the crude end point we look for is survival, it is not surprising that many new drugs seem ineffective. We need new ways to test new drugs in cancer patients, ways that allow testing at earlier stages of disease….

DeVita is correct. One of the reasons we see lots of trials for end-stage cancer, for example, is that you don’t have to wait long to count the dead. But no drug has ever been approved to prevent lung cancer (and only six have ever been approved to prevent any cancer) because the costs of running a clinical trial long enough to count the dead are simply too high. Preventing cancer would be better than trying to deal with it when it’s ravaging a body, but we won’t get prevention trials without changing our standards of evaluation.

Jim O’Neill, managing director at Mithril Capital Management and a former HHS official, is an interesting candidate precisely because he also has an interest in regenerative medicine. With a greater understanding of how the body works, we should be able to improve health and avoid disease rather than just treating disease, but this will require new ways of thinking about drugs and evaluating them. A new and non-traditional head of the FDA could be just the thing to bring about the necessary change in mindset.

In addition to these big-ticket items, there are also a lot of simple changes that could be made at the FDA. Scott Alexander at Slate Star Codex has a superb post discussing reciprocity with Europe and Canada so we can get (at the very least) decent sunscreen and medicine for traveler’s diarrhea. Also, allowing any major pharmaceutical firm to produce any generic drug without going through an expensive approval process would be a relatively simple change that would shut down people like Martin Shkreli who exploit the regulatory morass for private gain.

The head of the FDA has tremendous power, literally the power of life and death. It’s exciting that we may get a new head of the FDA who understands both the peril and the promise of the position.

Written by John Ferejohn and Frances McCall Rosenbluth, with the subtitle War, Peace, and the Democratic Bargain, this is a very important book.  Here is the main thesis:

If the modern democratic republic is a product of wars that required both manpower and money for success, it is time to take stock of what happens to democracy once the forces that brought it into being are no longer present.  Understanding war’s role in the creation of the modern democratic republic can help us recognize democracy’s exposed flanks.  If the role of the masses in protecting the nation-state diminishes, will the cross-class coalition between political inclusiveness and property hold?

…a second question is what is to become of the swaths of the world that were off the warpath in the fourteenth and fifteenth centuries when the European state was formed?  Continued and intense warfare forged democracies with full enfranchisement and protected property rights in the Goldilocks zone: in countries that had already developed administrative capacity as monarchies, and where wars were horrendous but manageable with full mobilization…

The bad news is that in today’s world, war has stopped functioning as a democratizing force.

You can order the book here, here is the Rosa Brooks WSJ review.

I loved Jason Barr’s Building the Skyline, a history of New York from the point of view of the economics of skyscrapers. Where else will you learn so much of interest about elevators?

Elevators create a particular problem. On one hand, adding more floors to the building will produce more space from which the developer can collect more money. But at some point, a new shaft and set of elevators need to be added to handle the additional traffic. This then eats into the rentable space….Do the additional floors on top generate enough rents to cover the loss of new space from the elevators?

…skyscrapers must devote about 30% of the total space to elevators, including their shafts, hallways and machine rooms.
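Barr’s tradeoff is easy to see in a back-of-the-envelope calculation. The sketch below uses made-up numbers (the floor areas, rents, and elevator assumptions are mine, not Barr’s) just to illustrate the developer’s marginal decision: each new elevator bank consumes space on every floor, so the rent from an additional floor drops sharply whenever a new bank becomes necessary.

```python
# Purely illustrative numbers: not figures from Building the Skyline.
FLOOR_AREA = 20_000         # gross square feet per floor
RENT_PER_SQFT = 80          # annual rent per rentable square foot
SHAFT_AREA_PER_BANK = 600   # square feet lost on EVERY floor per elevator bank
FLOORS_PER_BANK = 15        # assume one bank can serve at most 15 floors

def annual_rent(floors: int) -> int:
    """Total annual rent after subtracting elevator core space on each floor."""
    banks = -(-floors // FLOORS_PER_BANK)  # ceiling division
    rentable_per_floor = max(FLOOR_AREA - banks * SHAFT_AREA_PER_BANK, 0)
    return floors * rentable_per_floor * RENT_PER_SQFT

# Marginal rent from each additional floor; note the drop when a new bank is needed.
for floors in range(14, 18):
    print(floors + 1, annual_rent(floors + 1) - annual_rent(floors))
```

With these illustrative numbers a typical extra floor is worth about $1.5 million a year in rent, but the floor that triggers a new elevator bank adds only about half that, which is exactly the calculation Barr describes.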

And then you have to get the people where they want to go quickly:

The new One World Trade Center will have the fastest cars in the Western Hemisphere, operating at a top speed of 2,000 feet a minute, though a relative snail compared with the Burj Khalifa, which delivers its tenants to any of its 164 floors at a rate of 3,543 feet per minute.

…Maximum [elevator] speed has increased at an average annual rate of 1.7% since 1913.
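As a quick sanity check on that 1.7 percent figure (the arithmetic below is mine, not Barr’s), compound growth at that rate doubles maximum elevator speed roughly every 41 years, a bit more than a fivefold increase over a century:

```python
import math

annual_growth = 0.017  # the reported average annual rate since 1913

doubling_time = math.log(2) / math.log(1 + annual_growth)  # ~41 years
century_multiple = (1 + annual_growth) ** 100              # ~5.4x

print(f"doubling time: {doubling_time:.0f} years")
print(f"increase over 100 years: {century_multiple:.1f}x")
```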

Barr loves skyscrapers and he writes about them beautifully. Building the Skyline also has excellent photos and illustrations. It’s not for everyone, but if the statistics, economics, and history of New York’s skyscrapers appeal to you, then this is the book to get.

Hat tip: Michael Hendrix.

Shenzhen, a city in southern China known for electronics manufacturing, stood out last year, completing 11 such skyscrapers. That’s more than the US and Australia combined. The city was also China’s hottest real estate market last year.

Next was Chongqing, noted for its fast GDP growth (link in Chinese), and Guangzhou, which completed a new finance center with 111 stories and especially fast elevators.

There is much more information at the link.

That is the topic of my latest Bloomberg column, here is one excerpt:

One of the biggest objections to recent globalization is that it extended international trade at a destabilizing pace. Whether or not you agree with this negative assessment, from 1950 to 2008, international trade grew about three times faster than global gross domestic product. Since then, cross-country trade has grown much more slowly, at about the pace of global GDP growth or perhaps slower. For better or worse, that is a significant deceleration.

Elites didn’t just decide trade growth had to be slowed down. Rather, the initial rapid growth had some self-reversing properties built in. For instance, China’s growth and exports slowed down as the economy matured and wages rose, trade-intensive Europe became a smaller percentage of the global economy, and protectionist nontariff pressures have recently been rising.

The wisdom behind globalization isn’t a belief that it will be steered by very wise elites. Rather, most economic processes show elements of convergence, stability and mean-reversion, without anyone planning them.

The conclusion:

I’m not saying that all is well, as I see significant possibilities for instability in the current political configuration. But the elites have in fact been working at their job, and now it is up to voters to catch up in their understanding.

Do read the whole thing.

The excellent Douglas Irwin has a new NBER paper on that question, here is one excerpt:

Hayek (1937, 64) leveled three main criticisms against flexible exchange rates, all of which were frequently repeated during this period. First, flexible exchange rates would give rise to speculative capital flows that would be destabilizing; specifically, capital movements would reinforce exchange rate shifts arising from payments imbalances, thereby magnifying volatility and “turn what originally might have been a minor inconvenience into a major disturbance.” Second, flexible exchange rates would lead to competitive depreciations, the flexible rate counterpart to competitive devaluations, which would encourage a return to mercantilism and an increase in trade barriers. “Without stability of exchange rates it is vain to hope for any reduction of trade barriers,” he concluded (1937, 74n). Third, exchange rate instability would create risks that would discourage international trade and deter long-term foreign investment.

Frank Graham and Charles Whittlesey, both at Princeton, were among the few American economists who favored complete floating rates and monetary independence.  Now what might account for such a difference in opinion?

1. They hadn’t yet learned that fixed rate systems just weren’t politically stable, but we now know this with the benefit of hindsight, including the failures of Bretton Woods and a new understanding that competitive devaluations don’t have to be so disastrous.

2. They were good economists, and we are plain, simple idiots.

3. Heavy-duty manufacturing exports, with only a few major exporting countries, and a lot of FDI potential in the periphery, plus plenty of highly illiquid currencies, actually militated in favor of fixed rather than floating rates at that time.

4. During that period people thought high levels of international cooperation were necessary to solve problems, and this stemmed in part from the failures of World War I and later World War II.  If you favor “international cooperation” as a general value, you might then also tend to mood affiliate with the notion of fixed exchange rates.

I believe that factors #1-4 all might play a role in the complete explanation here.  Am I overlooking something?

Hunt Allcott and Matthew Gentzkow have a new paper (pdf) on this topic.  I haven’t had a chance to look at it, but here is the bottom line:

… we find: (i) social media was an important but not dominant source of news in the run-up to the election, with 14 percent of Americans calling social media their “most important” source of election news; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared eight million times; (iii) the average American saw and remembered 0.92 pro-Trump fake news stories and 0.23 pro-Clinton fake news stories, with just over half of those who recalled seeing fake news stories believing them; (iv) for fake news to have changed the outcome of the election, a single fake article would need to have had the same persuasive effect as 36 television campaign ads.

Self-recommending…

From Garrett S. Christensen and Edward Miguel, from their survey of methodological problems in economic research:

Another potentially useful tool is post-publication peer review.  Formalizing post-publication peer review puts us in relatively uncharted waters.  Yet it is worth noting that all four of the AEA’s American Economic Journals allow for comments to appear on every article’s official webpage post-publication (anonymous comments are not allowed).  The feature does not appear to be widely used, but in one case…comments placed on the website have actually resulted in changes to the article between its initial online pre-publication and the final published version.

One of the biggest problems with “economics as a science” is that economists themselves cannot usually admit how irrelevant so much of the work — even the quality work — turns out to be.  I’m all for worrying about reproducibility, transparency, and the like, but sometimes I feel those micro-debates distract our attention from this bigger and broader problem and indeed help to obscure that problem.

Addendum: This website, JournalTalk, does the same thing.

That is the new and truly excellent biography of Paul Samuelson by Roger E. Backhouse; volume I alone, which covers only up to 1948, is over 700 pp.  So far I find it gripping; here is one bit:

…he ascribed his intelligence to genetics: “I began as an out-and-out believer in heredity.  My brothers and I were smart kids.  My cousins all weighed in above the average.”  He was congenitally smart and made no secret of it, at one point noting that in the early 1950s he was prescribed some medication that dulled his mind, giving him for the first time insight into “how the other half lives.”

Are you up for a 14 pp. discussion of what Samuelson learned from Gottfried Haberler?  I sure am…and if you are wondering, Lawrence Klein was the first student to complete a PhD in economics at MIT.

That is the topic of my latest Bloomberg column, here is one excerpt:

At a further margin, government’s contribution to the health care, retirement and education sectors will also seem inadequate, because at such high prices a government really cannot pay for everything. A heated political debate will ensue. Progressives will argue that significant human needs are being neglected, and they will be able to point to numerous supportive anecdotes. Conservatives will argue that the fiscal path behind such policies is unsustainable, and they will be right, too. Because it will feel to voters that government isn’t doing a good job in these high-cost areas, the conservative view will get further traction. Libertarians may promote radical spending cuts, hoping for much higher productivity growth, but the government interventions are built in so thickly that that strategy could take a long time to pay off, and in the meantime it won’t look like a political winner.

All of the various sides may be correct in their major claims, but none will have a workable solution. This actually isn’t so far from where the health-care debate stands now, and where the retirement and nursing home debate is headed as America ages.

Do read the whole thing.

As a simple rule, reject any argument that asserts “my opponent X is leaving a health care need unfilled” because indeed that is always the case.  Within Obamacare, for instance, do you favor expanding the scope of the mandate at every margin?  Probably not.  The trick is to have a good argument for why yours is the Goldilocks position, not to note that those who subsidize health care less are…doing less.  There is always someone who wants to subsidize more than you do, so fight Parfit’s “war on two fronts.”

By Vlad Tarko; order your copy here.  Here are two excerpts:

She went to Beverly Hills High School, across the street from her house.  “I’m very grateful for that opportunity,” she later recalled, “because 90 percent of the kids who went to Beverly Hills High School went on to college.  I don’t think I would have gone to college if not for that environment.”  She recalled that her “mother didn’t want me to go to college — [she] saw no reason whatsoever to do that…”

“Basically I put my husband through law school,” she recalled…Her own [first] husband objected to her getting a PhD, which led her to divorce him.

This book captures the essence of Elinor Ostrom.

The editor of this truly excellent book is Timothy N. Ogden, the subtitle is Perspectives on Randomized Trials in Development Economics, and the contributors include Angus Deaton, Dean Karlan, Lant Pritchett, David McKenzie, Judy Gueron, Rachel Glennerster, Chris Blattman, and yours truly, with a focus on randomized control trials and other experiment-related methods.  Here is one bit from the interview with me:

I would say that just about every reputable RCT has shifted my priors.  Literally every one.  That’s what’s wonderful about them, but it’s also the trick.  You might ask, “why do they shift your priors?”  They shift your priors because on the questions that are chosen, and ones that ought to be chosen, theory doesn’t tell us so much.  “How good is microcredit?” or “What’s the elasticity of demand for mosquito nets?”  Because theory doesn’t tell you much about questions like that, of course an RCT should shift your priors.  But at the same time, because theory hasn’t told you much, you don’t know how generalizable the results of those studies are.  So each one should shift your priors, and that’s the great strength and weakness of the method.

Now, you asked if any of the results surprised me.  I think the same reasoning applies.  No, none of them have surprised me because I saw the main RCT topics to date as not resolvable by theory.  So they’ve altered my priors but in a sense that can’t shake you up that much.  If you offer a mother a bag of lentils to bring her child in to be vaccinated, how much will that help?  Turns out, at least in one part of India, that helps a lot.  I believe that result.  But 10 years ago did I really think that if you offered a mother in some parts of India a bag of lentils to induce her to bring her kids in for vaccination, it wouldn’t work so well?  Of course not.  So in that sense, I’m never really surprised.

And this:

One of my worries is RCTs that surprise some people.  Take the RAND study from the 1970s, which found that healthcare doesn’t actually make people much healthier.  You replicate that, more or less, in the recent Oregon Medicaid study.  When you have something that surprises people, they often don’t want to listen to it.  So it gets dismissed.  It seems to me that’s quite wrong.  We ought to work much more carefully on the cases where RCTs are surprising many of us, but we don’t want to do that.  So we kind of go RCT-lite.  We’re willing to soak up whatever we learn about mothers and lentils and vaccinations, but when it comes to our core being under attack, we get defensive.

I very much recommend the book, which you can purchase here.  Interviews are so often so much better than just letting everyone be a blowhard, and Ogden did a great job.

Maybe not; possibly patents were more effective.  Here is some new research from B. Zorina Khan, entitled “Prestige and Profit: The Royal Society of Arts and Incentives for Innovation, 1750-1850”:

Debates have long centered around the relative merits of prizes and other incentives for technological innovation. Some economists have cited the experience of the prestigious Royal Society of Arts (RSA), which offered honorary and cash awards, as proof of the efficacy of innovation prizes. The Society initially was averse to patents and prohibited the award of prizes for patented inventions. This study examines data on several thousand of these inducement prizes, matched with patent records and biographical information about the applicants. The empirical analysis shows that inventors of items that were valuable in the marketplace typically chose to obtain patents and to bypass the prize system. Owing to such adverse selection, prizes were negatively related to subsequent areas of important technological discovery. The RSA ultimately became disillusioned with the prize system, which they recognized had done little to promote technological progress and industrialization. The Society acknowledged that its efforts had been “futile” because of its hostility to patents, and switched from offering inducement prizes towards lobbying for reforms to strengthen the patent system. The findings suggest some skepticism is warranted about claims regarding the role that elites and nonmarket-oriented institutions played in generating technological innovation and long-term economic development.

I consider the origins of modern science to be a still under-studied topic.

It’s long been known that the Chinese government hires people to support the government with fabricated posts on social media. In China these people are known as the “50c party”, so called because the posters were rumored to be paid 50 cents (5 jiao, or about $0.08) to write the posts. The precise nature and extent of the 50c party have heretofore been unknown. But in an amazing new paper, Gary King, Jennifer Pan and Margaret Roberts (KPR) uncover a lot of new information using statistical sleuthing and some unusual and controversial real-world sleuthing.

KPR’s data-lever is an archive of leaked emails from the Propaganda Office of Zhanggong. The archive included many 50c posters who were sending links and screenshots of their posts to the central office as evidence of their good work. Using these posts, KPR are able to trace the posters through many social media accounts and discover who the posters are and what they are posting about. Both pieces of information reveal surprises.

First, the posters are government workers paid on salary, not, as the 50c phrase suggests, piece-rate workers. Second, and more importantly, it has long been assumed that propaganda posts would support the government with praise or criticize critics of the government. Not so. In fact, propaganda posts actively steer away from controversial issues. Instead, the effort appears to be to distract (especially to distract people from organizing collective action; thus distraction campaigns peak around times and places where collective action like marches and protests might become focal). KPR write:

Distraction is a clever and useful strategy in information control in that an argument in almost any human discussion is rarely an effective way to put an end to an opposing argument. Letting an argument die, or changing the subject, usually works much better than picking an argument and getting someone’s back up…

Debate is about appealing to an individual’s reason; debate is thus implicitly individualistic, respectful of rights and epistemically egalitarian. (As I argued earlier, respect for the truth is tied to individualism because any person may have truth and reason on their side.) Authoritarians don’t care about these things and so they lie and distract with impunity and without shame. In this case, the distraction is done subtly.

From the initial archive, KPR are able to create a statistical picture of 50c posters. In one of the most remarkable parts of the paper they use this picture to identify many other plausible 50c posters not in the original archive. Then KPR test their identification with a kind of academic catfish–essentially they trick the 50c posters into self-identifying. It’s at this point that KPR’s paper begins to read more like the description of a CIA op than a standard academic paper.

We began by creating a large number of pseudonymous social media accounts. This required many research assistants and volunteers, having a presence on the ground in China at many locations across the country, among many other logistically challenging complications. We conducted the survey via “direct messaging” on Sina Weibo, which enables private communication from one account to another. With IRB permission, we do not identify ourselves as researchers and instead pose, like our respondents, as ordinary citizens.

Using their own fake accounts, KPR directly message people they think are 50c posters with a message along the lines of:

I saw your comment, it’s really inspiring, I want to ask, do you have any public opinion guidance management, or online commenting experience?

The question is phrased in a positive way and it uses the official term “public opinion guidance” rather than the 50c term which has a negative connotation. Amazingly, 59% of the people KPR identify as 50c posters answer yes, essentially outing themselves.

Now, one might wonder whether such a question has evidentiary value, but KPR do a clever validation exercise. First, they ask the same question of people from the original leaked archive, people whom KPR know are actual 50c posters. Second, they ask the same question of people who are very unlikely to be 50c posters. The result is that 57% of the known 50c posters answer yes, almost exactly the same percentage (59%) as in the predicted 50c sample. At the same time, only 19% of the posters known not to be 50c answer yes (that doesn’t mean that 19% are 50c, but rather that 19% is a measure of the noise created by asking the question in a subtle way). What’s important is that the large gap of nearly 40 points gives good statistical grounds for validating the predicted 50c sample.
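To see why a gap of that size carries statistical weight, here is a minimal sketch of the comparison as a simple two-proportion z-test. The 57% and 19% rates come from the passage above; the sample sizes are placeholders I have invented for illustration, not KPR’s actual counts:

```python
import math

def two_proportion_z(yes1: int, n1: int, yes2: int, n2: int) -> float:
    """z-statistic for the difference between two 'yes' rates."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 57% of known 50c posters vs. 19% of known non-50c posters answer "yes".
# Sample sizes of 100 are placeholders, not the paper's actual numbers.
z = two_proportion_z(yes1=57, n1=100, yes2=19, n2=100)
print(f"z = {z:.1f}")  # ~5.5 even with these modest samples
```

Even with modest placeholder samples the difference is several standard errors from zero, which is why the predicted 50c sample looks well validated.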

Using this kind of analysis and careful, documented extrapolation, KPR:

…find a massive government effort, where every year the 50c party writes approximately 448 million social media posts nationwide. About 52.7% of these posts appear on government sites. The remaining 212 million posts are inserted into the stream of approximately 80 billion total posts on commercial social media sites, all in real time. If these estimates are correct, a large proportion of government web site comments, and about one of every 178 social media posts on commercial sites, are fabricated by the government. The posts are not randomly distributed but, as we show in Figure 2, are highly focused and directed, all with specific intent and content.

As if this weren’t enough, an early version of KPR’s paper leaked, and when the Chinese government responded, KPR became part of the story that they had meant to observe. The government’s response is now in turn used in this paper to verify some of KPR’s arguments. Very meta.

It took courage to write this paper. I do not think any of the authors will be traveling to China any time soon.