The Piketty-Saez-Zucman response to Auten and Splinter

A number of you have asked me what I think of their response.  The first thing I noticed is that Auten and Splinter make several major criticisms of PSZ, and yet PSZ respond to only one of them.  On the others they are mysteriously silent.

The second thing I noticed is that PSZ have been trying to deploy the slur of “inequality deniers” against Auten and Splinter.  I take that as a bad epistemic sign.

I was in the midst of writing a longer post, but then I received the following from Splinter, and I cannot come close to his efforts or authority:

Here is a short response to yesterday’s comments by Piketty, Saez, and Zucman (PSZ) on Auten and Splinter (forthcoming in JPE). These are variations on prior comments that Jerry and I addressed in 2019 and 2020. 

First, PSZ say audit data suggest adding underreported income implies little change in top 1% shares. We agree. But their approach increases recent top 1% shares by about 1.5 percentage points, with about 50% of underreported business income going to the top 1% by reported income. However, Johns and Slemrod (2012) found only 5% of underreporting went to the top 1% by reported income. This discrepancy arises because PSZ allocate underreported income proportional to reported positive income, which ignores that a substantial share of business underreporting (about 40%) goes to individuals with reported negative total income, where misreporting rates are the highest (Table B3 here). The concentration of underreporting at the bottom of the reported distribution causes substantial upward re-ranking when adding underreported income, but that’s mostly ignored in the PSZ approach. The PSZ approach also implies that someone who decreases their underreporting rate by increasing their reported income is allocated more underreporting. That’s backwards.

In contrast, our approach fits prior estimates from audit data, makes use of many years of audit data, and improves upon prior approaches. We find that underreported income slightly lowers top 1% pre-tax income shares and slightly increases after-tax income shares (Figure B6 here), which is consistent with the audit data. For example, 16% of underreporting is in our top 1% ranked by true income, far less than PSZ’s nearly 50% allocation and a bit under the 27% in Johns and Slemrod because we improve upon prior approaches that misallocate undetected underreporting (discussion here). Contrary to the assertions and approach of PSZ, our Figure B5 (bottom panel, here) shows that re-ranking between reported and true (reported plus underreported) income matters substantially. PSZ appear confused about the difference between ranking by reported versus true income. Our underreporting allocations (like theirs) must be based on reported income because that is all one observes with the primary tax data we both use. But, unlike their method, our allocations are done such that we match the re-ranking implied by audit data. Therefore, we match the distributions by both reported and true income after re-ranking (top two panels of Figure B5, here).

Second, income missing from individual tax returns has shifted from the top to outside the top. The shift from the top was from movements out of closely-held C corporations, whose income is missing from individual tax returns, to passthrough businesses, whose income is on individual tax returns. This created growth in the top share of taxed business income. The growth in PSZ’s top share of untaxed business income, however, is due to their skewed allocation, which re-allocates underreported income to the top of the distribution. Outside the top, the growth of missing income is from increasing tax-exempt employee compensation, especially from health insurance (see Figure B16 here).

Third, PSZ suggest that top wealth and capital income shares should run parallel over the long run. This is a problematic assumption. Economic changes can push down capital income shares relative to wealth shares. For example, interest rates fell dramatically between 1989 and 2019—the federal funds effective rate fell from 9 to 2 percent. This tends to decrease the ratio of interest income to bond wealth, so falling interest rates likely increased the gap between top income and wealth shares. Also, top wealth patterns are driven largely by passthrough business, but passthrough business is fully or two-thirds excluded from PSZ’s definition of “capital” income here. When fully including passthrough business, the Auten–Splinter top 1% non-housing “capital” income share increased by 5 percentage points between 1989 and 2019, about two-thirds of the Federal Reserve’s estimated increase in top 1% wealth shares. Therefore, the Auten-Splinter estimates are broadly consistent with increasing top wealth shares.

The Auten–Splinter approach is fundamentally a data-driven approach (Table B2 here). Based on Saez and Zucman’s (2020) suggestions and conversations, our more recent work adds new uses of data to account for high-income non-filers, flexible spending accounts, and depreciation issues from expensing. Where we rely on assumptions, alternative assumptions suggest top 1% shares change little (see Table 5). Our headline finding of relatively flat long-run top 1% after-tax income shares is robust.
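To illustrate the re-ranking discussion above, here is a purely illustrative toy simulation (made-up distributions and numbers, not from either paper): allocating a fixed pot of underreported income in proportion to reported income leaves the top 1% share essentially unchanged, while concentrating that same pot among low reported incomes lowers the top share when people are ranked by reported income but raises it once they are re-ranked by true income.

```python
# Purely illustrative toy simulation (made-up numbers, not from Auten-Splinter
# or PSZ): how the allocation rule for underreported income interacts with
# re-ranking between reported and true (reported + underreported) income.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy distribution of reported income with a heavy right tail.
reported = rng.lognormal(mean=10.5, sigma=1.0, size=n)

# A fixed aggregate pot of underreported income to allocate (20% of reported).
pot = 0.2 * reported.sum()

def top1_share(income, rank_by):
    """Share of total income going to the 1% ranked highest by rank_by."""
    cutoff = np.quantile(rank_by, 0.99)
    return income[rank_by >= cutoff].sum() / income.sum()

# Rule A: allocate in proportion to reported income (roughly the rule the
# quoted passage attributes to PSZ).
alloc_prop = pot * reported / reported.sum()

# Rule B: concentrate the pot among low reported incomes, where the quoted
# passage says misreporting rates are highest (toy weights, for illustration).
weights = 1.0 / reported
alloc_bottom = pot * weights / weights.sum()

print("top 1% share of reported income:", round(top1_share(reported, reported), 3))
for name, alloc in [("proportional", alloc_prop), ("bottom-heavy", alloc_bottom)]:
    true_income = reported + alloc
    print(f"{name:>12}: ranked by reported {top1_share(true_income, reported):.3f}, "
          f"ranked by true {top1_share(true_income, true_income):.3f}")
```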

Auten and Splinter had presented versions of those points previously, as they note.  Yet PSZ present them as naive fools who somehow forgot to think about these issues at all, and PSZ do not, in their reply, consider these more detailed presentations of the points and defenses of the Auten-Splinter estimates.  So I don’t think of the PSZ response as especially strong.

Here are relevant Auten and Splinter points from back in 2020.  Phil Magness offers commentary.

Do people trust humans more than ChatGPT?

We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. There is an increase in the rate of costly fact-checking by participants who are explicitly informed. These outcomes suggest that trust in AI-generated content is context-dependent.

That is from a new paper by Joy Buchanan and William Hickman.

More on Pharma Pricing

A reader in the industry writes with excellent comments on yesterday’s post on the Chris Rock hypothesis.

Long-time reader, first-time emailer–love the show ;).  I’ve been in and around the pharma industry for nearly 30 years, and I’ve spent time in gene therapy/gene editing where the one-time cure model dominates.  Some thoughts on chronic vs. curative dosing and why a curative therapy is likely worth less:

  1. There’s a potential mismatch between payment for a drug and the accrual of value that justifies its price point.  If I take a curative therapy for a disease like hemophilia (e.g., the new $2.9 M drug, Roctavian), the insurance company immediately incurs the cost of the drug, but the prime financial benefits (no more expensive chronic therapy, reduced expensive visits to the hospital) accrue over time.  Patients switch insurance companies as they switch jobs, so the “payout” that justifies the treatment price accrues to the subsequent insurers.  On chronic therapy, if a patient switches to another insurer, the new insurer picks up the payments so there’s no such disconnect.  Rationally, insurers should pay more for chronic therapy, even in present value terms.
  2. Durability of effect is unknown until it isn’t.  It’s difficult to charge for a drug as a cure until such time as you know it’s a cure and have proven it as such.  How long do you have to follow treated patients to prove that?  Gene therapies are starting to show waning efficacy in some cases.  The FDA mandates that you cannot include something in the drug label that has not been proven.  Payors will point to a label and ask why they should pay for something that’s not on there.  This can be mitigated by programs where the drug company pays back a portion of the cost if it doesn’t work, but collecting on that seems like a huge hassle–how do you prove that it stopped working (I can hear Mike Munger–“the answer to your question is transaction costs…”)?
  3. Sticker shock and headline numbers.  A drug that costs $3 M or more is something the White House can use at a podium and get a reaction.  Never mind that it gets paid back pretty quickly by discontinuing a therapy that costs hundreds of thousands per year–life-saving drugs should not cost millions of dollars!  This puts downward pressure on one-time cures.

So, my perspective is that it is more difficult for a one-time treatment/cure to capture the value it creates vs. a chronic therapy.  So, why did Lilly shares tumble on the news?  More important than duration of therapy is market share vs. competitors.  A more permanent solution (with no rebound after discontinuation) would more than make up for lost revenue on the back end by taking share from the competition on the front end.  And THAT is why pharma is incentivized to pursue cures.  Making a better drug will beat the competition, and a cure is a better drug.  Big Pharma doesn’t necessarily pursue curative treatments directly because they don’t know how.  Technologies like CRISPR and mRNA have to come up via biotechs that are purpose-built to maximize the platforms’ value and to understand/navigate the underlying technology.  That said, Big Pharma has inked HUGE deals to gain access to these technologies (e.g., Pfizer/BioNTech), so they do seem to come around eventually.

These are all excellent points. On point 1, note that Medicaid creates similar incentives in that insurance firms want to farm out long-term costs onto Medicaid.
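A rough numerical sketch of the point-1 mismatch, with made-up figures not taken from the email (a $300,000-per-year chronic therapy, ten years of benefit from the cure, three expected years of enrollment, a 5 percent discount rate): the current insurer captures only the savings that accrue before the patient switches plans, so its willingness to pay for a cure falls well short of the cure's full present value.

```python
# Minimal sketch, hypothetical numbers: why an insurer that expects to lose
# the patient in a few years values a cure at less than its full savings,
# while it pays for chronic therapy only while the patient is enrolled.

chronic_annual_cost = 300_000   # hypothetical yearly cost of the chronic therapy
horizon_total = 10              # years of therapy the cure avoids
horizon_insurer = 3             # years the current insurer expects to keep the patient
r = 0.05                        # hypothetical discount rate

def pv(annual, years, rate):
    """Present value of an annual cost paid at the start of each year."""
    return sum(annual / (1 + rate) ** t for t in range(years))

full_value_of_cure = pv(chronic_annual_cost, horizon_total, r)       # savings to all future payers
insurer_value_of_cure = pv(chronic_annual_cost, horizon_insurer, r)  # savings this insurer keeps

print(f"Savings over {horizon_total} years:                {full_value_of_cure:,.0f}")
print(f"Savings before the patient switches plans: {insurer_value_of_cure:,.0f}")
# The gap accrues to later insurers (or to Medicaid/Medicare), which is why
# the insurer paying today will rationally bid less than the cure's full value.
```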

Point 3 suggests that we should be especially wary of price controls on cures. Sticker shock may drive us to price controls, leaving us with treatments that look cheaper but are even more expensive in the long run (and by present discounted value). Sovaldi is a case in point. Its initial $84,000 price generated huge opposition even though it typically cured hepatitis C infections, avoided many later liver cancers, and saved money overall. Indeed, as I pointed out earlier, Sovaldi so reduced the number of liver transplants that more people with other diseases ended up with life-saving transplants.

This is also what I meant by starting in the right place. If you start in the right place you have some hope of getting to real causes and possible solutions.

My Conversation with Fuchsia Dunlop

Here is the audio, video, and transcript, conducted over a long meal at Mama Chang restaurant in Fairfax.  Here is the episode summary:

As they dined, the group discussed why the diversity in Chinese cuisine is still only just being appreciated in the West, how far back our understanding of it goes, how it’s represented in the Caribbean and Ireland, whether technique trumps quality of ingredients, why certain cuisines can spread internationally with higher fidelity, what we can learn from the different styles in Indian and Chinese cooking, why several dishes on the table featured Amish ingredients, the most likely mistake people will make when making a stir fry, what Lydia has learned managing an empire of Chinese restaurants, Fuchsia’s trick for getting unstuck while writing, and more.

Joining Tyler, Fuchsia, and Lydia around the table were Dan Wang, Rasheed Griffith, Fergus McCullough, and Sam Enright.

Here is one excerpt:

WANG: Yes, that’s right. If I can ask a follow-up question on this comparison between India and China. Maybe this is half a question also for Tyler. Why do we associate Indian cuisine so much more with long simmers, whereas Chinese cuisine — of course, it is a little bit of everything, as Fuchsia knows so well, but it is often a little bit more associated with quick fries. What is the factor endowment here of these two very big countries, very big civilizations having somewhat divergent paths, as we imagine, with culinary traditions?

DUNLOP: That’s a really interesting question. It’s hard to answer because I don’t really know anything about Indian food. I did have a really interesting conversation with an Indian who came on my tour to Yunnan earlier this year because I was speculating that one of the reasons that Chinese food is so diverse is that the Chinese are really open-minded, with very few taboos. Apart from Muslims eating halal food and some Buddhists not eating meat, there’s a great adventurous open-mindedness to eating.

Whereas in India, you have lots of taboos and religious and ritual restrictions. That’s one reason that you would think it would be a constraint on the creativity of Indian food. But this Indian I was talking to, who’s a food specialist — he reckoned that the restrictions actually forced people to be more creative. He was arguing that Indian food had all the conditions for diversity that Chinese does.

In terms of cooking methods, it’s hard to say. Again, I don’t know about Indian food, but the thing about China is that there’s been this intense thoughtfulness about food, really, for a very long time. You see it in descriptions of food from 2,000 years ago and more.

In the Song Dynasty, this incredible restaurant industry in places like Hangzhou, and innovation and creativity. I suppose that when you are thoroughly interested in food like the Chinese and thinking about it creatively all the time, you end up having a whole plethora of different cooking methods. That’s one of the striking things about Chinese cuisine, that you have slow-cooked stews and simmered things and steamed things and also stir-frying. That might explain why several different methods have achieved prominence.

COWEN: Before I comment on that, Lydia, on the new dish, please tell us.

The dishes are explained as they were consumed, the meal was excellent, of course the company too.  A very good episode, highly rated for all lovers of Chinese food.  And here is Fuchsia’s new book, Invitation to a Banquet: The Story of Chinese Food, self-recommending.  And here are previous MR mentions of Fuchsia, including links to my two earlier CWTs with her.

Emergent Ventures, 30th cohort

Mike Ferguson and Natasha Asmi, Bay Area and University of Michigan, growing blood vessels in the lab.

Klara Feenstra, London, to write a novel about the tensions between Catholicism and modern life.

Snigdha Roy, UCLA, for a conference trip and trip to India, math and computation and biology.

Nikol Savova, Oxford, and Sofia, Bulgaria, podcast on Continental philosophy, mathematics.

Seán O’Neill McPartlin, Dublin, policy studies and YIMBY interests.

Olivia Li, NYC, geo-engineering, undergraduate dropout.

Suraj M. Reddy, High school, Newark, Delaware, 3-D printing and earthquakes.

Zhengdong Wang, USA and London, DeepMind, to advance his skills in thinking and writing.

Andrés Acevedo, Medellin, podcast about Colombia.

Luke Farritor, University of Nebraska, deciphering ancient scrolls, travel grant.

Hudhayfa Nazoordeen, Sri Lanka and Waterloo, hydroponics for affordable food. 

Thomas Des Garets Geddes, London, Sinification, China newsletter.

Chang Che, book project on the return of state socialism in China, USA/Shanghai.

Alexander Yevchenko, Toronto, ag tech for farmers.

There are more winners to be listed; please do not worry if you didn’t fit into this cohort.  And here is a list of previous winners.

Thursday assorted links

1. Robert Edgerton is underrated.

2. Artist Dana Schutz has been un-cancelled.  And Wisconsin DEI markets reestablished.

3. Profile of Byrne Hobart.

4. Doctor shortages don’t seem to affect subsequent mortality.

5. Branko Milanovic on how to treat books and authors.  And Branko’s book recommendations on capitalism.

6. How Israel ended its hyperinflation.

7. The great John Pocock has passed away.

8. Superalignment Fast Grants, from OpenAI and Eric Schmidt.

A Weighty Puzzle: Answers

Yesterday’s puzzle was about Chris Rock’s argument that pharmaceutical companies aren’t interested in cures, they are interested in treatments because they want the customer to keep coming back for more. The argument is common. So common that both ChatGPT and Claude completely botch this question. Claude, for example, says:

…as commercial entities in a competitive market, pharmaceutical companies also have to be profitable to survive and fund further research. In that sense, financially, an ongoing need to buy a treatment provides more direct revenue than a one-time cure.

Sigh. Claude is not nearly as funny as Chris Rock, but without Rock’s delivery and worldly cynicism, is the error now obvious?

Consider two lightbulbs: one lasts for 2 years, the other lasts for 1 year. Which lightbulb is more profitable to sell? Any sensible analysis must begin with the following simple point: A lightbulb that lasts for 2 years is worth about twice as much as a lightbulb that lasts one year. Thus, assuming for the moment that costs of production are negligible, there is no secret profit to be had from selling two 1-year lightbulbs compared to selling one 2-year lightbulb. The firm that sells 1-year lightbulbs hasn’t hit on a secret profit-sauce because its customers must come back for more. If it did, it could sell really profitable 1-month bulbs!

The same thing is true for pharmaceuticals. A treatment that lasts for 10 years is worth about ten times as much as an annual treatment. Or, to put it the other way, a treatment that lasts for 10 years is worth about the same as 10 annual treatments producing the same result. (n.b. yes, discounting, but discounting by both consumers and firms means that nothing fundamental changes.)
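To make the discounting caveat concrete, here is a minimal sketch with hypothetical numbers (a $10,000 annual treatment and a 5 percent discount rate, neither taken from the post): the most a buyer will pay for a ten-year cure is the present value of ten annual treatments, and that same present value is the most the seller can earn from the chronic therapy.

```python
# Minimal sketch, hypothetical numbers: present value of a chronic therapy
# vs. an equivalent one-time cure. Discounting applies to both the buyer's
# willingness to pay and the seller's revenue, so it creates no hidden
# advantage for selling the treatment year after year.

annual_price = 10_000   # hypothetical price of one year of treatment
years = 10              # years of benefit the cure delivers
r = 0.05                # hypothetical discount rate shared by buyer and seller

# Present value of ten annual treatments, paid at the start of each year.
pv_chronic = sum(annual_price / (1 + r) ** t for t in range(years))

# A cure delivering the same ten years of benefit can be priced at that PV,
# so the seller's discounted revenue is the same either way (~$81,000 here).
cure_price = pv_chronic

print(f"PV of {years} annual treatments: {pv_chronic:,.0f}")
print(f"Equivalent one-time cure price:  {cure_price:,.0f}")
```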

The simple argument starts us in the right place. We can then add arguments, on both sides, depending on context. In the case of Eli Lilly and Zepbound, I think the major argument to add is that investors were likely pricing in a small chance that Zepbound had longer-lasting effects than Wegovy, and when this was shown not to be true, the price of the stock dropped. Thus, investors were pricing in some chance that Zepbound could have had greater market power–the commenter Sure made this argument yesterday.

Another argument: Consumers might be rationally or irrationally myopic. A rational myopia, for example, might be brought about if consumers don’t believe claims of longer durability. Quite possible. Econ question number 2–other than waiting ten years, how could a firm convince buyers that its product was more durable than that of its competitors? (Hint: 🦚. Or you can find the answer in Modern Principles.) Econ question number 3–if consumers were irrationally myopic, would firms sell the treatment or sell the cure with a different pricing strategy?

The cost of producing durability also matters–a lot. Sometimes cures are cheaper (one pill is cheaper than 10) but sometimes cures are more expensive. If longer durability is more expensive, there will be a tradeoff–these lightbulbs are more expensive but I will have to replace them less often–and the market process will work things out, perhaps differently for different consumers.

There are also subtle issues with price discrimination (see here but also here for some ideas) and Coase’s durable-goods monopoly argument (which I think is completely wrong in this context), as well as other issues, but there is little point discussing the subtleties if we don’t get the big issues right.

The big issue to get right is that renting isn’t inherently more profitable than selling.

Addendum: When I pointed Claude to the above arguments, Claude responded “You make an excellent observation…there are good reasons why a one-time cure could potentially warrant an exceptionally high price point, well above an annual treatment cost. The pricing strategies pharmaceutical firms employ would analyze all these aspects in depth. Thank you for pushing me to recognize my flawed assumptions. I appreciate the opportunity to clarify my understanding here. Let me know if you have any other insightful points!”

I wonder if the commentators will be so wise and gracious?

John Milton as devout Muslim?

When we read Paradise Lost, we feel that Milton is a devout Muslim.  This is reflected in his rejection of Prelates and their mediation between God and His Creatures.  You also find Milton as a lover of life on earth.  He interprets the Bible in practical and personal ways.  He advocates divorce and considers man superior to woman.  He also hates the rituals of the church and the icons.  He draws on the Old Testament, not the New Testament.  For these reasons, I have already said that Milton was not a Christian, but rather a pious Muslim.

That is from Louis Awad, the Egyptian literary critic, reproduced in Islam Issa’s quite interesting Milton in the Arab-Muslim World.

Robert Sams on the future of crypto (from my email)

In response to this last crypto post of mine:

I’m glad you’ve put this one out there, as it’s a thesis I’ve been thinking about for many years and do not think it’s exotic at all.

For all the hand-wavy hypothesising about the future of AI autonomous agents, precious little attention is given to the role of legal personhood in the discussion. The very concept of “autonomous agent” is ambiguous in this regard. In one sense, it means some autonomously operated system that is acting _on behalf_ of someone or something else; in another sense, it means something that acts on its own behalf, it “has agency”. The distinction is critical, because it’s hard to see how an AI system can have agency if it cannot, on its own, own property, enter into contracts, sue and be sued. Having agency is more than being intelligent.

It’s pretty hard to imagine a scenario where jurisdictions start granting legal personhood to AI systems. There may be legal entities where human directors delegate corporate decision making to an AI, but there’s always an essential human-fiduciary-in-the-loop who has legal personhood and is the nexus of AI regulation. But blockchains upend this framework, offering an alternative infrastructure with a different model of ownership, contract, and dispute resolution in which a human fiduciary role is not an essential requirement. AIs can be first-class citizens in the crypto economy.

So having agency is more than being intelligent: you need “economic personhood” to autonomously interact in the real world, and blockchains provide the infrastructure for non-human economic persons. If an AI can buy its own GPU compute and other resources, and fund its opex by selling services people (or other AIs) value, the AI has economic personhood.

Crucially, these non-human economic persons do not need _general_ intelligence; they just need domain-specific capabilities that enable them to produce valuable output and continuously adapt to a competitive marketplace. That is why the idea is not very exotic at all, as the current capabilities of LLMs and blockchains are arguably sufficient for this scenario to materialise in the near term.

The obstacles seem to be more tractable problems, like: “how can the AI agent learn to trust the veracity of data it solicits and the quality of services (esp. GPU compute) it buys?” Whilst it sounds kind of funny, there’s an opportunity for human-operated service providers to build brands of trustworthiness with AI agents by doing things that are easy for context-aware humans but hard for AIs, like attesting to the veracity of a data feed (“is it really 41c in lower Manhattan today?”, “did USDJPY really rally 10% on the day?”).

AI-human trust games may turn out to be more effective than centralised human feedback loops operated by big AI tech, especially if the AIs are domain-specific and must strive for product-market fit to survive. And whilst AGI doomers will be predictably horrified by the prospect of AIs with economic personhood, my own contrary view is that our entire orientation to the subject will change if we see just how vulnerable to attack these AIs are once you cut the umbilical cord they currently have by being ensconced inside the trusted environments funded by big tech’s enormous balance sheets.

Finally, I suspect that an economic personhood orientation to the AI x-risk debate will improve the research and dialogue significantly. My own speculation is that we’ll eventually come to the conclusion that the telos of AGI is not a singularity but a plurality of competing A[G]Is. It seems more fruitful to ponder the respective comparative advantages of AIs vs humans in the domains of computational power and context awareness and explore the codependencies when these two classes of intelligent economic agent must compete and cooperate in a decentralised market.

Pomona facts of the day

The president now has nine vice presidents (up from four in 1990). The Dean of Students Office has gone from six persons in 1990 to sixty-five persons in 2016 (not counting administrative assistants). Academic Computing has gone from six persons in 1990 to thirty-six persons in 2016. The Office of Admissions has jumped from six to fifteen (again, none of these figures includes administrative assistants). The Office of Development (which formerly included Alumni Affairs) counted sixteen persons; now those renamed offices tally forty-seven persons all told. A few years ago Pomona created a new position, Chief Communications Officer; there are twenty-two persons (not counting administrative assistants) working for the CCO (yes, we have twenty-three persons working for Pomona’s PR!). There are all sorts of offices that have popped up in 2016 that never existed back in 1990 (all the following numbers denote administrators and directors and don’t include the administrative assistants for the office): Archives (2 persons); Asian American Resource Center (3); Career Development (11); Draper Center for Community Partnerships (6); Graduate Fellowships (1); Institutional Research (2); International Initiatives (1); Ombuds (1); Outdoor Education Center (2); Pacific Basin Institute (2); Quantitative Skills Center (1); Queer Resources Center (3); Sontag Center for Collaborative Creativity (6); Sustainability Office (2); Writing Center (2).

Those are from 2017; perhaps it has gotten much, much better since.  Here is the full piece by John E. Seery, recommended.

Wednesday assorted links

1. Göran Söllscher plays The Long and Winding Road.

2. “Hallucination is not a bug, it is LLM’s greatest feature.”

3. On commercial zoning liberalization.

4. Why are so many U.S. pedestrian deaths happening at night? (NYT)  And provocative sex is back at the movies (NYT).

5. Maurice Obstfeld argues that low real interest rates will continue.

6. Why do some areas recover from deindustrialization and others not?

7. What is most watched on Netflix?

8. All sorts of Mercatus fellowships, new listing and applications open.

9. Okie-dokie people, Magnus basically vindicated.

A Weighty Economics Puzzle

Yesterday a new study was released showing that patients on Eli Lilly’s Zepbound (tirzepatide) lost weight but regained a meaningful percentage after being switched to placebo. Eli Lilly stock “tumbled” on the news, e.g. here and here or see below. In other words, Eli Lilly stock fell when investors learned that to keep the weight off patients would have to continue to take Zepbound for life. Hmmm…that certainly violates what the man in the street thinks about pharmaceutical companies and profits. Chris Rock, for example, says the money isn’t in the cure, the money’s in the comeback. If so, shouldn’t this have been great news for Eli Lilly?

So why did Eli Lilly stock fall? Could it be that Chris Rock and the man in the street are wrong? I will leave this as an exercise for the reader.