
Economists and the FDA

Bottom line: The public thinks the FDA is great. Regular economists think it’s pretty good. And economists who specialize in the FDA think it’s pretty bad.

That is Bryan Caplan, read more here.

Addendum: I’ll grant that those who specialize in studying a particular agency may tend to be the critics.  That being said, the "man in the street" simply has not, in most cases, considered the economic criticisms of the FDA.

At this point, we all face a dilemma.  For instance Paul Krugman cites the predominance of academic Democrats as an argument against the Republican party.  Must he then accept this evidence on the FDA?  Must Caplan become a Democrat?  When is citing professional consensus opinion most persuasive?  What is the professional consensus on this question?

Alex and the FDA in Forbes

This week’s Forbes (the Nov. 1 issue) has a feature story on Alex’s work to make drug regulation more sensible.

Alex notes that off-label drug uses are largely unregulated. No proof of efficacy is required, and off-label drug prescriptions bring a net health gain; see this paper. Yet to get a new drug approved it must go through, in addition to Phase I trials,

…Phase II and Phase III trials, which typically take years and focus on efficacy as well as safety. The long wait can cost lives and runs up new-drug costs–to an estimated $900 million per successful drug.

Tabarrok says this system makes little sense; the FDA demands costly, time-consuming efficacy tests for some uses and no tests for others. And while the FDA allows off-label prescribing by docs, it strictly limits the drugmakers’ promotion of such uses to doctors and permits none at all to patients.

Alex argues that FDA regulation ought to be reduced, making the regulation of new and old drugs more consistent. But that is not all:

Tabarrok and [Dan] Klein also offer some alternative proposals at FDAReview.org. One is to make all FDA testing optional. Drugs that didn’t go through the process would be labeled “Not FDA Approved.” Under this approach, they say, “the FDA would become a genuinely voluntary institution, much like Underwriters Laboratories.” Another idea is for the FDA to award letter grades, A to D, to claims made by drugmakers, much as it is considering doing for health claims for foods and dietary supplements. The FDA could still have its say, but wouldn’t be able to impose long delays, since a new drug could be marketed at first as “unrated.”

At the least, Tabarrok argues, the FDA should permit drug companies to sell any drug that has been approved by other sophisticated drug regulators, such as those in Canada, Australia or the European Union. Under such a system U.S. patients would get speedier access to new medicines without losing out on safety protection.

Kudos to Alex; the only sorrow is that the on-line version does not reproduce the excellent photo of him in the magazine. But you can see that at your local Borders.

Becker on the FDA

In the latest Milken Institute Review, Nobel laureate Gary Becker argues (sign up required) that the FDA should permit drugs to be sold once they have passed a safety standard, i.e. a return to the pre-1962 system. He writes:

…a return to a safety standard alone would lower costs and raise the number of therapeutic compounds available. In particular, this would include more drugs from small biotech firms that do not have the deep pockets to invest in extended efficacy trials. And the resulting increase in competition would mean lower prices – without the bureaucratic burden of price controls…

Elimination of the efficacy requirement would give patients, rather than the FDA, the ultimate responsibility of deciding which drugs to try…To be sure, some sick individuals would try ineffective treatments that would otherwise have been prevented from reaching market under present FDA regulations. But the quantity of reliable health information now available with only a little initiative is many times greater than when the efficacy standard was introduced four decades ago.

Dan Klein and I have written extensively on this issue at our web site, FDAReview.org, and in our latest paper Do Off Label Drug Practices Argue Against FDA Efficacy Requirements?

Blood supply and the FDA

Have you ever heard of Chagas disease? It is rare in the United States but common in Latin America, where 18 million people are infected and 50,000 die of it every year. Some little thingie crawls down your mouth and sucks your blood when you are sleeping (lovely), beware the thatched hut, and next thing you know, maybe about ten or thirty years later, your weakened heart or organs explode. There is no known vaccine, cure, or treatment.

Chagas is now making its way into the United States blood supply. Ideally, all donated blood should be screened for Chagas. But, can you believe this, the FDA needs to approve all blood tests of this kind. They haven’t approved any test for Chagas, nor have they shown much urgency in this regard, here is the full story.

About 30 tests are currently in use in Latin America, but none would appear to meet the FDA’s accuracy guidelines. In the meantime it appears someone would prefer that we have no test at all.

The New York Times put it as follows:

The failure of the blood industry and its regulators to develop a test since it was endorsed by a Blood Products Advisory Committee in 1989 seems to be a combination of bureaucratic inertia and divided responsibility for such a decision. Blood banks cannot use a test that the F.D.A. has not approved. The agency usually defers to its advisory committees, which have many experts from blood banks as members.

“It’s a political process that is not always fully engaged,” said Dr. Stuart J. Kahn of the Infectious Disease Research Institute, a Seattle group hunting cures for tropical diseases.

Whatever you think of the FDA as a regulator of drugs, this kind of bureaucratic control is hard to understand. Now it is no longer enough to beware the thatched hut; you have to worry about the blood supply as well.

Response from Devin Pope, on religious attendance

All of this is from Devin Pope, in response to Lyman Stone (and myself).  Here was my original post on the paper, concerning the degree of religious attendance.  I won’t double indent, but here is Devin and Devin alone:

“I’m super grateful for Lyman’s willingness to engage with my recent research on measuring religious worship attendance using cellphone data. Lyman and I have been able to go back and forth a bit on Twitter/X, but I thought it might be useful to send a review of this to you Tyler.

For starters, I appreciate that Lyman and I agree on a lot of stuff about the paper. He has been very kind by sharing that he agrees that many parts of my paper are interesting and “very cool work”. Where we disagree is about whether the cellphone data can provide a useful estimate for population-wide estimates of worship attendance. Specifically, Lyman’s concerns are that due to people leaving their cellphones at home when they go to church and due to questionable cellphone coverage that might exist within church buildings, the results could be super biased. He sums up his critiques well with the following: “Exactly how big these effects are is anyone’s guess. But I really think you should consider just saying, `This isn’t a valid way of estimating aggregate religious behavior. But it’s a great way to look at some unique patterns of behavior among the religious!’ Don’t make a bold claim with a bunch of caveats, just make the claim you actually have really great data for!” This is a very reasonable critique and I’m grateful for him making it.

My first response to Lyman’s concerns is: we agree! I try to be super careful in how the paper is written to discuss these exact concerns that Lyman raises. Even the last line of the abstract indicates, “While cellphone data has limitations, this paper provides a unique way of understanding worship attendance and its correlates.”

Here is where we differ though… To my knowledge, there have been just 2 approaches used to estimate the number of Americans who go to worship services weekly (say, 75% of the time): Surveys that ask people “do you go to religious services weekly?” and my paper using cell phone data. It is a very hard question to answer. Time-use surveys, counting cars in parking lots, and other methods don’t allow for estimating the number of people who are frequent religious attenders because of their repeated cross-sectional designs.

There are definitely limitations with the cellphone data (I’ve had about 100 people tell me that I’m not doing a good job tracking Orthodox Jews!). I know that these issues exist. But survey data has its own issues. Social desirability bias and other issues could lead to widely incorrect estimates of the number of people who frequently attend services (and surveys are going to have a hard time sampling Orthodox Jews too!). Given the difficulty of measuring some of these questions, I think that a new method – even with limitations – is useful.

At the end of the day, one has to think hard about the degree of bias of various methods and think about how much weight to put on each. The degree of bias is also where Lyman and I disagree. In my paper, I document that the cell phone data do not do a great job of predicting the number of people who go to NBA basketball games and the number of people who go to AMC theaters. I both undercount overall attendance and don’t predict differences across NBA stadiums well at all.

The reason why Lyman is able to complain about those results so vociferously is because I’m trying to be super honest and include those results in the paper! And I don’t try to hide them. On page 2 of the paper I note: “Not all data checks are perfect. For example, I undercount the number of people who go to an AMC theater or attend NBA basketball games and provide a discussion of these mispredictions.”

There are many other data checks that look really quite good. For example, here is a Table from the paper that compares cellphone visits as predicted by the cellphone data with actual visits using data from various companies:

[Table comparing cellphone-predicted visits with company-reported actual visits not reproduced.]

The cellphone predictions in the above table tend to do a decent job predicting many population-wide estimates of attendance to a variety of locations. The one large miss is AMC theaters where we undercount attendance by 30%. Now about half of that undercount is because the data are missing a chunk of AMC theaters (this is not due to a cellphone pinging issue, but due to a data construction issue). But even if one were to make that correction, we undercount theater attendance by 15%.

Lyman argues that one should be especially worried about undercounting worship attendance due to people leaving their phones at home. I agree that this is a huge concern that is specific to religious worship and doesn’t apply in the same way for trips to Walmart. I run and report results from a Prolific Survey (N=5k) that finds that 87% of people who attend worship regularly indicate that they “always” or “almost always” take their phone to services with them. So definitely some people are leaving their phones at home, but this survey can help guide our thinking about how large that bias might be. Are Prolific participants representative of the US as a whole? Certainly not. There is additional bias that one should think about in that regard.

Overall, my view is that estimating population-wide estimates for how many people attend religious services weekly is super hard and cellphone data has limitations. My view is that other methods (surveys) also have substantial limitations. I do not think the cellphone data limitations are as large as Lyman thinks they are and stand by the last line of the abstract that once again states, “While cellphone data has limitations, this paper provides a unique way of understanding worship attendance and its correlates.”

All of that was Devin Pope!

Michael Cook on Iran

Our primary concern in this chapter will be Iran, though toward the end we will shift the focus to Central Asia.  We can best begin with a first-order approximation of the pattern of Iranian history across the whole period.  It has four major features.  The first is the survival of something called Iran, as both a cultural and a political entity; Iran is there in the eleventh century, and it is still there in the eighteenth.  The second is an alternation between periods when Iran is ruled by a single imperial state and periods in which it breaks up into a number of smaller states.  The third feature is steppe nomad power: all imperial states based in Iran in this period are the work of Turkic or Mongol nomads.  The fourth is the role of the settled Iranian population, whose lot is to pay taxes and — more rewardingly — to serve as bureaucrats and bearers of a literate culture. With this first-order approximation in mind, we can now move on to a second-order approximation in the form of an outline of the history of Iran over eight centuries that will occupy most of this chapter.

That is from his new book A History of the Muslim World: From its Origins to the Dawn of Modernity.  I had not known that in the early 16th century Iran was still predominantly Sunni.  And:

There were also Persian-speaking populations to the east of Iran that remained Sunni, and within Iran there were non-Persian ethnic groups, such as the Kurds in the west and the Baluchis in the southeast, that likewise retained their Sunnism.  But the core Persian-speaking population of the country was by now [1722] almost entirely Shiite.  Iran thus became the first and largest country in which Shiites were both politically and demographically dominant.  One effect of this was to set it apart from the Muslim world at large, a development that gave Iran a certain coherence at the cost of poisoning its relations with its neighbors.

This was also a good bit:

Yet the geography of Iran in this period was no friendlier to maritime trade than it had been in Sasanian times.  To a much greater extent than appears from a glance at the map, Iran is landlocked: the core population and prime resources of the country are located deep in the interior, far from the arid coastlands of the Persian Gulf.

In my earlier short review I wrote “At the very least a good book, possibly a great book.”  I have now concluded it is a great book.

What I’ve been reading

1. Roger Lewis, Erotic Vagrancy: Everything about Richard Burton and Elizabeth Taylor.  An amazing book, full of life and energy on every page, and yes there are 605 of them.  Imagine if Camille Paglia had stuck with it and produced case studies.  The main problem is simply that most people don’t know or care about Burton and Taylor any more?

2. David Caron, Michael Healy, 1873-1941, An Túr Gloine’s Stained Glass Pioneer.  An excellent book, can it be said that Michael Healy is Ireland’s fourth greatest stained glass artist?  Clarke, Geddes, and Hone would be the top three?  It is good to see him getting this attention, but what will happen when so many Irish churches are decommissioned or abandoned or simply never seen?  What does that equilibrium look like?  All the more reason to invest in this book.  What an underrated European tradition.

3. Paul Seabright, the subtitle says it all, The Divine Economy: How Religions Compete for Wealth, Power, and People.  I’ve just started to crack this one open, Paul’s books are always very smart.

4. Sahar Akhtar, Immigration & Discrimination: (un)welcoming others.  Can the idea of wrongful discrimination be applied to immigration decisions?  Maybe you believe this is a pure and simple matter of national autonomy, but what if the potential immigrants are from a former and wronged colony?  From an island nation perishing due to climate change?  Or they were previously pushed off territory that is now part of the host nation?  And yet open borders as an idea also does not work — how should one fit all these pieces together?

5. Austin Bush, The Food of Southern Thailand.  The best book I know of on southern Thailand flat out.  This one has recipes of course, but also photos, maps, anecdotes, and plenty of history.  The food is explained in conceptual terms.  Recommended, for all those with an interest.

6. Michael Cook, A History of the Muslim World: From its Origins to the Dawn of Modernity.  Mostly ends at 1800, this will become one of the standard, must-read histories of Islam and its multiple homes.  The section on India, which is what I have been reading, is strongly conceptual and novel compared to other survey books such as Hourani.  At the very least a good book, possibly a great book.

In Conversation with Próspera CEO Erick Brimen & Vitalia Co-Founder Niklas Anzinger

During my visit to Prospera, one of Honduras’ private governments under the ZEDE law, I interviewed Prospera CEO Erick Brimen and Vitalia co-founder Niklas Anzinger. I learned a lot in the interview including the real history of the ZEDE movement (e.g. it didn’t begin with Paul Romer). I also had not fully appreciated the power of reciprocity stacking.

Companies in Prospera have the unique option to select their regulatory framework from any OECD country, among others. Erick Brimen explained in the podcast how this enables companies to do normal, OECD-approved things in Prospera that literally could not be done legally anywhere else in the world.

…so in the medical world for instance you have drugs that are approved in some countries but not others and you have medical practitioners that are licensed in some countries but not the others and you have medical devices approved in some countries but not others and there’s like a mismatch of things that are approved in OECD countries but there’s no one location where you can say hey if they’re approved in any country they’re approved here. That is what Prospera is….Our hypothesis is that just by doing that we can leapfrog to a certain extent and it’s got nothing to do with the wild west or doing weird things.

…so here so you can have a drug approved in the UK but not in the US with a doctor licensed in the US but not in the UK with a medical device created in Israel but not yet approved by the FDA following a procedure that has been say innovated in Canada, all of that coming together here in Prospera.

Give Innovation a Chance

Elizabeth Currid-Halkett writing in the NYTimes discusses her son’s muscular dystrophy and his treatment with the controversial gene-therapy Elevidys. Currid-Halkett, like many parents whose children have been treated with Elevidys, reports much better results than appear in the statistics.

On Aug. 29, [my son] finally received the one-time infusion. Three weeks later, he was marching upstairs and able to jump over and over. After four weeks, he could hop on one foot. Six weeks after treatment, Eliot’s neurologist decided to re-administer the North Star Ambulatory Assessment, used to test boys with D.M.D. on skills like balance, jumping and getting up off the floor unassisted. In June, Eliot’s score was a 22 out of 34. In the second week of October, it was a perfect 34 — that of a typically developing, healthy 4-year-old boy. Head in my hands, I wept with joy. This was science at its very best, close to a miracle.

…a narrow focus on numbers ignores the real quality-of-life benefits doctors, patients and their families see from these treatments. During the advisory committee meeting for Elevidys in May 2023, I listened to F.D.A. analysts express skepticism about the drug after they watched videos of boys treated with Elevidys swimming and riding bikes. These experts — given the highest responsibility to evaluate treatments on behalf of others’ lives — seemed unable to see the forest for the trees as they focused on statistics versus real-life examples.

Frankly, I side with the statistics. We don’t hear from the parents in the placebo group whose children also spontaneously made improvements.
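The placebo-group point is a base-rate problem: with enough untreated children, some will improve on retest for reasons unrelated to any drug. A toy simulation (every number here is hypothetical, chosen only to illustrate the logic) makes the point:

```python
import random

random.seed(0)

# Toy model: each untreated child's motor-assessment score improves from
# one visit to the next for reasons unrelated to any drug (growth,
# practice, measurement noise). The 30% rate is a made-up illustration.
def simulate_placebo_group(n_children=1000, improve_prob=0.3):
    # Count children who show a "spontaneous" improvement.
    return sum(random.random() < improve_prob for _ in range(n_children))

improved = simulate_placebo_group()
share = improved / 1000
# Hundreds of untreated children improve, and their parents could just
# as easily have credited a drug for it.
print(f"{improved} of 1000 untreated children improved ({share:.0%})")
```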

Even though I side with the statistics, I side with approval. Innovation is a dynamic process. It’s not surprising that the first gene therapy for DMD offers only modest benefits; you don’t hit a home run the first time at bat. But if the therapy isn’t approved, the scientists don’t simply go back to the drawing board and keep going. If the therapy isn’t approved, it dies, and you lose the money, experience and learning by doing that are needed to develop, refine and improve.

Approval is not the end of innovation but a stepping stone on the path of progress. Here’s an example I gave earlier of the same principle. When we banned supersonic aircraft, we lost the money, experience and learning by doing needed to develop quieter supersonic aircraft. A ban makes technological developments in the industry much slower and dependent upon exogenous progress in other industries.

You must build to build better.

Addendum: Peter Marks is the best and perhaps the most important director CBER has ever had. CBER, the Center for Biologics Evaluation and Research, is responsible for biological products, including vaccines and gene therapies. Marks has repeatedly pushed, and sometimes overruled, his staff in approving products like Elevidys. Marks named Operation Warp Speed and was the driving force behind it at the FDA; it was a tremendous FDA success and a break with tradition. Marks has been challenging the FDA’s conservative culture. I hope his changes survive his tenure.

Conditional Approval for Human Drugs

Recently a new drug to extend lifespan was granted conditional approval by the FDA–the first drug ever formally approved to extend lifespan! (By the way, the entrepreneur behind this breakthrough, Celine Halioua, is an Emergent Ventures winner for her earlier work rapidly expanding COVID testing. Tyler knows how to spot Talent!)

Great news, right? Yes, but there are two catches. First catch: the drug is for extending the lifespan of dogs. Second catch: Conditional approval is only available for animal drugs. Conditional approval was permitted for animal drugs beginning in 2004 for minor uses and/or minor species (fish, ferrets etc.) and then expanded in 2018 to include major uses in major species. What does conditional approval allow?

Conditional Approval (CA) allows potential applicants (referred to from this point as “sponsors”) to make a new animal drug product commercially available after demonstrating the drug is safe and properly manufactured in accordance with the FDA approval standards for safety and manufacturing, but before they have demonstrated substantial evidence of effectiveness (SEE) of the conditionally approved product. Under conditional approval, the sponsor needs to demonstrate reasonable expectation of effectiveness (RXE). A drug sponsor can then market a conditionally approved product for up to five years, through annual renewals, while collecting substantial evidence of effectiveness data required to support an approval.

Here is where it gets even more interesting. Why does the FDA say that conditional approval is a good idea?

First, it’s very expensive for a drug company to develop a drug and get it approved by FDA. Second, the market for a MUMS [Minor Use, Minor Species, AT] drug is too small to generate an adequate financial return for the company. The combination of the expensive drug approval process and the small market often makes drug companies hesitant to spend a lot of resources to develop MUMS drugs when there is so little return on their investment.

By allowing a drug company to legally market a MUMS drug early (before it is fully approved), conditional approval makes the drug available sooner to be used in animals that may benefit from it. This early marketing also helps the company recoup some of the investment costs while completing the full approval.

…Similar to conditional approval for MUMS drugs, the goal of expanded conditional approval is to encourage drug companies to develop drugs for major species for serious or life-threatening conditions and to fill treatment gaps where no therapies currently exist or the available therapies are inadequate.

Sound familiar? These are exactly some of the points that I have been raising about the FDA approval process for years. In particular, by bringing forward marketing approval by up to 5 years, conditional approval makes it profitable to research and develop many more new drugs.
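The profitability claim is simple discounting: pulling marketing forward captures years of revenue that trials would otherwise consume. Here is a minimal sketch; the revenue level, discount rate, and exclusivity window are all hypothetical assumptions, not figures from the FDA:

```python
# Hypothetical: a drug earns $100M/year until its market exclusivity
# ends, discounted at 10%. Conditional approval lets sales start in
# year 0 instead of after 5 more years of efficacy trials.

def npv(annual_revenue, start_year, end_year, rate=0.10):
    """Present value of a constant annual revenue stream."""
    return sum(annual_revenue / (1 + rate) ** t
               for t in range(start_year, end_year))

exclusivity_end = 15                        # years of exclusivity left
standard = npv(100, 5, exclusivity_end)     # approval after 5-year trials
conditional = npv(100, 0, exclusivity_end)  # marketed immediately

print(f"standard approval NPV:    ${standard:,.0f}M")
print(f"conditional approval NPV: ${conditional:,.0f}M")
# Earlier marketing roughly doubles NPV here: it both discounts revenue
# less and captures the years that trials would otherwise consume.
```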

Conditional approval is very similar to Bart Madden’s excellent idea of a Free to Choose Medicine track, with the exception that Madden makes the creation of a public tradeoff evaluation drug database (TEDD) a condition of moving to the FTCM track. Thus, FTCM combines conditional approval with the requirement to collect and make public real-world prescribing information over time.

But why is conditional approval available only for animal drugs? Conditional approval is good for animals. People are animals. Therefore, conditional approval is good for people. QED.

Ok, perhaps it’s not that simple. One might argue that allowing animals to use drugs for which there is a reasonable expectation of effectiveness, but not yet substantial evidence of effectiveness, is a good idea but just too risky to allow for humans. But that cuts both ways. We care more about humans and so don’t want to impose risks on them that we are willing to impose on animals, but for the same reason we care more about improving the health of humans and should be willing to risk more to save them. (Entering a burning building to save a child is heroic; for a ferret, it’s foolish.)

I think that the FDA’s excellent arguments for conditional approval apply to human drugs as well as to (other) animal drugs and even more so when we recognize that human beings have rights and interests in making their own choices. The Promising Pathways Act would create something like conditional approval (the act calls it provisional approval) for drugs treating human diseases that are life-threatening so there is some hope that conditional approval for human drugs becomes a reality.

Dare I say it, but could the FDA be lumbering in the right direction?

The Piketty-Saez-Zucman response to Auten and Splinter

A number of you have asked me what I think of their response.  The first thing I noticed is that Auten and Splinter make several major criticisms of PSZ, and yet PSZ respond to only one of them.  On the others they are mysteriously silent.

The second thing I noticed is that PSZ have been trying to deploy the slur of “inequality deniers” against Auten and Splinter.  I take that as a bad epistemic sign.

I was in the midst of writing a longer post, but then I received the following from Splinter, and I cannot come close to his efforts or authority:

Here is a short response to yesterday’s comments by Piketty, Saez, and Zucman (PSZ) on Auten and Splinter (forthcoming in JPE). These are variations on prior comments that Jerry and I addressed in 2019 and 2020. 

First, PSZ say audit data suggest adding underreported income implies little change in top 1% shares. We agree. But their approach increases recent top 1% shares about 1.5 percentage points, with about 50% of underreported business income going to the top 1% by reported income. However, Johns and Slemrod (2012) found only 5% of underreporting went to the top 1% by reported income. This discrepancy is because PSZ allocate underreported income proportional to reported positive income, which ignores that a substantial share of business underreporting (about 40%) goes to individuals with reported negative total income, where misreporting rates are the highest (Table B3 here). The concentration of underreporting at the bottom of the reported distribution causes substantial upward re-ranking when adding underreported income, but that’s mostly ignored in the PSZ approach. The PSZ approach also implies that someone who decreases their underreporting rate by increasing their reported income is allocated more underreporting. That’s backwards. 

In contrast, our approach fits prior estimates from audit data, makes use of many years of audit data, and improves upon prior approaches. We find that underreported income slightly lowers top 1% pre-tax income shares and slightly increases after-tax income shares (Figure B6 here), which is consistent with the audit data. For example, 16% of underreporting is in our top 1% ranked by true income, far less than PSZ’s near 50%-allocation and a bit under the 27% in Johns and Slemrod because we improve upon prior approaches that misallocate undetected underreporting (discussion here). Contrary to the assertions and approach of PSZ, our Figure B5 (bottom panel, here) shows that re-ranking between reported and true (reported plus underreported) income matters substantially. PSZ appear confused about the difference between ranking by reported versus true income. Our underreporting allocations (as are theirs) must be based on reported income because that is all one observes with the primary tax data we both use. But, unlike their method, our allocations are done such that we match the re-ranking implied by audit data. Therefore, we match both the distributions by reported and true income after re-ranking (top two panels of Figure B5, here).

Second, income missing from individual tax returns has shifted from the top to outside the top. The shift from the top was from movements out of closely-held C corporations, whose income is missing from individual tax returns, to passthrough businesses, whose income is on individual tax returns. This created growth in the top share of taxed business income. The growth in PSZ’s top share of untaxed business income, however, is due to their skewed allocation of underreported income that re-allocates underreported income to the top of the distribution. Outside the top, the growth of missing income is from increasing tax-exempt employee compensation, especially from health insurance (see Figure B16 here).

Third, PSZ suggest that top wealth and capital income shares should run parallel over the long run. This is a problematic assumption. Economic changes can push down capital income shares relative to wealth shares. For example, interest rates fell dramatically between 1989 and 2019—the federal funds effective rate fell from 9 to 2 percent. This tends to decrease the ratio of interest-income to bond-wealth and therefore falling interest rates likely increased the gap between top income and wealth shares. Also, much of top wealth patterns are driven by passthrough business, but this is fully or two-thirds excluded from PSZ’s definition of “capital” income here. When fully including passthrough business, the Auten–Splinter top 1% non-housing “capital” income share increased by 5 percentage points between 1989 to 2019, about two-thirds the Federal Reserve’s estimated increase in top 1% wealth shares. Therefore, the Auten-Splinter estimates are broadly consistent with increasing top wealth shares.

 The Auten–Splinter approach is fundamentally a data-driven approach (Table B2 here). Based on Saez and Zucman’s (2020) suggestions and conversations, our more recent work adds new uses of data to account for high-income non-filers, flexible spending accounts, and depreciation issues from expensing. Where we rely on assumptions, alternative ones suggest top 1% shares change little, see Table 5. Our headline finding of relatively flat long-run top 1% after-tax income shares is robust.
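Splinter's re-ranking point can be illustrated with a toy simulation. Everything below is hypothetical (the income distribution, the 3% loss-filer share, the 40% allocation to loss filers) and is not the actual PSZ or Auten-Splinter methodology; it only shows why the allocation rule for underreported income moves the measured top 1% share:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy population: heavy-tailed reported incomes, with a small group
# reporting negative income (business losses). Parameters are made up.
n = 100_000
reported = rng.lognormal(mean=10.5, sigma=1.0, size=n)
loss_filers = rng.random(n) < 0.03
reported[loss_filers] *= -0.5

positive = np.clip(reported, 0, None)
underreported_total = 0.15 * positive.sum()   # total hidden income

def top1_share(income):
    """Share of total income held by the top 1%, ranked by that income."""
    cutoff = np.quantile(income, 0.99)
    return income[income >= cutoff].sum() / income.sum()

# Rule A (PSZ-style, as Splinter characterizes it): allocate
# underreporting proportional to reported positive income.
true_a = reported + underreported_total * positive / positive.sum()

# Rule B (audit-data-style): 40% of underreporting goes to reported-loss
# filers, where misreporting rates are highest; the rest is proportional
# to positive reported income.
alloc_b = 0.6 * underreported_total * positive / positive.sum()
alloc_b[loss_filers] += 0.4 * underreported_total / loss_filers.sum()
true_b = reported + alloc_b

print(f"Top 1% share of true income, rule A: {top1_share(true_a):.1%}")
print(f"Top 1% share of true income, rule B: {top1_share(true_b):.1%}")
# Concentrating underreporting on loss filers re-ranks them upward but
# leaves less hidden income at the top, so rule B yields a lower top share.
```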

Auten and Splinter had presented versions of those points previously, as they note.  Yet PSZ treat them as naive fools who somehow forgot to think about these issues at all, and in their reply PSZ do not engage with these more detailed presentations and defenses of the Auten-Splinter estimates.  So I don’t think the PSZ response is especially strong.

Here are relevant Auten and Splinter points from back in 2020.  Phil Magness offers commentary.

More on Pharma Pricing

A reader in the industry writes with excellent comments on yesterday’s post on the Chris Rock hypothesis.

Long-time reader, first-time emailer–love the show ;).  I’ve been in and around the pharma industry for nearly 30 years, and I’ve spent time in gene therapy/gene editing where the one-time cure model dominates.  Some thoughts on chronic vs. curative dosing and why a curative therapy is likely worth less:

  1. There’s a potential mismatch between payment for a drug and the accrual of value that justifies its price point.  If I take a curative therapy for a disease like hemophilia (e.g., the new $2.9 M drug, Roctavian), the insurance company immediately incurs the cost of the drug, but the prime financial benefits (no more expensive chronic therapy, reduced expensive visits to the hospital) accrue over time.  Patients switch insurance companies as they switch jobs, so the “payout” that justifies the treatment price accrues to the subsequent insurers.  On chronic therapy, if a patient switches to another insurer, the new insurer picks up the payments so there’s no such disconnect.  Rationally, insurers should pay more for chronic therapy, even in present value terms.
  2. Durability of effect is unknown until it isn’t.  It’s difficult to charge for a drug as a cure until such time as you know it’s a cure and have proven it as such.  How long do you have to follow treated patients to prove that?  Gene therapies are starting to show waning efficacy in some cases.  The FDA mandates that you cannot include something in the drug label that has not been proven.  Payors will point to a label and ask why they should pay for something that’s not on there.  This can be mitigated by programs where the drug company pays back a portion of the cost if the drug doesn’t work, but collecting on that seems like a huge hassle–how do you prove that it stopped working (I can hear Mike Munger–“the answer to your question is transaction costs…”)?
  3. Sticker shock and headline numbers.  A drug that costs $3 M or more is something the White House can use at a podium and get a reaction.  Never mind that it gets paid back pretty quickly by discontinuing a therapy that costs hundreds of thousands per year–life-saving drugs should not cost millions of dollars!  This puts downward pressure on one-time cures.
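The payer mismatch in point 1 can be made concrete with a present-value sketch. All numbers except Roctavian’s roughly $2.9 M list price are invented for illustration:

```python
# Hypothetical illustration: from a single insurer's view, a cure is
# paid up front, but the savings leak to future insurers when the
# patient switches plans. All figures except the cure price are made up.

def pv(cashflows, r):
    """Present value of a list of annual cashflows at discount rate r."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))

r = 0.03                # discount rate (assumed)
years = 20              # horizon of chronic treatment (assumed)
chronic_cost = 300_000  # annual chronic therapy cost (assumed)
cure_price = 2_900_000  # one-time cure, ~Roctavian's list price
tenure = 5              # years before the patient switches insurers (assumed)

# Society's view: the cure beats 20 years of chronic therapy.
pv_chronic_total = pv([chronic_cost] * years, r)   # ~ $4.6M > $2.9M

# First insurer's view: it pays the full cure price but only avoids
# `tenure` years of chronic costs before the patient leaves.
pv_chronic_avoided = pv([chronic_cost] * tenure, r)  # ~ $1.4M < $2.9M
```

On these stylized numbers the cure pays for itself from society’s perspective but is a money-loser for the first insurer, which is exactly the mismatch the emailer describes.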

So, my perspective is that it is more difficult for a one-time treatment/cure to capture the value it creates vs. a chronic therapy.  So, why did Lilly shares tumble on the news?  More important than duration of therapy is market share vs. competitors.  A more permanent solution (with no rebound after discontinuation) would more than make up for lost revenue on the back end by taking share from the competition on the front end.  And THAT is why pharma is incentivized to pursue cures.  Making a better drug will beat the competition, and a cure is a better drug.  Big Pharma doesn’t necessarily pursue curative treatments directly because they don’t know how.  Technologies like CRISPR and mRNA have to come up via biotechs that are purpose-built to maximize the platforms’ value and to understand/navigate the underlying technology.  That said, Big Pharma has inked HUGE deals to gain access to these technologies (e.g., Pfizer/BioNTech), so they do seem to come around eventually.

These are all excellent points. On point 1, note that Medicaid creates similar incentives, in that insurance firms want to shift long-term costs onto Medicaid.

Point 3 suggests that we should be especially wary of price controls on cures. Sticker shock may drive us to price controls, leaving us with treatments that look cheaper but are more expensive in the long run (and in present discounted value). Sovaldi is a case in point. Its initial $84,000 price generated huge opposition even though it typically cured hepatitis C infections, avoided many later liver cancers, and saved money overall. Indeed, as I pointed out earlier, Sovaldi so reduced the number of liver transplants that more people with other diseases ended up receiving life-saving transplants.

This is also what I meant by starting in the right place. If you start in the right place you have some hope of getting to real causes and possible solutions.

Wednesday assorted links

1. Anti-Piketty on r > g, once you put entrepreneurs into the model.

2. From Loyal, potential gains in canine life extension.  And more from the NYT.

3. The economics of globalized fashion.  And Emily Oster moonlights as fashion model.

4. Please donate to Conversations with Tyler.

5. Joe Walker podcasts with Shruti Rajagopalan on India and also talent.  With transcript, there is also quite a bit of discussion of me in there.

6. Scott Alexander on Effective Altruism.

7. Niskanen symposium on Milton Friedman and the negative income tax.

8. Naming and necessity, Young Thug edition.

Labor market evidence from ChatGPT

So far some of the main effects are quite egalitarian:

Generative Artificial Intelligence (AI) holds the potential to either complement knowledge workers by increasing their productivity or substitute them entirely. We examine the short-term effects of the recent release of the large language model (LLM), ChatGPT, on the employment outcomes of freelancers on a large online platform. We find that freelancers in highly affected occupations suffer from the introduction of generative AI, experiencing reductions in both employment and earnings. We find similar effects studying the release of other image-based, generative AI models. Exploring the heterogeneity by freelancers’ employment history, we do not find evidence that high-quality service, measured by their past performance and employment, moderates the adverse effects on employment. In fact, we find suggestive evidence that top freelancers are disproportionately affected by AI. These results suggest that in the short term generative AI reduces overall demand for knowledge workers of all types, and may have the potential to narrow gaps among workers.

That is from a new paper by Xiang Hui, Oren Reshef, and Luofeng Zhou, via Fernand Pajot.  And here is an FT summary of some key results.

I would stress this point, however.  As more of ordinary life and commerce structures itself around AI, more and more AI-driven or AI-enabled projects will become possible.  That will favor those who are good at conceiving of projects and executing them, and those longer-run effects may well be less egalitarian.