Hire Don’t Fire at the FDA
Since I am a longtime critic of the FDA, you might expect me to support firing FDA employees—not so! My focus has always been on reducing approval times and costs in order to speed drugs to patients and increase the number of new drugs. Cutting staff is more likely to slow approvals and raise costs.
To be fair, we’re talking about the firing of some 200 probationary employees out of a total of some 20,000. Unusual, but not earth-shaking. The firings are indiscriminate, however, and as I explain below, the FDA is a peculiar target for cost-cutting: user fees under PDUFA cover a significant share of the FDA’s budget, so its workers are among the cheapest federal employees. So what is the point? Shock and awe in advance of bigger reforms for the FDA? Perhaps. Regardless, I think we should keep in mind the big picture on staff and speed.
The Prescription Drug User Fee Act of 1992 (PDUFA) provides strong evidence that with more staff the FDA works faster to get new and better drugs to patients. Before PDUFA, drug approvals languished at the FDA simply due to a lack of staff—harming both drug companies and patients. Congress should have increased FDA funding, as the benefits would have far outweighed the costs, but Congress failed. Instead, PDUFA created a workaround: drug firms agreed to pay user fees, with the condition that the funds be used for drug reviewers and that the FDA be held to strict review standards.
PDUFA was a tremendous success. Carpenter et al., Olson, Berndt et al., and others all find that PDUFA shortened review times, and that it did so primarily through the mechanism of hiring more staff. Carpenter et al., for example, report that “NDA review times shortened by 3.3 months for every 100 additional FDA staff.” Moreover, the faster approval times came at little or no cost in safety. Thus, Berndt et al. report:
implementation of the PDUFAs led to substantial incremental reductions in approval times beyond what would have been observed in the absence of these legislative acts. In addition, our preliminary examination of the trends in the number of new molecular entity withdrawals, frequently used as a proxy to assess the FDA’s safety record, suggests that the proportion of approvals ultimately leading to safety withdrawals prior to PDUFA and during PDUFA I and II were not statistically different.
And in a later analysis Philipson et al. find that:
more rapid access of drugs on the market enabled by PDUFA saved the equivalent of 140,000 to 310,000 life years. Additionally, we estimate an upper bound on the adverse effects of PDUFA based on drugs submitted during PDUFA I/II and subsequently withdrawn for safety reasons, and find that an extreme upper bound of about 56,000 life years were lost. This estimate is an extreme upper bound as it assumes all withdrawals since the inception of PDUFA were due to PDUFA and that there were no patients who benefitted from the withdrawn drugs.
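For a rough sense of the magnitudes implied by the Carpenter et al. estimate, here is a back-of-envelope sketch. The linear extrapolation and the staffing figure are illustrative assumptions of mine, not numbers from the paper.

```python
# Carpenter et al.: NDA review times shortened ~3.3 months
# per 100 additional FDA staff.
MONTHS_SAVED_PER_100_STAFF = 3.3

def review_time_reduction(extra_staff: int) -> float:
    """Months shaved off average NDA review time, assuming linearity."""
    return MONTHS_SAVED_PER_100_STAFF * extra_staff / 100

# Hypothetical: suppose user fees fund 600 additional reviewers.
print(f"{review_time_reduction(600):.1f} months faster")  # 19.8 months
```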
If we’re going to have FDA review, it should be fast and efficient. We need to shift the focus from the FDA’s balance sheet in the Federal budget to the patients it serves—more staff means faster reviews, better access to treatments, and a healthier society.
More generally, government regulation, not staffing, is the real problem. Cut regulation, and staff cuts can follow. Cut staff without cutting regulation, and the morass only gets worse.
Does Peer Review Penalize Scientific Risk Taking?
Scientific projects that carry a high degree of risk may be more likely to lead to breakthroughs yet also face challenges in winning the support necessary to be carried out. We analyze the determinants of renewal for more than 100,000 R01 grants from the National Institutes of Health between 1980 and 2015. We use four distinct proxies to measure risk taking: extreme tail outcomes, disruptiveness, pivoting from an investigator’s prior work, and standing out from the crowd in one’s field. After carefully controlling for investigator, grant, and institution characteristics, we measure the association between risk taking and grant renewal. Across each of these measures, we find that risky grants are renewed at markedly lower rates than less risky ones. We also provide evidence that the magnitude of the risk penalty is magnified for more novel areas of research and novice investigators, consistent with the academic community’s perception that current scientific institutions do not motivate exploratory research adequately.
That is from a new NBER working paper.
Reforming the NIH
It seems the Trump proposal to simply cut overhead to fifteen percent will not stand up in the courts, at least not without Congressional approval? Nonetheless a few of you have asked me what I think of the idea.
My preferred reforms for the NIH include the following:
1. Cap pre-specified overhead at 25 percent, down from a range running up to 60 percent.
2. Encourage more coverage of overhead in the proposals themselves, where the researchers are accountable for how the overhead funds are spent. Severely limit the extent to which “overhead” cross-subsidizes other university functions, as it currently does.
3. Fund a greater number of proposals, with the money coming from the overhead reductions outlined in #1 and #2 (a back-of-envelope sketch follows this list).
4. Set up a new, fully independent biomedical research arm of the federal government, based on DARPA-like principles. In fact this was seriously proposed a few years ago, with widespread (but insufficient) support.
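To make the arithmetic behind #1–#3 concrete, here is a minimal sketch; the dollar figures, and the simplification that every grant carries the maximum rate, are hypothetical.

```python
# Hypothetical grant with $1,000,000 in direct costs.
direct = 1_000_000

# Overhead (indirect costs) is charged as a percentage on top of direct costs.
current_rate, capped_rate = 0.60, 0.25
saving_per_grant = direct * (current_rate - capped_rate)  # $350,000

# Holding the total budget fixed, the savings fund additional grants.
cost_per_capped_grant = direct * (1 + capped_rate)        # $1,250,000
extra_per_100_grants = 100 * saving_per_grant / cost_per_capped_grant
print(f"Savings per grant: ${saving_per_grant:,.0f}")
print(f"Extra grants fundable per 100 grants: {extra_per_100_grants:.0f}")  # 28
```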
I would note a few additional points, which have been covered in earlier MR posts over the years:
5. During Covid, a time of crisis, the NIH could not get its act together to make grants with sufficient rapidity. It performed much worse than did, say, the NSF.
6. A while back the NIH set up a program to make riskier grants. The program did not in fact make riskier grants.
7. The NIH killed the idea of an independent DARPA-like biomedical research agency, fearing it would limit the size and influence of the NIH itself.
8. The submission forms, their length, and the associated processes are absurd. Whether or not the costs there are high in an absolute sense, they are a sign that the current NIH is far too obsessed with process, as happens to just about every mature bureaucracy.
At this point it is obvious that the NIH cannot reform itself. It is also obvious that a slower, technocratic approach just gives the interest groups — in this case it is “the states” most of all — time to mobilize to protect the current NIH. There are universities in many Congressional districts and a fair amount of money at stake.
I do not per se favor a move to fifteen percent overhead, as I do understand the associated costs to scientific research. Nonetheless I take very seriously the possibility that a radical “thoughtless” cut now stands some chance of getting us to where we ought to be in the longer run, especially since subsequent administrations will get further cracks at this problem. They can raise overhead to 25 percent and set up the new DARPA-H. I just don’t see why that is impossible, and it may not even be unlikely. So what exactly is your discount rate and risk aversion here?
I feel the defenses of the NIH I am reading do not take the entire broader analysis seriously enough. They do not take sufficiently seriously that the writers themselves have failed to adequately reform the NIH. And over time, without serious reform, the bureaucratic stultification will only get worse.
The Licensing Racket
In the WSJ, I review a very good new book on occupational licensing, The Licensing Racket by Rebecca Haw Allensworth.
Most people will concede that licensing for hair braiders and interior decorators is excessive while licensing for doctors, nurses and lawyers is essential. Hair braiders pose little to no threat to public safety, but subpar doctors, nurses and lawyers can ruin lives. To Ms. Allensworth’s credit, she asks for evidence. Does occupational licensing protect consumers? The author focuses on the professional board, the forgotten institution of occupational licensing.
Governments enact occupational-licensing laws but rarely handle regulation directly—there’s no Bureau of Hair Braiding. Instead, interpretation and enforcement are delegated to licensing boards, typically dominated by members of the profession. Occupational licensing is self-regulation. The outcome is predictable: Driven by self-interest, professional identity and culture, these boards consistently favor their own members over consumers.
Ms. Allensworth conducted exhaustive research for “The Licensing Racket,” spending hundreds of hours attending board meetings—often as the only nonboard member present. At the Tennessee board of alarm-system contractors, most of the complaints come from consumers who report the sort of issues that licensing is meant to prevent: poor installation, code violations, high-pressure sales tactics and exploitation of the elderly. But the board dismisses most of these complaints against its own members, and is far more aggressive in disciplining unlicensed handymen who occasionally install alarm systems. As Ms. Allensworth notes, “the board was ten times more likely to take action in a case alleging unlicensed practice than one complaining about service quality or safety.”
She finds similar patterns among boards that regulate auctioneers, cosmetologists and barbers. Enforcement efforts tend to protect turf more than consumers. Consumers care about bad service, not about who is licensed, so guess who complains about unlicensed practitioners? Licensed practitioners. According to Ms. Allensworth, it was these competitor-initiated cases, “not consumer complaints alleging fraud, predatory sales tactics, and graft,” that drew the stiffest penalties from boards.
You might hope that boards that oversee nurses and doctors would prioritize patient safety, but Ms. Allensworth’s findings show otherwise. She documents a disturbing pattern of boards that have ignored or forgiven egregious misconduct, including nurses and physicians extorting sex for prescriptions, running pill mills, assaulting patients under anesthesia and operating while intoxicated.
Read the whole thing.
What should I ask Sheilagh Ogilvie?
She is a Canadian economic historian at Oxford, here is from her home page:
I am an economic historian. I explore the lives of ordinary people in the past and try to explain how poor economies get richer and improve human well-being. I’m interested in how social institutions – the formal and informal constraints on economic activity – shaped economic development between the Middle Ages and the present day.
And:
My current research focusses on serfdom, human capital, state capacity, and epidemic disease. Past projects analysed guilds, merchants, communities, the family, gender, consumption, finance, proto-industry, historical demography, childhood, and social capital. I have a particular interest in the economic and social history of Central and Eastern Europe.
Here is her Wikipedia page. Her book on guilds is well known, and her latest is Controlling Contagion: Epidemics and Institutions from the Black Death to Covid. Here are her main research papers.
So what should I ask her?
Genetic Prediction and Adverse Selection
In 1994 I published Genetic Testing: An Economic and Contractarian Analysis, which discussed how genetic testing could undermine insurance markets. I also proposed a solution, genetic insurance, which would in essence insure people against changes in their health and life insurance premiums due to the revelation of genetic data. Later, John Cochrane would independently create Time Consistent Health Insurance, a generalized form of the same idea that would allow people to have long-term health insurance without being tied to a single firm.
The Human Genome Project was completed in 2003 but, somewhat surprisingly, insurance markets didn’t break down, even as genetic information became more common. We know from twin studies that genetic heritability is very large, but it turned out that the effect of each individual gene variant is very small. Thus, only a few diseases can be predicted well using single-gene mutations. Since each SNP has only a small effect on disease, predicting how genes influence disease requires data on hundreds of thousands, even millions, of people, covering millions of SNPs across the genome matched to their disease histories. Until recently, that has been cost-prohibitive, and as a result the available genetic information lacked much predictive power.
In an impressive new paper, however, Azevedo, Beauchamp and Linnér (ABL) show that data from genome-wide association studies (GWAS) can be used to create polygenic risk indexes (PGIs), which predict individual disease risk from the aggregate effects of many genetic variants. The data is prodigious:
We analyze data from the UK Biobank (UKB) (Bycroft et al., 2018; Sudlow et al., 2015). The UKB contains genotypic and rich health-related data for over 500,000 individuals from across the United Kingdom who were between 40 and 69 years old at recruitment (between 2006 and 2010). UKB data is linked to the UK’s National Health Service (NHS), which maintains detailed records of health events across the lifespan and with which 98% of the UK population is registered (Sudlow et al., 2015). In addition, all UKB participants took part in a baseline assessment, in which they provided rich environmental, family history, health, lifestyle, physical, and sociodemographic data, as well as blood, saliva, and urine samples.
The UKB contains genome-wide array data for ∼800,000 genetic variants for ∼488,000 participants.
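For intuition, a PGI is essentially a weighted sum: each variant’s allele dosage (0, 1, or 2 copies of the risk allele) is multiplied by the effect size estimated in a GWAS, and the products are summed. Here is a minimal sketch, with hypothetical betas and dosages rather than anything from the paper.

```python
import numpy as np

# Hypothetical GWAS effect sizes (betas) for a handful of SNPs.
# Real PGIs aggregate thousands to millions of variants.
betas = np.array([0.12, -0.05, 0.08, 0.30, -0.11])

# One individual's allele dosages: 0, 1, or 2 copies of each risk allele.
dosages = np.array([2, 0, 1, 1, 2])

# The raw index is the dosage-weighted sum of effect sizes; in practice
# it is then standardized against a reference population.
pgi = float(betas @ dosages)
print(f"Raw polygenic index: {pgi:.3f}")
```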
For each of these individuals ABL construct risk indexes, and they ask how significant this new information is for the purchase of critical illness insurance:
Critical illness insurance (CII) pays out a lump sum in the event that the insured person gets diagnosed with any of the medical conditions listed on the policy (Brackenridge et al., 2006). The lump sum can be used as the policyholder wishes. The policy pays out once and is thereafter terminated.
… Major CII markets include Canada, the United Kingdom, Japan, Australia, India, China, and Germany. It is estimated that 20% of British workers were covered by a CII policy in 2009 (Gatzert and Maegebier, 2015). The global CII market has been valued at over $100 billion in 2021 and was projected to grow to over $350 billion by 2031 (Allied Market Research, 2022).
The answer, as you might have guessed by now, is very significant. Even though current PGIs explain only a fraction of total genetic risk, they are already predictive enough that it would make sense for individuals with high measured risk to purchase insurance while those with low risk opt out—leading to adverse selection that threatens the financial sustainability of the insurance market.
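The unraveling mechanism is easy to see in a toy simulation (all numbers below are hypothetical). The insurer prices at the pool’s average expected cost; the lowest-risk individuals then find the policy overpriced and exit; the average cost of the remaining pool rises; and the premium chases it upward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical claim probabilities revealed to individuals by their PGIs,
# for a critical-illness policy paying a lump sum of 100 (arbitrary units).
risk = rng.beta(2, 18, size=100_000)  # mean claim probability ~10%
payout = 100.0
load = 1.1  # buyers tolerate premiums up to 10% above their own expected cost

insured = np.ones(risk.size, dtype=bool)
for rnd in range(50):
    premium = payout * risk[insured].mean()  # priced to the current pool
    stay = payout * risk * load >= premium   # keep coverage only if worthwhile
    if np.array_equal(insured & stay, insured):
        break  # no one else wants to exit; the pool has stabilized
    insured &= stay
    print(f"round {rnd}: premium={premium:.2f}, share insured={insured.mean():.1%}")
```

Round after round the pool shrinks toward the highest risks, which is exactly the dynamic ABL warn about.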
Today, the 500,000 people in the UK Biobank don’t know their PGIs, but in principle they could and in the future they will. Indeed, as GWAS sample sizes increase, PGI betas will become more accurate and will cover a greater fraction of an individual’s genome, so individual PGIs will become increasingly predictive, exacerbating selection problems in insurance markets.
If my paper was a distant early warning, Azevedo, Beauchamp, and Linnér provide an early—and urgent—warning. Without reform, insurance markets risk unraveling. The authors explore potential solutions, including genetic insurance, community rating, subsidies, and risk adjustment. However, the effectiveness of these measures remains uncertain, and knee-jerk policies, such as banning insurers from using genetic information, could lead to the collapse of insurance altogether.
FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation
What would happen to drug development if the FDA lost its authority to prohibit new drugs? Would research and development boom and lives be saved? Or would R&D decline and lives be lost to a flood of unsafe and ineffective drugs? Or perhaps R&D would decline as demand for new drugs faltered due to public hesitation in the absence of FDA approval? In an excellent new paper, Pesko and Saenz examine one natural experiment: e-cigarettes.
The FDA banned e-cigarettes as unapproved drugs soon after their introduction in the United States, just as it had previously banned other nicotine-infused products. Thus, it was surprising when, in 2010, a court ruled that Congress had intended e-cigarettes to be regulated as tobacco products rather than as drugs, prohibiting the FDA from regulating them as a drug/device.
As of 2010, therefore, e-cigarettes were not FDA regulated:
…e–cigarette companies were able to bypass the lengthy and costly drug approval process entirely. Additionally, without FDA drug regulation, e–cigarette companies could also freely enter the market, modify products without approval, and bypass extensive post–market reporting requirements and quality control standards.
Indeed, it wasn’t until 2016 that the FDA formally “deemed” e-cigarettes as tobacco products (deemed since they don’t actually contain tobacco) and approvals under the less stringent tobacco regulations were not required until 2020. For nearly a decade, therefore, e-cigarettes were almost entirely unregulated and then lightly regulated under the tobacco framework. So, what happened during this period?
Pesko and Saenz show that FDA deregulation led to a boom in e-cigarette research and development which improved e-cigarettes and led to many lives saved as people switched from smoking to vaping.
The boom in research and development is evidenced by a very large increase in US e-cigarette patents. We do not see a similar increase in Australia (where e-cigarettes were not deregulated), nor do we see an increase in non-e-cigarette smoking-cessation products (figure 1a of their paper, not shown here).

Estimating the decline in smoking and smoking-attributable mortality (SAM) is more difficult, but the authors assemble a large collection of demographically disaggregated data and estimate that prohibiting the FDA from regulating e-cigarettes reduced smoking-attributable mortality by nearly 10% on average each year from 2011 to 2019, for a total savings of some 677,000 life-years.
The authors pointedly compare what happened under deregulation of e-cigarettes (innovation and lives saved) with what happened to similar smoking-cessation products that remained under FDA regulation (stagnation and no reduction in smoking-attributable mortality).
A key takeaway on the slowness of FDA drug regulation is that it took 9 years before nicotine gum could be sold with a higher nicotine strength, 12 years before it could be sold OTC, and 15 years before it could be sold with a flavor. Further, a recent editorial laments that there has been largely non-existent innovation in FDA-approved smoking cessation drugs since 2006 (Benowitz et al., 2023). In particular, the “world’s oldest smoking cessation aid,” cytisine, first brought to market in 1964 in Bulgaria (Prochaska et al., 2013), and with quit success rates exceeding single forms of nicotine replacement therapy (NRT) (Lindson et al., 2023), is not approved as a drug in the United States.
The authors conclude, “this situation raises concern that drugs may be over-regulated in the United States…” Quite so.
Addendum: A quick review of the FDA literature. In addition to the classic works by Peltzman on the 1962 Amendments and by myself on what we can learn about the FDA from off-label pricing, we have a spate of recent new papers, including one by Parker Rogers, which I covered earlier:
In an important and impressive new paper, Parker Rogers looks at what happens when the FDA deregulates or “down-classifies” a medical device type from a more stringent to a less stringent category. He finds that deregulated device types show increases in entry and innovation, as measured by patents and patent quality, and decreases in prices. Safety is either negligibly affected or, in the case of products that come under potential litigation, increased.
and one by Isakov, Lo and Montazerhodjat, which finds that FDA statistical standards tend to be too conservative, especially for drugs meant to treat deadly diseases (see my comments on their paper and more links in Is the FDA Too Conservative or Too Aggressive?).
See also FDA commentary for much more, from sunscreens to lab-developed tests.
It’s Time to Build the Peptidome!
Antimicrobial resistance is a growing problem. Peptides, short sequences of amino acids, are nature’s first defense against bacteria. Research on antimicrobial peptides is promising, but such research could be much more productive if combined with machine learning on big data. Collecting, collating and organizing big data, however, is a public good, and it is underprovided. Current peptide databases are small, inconsistent, incompatible with one another, and biased against negative controls. Thus, there is scope for a million-peptide database modelled on something like the Human Genome Project or ProteinDB:
ML needs data. Google’s AlphaGo trained on 30 million moves from human games and orders of magnitude more from games it played against itself. The largest language models are trained on at least 60 terabytes of text. AlphaFold was trained on just over 100,000 3D protein structures from the Protein Data Bank.
The data available for antimicrobial peptides is nowhere near these benchmarks. Some databases contain a few thousand peptides each, but they are scattered, unstandardized, incomplete, and often duplicative. Data on a few thousand peptide sequences and a scattershot view of their biological properties are simply not sufficient to get accurate ML predictions for a system as complex as protein-chemical reactions. For example, the APD3 database is small, with just under 4,000 sequences, but it is among the most tightly curated and detailed. However, most of the sequences available are from frogs or amphibians due to path-dependent discovery of peptides in that taxon. Another database, CAMPR4, has on the order of 20,000 sequences, but around half are “predicted” or synthetic peptides that may not have experimental validation, and contain less info about source and activity. The formatting of each of these sources is different, so it’s not easy to put all the sequences into one model. More inconsistencies and idiosyncrasies stack up for the dozens of other datasets available.
There is even less negative training data; that is, data on all the amino-acid sequences without interesting publishable properties. In current ML research, labs will test dozens or even hundreds of peptide sequences for activity against certain pathogens, but they usually only publish and upload the sequences that worked.
…The data problem facing peptide research is solvable with targeted investments in data infrastructure. We can make a million-peptide database.
There are no significant scientific barriers to generating a 1,000x or 10,000x larger peptide dataset. Several high-throughput testing methods have been successfully demonstrated, with some screening as many as 800,000 peptide sequences and nearly doubling the number of unique antimicrobial peptides reported in publicly available databases. These methods will need to be scaled up, not only by testing more peptides, but also by testing them against different bacteria, checking for human toxicity, and testing other chemical properties, but scaling is an infrastructure problem, not a scientific one.
This strategy of targeted data infrastructure investments has three successful precedents: PubChem, the Human Genome Project, and ProteinDB.
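What might standardization look like in practice? Here is a minimal sketch of a unified record format; the field names and example entries are my own illustration, not a schema proposed in the piece. The key design point is that negative results are first-class data.

```python
from dataclasses import dataclass

@dataclass
class PeptideRecord:
    """One standardized assay result; negatives get the same treatment."""
    sequence: str                # amino-acid sequence, one-letter codes
    source: str                  # organism of origin, or "synthetic"
    target_pathogen: str         # organism the peptide was tested against
    mic_ug_per_ml: float | None  # minimum inhibitory concentration, if measured
    active: bool                 # did the peptide show antimicrobial activity?
    assay: str                   # e.g., "broth microdilution"
    reference: str               # DOI or accession (placeholders below)

records = [
    # Magainin 2, a well-known frog antimicrobial peptide (MIC illustrative).
    PeptideRecord("GIGKFLHSAKKFGKAFVGEIMNS", "frog", "E. coli",
                  4.0, True, "broth microdilution", "doi:10.xxxx/placeholder"),
    # An inactive synthetic sequence: the negative control that today
    # usually goes unpublished.
    PeptideRecord("AAAAKAAAAKAAAAK", "synthetic", "E. coli",
                  None, False, "broth microdilution", "doi:10.xxxx/placeholder"),
]
```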
Much more in this excellent piece of science and economics from IFP and Max Tabarrok.
Milei Implements Peer Approval for Food
Reason: In a sweeping move to overhaul Argentina’s food trade policies, Javier Milei’s administration officially deregulated food imports and exports on Monday. The reform, outlined in Decree 35/2025, seeks to boost foreign trade, cut bureaucratic red tape, and lower consumer prices.
Federico Sturzenegger, head of the Ministry of Deregulation and State Transformation, explained in a post on X that the measure “seeks cheaper food for Argentines and more Argentine food for the world.”
Under the new policy, food products and packaging certified by countries with “high sanitary surveillance” can now enter Argentina without any additional registration or approval processes. These items will be automatically recognized under the Argentine Food Code, cutting down on administrative delays and costs for importers.
The legislation identifies countries such as Australia, New Zealand, Canada, the United States, Israel, Japan, Switzerland, and the United Kingdom, as well as the European Union, as having similar or higher sanitary standards than Argentina.
As Sturzenegger explains in his post, this measure “eliminates requirements to register and authorize: samples, products, establishments, warehouses, utensils, and containers (32 pages of paperwork).”
An excellent “peer approval” policy, and one that I have long supported when it comes to the FDA and drug approvals. In fact, beginning in 2010, the US FDA has recognized certain other countries as having comparable food safety systems. To date, Canada, Australia and New Zealand have been recognized with a Systems Recognition partnership.
Systems Recognition (SR) is a partnership between the U.S. Food and Drug Administration (FDA) and a foreign regulatory counterpart, in which the agencies have concluded that they operate comparable regulatory programs that yield similar food safety outcomes.
Argentina’s policy is unilateral and assumes equivalence if a country uses recognized standards (e.g., Codex Alimentarius) or has high sanitary vigilance, while the FDA’s SR policy is bilateral and involves more regulatory harmonization and investigation. I prefer the Argentinian approach. Nevertheless, both programs share the goals of simplifying trade, avoiding duplicate inspections, and helping to prioritize scarce inspection resources.
I encourage the FDA to build on SR for food and extend it to drugs. This could be done in a minor and a major way, both of which would be useful. The minor reform would be peer approval for drugs already approved in the US; importation could then ease drug shortages. The FDA has done this in the past on an ad hoc basis, but it should be made permanent. The second, more major, reform would be to extend peer approval to any drug or device approved by a stringent authority.
What should I ask Theodore H. Schwartz?
Yes I will be doing a Conversation with him. He is a famous brain surgeon and author of the recent and excellent book Gray Matters: A Biography of Brain Surgery.
Here is his Wikipedia page, and an opening excerpt:
Theodore H. Schwartz (born May 13, 1965) is an American medical scientist, academic physician and neurosurgeon.
Schwartz specializes in surgery for brain tumors, pituitary tumors and epilepsy. He is particularly known for developing and expanding the field of minimally-invasive endonasal endoscopic skull base and pituitary surgery and for his research on neurovascular coupling and propagation of epilepsy.
Here is his home page. So what should I ask him?
The sick leave culture that is German
Germans are the “world champions in sick leave”, according to the head of the country’s biggest insurer, who was criticised for demanding that workers without a doctor’s note are unpaid for their first day off.
With the economy slowing and the welfare system under pressure, Germany can ill afford its average per worker of 20 sick days a year, said Oliver Bäte, the chief executive of Allianz SE. The EU average is eight.
The figure of 20 days, based on research by the health insurer DAK, puts a further dent in Germany’s ailing work ethic reputation. Last April, Christian Lindner, then finance minister, admitted that the French, Italians and other nationalities worked “a lot more than we do”, after OECD data showed Germans put in significantly fewer working hours per year than their EU and British neighbours…
“In countries like Switzerland and Denmark people work a month longer per year on average — with comparable pay,” he pointed out.
Here is more from the Times of London. If you can get through the gate, you will see it is Mexico that is the work ethic country.
The Cows in the Coal Mine
I remain stunned at how poorly we are responding to the threat from H5N1. Our poor response to COVID was regrettable but perhaps understandable given the US hadn’t faced a major pandemic in decades. Having been through COVID, however, you would think that we would be primed. But no. Instead of acting aggressively to stop the spread in cows we took a gamble that avian flu would fizzle out. It didn’t. California dairy herds are now so awash in flu that California has declared a state of emergency. Hundreds of herds across the United States have been infected.
I don’t think we are getting a good picture of what is happening to the cows because we don’t like to look too closely at our food supply. But I reported in September what farmers were saying:
The cows were lethargic and didn’t move. Water consumption dropped from 40 gallons to 5 gallons a day. He gave his cows aspirin twice a day, increased the amount of water they were getting and gave injections of vitamins for three days.
Five percent of the herd had to be culled.
“They didn’t want to get up, they didn’t want to drink, and they got very dehydrated,” Brearley said, adding that his crew worked around the clock to treat nearly 300 cows twice a day. “There is no time to think about testing when it hits. You have to treat it. You have sick cows, and that’s our job is to take care of them.”
Here’s another report from a vet:
…the scale of the farmers’ efforts to treat the sick cows stunned him. They showed videos of systems they built to hydrate hundreds of cattle at once. In 14-hour shifts, dairy workers pumped gallons of electrolyte-rich fluids into ailing cows through metal tubes inserted into the esophagus.
“It was like watching a field hospital on an active battlefront treating hundreds of wounded soldiers,” he said.
Here’s Reuters:
Cows in California are dying at much higher rates from bird flu than in other affected states, industry and veterinary experts said, and some carcasses have been left rotting in the sun as rendering plants struggle to process all the dead animals.
…Infected herds in California are seeing mortality rates as high as 15% or 20%, compared to 2% in other states, said Keith Poulsen, a veterinarian and director of the Wisconsin Veterinary Diagnostic Laboratory who has researched bird flu.
The California Department of Food and Agriculture did not respond to questions about the mortality rate from bird flu.
Does this remind you of anything? Must we wait until the human morgues are overrun?
The case fatality rate for cows appears to be low but significant, perhaps 2%. A small number of pigs have also been infected. On the other hand, over 100 million chickens, turkeys and ducks have been killed or culled.
There have now been 66 cases in humans in the US. Moreover, the CDC reports that in at least one case the virus appears to have evolved within its human host to become more infectious. We don’t know that for sure, but it’s not good news. Recall that in theory a single mutation could make the virus much more capable of infecting humans.
When I wrote on December 1 that A Bird Flu Pandemic Would Be One of the Most Foreseeable Catastrophes in History, Manifold Markets was predicting a 9% probability of greater than 1 million US human cases in 2025. Today the prediction is at 20%.
Once again, we may get lucky, and that is still the way to bet, but only the weak rely on luck. Strong civilizations don’t pray for luck. They crush the bugs. So far, we are not doing that.
Happy new year.
The New FDA and the Regulation of Laboratory Developed Tests
The FDA under President Trump and new FDA head Martin Makary should rapidly reverse the FDA’s power grab on laboratory-developed tests. To recap, laboratory-developed tests (LDTs) are the kind your doctor orders; they are a service, not a product, and are not sold directly to patients. Congress has never given the FDA the authority to regulate LDTs. Indeed, in 2015, Paul Clement, the former US Solicitor General under George W. Bush, and Laurence Tribe, a leading liberal constitutional lawyer, wrote an article rejecting the FDA’s claims, writing that the “FDA’s assertion of authority over laboratory-developed testing services is clearly foreclosed by the FDA’s own authorizing statute” and “by the broader statutory context.”
Moreover, in addition to the legal reasons there are sound public policy reasons to reject FDA regulation of LDTs. Lab-developed tests have never been FDA regulated, except briefly during the pandemic, when the FDA used the declaration of emergency to issue so-called “guidance documents” saying that any SARS-CoV-2 test had to be pre-approved by the FDA. Thus, the FDA reversed the logic of emergency: in ordinary times, pre-approval was not necessary, but when speed was of the essence it became necessary to get FDA pre-approval. The FDA’s pre-approval process slowed down testing in the United States, and it wasn’t until after the FDA lifted its restrictions in March 2020 that tests from the big labs became available.
In a remarkably prescient passage, Clement and Tribe (2015, p. 18) had warned of exactly this kind of delay:
The FDA approval process is protracted and not designed for the rapid clearance of tests. Many clinical laboratories track world trends regarding infectious diseases ranging from SARS to H1N1 and Avian Influenza. In these fast-moving, life-or-death situations, awaiting the development of manufactured test kits and the completion of FDA’s clearance procedures could entail potentially catastrophic delays, with disastrous consequences for patient care.
We are seeing the same kind of FDA-caused delay for bird flu tests.
Moreover, unlike some of the proposals associated with incoming HHS head Robert Kennedy, reversing the FDA on lab-developed tests has significant support from a wide variety of experts. Here, for example, is the American Hospital Association:
…we strongly believe that the FDA should not apply its device regulations to hospital and health system LDTs. These tests are not devices; rather, they are diagnostic tools developed and used in the context of patient care. As such, regulating them using the device regulatory framework would have an unquestionably negative impact on patients’ access to essential testing. It would also disrupt medical innovation in a field demonstrating tremendous benefits to patients and providers.
The Trump administration has a number of options:
…the LDT Final Rule was promulgated in time to escape Congressional Review Act scrutiny; however, the executive branch and a Republican-controlled Congress have other tools to limit or vitiate FDA’s authority. These include, in no particular order:
The U.S. Department of Health and Human Services (HHS) could revoke the LDT Final Rule. The rescission of a rule is treated the same as the promulgation of a new rule. If HHS revokes the final rule, the cases will likely be dismissed as moot. The timing of such action is uncertain at this time.
FDA could extend or revise its policies of enforcement discretion. LDTs are currently subject to FDA’s phaseout policy which has five stages, the last of which begins in May 2028. Specific categories of IVDs will continue under an enforcement discretion policy indefinitely as described in the preamble to the final rule. HHS could quickly issue such a revised policy or policies without prior public comment if it determines such policy meets the threshold in 21 CFR 10.115(g)(2).
Congress could act. With a Republican-controlled House and Senate to start the new Trump administration, there is a chance that efforts to legislate the regulation of LDTs could be reignited. Based on prior congressional efforts, it is likely that such legislation would place LDTs under control by CMS and CLIA, rather than require LDTs to comply with FDA requirements.
HHS could let the litigation continue. The new administration may view the U.S. District Court for the Eastern District of Texas as sympathetic to the Plaintiffs’ arguments and therefore let the case proceed, assuming the final rule will be struck down, if that is indeed the deregulatory objective of the new administration.
The U.S. Department of Justice (DOJ) could act concerning the litigation. DOJ options are constrained by ethics rules but DOJ could request to amend its filings, pause the case pending rule-making proceedings, or take other actions intended to stall or moot the litigation in a deregulatory fashion.
Health insurance companies are not the main villain
First of all, insurance companies just don’t make that much profit. UnitedHealth Group, the company of which Brian Thompson’s UnitedHealthcare is a subsidiary, is the most valuable private health insurer in the country in terms of market capitalization, and the one with the largest market share. Its net profit margin is just 6.11%…
That’s only about half of the average profit margin of companies in the S&P 500. And other big insurers are even less profitable. Elevance Health, the second-biggest, has a margin of between 2% and 4%. Centene’s margin is usually around 1% to 2%. Cigna Group’s margin is usually around 2% to 3%. And so on. These companies are just making very little profit at all.
And:
In other words, Americans’ much-hated private health insurers are paying a higher percent of the cost of Americans’ health care than the government insurance systems of Sweden and Denmark and the UK are paying. The only reason Americans’ bills are higher is that U.S. health care provision costs so much more in the first place.
And:
In fact, the Kaiser Family Foundation does detailed comparisons between U.S. health care spending and spending in other developed countries. And it has concluded that most of this excess spending comes from providers — from hospitals, pharma companies, doctors, nurses, tech suppliers, and so on…
Recommended, here is the full post.
You Have Been Warned
New paper in Science, A single mutation in bovine influenza H5N1 hemagglutinin switches specificity to human receptors. If that isn’t clear enough, here is the editor’s summary:
In 2021, a highly pathogenic influenza H5N1 clade 2.3.4.4b virus was detected in North America that is capable of infecting a diversity of avian species, marine mammals, and humans. In 2024, clade 2.3.4.4b virus spread widely in dairy cattle in the US, causing a few mild human cases, but retaining specificity for avian receptors. Historically, this virus has caused up to 30% fatality in humans, so Lin et al. performed a genetic and structural analysis of the mutations necessary to fully switch host receptor recognition. A single glutamine-to-leucine mutation at residue 226 of the virus hemagglutinin was sufficient to enact the change from avian to human specificity. In nature, the occurrence of this single mutation could be an indicator of human pandemic risk. —Caroline Ash
Time to stock up on Tamiflu and Xofluza.
Addendum: See also A Bird Flu Pandemic Would Be One of the Most Foreseeable Catastrophes in History
