AI Discovers New Uses for Old Drugs
The NYTimes has an excellent piece by Kate Morgan on AI discovering new uses for old drugs:
A little over a year ago, Joseph Coates was told there was only one thing left to decide. Did he want to die at home, or in the hospital?
Coates, then 37 and living in Renton, Wash., was barely conscious. For months, he had been battling a rare blood disorder called POEMS syndrome, which had left him with numb hands and feet, an enlarged heart and failing kidneys. Every few days, doctors needed to drain liters of fluid from his abdomen. He became too sick to receive a stem cell transplant — one of the only treatments that could have put him into remission.
“I gave up,” he said. “I just thought the end was inevitable.”
But Coates’s girlfriend, Tara Theobald, wasn’t ready to quit. So she sent an email begging for help to a doctor in Philadelphia named David Fajgenbaum, whom the couple met a year earlier at a rare disease summit.
By the next morning, Dr. Fajgenbaum had replied, suggesting an unconventional combination of chemotherapy, immunotherapy and steroids previously untested as a treatment for Coates’s disorder.
Within a week, Coates was responding to treatment. In four months, he was healthy enough for a stem cell transplant. Today, he’s in remission.
The lifesaving drug regimen wasn’t thought up by the doctor, or any person. It had been spit out by an artificial intelligence model.
AI is excellent at combing through large amounts of data to find surprising connections.
Discovering new uses for old drugs has some big advantages and one disadvantage. A big advantage is that once a drug has been approved for some use it can be prescribed for any use–thus new uses of old drugs do not have to go through the lengthy and arduous FDA approval process. In essence, off-label uses have been safety-tested but not FDA efficacy-tested in the new use. I use this fact about off-label prescribing to evaluate the FDA. During COVID, for example, the British Recovery trial discovered that the common drug dexamethasone could reduce mortality by up to one-third in hospitalized patients on oxygen support. That knowledge was immediately applied, saving millions of lives worldwide:
Within hours, the result was breaking news across the world and hospitals were adopting the drug into the standard care given to all patients with COVID-19. In the nine months following the discovery, dexamethasone saved an estimated one million lives worldwide.
New uses for old drugs are typically unpatentable, which helps keep them cheap—but the disadvantage is that this also weakens private incentives to discover them. While FDA trials for these new uses are often unnecessary, making development costs much lower, the lack of strong market protection can still deter investment. The FDA offers some limited exclusivity through programs like 505(b)(2), which grants temporary protection for new clinical trials or safety and efficacy data. These programs are hard to calibrate—balancing cost and reward is difficult—but likely provide some net benefits.
The NIH should continue prioritizing research into unpatentable treatments, as this is where the market is most challenged. More broadly, research on novel mechanisms to support non-patentable innovations is valuable. That said, I’m not overly concerned about under-investment in repurposing old drugs, especially as AI further reduces the cost of discovery.
Peter Marks Forced Out at FDA
Peter Marks was key to President Trump’s greatest first-term achievement: Operation Warp Speed. In an emergency, he pushed the FDA to move faster—against every cultural and institutional incentive to go slow. He fought the system and won.
I had some hope that FDA commissioner Marty Makary would team with Marks at CBER. Makary understands that the FDA moves too slowly. He wrote in 2021:
COVID has given us a clear-eyed look at a broken Food and Drug Administration that’s mired in politics and red tape.
Americans can now see why medical advances often move at turtle speed. We need fresh leadership at the FDA to change the culture at the agency and promote scientific advancement, not hinder it.
This starts at the top. Our public health leaders have become too accepting of bureaucratic processes that would outrage a fresh eye. For example, last week the antiviral pill Molnupiravir was found to cut COVID hospitalizations in half and, remarkably, no one who got the drug died.
The irony is that Molnupiravir was developed a year ago. Do the math on the number of lives that could have been saved had health officials moved fast, allowing rolling trials with an evaluation of each infection and adverse event in real time. Instead, we have a process that resembles a 7-part college application for each of the phase 1, 2, and 3 clinical trials.
A Makary-Marks team could have moved the FDA in a very promising direction. Unfortunately, disputes with RFK Jr proved too much. Marks was especially and deservedly outraged by the measles outbreak and the attempt to promote vitamins over vaccines:
“It has become clear that truth and transparency are not desired by the Secretary, but rather he wishes subservient confirmation of his misinformation and lies,” Marks wrote in a resignation letter referring to HHS Secretary Robert F. Kennedy Jr.
Thus, as of now, the FDA is moving in the wrong direction and Makary has lost an ally against RFK.
In other news, the firing of FDA staff is slowing down approvals, as I predicted it would.
Rethinking regulatory fragmentation
Regulatory fragmentation occurs when multiple federal agencies oversee a single issue. Using the full text of the Federal Register, the government’s official daily publication, we provide the first systematic evidence on the extent and costs of regulatory fragmentation. Fragmentation increases the firm’s costs while lowering its productivity, profitability, and growth. Moreover, it deters entry into an industry and increases the propensity of small firms to exit. These effects arise from redundancy and, more prominently, from inconsistencies between government agencies. Our results uncover a new source of regulatory burden, and we show that agency costs among regulators contribute to this burden.
That is from a new paper by Joseph Kalmenovitz, Michelle Lowry, and Ekaterina Volkova, forthcoming in Journal of Finance. Via the excellent Kevin Lewis.
What Follows from Lab Leak?
Does it matter whether SARS-CoV-2 leaked from a lab in Wuhan or had natural zoonotic origins? I think on the margin it does matter.
First, and most importantly, the higher the probability that SARS-CoV-2 leaked from a lab, the higher the probability we should expect another pandemic.* Research at Wuhan was not especially unusual or high-tech. Modifying viruses such as coronaviruses (e.g., inserting spike proteins, adapting receptor-binding domains) is common practice in virology research, and gain-of-function experiments with viruses have been widely conducted. Thus, manufacturing a virus capable of killing ~20 million human beings or more is well within the capability of, say, ~500-1000 labs worldwide. The number of such labs is growing, and such research is becoming less costly and easier to conduct. Thus, lab leak means the risks are larger than we thought and increasing.
A higher probability of a pandemic raises the value of many ideas that I and others have discussed such as worldwide wastewater surveillance, developing vaccine libraries and keeping vaccine production lines warm so that we could be ready to go with a new vaccine within 100 days. I want to focus, however, on what new ideas are suggested by lab-leak. Among these are the following.
Given the risks, a “Biological IAEA” with authority similar to the International Atomic Energy Agency’s to conduct unannounced inspections at high-containment labs does not seem outlandish. (Indeed, the Bulletin of the Atomic Scientists is about the only group to have begun studying the issue of pandemic lab risk.) Under the Biological Weapons Convention such authority already exists, but it has never been used for inspections–mostly because of opposition by the United States, and because the meaning of “biological weapon” is unclear, as pretty much everything can be considered dual use. Notice, however, that nuclear weapons have killed ~200,000 people while accidental lab leak has probably killed tens of millions of people. (And COVID is not the only example of deadly lab leak.) Thus, we should consider revising the Biological Weapons Convention into something like a Biological Dangers Convention.
BSL3 and especially BSL4 safety procedures are very rigorous, thus the issue is not primarily that we need more regulation of these labs but rather to make sure that high-risk research isn’t conducted under weaker conditions. Gain of function research of viruses with pandemic potential (e.g. those with potential aerosol transmissibility) should be considered high-risk and only conducted when it passes a review and is done under BSL3 or BSL4 conditions. Making this credible may not be that difficult because most scientists want to publish. Thus, journals should require documentation of biosafety practices as part of manuscript submission and no journal should publish research that was done under inappropriate conditions. A coordinated approach among major journals (e.g., Nature, Science, Cell, Lancet) and funders (e.g. NIH, Wellcome Trust) can make this credible.
I’m more regulation-averse than most, and tradeoffs exist, but COVID-19’s global economic cost—estimated in the tens of trillions—so vastly outweighs the comparatively minor cost of upgrading global BSL2 labs and improving monitoring that there is clear room to make everyone safer without compromising research. Incredibly, five years after the crisis there has been no change in biosafety regulation. None. That seems crazy.
Many people convinced of lab leak instinctively gravitate toward blame and reparations, which is understandable but not necessarily productive. Blame provokes defensiveness, leading individuals and institutions to obscure evidence and reject accountability. Anesthesiologists and physicians have leaned towards a less-punitive, systems-oriented approach. Instead of assigning blame, they focus in Morbidity and Mortality Conferences on openly analyzing mistakes, sharing knowledge, and redesigning procedures to prevent future harm. This method encourages candid reporting and learning. At its best a systems approach transforms mistakes into opportunities for widespread improvement.
If we can move research up from BSL2 to BSL3 and BSL4 labs we can also do relatively simple things to decrease the risks coming from those labs. For example, let’s not put BSL4 labs in major population centers or in the middle of hurricane-prone regions. We can also, for example, investigate which biosafety procedures are most effective and increase research into safer alternatives—such as surrogate or simulation systems—to reduce reliance on replication-competent pathogens.
The good news is that improving biosafety is highly tractable. The number of labs, researchers, and institutions involved is relatively small, making targeted reforms feasible. Both the United States and China were deeply involved in research at the Wuhan Institute of Virology, suggesting at least the possibility of cooperation—however remote it may seem right now.
Shared risk could be the basis for shared responsibility.
Bayesian addendum*: A higher probability of lab leak should also reduce the probability of zoonotic origin, but the latter is an already-known risk to which COVID adds little, while the former is new, so the net effect on our total risk estimate is positive. In other words, the discovery of a relatively new source of risk increases our estimate of total risk.
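The footnote’s logic can be sketched numerically. Every probability below is hypothetical, chosen only to show the direction of the update, not its size:

```python
# Toy Bayesian illustration -- all numbers are hypothetical.

# Before COVID: annual pandemic risk attributed mostly to zoonotic
# spillover, a well-characterized risk; lab accidents assumed rare.
p_zoonotic_prior = 0.020
p_lab_prior      = 0.002
total_prior = p_zoonotic_prior + p_lab_prior

# After updating on a plausible lab leak: the zoonotic estimate barely
# moves (that prior was already well informed), while the lab-route
# estimate rises because the risk was previously underestimated.
p_zoonotic_post = 0.019   # slightly lower: some probability mass shifts away
p_lab_post      = 0.010   # much higher: a newly appreciated source of risk
total_post = p_zoonotic_post + p_lab_post

# The shift between hypotheses nets out to a higher total risk estimate.
print(f"prior total: {total_prior:.3f}, posterior total: {total_post:.3f}")
```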
Not the precedent I have been looking for
The Federal Communications Commission is prepared to block mergers and acquisitions involving companies that continue promoting diversity, equity and inclusion policies, FCC Chairman Brendan Carr said Friday.
Why Spain’s transition to democracy remains controversial
New podcast series on Latin American political economy, with Rasheed Griffith and Diego Sánchez de la Cruz, all in English.
NIMBY contrarianism
The standard view of housing markets holds that the flexibility of local housing supply–shaped by factors like geography and regulation–strongly affects the response of house prices, house quantities and population to rising housing demand. However, from 2000 to 2020, we find that higher income growth predicts the same growth in house prices, housing quantity, and population regardless of a city’s estimated housing supply elasticity. We find the same pattern when we expand the sample to 1980 to 2020, use different elasticity measures, and when we instrument for local housing demand. Using a general demand-and-supply framework, we show that our findings imply that constrained housing supply is relatively unimportant in explaining differences in rising house prices among U.S. cities. These results challenge the prevailing view of local housing and labor markets and suggest that easing housing supply constraints may not yield the anticipated improvements in housing affordability.
That is from a new NBER working paper by Schuyler Louie, John A. Mondragon, and Johannes Wieland.
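The “standard view” the abstract refers to can be sketched in a toy log-linear demand-and-supply model. The elasticity values below are my own illustrative choices, not the paper’s:

```python
# Toy sketch of the standard view: a demand shock `shock` splits between
# log price growth dP and log quantity growth dQ according to the supply
# elasticity eps_s and demand elasticity eps_d:
#   dP = shock / (eps_s + eps_d),   dQ = eps_s * dP
# Elasticity numbers are illustrative, not from the paper.

def price_quantity_response(shock: float, eps_s: float, eps_d: float):
    dP = shock / (eps_s + eps_d)  # log house-price growth
    dQ = eps_s * dP               # log housing-quantity growth
    return dP, dQ

# Same 10% demand shock, two cities with different supply elasticities:
flexible    = price_quantity_response(0.10, eps_s=3.0, eps_d=1.0)
constrained = price_quantity_response(0.10, eps_s=0.5, eps_d=1.0)

# Standard view: the supply-constrained city gets more price growth and
# less quantity growth -- exactly the differential the paper fails to find.
print(f"flexible city:    dP={flexible[0]:.3f}, dQ={flexible[1]:.3f}")
print(f"constrained city: dP={constrained[0]:.3f}, dQ={constrained[1]:.3f}")
```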
What Did We Learn From Torturing Babies?
As late as the 1980s it was widely believed that babies do not feel pain. You might think that this was an absurd thing to believe given that babies cry and exhibit all the features of pain and pain avoidance. Yet, for much of the 19th and 20th centuries, the straightforward sensory evidence was dismissed as “pre-scientific” by the medical and scientific establishment. Babies were thought to be lower-evolved beings whose brains were not yet developed enough to feel pain, at least not in the way that older children and adults feel pain. Crying and pain avoidance were dismissed as simply reflexive. Indeed, babies were thought to be more like animals than reasoning beings and Descartes had told us that an animal’s cries were of no more import than the grinding of gears in a mechanical automaton. There was very little evidence for this theory beyond some gestures toward myelin sheathing. But anyone who doubted the theory was told that there was “no evidence” that babies feel pain (conflating absence of evidence with evidence of absence).
Most disturbingly, the theory that babies don’t feel pain wasn’t just an error of science or philosophy—it shaped medical practice. It was routine for babies undergoing medical procedures to be medically paralyzed but not anesthetized. In one now-infamous 1985 case, an open-heart operation was performed on a baby without any anesthesia (n.b. the link is hard reading). Parents were shocked when they discovered that this was standard practice. Publicity from the case and a key review paper in 1987 led the American Academy of Pediatrics to declare it unethical to operate on newborns without anesthesia.
In short, we tortured babies under the theory that they were not conscious of pain. What can we learn from this? One lesson is humility about consciousness. Consciousness and the capacity to suffer can exist in forms once assumed to be insensate. When assessing the consciousness of a newborn, an animal, or an intelligent machine, we should weigh observable and circumstantial evidence and not just abstract theory. If we must err, let us err on the side of compassion.
Claims that X cannot feel or think because Y should be met with skepticism—especially when X is screaming and telling you different. Theory may convince you that animals or AIs are not conscious but do you want to torture more babies? Be humble.
We should be especially humble when the beings in question are very different from ourselves. If we can be wrong about animals, if we can be wrong about other people, if we can be wrong about our own babies then we can be very wrong about AIs. The burden of proof should not fall on the suffering being to prove its pain; rather, the onus is on us to justify why we would ever withhold compassion.
Hat tip: Jim Ward for discussion.
The Shortage that Increased Ozempic Supply
It sometimes happens that a patient needs a non-commercially-available form of a drug: a different dosage, or a specific ingredient added or removed, depending on the patient’s needs. Compounding pharmacies are allowed to produce these drugs without FDA approval. Moreover, since the production is small-scale and bespoke, the compounded drugs are basically immune from any patent infringement claims. The FDA, however, also has an oddly sensible rule that says when a drug is in shortage they will allow it to be compounded, even when the compounded version is identical to the commercial version.
The shortage rule was meant to cover rare drugs but when demand for the GLP-1 drugs like Ozempic and Zepbound skyrocketed, the FDA declared a shortage and big compounders jumped into the market offering these drugs at greatly reduced prices. Moreover, the compounders advertised heavily and made it very easy to get a “prescription.” Thus, the GLP-1 compounders radically changed the usual story where the patient asks the compounder to produce a small amount of a bespoke drug. Instead the compounders were selling drugs to millions of patients.
Thus, as a result of the shortage rule, the shortage led to increased supply! The shortage has now ended, however, which means you can expect to see many fewer Hims and Hers ads.
Scott Alexander makes an interesting point in regard to this whole episode:
I think the past two years have been a fun experiment in semi-free-market medicine. I don’t mean the patent violations – it’s no surprise that you can sell drugs cheap if you violate the patent – I mean everything else. For the past three years, ~2 million people have taken complex peptides provided direct-to-consumer by a less-regulated supply chain, with barely a fig leaf of medical oversight, and it went great. There were no more side effects than any other medication. People who wanted to lose weight lost weight. And patients had a more convenient time than if they’d had to wait for the official supply chain to meet demand, get a real doctor, spend thousands of dollars on doctors’ visits, apply for insurance coverage, and go to a pharmacy every few weeks to pick up their next prescription. Now pharma companies have noticed and are working on patent-compliant versions of the same idea. Hopefully there will be more creative business models like this one in the future.
The GLP-1 drugs are complex peptides and the compounding pharmacies weren’t perfect. Nevertheless, I agree with Scott that, as with the off-label market, the experiment in relaxed FDA regulation was impressive and it does provide a window onto what a world with less FDA regulation would look like.
Hat tip: Jonathan Meer.
Visits to the Doctor, Per Year
The number of times people visit the doctor per year varies tremendously across OECD countries from a low of 2.9 in Chile to a high of 17.5 (!) in Korea. I haven’t run the numbers officially but it doesn’t seem that there is much correlation with medical spending per capita or life expectancy.
Data can be found here.
Hat tip: Emil Kirkegaard on X.
Sentences to ponder
The daughters of immigrants enjoy higher absolute mobility than daughters of locals in most destinations, while immigrant sons primarily enjoy this advantage in countries with long histories of immigration.
That is from a new and very interesting paper by Leah Boustan et al. You have pondered the implied policy recommendations, right?
The political economy of Manus AI
Early reports are pretty consistent, and they indicate that Manus agentic AI is for real, and ahead of its American counterparts. I also hear it is still glitchy. Still, it is easy to imagine Chinese agentic AI “getting there” before the American product does. If so, what does that world look like?
The cruder way of putting the question is: “are we going to let Chinese agentic bots crawl all over American computers?”
The next step question is: “do we in fact have a plausible way to stop this from happening?”
Many Chinese use VPNs to get around their own Great Firewall and access OpenAI products. China could toughen its firewall and shut down VPNs, but that is very costly for them. America doesn’t have a Great Firewall at all, and the First Amendment would seem to prevent very tough restrictions on accessing the outside world. Plus there can always be a version of the new models not directly connected to China.
We did (sort of) pass a TikTok ban, but even that applied only to the app. Had the ban gone through, you still could have accessed TikTok through its website. And so, one way or another, Americans will be able to access Manus.
Manus will crawl your computer and do all sorts of useful tasks for you. If not right now, probably within a year or not much more. An American alternative might leapfrog them, but again maybe not.
It is easy to imagine government banning Manus from its computers, just as the state of Virginia banned DeepSeek from its computers. I’m just not sure that matters much. Plenty of people will use it on their private computers, and it could become an integral part of many systems, including systems that interact with the U.S. public sector.
It is not obvious that the CCP will be able to pull strings to manipulate every aspect of Manus operations. I am not worried that you might order a cheeseburger on-line, and end up getting Kung Pao chicken. Still, the data collected by the parent company will in principle be CCP-accessible. Remember that advanced AI can be used to search through that information with relative ease. And over time, though probably not initially, you can imagine a Manus-like entity designed to monitor your computer for information relevant to China and the CCP. Even if it is not easy for a Manus-like entity to manipulate your computer in a “body snatchers-like” way, you can see the points of concern here.
Financial firms might be vulnerable to information capture attacks. Will relatives of U.S. military personnel be forbidden from having agentic Chinese AI on their computers? That does not seem enforceable.
Maybe you’re all worried now!
But should you be?
Whatever problems American computer owners might face, Chinese computer owners will face too. And the most important Chinese computer owner is the CCP and its affiliates, including the military.
More likely, Manus will roam CCP computers too. No, I don’t think that puts “the aliens” in charge, but who exactly is in charge? Is it Butterfly Effect, the company behind Manus, and its few dozen employees? In the short run, yes, more or less. But over time they too will be using more and more agentic AIs, perhaps including brands from other companies.
Think of some new system of checks and balances as being created, much as an economy is itself a spontaneous order. And in this new spontaneous order, a lot of the cognitive capital is coming from outside the CCP.
In this new system, is the CCP still the smartest or most powerful entity in China? Or does the spontaneous order of various AI models more or less “rule it”? To what extent do the decisions of the CCP become a derivative product of Manus (and other systems) advice, interpretation, and data gathering?
What exactly is the CCP any more?
Does the importance of Central Committee membership decline radically?
I am not talking doomsday scenarios here. Alignment will ensure that the AI entities (for instance) continue to supply China with clean water, rather than poisoning the water supply. But those AI entities have been trained on information sets that have very different weights than what the CCP implements through its Marxism-swayed, autocracy-swayed decisions. Chinese AI systems look aligned with the CCP, given that they have some crude, ex post censorship and loyalty training. But are the AI systems truly aligned in terms of having the same limited, selective set of information weights that the CCP does? I doubt it. If they did, probably they would not be the leading product.
(There is plenty of discussion of alignment problems with AI. A neglected issue is whether the alignment solution resulting from the competitive process is biased on net toward “universal knowledge” entities, or some other such description, rather than “dogmatic entities.” Probably it is, and probably that is a good thing? …But is it always a good thing?)
Does the CCP see this erosion of its authority and essence coming? If so, will they do anything to try to preempt it? Or maybe a few of them, in Straussian fashion, allow it or even accelerate it?
Let’s say China can indeed “beat” America at AI, but at the cost of giving up control over China, at least as that notion is currently understood. How does that change the world?
Solve for the equilibrium!
Who exactly should be most afraid of Manus and related advances to come?
Who loses the most status in the new, resulting checks and balances equilibrium?
Who gains?
The Role of Unrealized Gains and Borrowing in the Taxation of the Rich
Of relevance to some recent debates:
We have four main findings: First, measuring “economic income” as currently-taxed income plus new unrealized gains, the income tax base captures 60% of economic income of the top 1% of wealth-holders (and 71% adjusting for inflation) and the vast majority of income for lower wealth groups. Second, adjusting for unrealized gains substantially lessens the degree of progressivity in the income tax, although it remains largely progressive. Third, we quantify for the first time the amount of borrowing across the full wealth distribution. Focusing on the top 1%, while total borrowing is substantial, new borrowing each year is fairly small (1-2% of economic income) compared to their new unrealized gains, suggesting that “buy, borrow, die” is not a dominant tax avoidance strategy for the rich. Fourth, consumption is less than liquid income for rich Americans, partly because the rich have a large amount of liquid income, and partly because their savings rates are high, suggesting that the main tax avoidance strategy of the super-rich is “buy, save, die.”
Here is the full piece by Edward G. Fox and Zachary D. Liscow. Via the excellent Kevin Lewis.
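The paper’s “economic income” definition is simple enough to illustrate with made-up numbers (all dollar figures below are hypothetical, not from the paper):

```python
# Toy illustration of the paper's "economic income" concept:
# currently-taxed income plus new unrealized gains. The share of
# economic income that the income tax base captures is the analog of
# the paper's 60% figure for the top 1%. All figures are hypothetical.
taxed_income     = 6.0  # $ millions appearing on the tax return
unrealized_gains = 4.0  # $ millions of new, untaxed accrued gains

economic_income = taxed_income + unrealized_gains
share_captured  = taxed_income / economic_income

print(f"income tax base captures {share_captured:.0%} of economic income")
```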
Should OneQuadrillionOwls be worried?
IMO this is one of the more compelling “disaster” scenarios — not that AI goes haywire because it hates humanity, but that it acquires power by being effective and winning trust — and then, that there is a cohort of humans that fear this expansion of trust and control, and those humans find themselves at odds with the nebulously part-human-part-AI governance structure, and chaos ensues.
It’s a messy story that doesn’t place blame at the feet of the AI per se, but in the fickleness of the human notion of legitimacy. It’s not enough to do a good job at governance — you have to meet the (maybe impossible, often contradictory) standards of human citizens, who may dislike you because of what you are or what you represent in some emotional sense.
As soon as any entity (in this case, AI) is given significant power, it has to grapple with questions of legitimacy, and with the thorniest question of all — how shall I deal with people who are trying to undermine my power?
Here is the relevant Reddit thread. Change is tough!
New results on AI and lawyer productivity
From a new piece by Daniel Schwarcz et al., here is part of the abstract:
This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all.
Of course those are now obsolete tools, but the results should hold all the more for more advanced models.