What Follows from Lab Leak?
Does it matter whether SARS-CoV-2 leaked from a lab in Wuhan or had natural zoonotic origins? I think on the margin it does matter.
First, and most importantly, the higher the probability that SARS-CoV-2 leaked from a lab, the higher the probability we should expect another pandemic.* Research at Wuhan was not especially unusual or high-tech. Modifying viruses such as coronaviruses (e.g., inserting spike proteins, adapting receptor-binding domains) is common practice in virology research, and gain-of-function experiments with viruses have been widely conducted. Thus, manufacturing a virus capable of killing ~20 million human beings or more is well within the capability of, say, ~500-1000 labs worldwide. The number of such labs is growing, and such research is becoming less costly and easier to conduct. Thus, lab leak means the risks are larger than we thought and increasing.
A higher probability of a pandemic raises the value of many ideas that I and others have discussed, such as worldwide wastewater surveillance, developing vaccine libraries, and keeping vaccine production lines warm so that we could be ready to go with a new vaccine within 100 days. I want to focus, however, on what new ideas are suggested by lab leak. Among these are the following.
Given the risks, a “Biological IAEA” with authority similar to the International Atomic Energy Agency's to conduct unannounced inspections at high-containment labs does not seem outlandish. (Indeed, the Bulletin of the Atomic Scientists is about the only group to have begun to study the issue of pandemic lab risk.) Under the Biological Weapons Convention such authority already exists, but it has never been used for inspections, mostly because of opposition by the United States and because the meaning of “biological weapon” is unclear, as pretty much everything can be considered dual use. Notice, however, that nuclear weapons have killed ~200,000 people while accidental lab leak has probably killed tens of millions of people. (And COVID is not the only example of a deadly lab leak.) Thus, we should consider revising the Biological Weapons Convention into something like a Biological Dangers Convention.
BSL3 and especially BSL4 safety procedures are very rigorous, so the issue is not primarily that we need more regulation of these labs but rather that we need to ensure high-risk research isn't conducted under weaker conditions. Gain-of-function research on viruses with pandemic potential (e.g., those with potential aerosol transmissibility) should be considered high-risk and conducted only after passing a review, and only under BSL3 or BSL4 conditions. Making this credible may not be that difficult because most scientists want to publish. Thus, journals should require documentation of biosafety practices as part of manuscript submission, and no journal should publish research done under inappropriate conditions. A coordinated approach among major journals (e.g., Nature, Science, Cell, The Lancet) and funders (e.g., NIH, Wellcome Trust) would give the requirement teeth.
I’m more regulation-averse than most, and tradeoffs exist, but COVID-19’s global economic cost, estimated in the tens of trillions of dollars, so vastly outweighs the comparatively minor cost of upgrading global BSL-2 labs and improving monitoring that there is clear room to make everyone safer without compromising research. Incredibly, five years after the crisis, there has been no change in biosafety regulation. None. That seems crazy.
Many people convinced of lab leak instinctively gravitate toward blame and reparations, which is understandable but not necessarily productive. Blame provokes defensiveness, leading individuals and institutions to obscure evidence and reject accountability. Anesthesiologists and other physicians have leaned toward a less-punitive, systems-oriented approach. Instead of assigning blame, they focus in Morbidity and Mortality Conferences on openly analyzing mistakes, sharing knowledge, and redesigning procedures to prevent future harm. This method encourages candid reporting and learning. At its best, a systems approach transforms mistakes into opportunities for widespread improvement.
If we can move research up from BSL2 to BSL3 and BSL4 labs, we can also do relatively simple things to decrease the risks coming from those labs. For example, let’s not put BSL4 labs in major population centers or in the middle of hurricane-prone regions. We can also, for example, investigate which biosafety procedures are most effective and increase research into safer alternatives, such as surrogate or simulation systems, to reduce reliance on replication-competent pathogens.
The good news is that improving biosafety is highly tractable. The number of labs, researchers, and institutions involved is relatively small, making targeted reforms feasible. Both the United States and China were deeply involved in research at the Wuhan Institute of Virology, suggesting at least the possibility of cooperation—however remote it may seem right now.
Shared risk could be the basis for shared responsibility.
Bayesian addendum *: A higher probability of lab leak should also reduce the probability of zoonotic origin, but the latter is an already-known risk to which COVID adds little, while the former is new, so the net effect on total risk is positive. In other words, the discovery of a relatively new source of risk increases our estimate of total risk.
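The logic of the addendum can be sketched with a few lines of arithmetic. All of the probabilities below are hypothetical, chosen only to illustrate how a new risk channel can raise total risk even while it lowers the probability of the familiar channel:

```python
# Illustrative Bayesian sketch: weighing lab-leak evidence shifts some
# origin probability away from the zoonotic channel, but the lab channel
# rises by more, so estimated *total* pandemic risk goes up.
# All numbers are hypothetical.

# Prior annual risks of a COVID-scale pandemic, by source:
prior_zoonotic = 0.010   # already a well-known risk; COVID adds little
prior_lab      = 0.002   # previously assumed small

# Posterior: revise the lab channel sharply upward while barely
# revising the already-familiar zoonotic channel downward:
post_zoonotic = 0.009
post_lab      = 0.010

prior_total = prior_zoonotic + prior_lab   # 0.012
post_total  = post_zoonotic + post_lab     # 0.019

print(f"prior total risk:     {prior_total:.3f}")
print(f"posterior total risk: {post_total:.3f}")
# The "new" source moves up more than the "known" source moves down,
# so the estimate of total risk rises.
```

The qualitative conclusion does not depend on the particular numbers, only on the zoonotic revision being small relative to the lab-leak revision.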
Not the precedent I have been looking for
The Federal Communications Commission is prepared to block mergers and acquisitions involving companies that continue promoting diversity, equity and inclusion policies, FCC Chairman Brendan Carr said Friday.
Why Spain’s transition to democracy remains controversial
New podcast series on Latin American political economy, with Rasheed Griffith and Diego Sánchez de la Cruz, all in English.
NIMBY contrarianism
The standard view of housing markets holds that the flexibility of local housing supply–shaped by factors like geography and regulation–strongly affects the response of house prices, house quantities and population to rising housing demand. However, from 2000 to 2020, we find that higher income growth predicts the same growth in house prices, housing quantity, and population regardless of a city’s estimated housing supply elasticity. We find the same pattern when we expand the sample to 1980 to 2020, use different elasticity measures, and when we instrument for local housing demand. Using a general demand-and-supply framework, we show that our findings imply that constrained housing supply is relatively unimportant in explaining differences in rising house prices among U.S. cities. These results challenge the prevailing view of local housing and labor markets and suggest that easing housing supply constraints may not yield the anticipated improvements in housing affordability.
That is from a new NBER working paper by Schuyler Louie, John A. Mondragon, and Johannes Wieland.
What Did We Learn From Torturing Babies?
As late as the 1980s it was widely believed that babies do not feel pain. You might think that this was an absurd thing to believe, given that babies cry and exhibit all the features of pain and pain avoidance. Yet, for much of the 19th and 20th centuries, the straightforward sensory evidence was dismissed as “pre-scientific” by the medical and scientific establishment. Babies were thought to be lower-evolved beings whose brains were not yet developed enough to feel pain, at least not in the way that older children and adults feel pain. Crying and pain avoidance were dismissed as simply reflexive. Indeed, babies were thought to be more like animals than reasoning beings, and Descartes had told us that an animal’s cries were of no more import than the grinding of gears in a mechanical automaton. There was very little evidence for this theory beyond some gestures toward myelin sheathing. But anyone who doubted the theory was told that there was “no evidence” that babies feel pain (a conflation of absence of evidence with evidence of absence).
Most disturbingly, the theory that babies don’t feel pain wasn’t just an error of science or philosophy—it shaped medical practice. It was routine for babies undergoing medical procedures to be medically paralyzed but not anesthetized. In one now infamous 1985 case an open heart operation was performed on a baby without any anesthesia (n.b. the link is hard reading). Parents were shocked when they discovered that this was standard practice. Publicity from the case and a key review paper in 1987 led the American Academy of Pediatrics to declare it unethical to operate on newborns without anesthesia.
In short, we tortured babies under the theory that they were not conscious of pain. What can we learn from this? One lesson is humility about consciousness. Consciousness and the capacity to suffer can exist in forms once assumed to be insensate. When assessing the consciousness of a newborn, an animal, or an intelligent machine, we should weigh observable and circumstantial evidence and not just abstract theory. If we must err, let us err on the side of compassion.
Claims that X cannot feel or think because Y should be met with skepticism—especially when X is screaming and telling you different. Theory may convince you that animals or AIs are not conscious but do you want to torture more babies? Be humble.
We should be especially humble when the beings in question are very different from ourselves. If we can be wrong about animals, if we can be wrong about other people, if we can be wrong about our own babies then we can be very wrong about AIs. The burden of proof should not fall on the suffering being to prove its pain; rather, the onus is on us to justify why we would ever withhold compassion.
Hat tip: Jim Ward for discussion.
The Shortage that Increased Ozempic Supply
It sometimes happens that a patient needs a non-commercially-available form of a drug: a different dosage, or a specific ingredient added or removed depending on the patient’s needs. Compounding pharmacies are allowed to produce these drugs without FDA approval. Moreover, since the production is small-scale and bespoke, the compounded drugs are basically immune from patent infringement claims. The FDA, however, also has an oddly sensible rule that when a drug is in shortage they will allow it to be compounded, even when the compounded version is identical to the commercial version.
The shortage rule was meant to cover rare drugs but when demand for the GLP-1 drugs like Ozempic and Zepbound skyrocketed, the FDA declared a shortage and big compounders jumped into the market offering these drugs at greatly reduced prices. Moreover, the compounders advertised heavily and made it very easy to get a “prescription.” Thus, the GLP-1 compounders radically changed the usual story where the patient asks the compounder to produce a small amount of a bespoke drug. Instead the compounders were selling drugs to millions of patients.
Thus, as a result of the shortage rule, the shortage led to increased supply! The shortage has now ended, however, which means you can expect to see many fewer Hims and Hers ads.
Scott Alexander makes an interesting point in regard to this whole episode:
I think the past two years have been a fun experiment in semi-free-market medicine. I don’t mean the patent violations – it’s no surprise that you can sell drugs cheap if you violate the patent – I mean everything else. For the past three years, ~2 million people have taken complex peptides provided direct-to-consumer by a less-regulated supply chain, with barely a fig leaf of medical oversight, and it went great. There were no more side effects than any other medication. People who wanted to lose weight lost weight. And patients had a more convenient time than if they’d had to wait for the official supply chain to meet demand, get a real doctor, spend thousands of dollars on doctors’ visits, apply for insurance coverage, and go to a pharmacy every few weeks to pick up their next prescription. Now pharma companies have noticed and are working on patent-compliant versions of the same idea. Hopefully there will be more creative business models like this one in the future.
The GLP-1 drugs are complex peptides and the compounding pharmacies weren’t perfect. Nevertheless, I agree with Scott that, as with the off-label market, the experiment in relaxed FDA regulation was impressive and it does provide a window onto what a world with less FDA regulation would look like.
Hat tip: Jonathan Meer.
Visits to the Doctor, Per Year
The number of times people visit the doctor per year varies tremendously across OECD countries, from a low of 2.9 in Chile to a high of 17.5 (!) in Korea. I haven’t run the numbers officially, but it doesn’t seem that there is much correlation with medical spending per capita or life expectancy.
Data can be found here.
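Running the numbers informally takes only a few lines once the OECD series are in hand. A minimal sketch (the two lists below are made-up placeholders purely to show the call; substitute the real series from the linked data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder values, NOT real OECD figures -- replace with the
# country-matched series from the linked data.
visits_per_year = [2.9, 6.0, 17.5, 4.3, 9.1]      # doctor visits per year
spending_pc     = [2500, 5300, 3600, 6100, 4400]  # health spending per capita

print(round(pearson(visits_per_year, spending_pc), 2))
```

A correlation near zero on the real data would confirm the eyeballed impression; repeating the call with life expectancy as the second series tests the other half of the claim.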
Hat tip: Emil Kirkegaard on X.
Sentences to ponder
The daughters of immigrants enjoy higher absolute mobility than daughters of locals in most destinations, while immigrant sons primarily enjoy this advantage in countries with long histories of immigration.
That is from a new and very interesting paper by Leah Boustan, et al. You have pondered the implied policy recommendations, right?
The political economy of Manus AI
Early reports are pretty consistent, and they indicate that Manus agentic AI is for real, and ahead of its American counterparts. I also hear it is still glitchy. Still, it is easy to imagine Chinese agentic AI “getting there” before the American product does. If so, what does that world look like?
The cruder way of putting the question is: “are we going to let Chinese agentic bots crawl all over American computers?”
The next step question is: “do we in fact have a plausible way to stop this from happening?”
Many Chinese use VPNs to get around their own Great Firewall and access OpenAI products. China could toughen its firewall and shut down VPNs, but that is very costly for them. America doesn’t have a Great Firewall at all, and the First Amendment would seem to prevent very tough restrictions on accessing the outside world. Plus there can always be a version of the new models not directly connected to China.
We did (sort of) pass a TikTok ban, but even that applied only to the app. Had the ban gone through, you still could have accessed TikTok through its website. And so, one way or another, Americans will be able to access Manus.
Manus will crawl your computer and do all sorts of useful tasks for you. If not right now, probably within a year or not much more. An American alternative might leapfrog them, but again maybe not.
It is easy to imagine government banning Manus from its computers, just as the state of Virginia banned DeepSeek from its computers. I’m just not sure that matters much. Plenty of people will use it on their private computers, and it could become an integral part of many systems, including systems that interact with the U.S. public sector.
It is not obvious that the CCP will be able to pull strings to manipulate every aspect of Manus operations. I am not worried that you might order a cheeseburger online and end up getting Kung Pao chicken. Still, the data collected by the parent company will in principle be CCP-accessible. Remember that advanced AI can be used to search through that information with relative ease. And over time, though probably not initially, you can imagine a Manus-like entity designed to monitor your computer for information relevant to China and the CCP. Even if it is not easy for a Manus-like entity to manipulate your computer in a “body snatchers-like” way, you can see the points of concern here.
Financial firms might be vulnerable to information capture attacks. Will relatives of U.S. military personnel be forbidden from having agentic Chinese AI on their computers? That does not seem enforceable.
Maybe you’re all worried now!
But should you be?
Whatever problems American computer owners might face, Chinese computer owners will face too. And the most important Chinese computer owner is the CCP and its affiliates, including the military.
More likely, Manus will roam CCP computers too. No, I don’t think that puts “the aliens” in charge, but who exactly is in charge? Is it Butterfly Effect, the company behind Manus, and its few dozen employees? In the short run, yes, more or less. But they too over time are using more and more agentic AIs, perhaps different brands from other companies too.
Think of some new system of checks and balances as being created, much as an economy is itself a spontaneous order. And in this new spontaneous order, a lot of the cognitive capital is coming from outside the CCP.
In this new system, is the CCP still the smartest or most powerful entity in China? Or does the spontaneous order of various AI models more or less “rule it”? To what extent do the decisions of the CCP become a derivative product of Manus (and other systems) advice, interpretation, and data gathering?
What exactly is the CCP any more?
Does the importance of Central Committee membership decline radically?
I am not talking doomsday scenarios here. Alignment will ensure that the AI entities (for instance) continue to supply China with clean water, rather than poisoning the water supply. But those AI entities have been trained on information sets that have very different weights than what the CCP implements through its Marxism-swayed, autocracy-swayed decisions. Chinese AI systems look aligned with the CCP, given that they have some crude, ex post censorship and loyalty training. But are the AI systems truly aligned in terms of having the same limited, selective set of information weights that the CCP does? I doubt it. If they did, probably they would not be the leading product.
(There is plenty of discussion of alignment problems with AI. A neglected issue is whether the alignment solution resulting from the competitive process is biased on net toward “universal knowledge” entities, or some other such description, rather than “dogmatic entities.” Probably it is, and probably that is a good thing? …But is it always a good thing?)
Does the CCP see this erosion of its authority and essence coming? If so, will they do anything to try to preempt it? Or maybe a few of them, in Straussian fashion, allow it or even accelerate it?
Let’s say China can indeed “beat” America at AI, but at the cost of giving up control over China, at least as that notion is currently understood. How does that change the world?
Solve for the equilibrium!
Who exactly should be most afraid of Manus and related advances to come?
Who loses the most status in the new, resulting checks and balances equilibrium?
Who gains?
The Role of Unrealized Gains and Borrowing in the Taxation of the Rich
Of relevance to some recent debates:
We have four main findings: First, measuring “economic income” as currently-taxed income plus new unrealized gains, the income tax base captures 60% of economic income of the top 1% of wealth-holders (and 71% adjusting for inflation) and the vast majority of income for lower wealth groups. Second, adjusting for unrealized gains substantially lessens the degree of progressivity in the income tax, although it remains largely progressive. Third, we quantify for the first time the amount of borrowing across the full wealth distribution. Focusing on the top 1%, while total borrowing is substantial, new borrowing each year is fairly small (1-2% of economic income) compared to their new unrealized gains, suggesting that “buy, borrow, die” is not a dominant tax avoidance strategy for the rich. Fourth, consumption is less than liquid income for rich Americans, partly because the rich have a large amount of liquid income, and partly because their savings rates are high, suggesting that the main tax avoidance strategy of the super-rich is “buy, save, die.”
Here is the full piece by Edward G. Fox and Zachary D. Liscow. Via the excellent Kevin Lewis.
Should OneQuadrillionOwls be worried?
IMO this is one of the more compelling “disaster” scenarios — not that AI goes haywire because it hates humanity, but that it acquires power by being effective and winning trust — and then, that there is a cohort of humans that fear this expansion of trust and control, and those humans find themselves at odds with the nebulously part-human-part-AI governance structure, and chaos ensues.
It’s a messy story that doesn’t place blame at the feet of the AI per se, but in the fickleness of the human notion of legitimacy. It’s not enough to do a good job at governance — you have to meet the (maybe impossible, often contradictory) standards of human citizens, who may dislike you because of what you are or what you represent in some emotional sense.
As soon as any entity (in this case, AI) is given significant power, it has to grapple with questions of legitimacy, and with the thorniest question of all — how shall I deal with people who are trying to undermine my power?
Here is the relevant Reddit thread. Change is tough!
New results on AI and lawyer productivity
From a new piece by Daniel Schwarcz, et al., here is part of the abstract:
This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all.
Of course those are now obsolete tools, but the results should hold all the more for the more advanced models.
A $5 million gold card for immigrants?
That is the topic of my latest Bloomberg column, here is one excerpt:
As usual, however, the devil is in the details. There is a good chance Trump’s proposal could work out well — and a chance it could severely damage the nation.
One worry is selection effects. The $5 million fee means the program would skew toward older people, and would probably also skew somewhat male. Neither of those biases is a problem if other methods of establishing residence remain robust. But will they?
With a gold card program, the government would have a financial incentive to limit other ways of establishing residency. You can get an O-1 visa or an H-1B, for instance, if you have a strong record of accomplishment or an interested employer with a proper priority and perhaps some luck. Neither of those options costs anything close to $5 million, even with legal fees. Not everyone with a spare $5 million can get an O-1, or a proper job offer, but still: at the margin, these options would compete with each other.
These other options are well-suited for getting young, talented people into the US, which is precisely the weakness of the gold card proposal. Ideally the US would expand these other paths, but with a gold card program they might be narrowed so the government can reap more revenue from sales of gold cards.
I favor the idea in principle, but am worried it might be part of a broader package to tighten immigration more generally.
Can Enhanced Street Lighting Improve Public Safety at Scale?
Street lighting is often believed to influence street crime, but most prior studies have examined small-scale interventions in limited areas. The effect of large-scale lighting enhancements on public safety remains uncertain. This study evaluates the impact of Philadelphia’s citywide rollout of enhanced street lighting, which began in August 2023. Over 10 months, 34,374 streetlights were upgraded across 13,275 street segments, converting roughly one-third of the city’s street segments to new LED fixtures that provide clearer and more even illumination. We assess the effect of these upgrades on total crime, violent crime, property crime, and nuisance crime. Results show a 15% decline in outdoor nighttime street crimes and a 21% reduction in outdoor nighttime gun violence following the streetlight upgrades. The upgrades may account for approximately 5% of the citywide reduction in gun violence during this period, or about one sixth of the 31% citywide decline. Qualitative data further suggests that residents’ perceptions of safety and neighborhood vitality improved following the installation of new streetlights. Our study demonstrates that large-scale streetlight upgrades can lead to significant reductions in crime rates across urban areas, supporting the use of energy-efficient LED lighting as a crime reduction strategy. These findings suggest that other cities should consider similar lighting interventions as part of their crime prevention efforts. Further research is needed to explore the impact of enhanced streetlight interventions on other types of crime and to determine whether the crime-reduction benefits are sustained when these upgrades are implemented across the entire City of Philadelphia for extended periods.
That is from a new paper by John M. Macdonald, et al. Via the excellent Kevin Lewis.
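The abstract states the streetlights' contribution two ways, and the two line up if the “approximately 5%” is read as roughly five percentage points of the citywide decline. A quick check:

```python
# Consistency check on the abstract's figures: the upgrades are credited
# with about one sixth of the 31% citywide decline in gun violence.
citywide_decline = 0.31      # 31% citywide decline
upgrade_share    = 1 / 6     # upgrades' share of that decline

contribution = citywide_decline * upgrade_share
print(f"{contribution:.1%}")  # 5.2%, i.e. the "approximately 5%" figure
```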
Hire Don’t Fire at the FDA
Since I am a longtime critic of the FDA, you might expect me to support firing FDA employees. Not so! My focus has always been on reducing approval time and costs to speed drugs to patients and increase the number of new drugs. Cutting staff is more likely to slow approvals and raise costs.
To be fair, we’re talking about the firing of some 200 probationary employees from a total of some 20,000. Unusual but not earth-shaking. But the firings are indiscriminate and, as I explain below, the FDA is a peculiar target for cost-cutting because user fees under PDUFA cover a significant share of the FDA’s budget, so its workers are among the cheapest federal employees. So what is the point? Shock and awe in advance of bigger reforms for the FDA? Perhaps. Regardless, I think we should keep in mind the big picture on staff and speed.
The Prescription Drug User Fee Act of 1992 (PDUFA) provides strong evidence that with more staff the FDA works faster to get new and better drugs to patients. Before PDUFA, drug approvals languished at the FDA simply due to a lack of staff—harming both drug companies and patients. Congress should have increased FDA funding, as the benefits would have far outweighed the costs, but Congress failed. Instead, PDUFA created a workaround: drug firms agreed to pay user fees, with the condition that the funds be used for drug reviewers and that the FDA be held to strict review standards.
PDUFA was a tremendous success. Carpenter et al., Olson, Berndt et al., and others all find that PDUFA shortened review times, and it did so primarily through the mechanism of hiring more staff. Carpenter et al. report that “NDA review times shortened by 3.3 months for every 100 additional FDA staff.” Moreover, the faster approval times came at little to no cost in safety. Berndt et al. report:
implementation of the PDUFAs led to substantial incremental reductions in approval times beyond what would have been observed in the absence of these legislative acts. In addition, our preliminary examination of the trends in the number of new molecular entity withdrawals, frequently used as a proxy to assess the FDA’s safety record, suggests that the proportion of approvals ultimately leading to safety withdrawals prior to PDUFA and during PDUFA I and II were not statistically different.
And in a later analysis Philipson et al. find that:
more rapid access of drugs on the market enabled by PDUFA saved the equivalent of 140,000 to 310,000 life years. Additionally, we estimate an upper bound on the adverse effects of PDUFA based on drugs submitted during PDUFA I/II and subsequently withdrawn for safety reasons, and find that an extreme upper bound of about 56,000 life years were lost. This estimate is an extreme upper bound as it assumes all withdrawals since the inception of PDUFA were due to PDUFA and that there were no patients who benefitted from the withdrawn drugs.
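Putting the Philipson et al. figures side by side makes the cost-benefit asymmetry explicit (the numbers come directly from the quoted passage):

```python
# Life-years accounting from Philipson et al., as quoted above.
life_years_saved_low  = 140_000
life_years_saved_high = 310_000
life_years_lost_upper = 56_000   # extreme upper bound on harm

net_low  = life_years_saved_low  - life_years_lost_upper
net_high = life_years_saved_high - life_years_lost_upper
print(f"net life-years gained: {net_low:,} to {net_high:,}")
# Even charging PDUFA with every post-PDUFA safety withdrawal, the
# benefit exceeds the worst-case cost by roughly 2.5x to 5.5x.
```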
If we’re going to have FDA review, it should be fast and efficient. We need to shift the focus from the FDA’s balance sheet in the Federal budget to the patients it serves—more staff means faster reviews, better access to treatments, and a healthier society.
More generally, government regulation, not staffing, is the real problem. Cut regulation, and staff cuts can follow. Cut staff without cutting regulation, and the morass only gets worse.