Are macroeconomic models true only “locally”?

That is the theme of my latest Bloomberg column, here is one excerpt:

It is possible, contrary to the predictions of most economists, that the US will get through this disinflationary period and make the proverbial “soft landing.” This should prompt a more general reconsideration of macroeconomic forecasts.

The lesson is that they have a disturbing tendency to go wrong. It is striking that Larry Summers was right two years ago to warn about pending inflationary pressures in the US economy, when most of his colleagues were wrong. Yet Summers may prove wrong about his current warning of a looming recession. The point is that both his inflation and recession predictions stem from the same underlying aggregate demand model.

You will note that yesterday’s GDP report came in at 2.9%, hardly a poor performance.  And more:

It is understandable when a model is wrong because of some big and unexpected shock, such as the war in Ukraine. But that is not the case here. The US might sidestep a recession for reasons that are mysterious, at least from within the aggregate demand model. The Federal Reserve’s monetary policy has indeed been tighter, and disinflations usually bring high economic costs.

It gets more curious yet. Maybe Summers will turn out to be right about a recession. When recessions arrive, they often arrive quite suddenly. Consulting every possible macroeconomic theory may be of no help.

Or consider the 1990s. President Bill Clinton believed that federal deficits were too high and were crowding out private investment. The Treasury Department worked with a Republican Congress on a package of fiscal consolidation. Real interest rates fell, and the economy boomed — but that is only the observed correlation. The true causal story remains murky.

Two of the economists behind the Clinton package, Summers and Bradford DeLong, later argued against fiscal consolidation, even during the years of full employment under President Donald Trump, when debt and deficits ran well beyond Clinton-era levels. The new worry instead was secular stagnation based on insufficient demand.

The point here is not to criticize Summers and DeLong as inconsistent. Rather, it is to note they might have been right both times.

And what about that idea of secular stagnation — the notion that the world is headed for a period of little to no economic growth? The theory was based in part on the premise that global savings were high relative to investment opportunities. Have all those savings gone away? In most places, measured savings rose during the pandemic. Yet the problem of insufficient demand has vanished, and so secular stagnation theories no longer seem to apply.

To be clear, the theory of secular stagnation might have been true pre-pandemic. And it may yet return as a valid concern if inflation and interest rates return to pre-pandemic levels. The simple answer is that no one knows.

Note that Olivier Blanchard just wrote a piece “Secular Stagnation is Not Over,” well-argued as usual.  Summers, however, has opined: “we’ll not return to the era of secular stagnation.”  I was not present, but I can assume this too was well-argued as usual!

What we know about road deaths during the pandemic

The study verified that the absence of traffic jams played some role in allowing drivers to reach dangerous speeds on too-wide roads, but the researchers also found that the most significant differences between their forecast and real-world death totals happened in the dead of night, when most roads have always been congestion-free.

Between 10 p.m. and 1:59 a.m., deaths were nearly 22 percent higher than expected; during the typical morning rush hours, by contrast, deaths were actually 6.3 percent lower than the model anticipated they’d be. The late afternoon and evening rush hour, meanwhile, “did not differ significantly from the forecast.”

…2020 also saw an increase in hit-and-runs, which clocked in at 31.2 percent higher than originally forecast.

…According to AAA, “about 70 percent of the entire increase in driver fatal crash involvement [between May and December of 2020] was specifically among males under the age of 40.” Tefft suspects that increase may have been particularly driven by the minuscule subset of young, male motorists who were emboldened to do risky things on the road when the world shut down, though the data doesn’t tell him exactly why.

The article has further points of interest.

Model these Sweden Denmark lower inflation rates

Sweden’s annual inflation rate rose to 2.5 percent in September of 2021 from 2.1 percent in August, though still below market expectations of 2.7 percent. It was the highest since November of 2011, mainly due to prices of housing & utilities (5.1 percent vs 3.8 percent in August), namely electricity, and of transport (6.2 percent vs 6.4 percent), chiefly fuels. Additional upward pressure came from education (2.5 percent vs 2 percent); restaurants & hotels (2.4 percent vs 2.6 percent); miscellaneous goods & services (2 percent vs 1.4 percent) and food & non-alcoholic beverages (0.9 percent vs 0.3 percent). Consumer prices measured with a fixed interest rate (the CPIF) rose 2.8 percent year-on-year in September, the fastest pace since October of 2008, below market expectations of 3 percent but above the central bank’s target of 2 percent. On a monthly basis, both the CPI and the CPIF rose 0.5 percent.

Here is the link, they are an open economy facing lots of supply shocks, right?  So what is up?

And Denmark:

Denmark’s annual inflation increased to 2.2% in September of 2021 from 1.8% in the previous month. It was the highest inflation rate since November 2012, due to a rise in the prices of both electricity (15.2%), the highest annual increase since December 2008, and gas (52.8%), the highest annual increase since July 1980.

I thank Vero for the pointer.  In an email to me she asks:

“If supply issues are the only cause of our inflation woes, then why is it that countries that spent less than 5% of GDP on the pandemic are experiencing average inflation of 2.15%? While countries that spent over 15% of GDP are experiencing average inflation of 3.94%? I don’t know the answer but I think it is worth asking this question.”

Anyone?

Pandemic sentences to ponder

Of course, there are national health systems in Canada, Mexico, England, and France, among many others, and the uniformity of failure across this heterogeneous group suggests that structure may have made less of a difference than culture.

“One of the common features is that we are a medical-centric group of countries,” says Michael Mina, a Harvard epidemiologist who has spent the pandemic advocating for mass rollout of rapid testing on the pregnancy-kit model — only to meet resistance at every turn by those who insisted on a higher, clinical standard for tests. “We have an enormous focus on medicine and individual biology and individual health. We have very little focus as a group of nations on prioritizing the public good. We just don’t. It’s almost taboo — I mean, it is taboo. We have physicians running the show — that’s a consistent thing, medical doctors across the western European countries, driving the decision-making.” The result, he says, has been short-sighted calculations that prioritize absolute knowledge about everything before advising or designing policy about anything.

…in East Asia, countries didn’t wait for the WHO’s guidance to change on aerosols or asymptomatic transmission before masking up, social-distancing, and quarantining. “They acted fast. They acted decisively,” says Mina. “They made early moves. They didn’t sit and ponder: ‘What should we do? Do we have all of the data before we make a single decision?’ And I think that is a common theme that we’ve seen across all the Western countries—a reluctance to even admit that it was a big problem and then to really act without all of the information available. To this day, people are still not acting.” Instead, he says, “decision-makers have been paralyzed. They would rather just not act and let the pandemic move forward than act aggressively, but potentially be wrong.”

This, he says, reflects a culture of medicine in which the case of the individual patient is paramount.

Here is more from David Wallace-Wells, interesting throughout and with a cameo from yours truly.

Medical ethics? (model this)

Steven Joffe, MD, MPH, a medical ethicist at the University of Pennsylvania, said he doesn’t believe clinicians “should be lowering our standards of evidence because we’re in a pandemic.”

Link here.  That sentence is a good litmus test for whether you think clearly about trade-offs, statistical and speed trade-offs included, procedures vs. final ends of value (e.g., human lives), and how obsessed you are with mood affiliation (can you see through his question-begging invocation of “lowering our standards”?).  It is stunning to me that a top researcher at an Ivy League school literally cannot think properly about his subject area at all, and furthermore has no compunction about admitting this publicly.  As Alex wrote just earlier today: “Waiting for more data isn’t ‘science,’ it’s sometimes an excuse for an unscientific status-quo bias.”

To be clear, we should run more and better RCTs of Ivermectin, the topic at hand for Joffe (and in fact Fast Grants is helping to fund exactly that).  But of course the “let’s go ahead and actually do this” decision should be different in a pandemic, just as the “just how much of a hurry are we in here anyway?” calculus should differ as well.  I do not know enough to judge whether Ivermectin should be in hospital treatment protocols, as it is in many countries, but I do not condemn this simply on the grounds of it representing a “lower standard.”  It might instead reflect a “higher standard” of concern for human lives, and you will note the drug is not considered harmful as it is being administered.

If you apply the standards of Joffe’s earlier work, we should not be proceeding with these RCTs, including presumably vaccine RCTs, until we have assured that all of the participants truly understand the difference between “research” and “treatment” as part of the informed consent protocols.  No “therapeutic misconception” should be allowed.  Really?

If the pandemic has changed my mind about anything, it is the nature of expertise.

Preparing for a Pandemic: Accelerating Vaccine Availability

In Preparing for a Pandemic (forthcoming AER PP), by myself and a host of worthies including Susan Athey, Eric Budish, Canice Prendergast, Scott Duke Kominers, Michael Kremer and others equally worthy, we explain the model that we have been using to estimate the value of vaccines and to advise governments. The heart of the paper is the appendix, but the main text gives a good overview. Based on our model, we advised governments to go big and we had some success, but everywhere we went we were faced with sticker shock. We recommended that even poor countries buy vaccines in advance and that high-income countries make large investments in vaccine capacity of $100b or more in total.

It’s now obvious that we should have spent more, but the magnitudes are still astounding. The world spent on the order of $20b on vaccines and got a return in the trillions! It was hard to get governments to spend billions on vaccines despite massive benefit-to-cost ratios, yet global spending on fiscal support was $14 trillion! Even now, there is more to be done to vaccinate the world quickly, but still we hesitate.

I went over the model for Jess Hoel’s class and we also had a spirited discussion of First Doses First and other policies to stretch the vaccine supply.

The end of the Swedish model

The government this week proposed an emergency law that would allow it to lock down large parts of society; the first recommended use of face masks came into force; and the authorities gave schools the option to close for pupils older than 13 — all changes to its strategy to combat the pandemic.

“I don’t think Sweden stands out [from the rest of the world] very much right now,” said Jonas Ludvigsson, professor of clinical epidemiology at Karolinska Institutet in Stockholm. “Most of the things that made Sweden different have changed — either in Sweden or elsewhere.”

…Sweden has reported more than 2,000 Covid-19 deaths in a month and 535 in the past eight days alone. This compares with 465 for the pandemic as a whole in neighbouring Norway, which has half the population. As Sweden’s King Carl XVI Gustaf said just before Christmas: “We have failed.”

Here is more from the FT.  U.S. Covid deaths per day have now exceeded 4,000 on some days, and they are running at about 50% of the normal number of total daily deaths.  And no, it is not that the payments to classify these as Covid deaths have increased; rather, the virus and the deaths have increased.  So can we now consider the “no big deal” question settled?  The new and more contagious strains haven’t even started playing a major role yet in the United States.

Minimum wage laws during a pandemic

From Michael Strain at Bloomberg:

In July 2019, the nonpartisan Congressional Budget Office estimated that a $15 minimum wage would eliminate 1.3 million jobs. The CBO also forecast that such an increase would reduce business income, raise consumer prices, and slow the economy.

The U.S. economy will be very weak throughout 2021. The nation will need more business income, not less; more jobs, not fewer; and faster, not slower, economic growth. A $15 minimum wage would move the economy in the wrong direction across all these fronts.

I fully agree, and in fact would go further.  On Twitter I wrote in response to Noah:

Surely in a pandemic these businesspeople are right and the accumulated non-pandemic research literature doesn’t apply so much, right? Pretty much all models imply we should cut the minimum wage, if only temporarily, for small business at the very least.

Put in whatever exotic assumptions you wish, a basic model will spit out a lower optimal minimum wage for 2020-21, again for small business at the very least.  This is the advice that leading Democratic economists should be offering to Biden.
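To illustrate what such a “basic model” can mean here, consider a minimal sketch of a textbook linear monopsony labor market; all parameters below are illustrative inventions, not calibrated to any real data:

```python
def optimal_min_wage(a, b, c, d):
    """Welfare-maximizing wage floor in a linear monopsony model.

    Labor demand (marginal revenue product): MRP(L) = a - b*L
    Labor supply (wage needed to attract L):  w(L) = c + d*L
    The best minimum wage pushes the monopsonist to the competitive
    outcome, where MRP equals the supply wage.
    """
    employment = (a - c) / (b + d)
    return c + d * employment

# Illustrative numbers only: a pandemic demand shock lowers the
# intercept of labor demand (a), and the optimal floor falls with it.
normal_times = optimal_min_wage(a=30, b=0.5, c=10, d=0.5)   # -> 20.0
pandemic = optimal_min_wage(a=22, b=0.5, c=10, d=0.5)       # -> 16.0
print(normal_times, pandemic)
```

The direction, not the magnitude, is the point: any demand-side worsening lowers the wage floor that a simple model of this kind recommends.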

Dark matter, second waves and epidemiological modelling

Here is a new paper from Karl Friston, Anthony Costello, and Deenan Pillay:

Background Recent reports based on conventional SEIR models suggest that the next wave of the COVID-19 pandemic in the UK could overwhelm health services, with fatalities that far exceed the first wave. These models suggest non-pharmaceutical interventions would have limited impact without intermittent national lockdowns and consequent economic and health impacts. We used Bayesian model comparison to revisit these conclusions, when allowing for heterogeneity of exposure, susceptibility, and viral transmission. Methods We used dynamic causal modelling to estimate the parameters of epidemiological models and, crucially, the evidence for alternative models of the same data. We compared SEIR models of immune status that were equipped with latent factors generating data; namely, location, symptom, and testing status. We analysed daily cases and deaths from the US, UK, Brazil, Italy, France, Spain, Mexico, Belgium, Germany, and Canada over the period 25-Jan-20 to 15-Jun-20. These data were used to estimate the composition of each country’s population in terms of the proportions of people (i) not exposed to the virus, (ii) not susceptible to infection when exposed, and (iii) not infectious when susceptible to infection. Findings Bayesian model comparison found overwhelming evidence for heterogeneity of exposure, susceptibility, and transmission. Furthermore, both lockdown and the build-up of population immunity contributed to viral transmission in all but one country. Small variations in heterogeneity were sufficient to explain the large differences in mortality rates across countries. The best model of UK data predicts a second surge of fatalities will be much less than the first peak (31 vs. 998 deaths per day; 95% CI: 24-37), substantially less than conventional model predictions. The size of the second wave depends sensitively upon the loss of immunity and the efficacy of find-test-trace-isolate-support (FTTIS) programmes.
Interpretation A dynamic causal model that incorporates heterogeneity of exposure, susceptibility and transmission suggests that the next wave of the SARS-CoV-2 pandemic will be much smaller than conventional models predict, with less economic and health disruption. This heterogeneity means that seroprevalence underestimates effective herd immunity and, crucially, the potential of public health programmes.

This would appear to be one of the very best treatments so far, though I would stress I have not seen anyone with a good understanding of the potential rotation (or not) of super-spreaders, especially as winter comes and also as offices reopen.  In that regard, at the very least, modeling a second wave is difficult.

Via Yaakov Saxon, who once came up with a scheme so clever I personally sent him money for nothing.
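For reference, the “conventional SEIR” baseline that the paper revisits can be sketched in a few lines. This is a generic discrete-time version with illustrative parameters, not the authors’ dynamic causal model:

```python
def seir(beta, sigma, gamma, days, i0=1e-4):
    """Minimal discrete-time SEIR model (Euler steps of one day).

    beta: transmission rate, sigma: 1/incubation period,
    gamma: recovery rate. Population is normalized to 1.
    Returns daily histories of susceptible and infectious fractions.
    """
    S, E, I, R = 1.0 - i0, 0.0, i0, 0.0
    s_hist, i_hist = [], []
    for _ in range(days):
        new_e = beta * S * I      # new exposures
        new_i = sigma * E         # exposed becoming infectious
        new_r = gamma * I         # recoveries
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        s_hist.append(S)
        i_hist.append(I)
    return s_hist, i_hist

# R0 = beta/gamma = 3: a large homogeneous-population epidemic, the
# kind of baseline that heterogeneity models push back against.
s, i = seir(beta=0.3, sigma=0.2, gamma=0.1, days=400)
print(max(i), s[-1])
```

In the homogeneous version nearly everyone is eventually infected; allowing exposure and susceptibility to vary across people is precisely what shrinks these projections.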

Pandemics and persistent heterogeneity

It has become increasingly clear that the COVID-19 epidemic is characterized by overdispersion whereby the majority of the transmission is driven by a minority of infected individuals. Such a strong departure from the homogeneity assumptions of traditional well-mixed compartment models is usually hypothesized to be the result of short-term super-spreader events, such as an individual’s extreme rate of virus shedding at the peak of infectivity while attending a large gathering without appropriate mitigation. However, heterogeneity can also arise through long-term, or persistent, variations in individual susceptibility or infectivity. Here, we show how to incorporate persistent heterogeneity into a wide class of epidemiological models, and derive a non-linear dependence of the effective reproduction number R_e on the susceptible population fraction S. Persistent heterogeneity has three important consequences compared to the effects of overdispersion: (1) It results in a major modification of the early epidemic dynamics; (2) It significantly suppresses the herd immunity threshold; (3) It significantly reduces the final size of the epidemic. We estimate social and biological contributions to persistent heterogeneity using data on real-life face-to-face contact networks and age variation of the incidence rate during the COVID-19 epidemic, and show that empirical data from the COVID-19 epidemic in New York City (NYC) and Chicago and all 50 US states provide a consistent characterization of the level of persistent heterogeneity. Our estimates suggest that the hardest-hit areas, such as NYC, are close to the persistent heterogeneity herd immunity threshold following the first wave of the epidemic, thereby limiting the spread of infection to other regions during a potential second wave of the epidemic. Our work implies that general considerations of persistent heterogeneity in addition to overdispersion act to limit the scale of pandemics.

Here is the full paper by Alexei Tkachenko et al., via the excellent Alan Goldhammer.  These models are looking much better than the ones that were more popular in the earlier months of the pandemic (yes, yes I know epidemiologists have been studying heterogeneity for a long time, etc.).
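The suppressed herd-immunity threshold has a clean closed form in one standard special case, worth a quick sketch. Assume persistent susceptibility is gamma-distributed with coefficient of variation cv (a common modeling assumption, not necessarily this paper’s exact derivation), which gives R_e(S) = R0 * S^(1 + cv^2):

```python
def herd_immunity_threshold(R0, cv):
    """Fraction infected when R_e first hits 1, assuming gamma-distributed
    persistent susceptibility with coefficient of variation cv, so that
    R_e(S) = R0 * S**(1 + cv**2). cv = 0 recovers the homogeneous case."""
    lam = 1.0 + cv ** 2
    S_star = R0 ** (-1.0 / lam)   # susceptible fraction where R_e = 1
    return 1.0 - S_star

print(round(herd_immunity_threshold(3.0, 0.0), 3))  # 0.667, homogeneous 1 - 1/R0
print(round(herd_immunity_threshold(3.0, 1.0), 3))  # 0.423, suppressed threshold
```

Even modest persistent heterogeneity (cv = 1) pulls the threshold for R0 = 3 from two-thirds of the population down toward forty percent, which is the qualitative claim of the abstract.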

A multi-risk SIR model with optimally targeted lockdown

Or you could say “all-star economists write Covid-19 paper.”  Daron Acemoglu, Victor Chernozhukov, Iván Werning, and Michael D. Whinston have a new NBER working paper.  Here is part of the abstract:

For baseline parameter values for the COVID-19 pandemic applied to the US, we find that optimal policies differentially targeting risk/age groups significantly outperform optimal uniform policies and most of the gains can be realized by having stricter lockdown policies on the oldest group. For example, for the same economic cost (24.3% decline in GDP), optimal semi-targeted or fully-targeted policies reduce mortality from 1.83% to 0.71% (thus saving 2.7 million lives) relative to optimal uniform policies. Intuitively, a strict and long lockdown for the most vulnerable group both reduces infections and enables less strict lockdowns for the lower-risk groups.

Note the paper is much broader-ranging than that, though I won’t cover all of its points.  Note this sentence:

Such network versions of the SIR model may behave very differently from a basic homogeneous-agent version of the framework.

And:

…we find that semi-targeted policies that simply apply a strict lockdown on the oldest group can achieve the majority of the gains from fully-targeted policies.

Here is a related Twitter thread.  I also take the authors’ model to imply that isolating infected individuals will yield high social returns, though that is presented in a more oblique manner.

Again, I would say we are finally making progress.  One question I have is whether the age-specific lockdown in fact collapses into some other policy, once you remove paternalism as an underlying assumption.  The paper focuses on deaths and gdp, not welfare per se.  But what if older people wish to go gallivanting out and about?  Most of the lockdown in this paper is for reasons of “protective custody,” and not because the older people are super-spreaders.  Must we lock them up (down?), so that we do not feel too bad about our own private consumption and its second-order consequences?  What if they ask to be released, in full knowledge of the relevant risks?
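The flavor of the semi-targeted result can be reproduced in a toy two-group SIR model. Everything below, the parameters, the proportional mixing, and the fatality rates, is an illustrative stand-in, far cruder than the paper’s calibrated framework:

```python
def multi_group_sir(betas, lockdown, ifr, gamma=0.1, days=300, i0=1e-4):
    """Toy multi-risk SIR with proportional mixing across equal-sized groups.

    betas[g]:    contact rate of group g
    lockdown[g]: fraction by which group g's contacts are suppressed
    ifr[g]:      infection fatality rate of group g
    Returns cumulative deaths per average group member.
    """
    n = len(betas)
    S = [1.0 - i0] * n
    I = [i0] * n
    deaths = 0.0
    for _ in range(days):
        prevalence = sum(I) / n          # everyone meets the average group
        for g in range(n):
            beta_eff = betas[g] * (1.0 - lockdown[g])
            new_i = beta_eff * S[g] * prevalence
            rec = gamma * I[g]
            S[g] -= new_i
            I[g] += new_i - rec
            deaths += ifr[g] * rec
    return deaths / n

# Group 0: young, low IFR. Group 1: old, fifty times the IFR.
uniform = multi_group_sir([0.3, 0.3], lockdown=[0.30, 0.30], ifr=[0.001, 0.05])
targeted = multi_group_sir([0.3, 0.3], lockdown=[0.15, 0.60], ifr=[0.001, 0.05])
print(uniform, targeted)  # the targeted policy produces fewer deaths
```

Shifting the same total lockdown effort onto the high-IFR group lets the low-risk group carry more of the epidemic, cutting deaths, which is the intuition the abstract describes.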

The macroeconomics of pandemics

By Eichenbaum, Rebelo, and Trabandt:

We extend the canonical epidemiology model to study the interaction between economic decisions and epidemics. Our model implies that people’s decision to cut back on consumption and work reduces the severity of the epidemic, as measured by total deaths. These decisions exacerbate the size of the recession caused by the epidemic. The competitive equilibrium is not socially optimal because infected people do not fully internalize the effect of their economic decisions on the spread of the virus. In our benchmark scenario, the optimal containment policy increases the severity of the recession but saves roughly 0.6 million lives in the U.S.

I would add this: if you hold the timing and uncertainty of deaths constant, death and output tend to move together. That is, curing people and developing remedies and a vaccine will do wonders for gdp, through the usual channels.  The tricky trade-off is between output and the timing of deaths.  Whatever number of people are going to die, it is better to “get that over with” and clear up the uncertainty.  Policy is thus in the tricky position of wishing to both minimize the number of deaths and yet also to speed them along.  Good luck with that!  In terms of an optimum, might it be possible that some of the victims do not…get infected and die quickly enough?  Might that be the more significant market failure?

Via Harold Uhlig.  In any case, kudos to the authors for focusing their energies on this critical problem.

Maybe We Won’t All Die in a Pandemic

The high frequency of modern travel has led to concerns about a devastating pandemic since a lethal pathogen strain could spread worldwide quickly. Many historical pandemics have arisen following pathogen evolution to a more virulent form. However, some pathogen strains invoke immune responses that provide partial cross-immunity against infection with related strains. Here, we consider a mathematical model of successive outbreaks of two strains: a low virulence strain outbreak followed by a high virulence strain outbreak. Under these circumstances, we investigate the impacts of varying travel rates and cross-immunity on the probability that a major epidemic of the high virulence strain occurs, and the size of that outbreak. Frequent travel between subpopulations can lead to widespread immunity to the high virulence strain, driven by exposure to the low virulence strain. As a result, major epidemics of the high virulence strain are less likely, and can potentially be smaller, with more connected subpopulations. Cross-immunity may be a factor contributing to the absence of a global pandemic as severe as the 1918 influenza pandemic in the century since.

From a new paper on bioRxiv, the biological preprint server analogous to arXiv.

Hat tip: Paul Kedrosky.

Jason Abaluck writes me about masks and the Bangladesh RCT study

This is all him, no double indent though:

“As a regular reader of your blog and one of the PIs of the Bangladesh Mask RCT (now in press at Science), I was surprised to see your claim that, “With more data transparency, it does not seem to be holding up very well”:

  1. The article you linked claims, in agreement with our study, that our intervention led to a roughly 10% reduction in symptomatic seropositivity (going from 12% to 41% of the population masked). Taking this estimate at face value, going from no one masked to everyone masked would imply a considerably larger effect. Additionally:
    1. We see a similar – but more precisely estimated – proportionate reduction in Covid symptoms [95% CI: 7-17%] (pre-registered), corresponding to ~1,500 individuals with Covid symptoms prevented
    2. We see larger proportionate drops in symptomatic seropositivity and Covid in villages where mask-use increased by more (not pre-registered), with the effect size roughly matching our main result

The naïve linear IV estimate would be a 33% reduction in Covid from universal masking. People underwhelmed by the absolute number of cases prevented need to ask, what did you expect if masks are as effective as the observational literature suggests? I see our results as on the low end of these estimates, and this is precisely what we powered the study to detect.

  2. Let’s distinguish between:
    a. The absolute reduction in raw consenting symptomatic seropositives (20 cases prevented)
    b. The absolute reduction in the proportion of consenting symptomatic seropositives (0.08 percentage points, or 105 cases prevented)
    c. The relative reduction in the proportion of consenting symptomatic seropositives (a 9.5% drop in cases)

Ben Recht advocates analyzing a) – the difference in means not controlling for population. This is not the specification we pre-registered, as it will have less power due to random fluctuations in population (and indeed, the difference in raw symptomatic seropositives overlooks the fact that the treatment population was larger – there are more people possibly ill!). Fixating on this specification in lieu of our pre-registered one (for which we powered the study) is reverse p-hacking.

RE: b) vs. c), we find a result of almost identical significance in a linear model, suggesting the same proportionate reduction if we divide the coefficient by the base rate. We believe the relative reduction in c) is more externally valid, as it is difficult to write down a structural pandemic model where masks lead to an absolute reduction in Covid regardless of the base rate (and the absolute number in b) is a function of the consent rate in our study).

  3. It is certainly true that survey response bias is a potential concern. We have repeatedly acknowledged this shortcoming of any real-world RCT evaluating masks (that respondents cannot be blinded). The direction of the bias is unclear — individuals might be more attuned to symptoms in the treatment group. We conduct many robustness checks in the paper. We have now obtained funding to replicate the entire study and collect blood spots from symptomatic and non-symptomatic individuals to partially mitigate this bias (we will still need to check for balance in blood consent rates with respect to observables, as we do in the current study).
  4. We do not say that surgical masks work better than cloth masks. What we say is that the evidence in favor of surgical masks is more robust. We find an effect on symptomatic seropositivity regardless of whether we drop or impute missing values for non-consenters, while the effect of cloth masks on symptomatic seropositivity depends on how we do this imputation. We find robust effects on symptoms for both types of masks.

I agree with you that our study identifies only the medium-term impact of our intervention, and there are critically important policy questions about the long-term equilibrium impact of masking, as well as how the costs and benefits scale for people of different ages and vaccination statuses.”
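The “naïve linear IV estimate” mentioned in the letter is simple enough to reproduce from the excerpt’s own numbers. This is a back-of-the-envelope Wald-style scaling, not the study’s actual estimator:

```python
# Numbers taken from the excerpt above.
control_mask_rate = 0.12    # share masked without the intervention
treated_mask_rate = 0.41    # share masked with the intervention
itt_reduction = 0.095       # ~9.5% relative drop in symptomatic seropositivity

# Scale the intent-to-treat effect by the induced change in mask-wearing
# to get the implied effect of going from 0% to 100% masked.
implied_universal_effect = itt_reduction / (treated_mask_rate - control_mask_rate)
print(round(implied_universal_effect, 2))  # ~0.33, the quoted 33% reduction
```

The linearity assumption is doing real work here, which is why the letter labels the estimate naïve; it is an extrapolation well beyond the observed 29-point change in masking.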

Dose Optimization Trials Enable Fractional Dosing of Scarce Drugs

During the pandemic, when vaccine doses were scarce, I argued for fractional dosing to speed vaccination and maximize social benefits. But what dose? In my latest paper, just published in PNAS, with Phillip Boonstra and Garth Strohbehn, I look at optimal trial design when you want to quickly discover a fractional dose with good properties while not endangering patients in the trial.

[D]ose fractionation rations the amount of a divisible scarce resource that is allocated to each individual recipient [3, 6]. Fractionation is a utilitarian attempt to produce “the greatest good for the greatest number” by increasing the number of recipients who can gain access to a scarce resource by reducing the amount that each person receives, acknowledging that individuals who receive lower doses may be worse off than they would be had they received the “full” dose. If, for example, an effective intervention is so scarce that the vast majority of the population lacks access, then halving the dose in order to double the number of treated individuals can be socially valuable, provided the effectiveness of the treatment falls by less than half. For varying motivations, vaccine dose fractionation has previously been explored in diverse contexts, including Yellow Fever, tuberculosis, influenza, and, most recently, monkeypox [7-12]. Modeling studies strongly suggest that vaccine dose fractionation strategies, had they been implemented, would have meaningfully reduced COVID-19 infections and deaths [13], and perhaps limited the emergence of downstream SARS-CoV-2 variants [6].

…Confident employment of fractionation requires knowledge of a drug’s dose-response relationship [6, 13], but direct observation of both that relationship and MDSE, rather than pharmacokinetic modeling, appears necessary for regulatory and public health authorities to adopt fractionation [15, 16]. Oftentimes, however, early-phase trials of a drug develop only coarse and limited dose-response information, either intentionally or unintentionally. A speed-focused approach to drug development, which is common for at least two reasons, tends to preclude dose-response studies. The first reason is a strong financial incentive to be “first to market.” The majority of marketed cancer drugs, for example, have never been subjected to randomized, dose-ranging studies [17, 18]. The absence of dose optimization may raise patients’ risk. Further, in an industry sponsored study, there is a clear incentive to test the maximum tolerated dose (MTD) in order to observe a treatment effect, if one exists. The second reason, observed during the COVID-19 pandemic, is a focus on speed for public health. Due to ethical and logistical challenges, previously developed methods to estimate dose-response and MDSE have not routinely been pursued during COVID-19 [19]. The primary motivation of COVID-19 clinical trial infrastructure has been to identify any drug with any efficacy rather than maximize the benefits that can be generated from each individual drug [3, 18, 20, 21]. Conditional upon a therapy already having demonstrated efficacy, there is limited desire on the part of firms, funders, or participants to possibly be exposed to suboptimal dosages of an efficacious drug, even if the lower dose meaningfully reduced risk or extended benefits [16]. Taken together, then, post-marketing dose optimization is a commonly encountered, high-stakes problem, the best approach for which is unknown.

…With that motivation, we present in this manuscript the development of an efficient trial design and treatment arm allocation strategy that quickly de-escalates the dose of a drug that is known to be efficacious to a dose that more efficiently expands societal benefits.

The basic idea is to begin near the known efficacious dose level and then de-escalate, but what is the best de-escalation strategy given that we want to quickly find an optimal dosage level but also don’t want to go so low that we endanger patients? Based on Bayesian trials simulated under a variety of plausible conditions, we conclude that the best strategy is Targeted Randomization (TR). At each stage, TR identifies the dose level most likely to be optimal but randomizes the next subject(s) to either it or one of the two dose levels immediately below it. The probability of randomization across the three dose levels explored in TR is proportional to the posterior probability that each is optimal. This strategy balances speed of optimization against danger to patients.
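As a rough illustration of that allocation rule, here is a simplified sketch. The definition of “optimal” used below (lowest dose clearing a fixed response target) and the Beta priors are stand-ins of my own, not the paper’s actual design:

```python
import random

def targeted_randomization_step(counts, target=0.7, draws=2000, rng=random):
    """Pick the next subject's dose under a simplified Targeted Randomization.

    counts[d] = (successes, failures) observed so far at dose level d,
    with d = 0 the lowest dose. Each dose's response rate gets an
    independent Beta(1+s, 1+f) posterior. In each Monte Carlo draw the
    'optimal' dose is the lowest one whose sampled response rate meets
    `target` (falling back to the highest dose if none does). The next
    subject is randomized across the most-likely-optimal dose and the
    two dose levels immediately below it, with probabilities
    proportional to each level's posterior probability of being optimal.
    """
    n = len(counts)
    opt_prob = [0.0] * n
    for _ in range(draws):
        sampled = [rng.betavariate(1 + s, 1 + f) for s, f in counts]
        opt = next((d for d, p in enumerate(sampled) if p >= target), n - 1)
        opt_prob[opt] += 1.0 / draws
    best = max(range(n), key=lambda d: opt_prob[d])
    arms = [d for d in (best - 2, best - 1, best) if d >= 0]
    weights = [opt_prob[d] + 1e-9 for d in arms]   # keep weights positive
    return rng.choices(arms, weights=weights)[0], opt_prob

# Four dose levels; the data so far suggest dose 2 already clears the bar,
# so the next subject lands on dose 2 or one of the two doses below it.
random.seed(0)
arm, probs = targeted_randomization_step([(1, 9), (5, 5), (9, 1), (10, 0)])
print(arm, [round(p, 2) for p in probs])
```

Note how the rule de-escalates by construction: it never randomizes above the current best guess, so exploration pushes toward lower doses without jumping far below levels the data support.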

Read the whole thing.