It is urgent to understand the future of severe acute respiratory syndrome–coronavirus 2 (SARS-CoV-2) transmission. We used estimates of seasonality, immunity, and cross-immunity for betacoronaviruses OC43 and HKU1 from time series data from the USA to inform a model of SARS-CoV-2 transmission. We projected that recurrent wintertime outbreaks of SARS-CoV-2 will probably occur after the initial, most severe pandemic wave. Absent other interventions, a key metric for the success of social distancing is whether critical care capacities are exceeded. To avoid this, prolonged or intermittent social distancing may be necessary into 2022. Additional interventions, including expanded critical care capacity and an effective therapeutic, would improve the success of intermittent distancing and hasten the acquisition of herd immunity. Longitudinal serological studies are urgently needed to determine the extent and duration of immunity to SARS-CoV-2. Even in the event of apparent elimination, SARS-CoV-2 surveillance should be maintained since a resurgence in contagion could be possible as late as 2024.
That is the abstract of a new piece by Stephen M. Kissler, Christine Tedijanto, Edward Goldstein, Yonatan H. Grad, and Marc Lipsitch.
The implication of course is that changes to the structure of production will be far-reaching unlike say in 2008. Ongoing social distancing will limit productivity and very drastically shape demand. This to some extent militates against response measures that assume “the economy as we knew it” will be bouncing back in a few months’ time.
That is the topic of my Bloomberg column, here is one bit:
As May begins, it seems highly likely that the states will be reopening at their own paces and with their own sets of accompanying restrictions, with some places not reopening at all. There is likely to be further divergence at the city and county level, with say New York City having very different policies and practices than Utica or Rochester upstate.
Such divergence in state policy is hardly new. But until now states have typically had many policies in common, on such broad issues as education and law enforcement and on narrower ones such as support for Medicaid. Now and suddenly, on the No. 1 issue by far, the states will radically diverge.
Hence the idea that America is inching closer to what it was under the Articles of Confederation, which governed the U.S. from 1781 to 1789. The U.S. constitutional order has not changed in any explicit manner, but the issues on which the states are allowed to diverge have gone from being modest and relatively inconsequential to significant and meaningful if not dominant.
This divergence may create further pressures on federalism. In Rhode Island, for example, state police have sought to stop cars with New York state license plates at the border, hindering or delaying their entrance. Whether or not such activities are constitutional, most governors do have broad authority to invoke far-reaching emergency powers.
As some states maintain strict lockdowns while others reopen and allow Covid-19 to spread, such border-crossing restrictions could become more common — and more important. Maryland has been stricter with pandemic control than has Virginia, so perhaps Maryland will deny or discourage entry from Virginia — in metropolitan Washington, there are only a few bridges crossing the river that divides the two states. Or maybe Delaware won’t be so keen to take in so many visitors from New Jersey, while Texas will want to discourage or block migration from Louisiana.
To be clear, I think this unusual situation will recede once Covid-19 is no longer such a serious risk.
Here are the slides, definitely recommended. Might this be my favorite epidemiological model so far?
I interpret the last few slides as being gloomy for some “star early performers,” including California, though you should not necessarily attribute that view to the authors.
…Swedish state epidemiologist Anders Tegnell remains calm: he is not seeing the kind of rapid increase that might threaten to overwhelm the Swedish health service, and unlike policymakers in the UK, he has been entirely consistent that that is his main objective.
That is from a new piece by Freddie Sayers, asserting that “the jury is still out” when it comes to Sweden. I cannot reproduce all of the graphs in that piece, but scroll through and please note that in terms of per capita deaths Sweden seems to be doing better than Belgium, France, or the United Kingdom, all of which have serious lockdowns (Sweden does not). If you measure extant trends, Sweden is in the middle of the pack for Europe. And here is data on new hospital admissions:
Now I understand that ideally one should compare similar “time cohorts” across countries, not absolute numbers or percentages. That point is logically impeccable, but still as the clock ticks it seems less likely to account for the Swedish anomaly.
Of course we still need more days and weeks of data.
To be clear, I am not saying the United States can or should copy Sweden. Sweden has an especially large percentage of people living alone, the Swedes are probably much better at complying with informal norms for social distancing, and obesity is much less of a problem in Sweden than in America, probably hypertension too.
But I’d like to ask a simple question: who predicted this and who did not? And which of our priors should this cause us to update?
I fully recognize it is possible and maybe even likely that Sweden ends up being like Japan, in the sense of having a period when things seem (relatively) fine and then discovering they are not. (Even in Singapore the second wave has arrived, from in-migration, and may well be worse than the first.) But surely the chance of that scenario has gone down just a little?
And here is a new study on Lombardy by Daniil Gorbatenko:
The data clearly suggest that the spread had been trending down significantly even before the initial lockdown. They invalidate the fundamental assumption of the Covid-19 epidemiological models and with it, probably also the rationale for the harshest measures of suppression.
One possibility (and I stress that word possibility) is that these Lombardy data, shown at the link, are reflecting the importance of potent “early spreaders,” often family members, who give Covid-19 to their families fairly quickly, but after which the average rate of spread falls rapidly.
I’ll stand by my claim that the pieces on this one show an increasing probability of not really adding up. In the meantime, I am very happy to pull out and signal boost the best criticisms of these results.
There is another round of prize winners, and I am pleased and honored to announce them:
1. Petr Ludwig.
Petr has been instrumental in building out the #Masks4All movement, and in persuading individuals in the Czech Republic, and in turn the world, to wear masks. That already has saved numerous lives and made possible — whenever the time is right — an eventual reopening of economies. And I am pleased to see this movement is now having an impact in the United States.
Here is Petr on Twitter, here is the viral video he had a hand in creating and promoting, his work has been truly impressive, and I also would like to offer praise and recognition to all of the people who have worked with him.
2. The covid19india project, a website for tracking the progress of Covid-19 cases through India, and the result of a collaboration.
It is based on a large volunteer group that is rapidly aggregating and verifying patient-level data by crowdsourcing. The website tracks the progress of Covid-19 cases through India and open-sources all the (non-personally identifiable) data for researchers and analysts to consume. The data for the React-based website and the cluster graph come from a crowdsourced Google Sheet filled in by a large and hardworking Ops team at covid19india. They manually fill in each case, from various news sources, as soon as it is reported. The top contributor among some 100 other code contributors, and the maintainer of the website, is Jeremy Philemon, an undergraduate at SUNY Binghamton majoring in computer science. Another interesting contribution is from Somesh Kar, a 15-year-old high school student at Delhi Public School RK Puram, New Delhi. For the COVID-19 India tracker he worked on the code for the cluster graph. He is interested in computer science and tech entrepreneurship, and is a designer and developer in his free time. Somesh was joined in this effort by his brother, Sibesh Kar, a tech entrepreneur in New Delhi and the founder of MayaHQ.
3. Debes Christiansen, the head of department at the National Reference Laboratory for Fish and Animal Diseases in the capital, Tórshavn, Faroe Islands.
Here is the story of Debes Christiansen. Here is one part:
A scientist who adapted his veterinary lab to test for disease among humans rather than salmon is being celebrated for helping the Faroe Islands avoid coronavirus deaths, where a larger proportion of the population has been tested than anywhere in the world.
Debes was prescient in understanding the import of testing, and also in realizing in January that he needed to move quickly.
Please note that I am trying to reach Debes Christiansen — can anyone please help me in this endeavor with an email?
Here is the list of the first cohort of winners, here is the original prize announcement. Most of the prize money still remains open to be won. It is worth noting that the winners so far are taking the money and plowing it back into their ongoing and still very valuable work.
There is a new NBER working paper (by economists) on Covid-19:
We use anonymized and aggregated data from Facebook to show that areas with stronger social ties to two early COVID-19 “hotspots” (Westchester County, NY, in the U.S. and Lodi province in Italy) generally have more confirmed COVID-19 cases as of March 30, 2020. These relationships hold after controlling for geographic distance to the hotspots as well as for the income and population density of the regions. These results suggest that data from online social networks may prove useful to epidemiologists and others hoping to forecast the spread of communicable diseases such as COVID-19.
That is by Theresa Kuchler, Dominic Russell, and Johannes Stroebel.
Our illustrative exercise implies a year-on-year contraction in U.S. real GDP of nearly 11 percent as of 2020 Q4, with a 90 percent confidence interval extending to a nearly 20 percent contraction. The exercise says that about 60 percent of the forecasted output contraction reflects a negative effect of COVID-induced uncertainty.
Here is much more, a full paper from Scott R. Baker, Nicholas Bloom, Steven J. Davis, and Stephen J. Terry, an all-star team for this project.
There is a new paper by Ivan Korolev:
This paper studies the SEIRD epidemic model for COVID-19. First, I show that the model is poorly identified from the observed number of deaths and confirmed cases. There are many sets of parameters that are observationally equivalent in the short run but lead to markedly different long run forecasts. Second, I demonstrate using the data from Iceland that auxiliary information from random tests can be used to calibrate the initial parameters of the model and reduce the range of possible forecasts about the future number of deaths. Finally, I show that the basic reproduction number R0 can be identified from the data, conditional on the clinical parameters. I then estimate it for the US and several other countries, allowing for possible underreporting of the number of cases. The resulting estimates of R0 are heterogeneous across countries: they are 2-3 times higher for Western countries than for Asian countries. I demonstrate that if one fails to take underreporting into account and estimates R0 from the cases data, the resulting estimate of R0 will be biased downward and the model will fail to fit the observed data.
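Korolev's short-run identification point can be illustrated with a toy SEIRD simulation. In the sketch below (all parameter values are made up for illustration, not taken from the paper), a high fatality rate seeded with few initial infections produces a very similar early death curve to a fatality rate ten times lower seeded with ten times more infections, yet the long-run death tolls diverge dramatically.

```python
import numpy as np

def seird_deaths(beta, sigma, gamma, delta, i0, days, n=1_000_000):
    """Forward-simulate a simple SEIRD model with daily Euler steps and
    return the cumulative death curve."""
    s, e, i, r, d = n - i0, 0.0, float(i0), 0.0, 0.0
    deaths = np.empty(days)
    for t in range(days):
        new_e = beta * s * i / n   # S -> E
        new_i = sigma * e          # E -> I
        new_r = gamma * i          # I -> R
        new_d = delta * i          # I -> D
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r - new_d
        r += new_r
        d += new_d
        deaths[t] = d
    return deaths

# Two observationally similar parameter sets: a fatality rate of 1% with 100
# initial infections vs. a fatality rate of 0.1% with 1,000 initial infections.
a = seird_deaths(beta=0.35, sigma=0.2, gamma=0.10, delta=0.010, i0=100,  days=400)
b = seird_deaths(beta=0.35, sigma=0.2, gamma=0.10, delta=0.001, i0=1000, days=400)

print(a[19], b[19])   # similar cumulative deaths after 20 days...
print(a[-1], b[-1])   # ...but very different long-run death tolls
```

This is exactly why auxiliary information, such as Iceland's random testing, helps: it pins down the initial number of infections and so breaks the trade-off between the fatality rate and the size of the seed.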
And here is a further paper on the IHME model, by statisticians from CTDS, Northwestern University and the University of Texas, excerpt from the opener:
- In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1).
- The ability of the model to make accurate predictions decreases with increasing amounts of data (see Figure 2).
Again, I am very happy to present counter evidence to these arguments. I readily admit this is outside my area of expertise, but I have read through the paper and it is not much more than a few pages of recording numbers and comparing them to the actual outcomes (you will note the model predicts New York fairly well, and thus the predictions are of a “train wreck” nature).
Let me just repeat the two central findings again:
- In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1).
- The ability of the model to make accurate predictions decreases with increasing amounts of data (see Figure 2).
So now really is the time to be asking tough questions about epidemiology, and yes, epidemiologists. I would very gladly publish and “signal boost” the best positive response possible.
And just to be clear (again), I fully support current lockdown efforts (best choice until we have more data and also a better theory), I don’t want Fauci to be fired, and I don’t think economists are necessarily better forecasters. I do feel I am not getting straight answers.
Some 72% of Americans polled said they would not attend if sporting events resumed without a vaccine for the coronavirus. The poll, which had a fairly small sample size of 762 respondents, was released Thursday by Seton Hall University’s Stillman School of Business.
When polling respondents who identified as sports fans, 61% said they would not go to a game without a vaccine. The margin of error is plus-or-minus 3.6%.
Only 12% of all respondents said they would go to games if social distancing could be maintained, which would likely lead to a highly reduced number of fans, staff and media at games.
I doubt if that poll is extremely scientific, but the key fact here is that people go to NBA games, and most other public entertainments, in groups. Fast forward a bit and see how the group negotiations will go. Of a foursome, maybe three people would go to the game and one would not. That group is likely to end up doing something else altogether, without 19,000 other cheering fans screaming and breathing into their faces.
If half the people say they will go, that does not mean you get half the people. It means you hardly get anybody.
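The group-negotiation point can be put in one line of arithmetic: if each of k companions is independently willing to attend with probability p, the whole group goes with probability p**k. A toy sketch (the independence assumption is of course a simplification):

```python
def group_attendance(p, k=4):
    """Probability that all k members of a group are willing to attend,
    given each is independently willing with probability p."""
    return p ** k

print(group_attendance(0.5))   # 0.0625 -- "half willing" yields ~6% of foursomes
print(group_attendance(0.39))  # roughly 2%, using the 39% of fans willing per the poll
```

So even generous individual willingness translates into near-empty arenas once attendance is a group decision.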
By the way, what percentage of the American population will refuse or otherwise evade this vaccine, assuming we come up with one of course?
Here is the ESPN story link.
This is all from my correspondent, I won’t do any further indentation and I have removed some identifying information, here goes:
“First, some background on who I am. After taking degrees in math and civil engineering at [very very good school], I studied infectious disease epidemiology at [another very, very good school] because I thought it would make for a fulfilling career. However, I became disillusioned with the enterprise for three reasons:
- Data is limited and often inaccurate in the critical forecasting window, leading to very large confidence bands for predictions
- Unless the disease has been seen before, the underlying dynamics may be sufficiently vague to make your predictions totally useless if you do not correctly specify the model structure
- Modeling is secondary to the governmental response (e.g., effective contact tracing) and individual action (e.g., social distancing, wearing masks)
Now I work as a quantitative analyst for [very, very good firm], and I don’t regret leaving epidemiology behind. Anyway, on to your questions…
What is an epidemiologist’s pay structure?
The vast majority of trained epidemiologists who would have the necessary knowledge to build models are employed in academia or the public sector; so their pay is generally average/below average for what you would expect in the private sector for the same quantitative skill set. So, aside from reputational enhancement/degradation, there’s not much of an incentive to produce accurate epidemic forecasts – at least not in monetary terms. Presumably there is better money to be made running clinical trials for drug companies.
On your question about hiring, I can’t say how meritocratic the labor market is for quantitative modelers. I can say though that there is no central lodestar, like Navier-Stokes in fluid dynamics, that guides the modeling framework. True, SIR, SEIR, and other compartmental models are widely used and accepted; however, the innovations attached to them can be numerous in a way that does not suggest parsimony.
How smart are epidemiologists?
The quantitative modelers are generally much smarter than the people performing contact tracing or qualitative epidemiology studies. However, if I’m being completely honest, their intelligence is probably lower than the average engineering professor – and certainly below that of mathematicians and statisticians.
My GRE scores were very good, and I found epidemiology to be a very interesting subject – plus, I can be pretty oblivious to what other people think. Yet when I told several of my professors in math and engineering of my plans, it was hard for me to miss their looks of disappointment. It’s just not a track that driven, intelligent people with a hint of quantitative ability take.
What is the political orientation of epidemiologists? What is their social welfare function?
Left, left, left. In the United States, I would be shocked if more than 2-5% of epidemiologists voted for Republicans in 2016 – at least among academics. At [aforementioned very very good school], I’d be surprised if the number was 1%. I remember the various unprompted bashing of Trump and generic Republicans on political matters unrelated to epidemiology in at least four classes during the 2016-17 academic year. Add that to the (literal) days of mourning after the election, it’s fair to say that academic epidemiologists are pretty solidly in the left-wing camp. (Note: I didn’t vote for Trump or any other Republican in 2016 or 2018)
I was pleasantly surprised during my time at [very, very good school] that there was at least some discussion of cost-benefit analysis for public health actions, including quarantine procedures. Realistically though, there’s a dominant strain of thought that the economic costs of an action are secondary to stopping the spread of an epidemic. To summarize the SWF: damn the torpedoes, full steam ahead!
Do epidemiologists perform uncertainty quantification?
They seem to play around with tools like the ensemble Kalman filter (found in weather forecasting) and stochastic differential equations, but it’s fair to say that mechanical engineers are much better at accounting for uncertainty (especially in parameters and boundary conditions) in their simulations than epidemiologists. By extension, that probably means that econometricians are better too.”
TC again: I am happy to pass along other well-thought out perspectives on this matter, and I would like to hear a more positive take. Please note I am not endorsing these (or subsequent) observations, I genuinely do not know, and I will repeat I do not think economists are likely better. It simply seems to me that “who are these epidemiologists anyway?” is a question now worth addressing, and hardly anyone is willing to do that.
As an opening gambit, I’d like to propose that we pay epidemiologists more. (And one of my correspondents points out they are too often paid on “soft money.”) I know, I know, this plays with your mood affiliation. You would like to find a way of endorsing that conclusion, without simultaneously admitting that right now maybe the quality isn’t quite high enough.
Richard Lowery emails me this:
I saw your post about epidemiologists today. I have a concern similar to point 4 about selection based what I have seen being used for policy in Austin. It looks to me like the models being used for projection calibrate R_0 off of the initial doubling rate of the outbreak in an area. But, if people who are most likely to spread to a large number of people are also more likely to get infected early in an outbreak, you end up with what looks kind of like a classic Heckman selection problem, right? In any observable group, there is going to be an unobserved distribution of contact frequency, and it would seem potentially first order to account for that.
As far as I can tell, if this criticism holds, the models are going to (1) be biased upward, predicting a far higher peak in the absence of policy intervention and (2) overstate the likely severity of an outcome without policy intervention, while potentially understating the value of aggressive containment measures. The epidemiology models I have seen look really pessimistic, and they seem like they can only justify any intervention by arguing that the health sector will be overwhelmed, which now appears unlikely in a lot of places. The Austin report did a trick of cutting off the time axis to hide that total infections do not seem to change that much under the different social distancing policies; everything just gets dragged out.
But, if the selection concern is right, the pessimism might be misplaced if the late epidemic R_0 is lower, potentially leading to a much lower effective spread rate and the possibility of killing the thing off at some point before it infects the number of people required to create the level of immunity the models predict is necessary. This seems feasible based on South Korea and maybe China, at least for areas in the US that are not already out of control.
I do not know the answers to the questions raised here, but I do see the debate on Twitter becoming more partisan, more emotional, and less substantive. You cannot say that about this communication. From the MR comments this one — from Kronrad — struck me as significant:
One thing both economists and epidemiologists seem to be lacking is an awareness for the problems of aggregation. Most models in both fields see the population as one homogenous mass of individuals. But sometimes, individual variation makes a difference in the aggregate, even if the average is the same.
In the case of pandemics, it makes a big difference how that infection rate varies in the population. Most models assume that it is the same for everyone. But in reality, human interactions are not evenly distributed. Some people shake hands all day, while others spend their days mostly alone in front of a screen. This uneven distribution has an interesting effect: those who spread the virus the most are also the most likely to get it. This means that the infection rate looks much higher at the beginning of a pandemic, but sinks once the superspreaders have had the disease and acquired immunity. It also means herd immunity is reached much earlier: not after 70% of the population is immune, but after the people who are involved in 70% of all human interactions are immune. On average, this is the same. But in practice, it can make a big difference.
I did a small simulation on this and came to the conclusion that with a recursively applied Pareto distribution, in which 1/3 of all people are responsible for 2/3 of all human interaction, herd immunity is already reached when 10% of the population has had the virus. So individual variation in the infection rate can make an enormous difference that is not captured in aggregate models.
My quick and dirty simulation can be found here:
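Here is a minimal sketch of a simulation in the same spirit (my own illustrative version with made-up parameters, not Kronrad's actual code). It compares an agent-based SIR with uniform contact rates to one with Pareto-distributed contact rates, holding R0 fixed, and reports the share of the population already infected when daily new infections peak, a rough proxy for the herd immunity threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
R0_TARGET, GAMMA = 2.5, 0.1   # illustrative values, not fitted to Covid-19

def epidemic(contact):
    """Agent-based SIR in which catching *and* spreading the virus both
    scale with an individual's contact rate. Returns (share ever infected
    when daily incidence peaks, final attack rate)."""
    c = contact / contact.mean()
    beta = R0_TARGET * GAMMA / (c ** 2).mean()   # hold R0 fixed across runs
    status = np.zeros(N, dtype=np.int8)          # 0 = S, 1 = I, 2 = R
    status[rng.choice(N, 50, replace=False)] = 1
    peak, share_at_peak = 0, 0.0
    while (status == 1).any():
        foi = beta * c[status == 1].sum() / N    # population force of infection
        new_inf = (status == 0) & (rng.random(N) < 1 - np.exp(-foi * c))
        new_rec = (status == 1) & (rng.random(N) < GAMMA)
        status[new_inf] = 1
        status[new_rec] = 2
        if new_inf.sum() > peak:
            peak = new_inf.sum()
            share_at_peak = (status != 0).mean()
    return share_at_peak, (status == 2).mean()

hom_peak, hom_final = epidemic(np.ones(N))                 # everyone identical
het_peak, het_final = epidemic(rng.pareto(3.0, N) + 1.0)   # heavy-tailed contacts
print(f"share infected at incidence peak: homogeneous {hom_peak:.2f}, "
      f"heterogeneous {het_peak:.2f}")
```

With the same R0, the heterogeneous population hits its peak with a smaller share of people ever infected, because the high-contact individuals are depleted first; heavier-tailed contact distributions push the threshold down further, toward Kronrad's 10% figure.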
See also Robin Hanson’s earlier post on variation in R0. C’mon people, stop your yapping on Twitter and write some decent blog posts on these issues. I know you can do it.
I have had fringe contact with more epidemiology than usual as of late, for obvious reasons, and I do understand this is only one corner of the discipline. I don’t mean this as a complaint dump, because most of economics suffers from similar problems, but here are a few limitations I see in the mainline epidemiological models put before us:
1. They do not sufficiently grasp that long-run elasticities of adjustment are more powerful than short-run elasticities. In the short run you socially distance, but in the long run you learn which methods of social distance protect you the most. Or you move from doing “half home delivery of food” to “full home delivery of food” once you get that extra credit card or learn the best sites. In this regard the epidemiological models end up being too pessimistic, and it seems that “the natural disaster economist complaints about the epidemiologists” (yes there is such a thing) are largely correct on this count. On this question economic models really do better, though not the models of everybody.
2. They do not sufficiently incorporate public choice considerations. An epidemic path, for instance, may be politically infeasible, which leads to adjustments along the way, and very often those adjustments are stupid policy moves from impatient politicians. This is not built into the models I am seeing, nor are such factors built into most economic macro models, even though there is a large independent branch of public choice research. It is hard to integrate. Still, it means that epidemiological models will be too optimistic, rather than too pessimistic as in #1. Epidemiologists might protest that it is not the purpose of their science or models to incorporate politics, but these factors are relevant for prediction, and if you try to wash your hands of them (no pun intended) you will be wrong a lot.
3. The Lucas critique, namely that agents within a model, knowing the model, will change how the model itself operates. Epidemiologists seem super-aware of this, much more than Keynesian macroeconomists are these days, though it seems to be more of a “I told you that you should listen to us” embodiment than trying to find an actual closed-loop solution for the model as a whole. That is really hard, either in macroeconomics or epidemiology. Still, on the predictive front without a good instantiation of the Lucas critique again a lot will go askew, as indeed it does in economics.
The epidemiological models also do not seem to incorporate Sam Peltzman-like risk offset effects. If you tell everyone to wear a mask, great! But people will feel safer as a result, and end up going out more. Some of the initial safety gains are given back through the subsequent behavioral adjustment. Epidemiologists might claim these factors already are incorporated in the variables they are measuring, but they are not constant across all possible methods of safety improvement. Ideally you may wish to make people safer in a not entirely transparent manner, so that they do not respond with greater recklessness. I have not yet seen a Straussian dimension in the models, though you might argue many epidemiologists are “naive Straussian” in their public rhetoric, saying what is good for us rather than telling the whole truth. The Straussian economists are slightly subtler.
4. Selection bias from the failures coming first. The early models were calibrated from Wuhan data, because what else could they do? Then came northern Italy, which was also a mess. It is the messes which are visible first, at least on average. So some of the models may have been too pessimistic at first. These days we have Germany, Australia, and a bunch of southern states that haven’t quite “blown up” as quickly as they should have. If the early models had access to all of that data, presumably they would be more predictive of the entire situation today. But it is no accident that the failures will be more visible early on.
And note that right now some of the very worst countries (Mexico, Brazil, possibly India?) are not far enough along on the data side to yield useful inputs into the models. So currently those models might be picking up too many semi-positive data points and not enough from the “train wrecks,” and thus they are too optimistic.
On this list, I think my #1 comes closest to being an actual criticism, the other points are more like observations about doing science in a messy, imperfect world. In any case, when epidemiological models are brandished, keep these limitations in mind. But the more important point may be for when critics of epidemiological models raise the limitations of those models. Very often the cited criticisms are chosen selectively, to support some particular agenda, when in fact the biases in the epidemiological models could run in either an optimistic or pessimistic direction.
Which is how it should be.
Now, to close, I have a few rude questions that nobody else seems willing to ask, and I genuinely do not know the answers to these:
a. As a class of scientists, how much are epidemiologists paid? Is good or bad news better for their salaries?
b. How smart are they? What are their average GRE scores?
c. Are they hired into thick, liquid academic and institutional markets? And how meritocratic are those markets?
d. What is their overall track record on predictions, whether before or during this crisis?
e. On average, what is the political orientation of epidemiologists? And compared to other academics? Which social welfare function do they use when they make non-trivial recommendations?
f. We know, from economics, that if you are a French economist, being a Frenchman predicts your political views better than does being an economist (there is an old MR post on this somewhere). Is there a comparable phenomenon in epidemiology?
g. How well do they understand how to model uncertainty of forecasts, relative to say what a top econometrician would know?
h. Are there “zombie epidemiologists” in the manner that Paul Krugman charges there are “zombie economists”? If so, what do you have to do to earn that designation? And are the zombies sometimes right, or right on some issues? How meta-rational are those who allege zombie-ism?
i. How many of them have studied Philip Tetlock’s work on forecasting?
Just to be clear, as MR readers will know, I have not been criticizing the mainstream epidemiological recommendations of lockdowns. But still those seem to be questions worth asking.
Hey people, what is up with this?
Via John V. And in the meantime, the virus has now affected 70% of New Jersey’s long-term care centers.
Here is the link, it is about one hour long, with questions interspersed throughout, the title is “The future social and political implications of COVID-19.” Self-recommending!
Since COVID-19 can be transmitted through close proximity to affected individuals, public health organizations have identified contact tracing as a valuable tool to help contain its spread. A number of leading public health authorities, universities, and NGOs around the world have been doing important work to develop opt-in contact tracing technology. To further this cause, Apple and Google will be launching a comprehensive solution that includes application programming interfaces (APIs) and operating system-level technology to assist in enabling contact tracing. Given the urgent need, the plan is to implement this solution in two steps while maintaining strong protections around user privacy.
First, in May, both companies will release APIs that enable interoperability between Android and iOS devices using apps from public health authorities. These official apps will be available for users to download via their respective app stores.
Second, in the coming months, Apple and Google will work to enable a broader Bluetooth-based contact tracing platform by building this functionality into the underlying platforms. This is a more robust solution than an API and would allow more individuals to participate, if they choose to opt in, as well as enable interaction with a broader ecosystem of apps and government health authorities. Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze.
Here is the full story. I cannot help but wonder if this would have happened sooner if not for a) antitrust concerns, and b) fears of existential risk due to attacks on the privacy issue. But I am pleased to see it is proceeding, and one hopes the risks on the legal side will not turn out to be too high.